Awesome, not awesome.
# Awesome
“Doctors can detect heart failure from a single heartbeat with 100% accuracy using a new artificial intelligence-driven neural network… ‘[T]he application of organizational neuroscience, and specifically of neural network approaches, to healthcare issues promises to open breakthrough frontiers for both clinical research and practice.’” — Nicholas Fearn, Journalist. Learn More from Forbes >
# Not Awesome
“The rising use of facial recognition by law enforcement, immigration services, banks, and other institutions has provoked fears that such tools will be used to cause harm. There’s a growing body of evidence that the nascent technology struggles with both racial and gender bias. A January study from the MIT Media Lab found that Amazon’s Rekognition tool misidentified darker-skinned women as men one-third of the time. The software even mislabeled white women as men at higher rates than white men.” — Amrita Khalid, Reporter Learn More from Quartz >
What we’re reading.
1/ Major governments are using machine learning algorithms to exploit poor populations with little public debate — cutting welfare payments, reclaiming debt, and running biometric experiments sometimes with fatal results. Learn More from The Guardian >
2/ OpenAI, the leading AI research lab, uses showy demonstrations to get attention from the press — some experts believe this strategy makes it hard for the public to understand the actual progress happening in the field of AI. Learn More from The New York Times >
3/ Algorithms may one day help us find our soulmates — but what side effects might come from eliminating the emotional work required to understand oneself and others? Learn More from OZY >
4/ Pinterest uses “old-fashioned, subjective, human judgement” to evaluate the impact of changes it makes to its algorithm — and this could be the main reason its platform is less frequently abused for misinformation campaigns. Learn More from OneZero >
5/ AI is sometimes better than humans at battling cybersecurity threats because it can detect completely new attacks, while security experts typically only look for incremental attacks. Learn More from WIRED >
6/ Machine learning systems are good at detecting text written by other machines, but aren’t so good at determining whether a story is true or false. Learn More from Axios >
7/ To reduce bias in algorithms, we may just have to teach them something deeply ingrained in the human psyche — fairness. Learn More from Human Readable Magazine >
Links from the community.
“How The New York Times is Experimenting with Recommendation Algorithms” submitted by Samiur Rahman (@samiur1204). Learn More from the Times Open >
“FeatureUnion: a Time-Saver When Building a Machine Learning Model” by Alaa Sinjab. Learn More from Noteworthy >
“On Medium’s Data Science Strategy” by Tony Yiu. Learn More from Noteworthy >
🤖 First time reading Machine Learnings? Sign up to get an early version of the newsletter next Sunday evening. Get the newsletter >
Using machine learning to exploit poor populations was originally published in Machine Learnings on Medium, where people are continuing the conversation by highlighting and responding to this story.