Awesome, not awesome.
#Awesome
“Scientists have harnessed artificial intelligence to translate brain signals into speech, in a step toward brain implants that one day could let people with impaired abilities speak their minds, according to a new study…When perfected, the system could give people who can’t speak, such as stroke patients, cancer victims, and those suffering from amyotrophic lateral sclerosis — or Lou Gehrig’s disease — the ability to conduct conversations at a natural pace, the researchers said.” — Robert Lee Hotz, Writer Learn More from The Wall Street Journal >
#Not Awesome
“…The calls for ethics in AI have been strong and understandable. AI is powerful technology that can and already has gone terribly wrong, as in the advertising and recommendation algorithms used by Facebook and YouTube. Now some of this stuff is obvious and there has been no lack of people pointing out the problems. If you train an algorithm on engagement for example, it will surface more content that confirms users’ existing beliefs and skews towards emotional content that appeals to our instincts (rather than requiring us to engage our rationality which requires effort).” — Albert Wenger, Investor Learn More from Continuations >
What we’re reading.
1/ Twitter won’t use machine learning algorithms to ban white supremacists’ accounts from the platform, out of fear that the same filters would also flag some Republican politicians’ accounts. Learn More from Motherboard >
2/ If we as a society want our tech companies to build ethical AI tools, we’ll need to protect employees who speak out against questionable practices. Learn More from The New York Times >
3/ Machine learning algorithms aren’t inherently biased, but problems arise when the “raw” data we feed them are actually cooked without care. Learn More from Benedict Evans >
4/ Facebook claims its AI filters failed to flag the Christchurch mass-shooting video as a “harmful act” because it was filmed from a first-person perspective. Learn More from Bloomberg >
5/ Elon Musk wants Tesla to operate a fleet of “robo taxis” by the end of next year — here’s a video of one of its self-driving vehicles in action. Learn More from YouTube >
6/ Measuring AI advances against human performance is creating incentives for companies to create technologies that replace human efforts, not augment them. Learn More from Axios >
7/ AI algorithms can capture and “regurgitat[e] some statistical variation” of the music they hear, but does that make them creative? Learn More from MIT Technology Review >
Links from the community.
“Announcing our series B funding… and what it means for the future of work” submitted by Dan Turchin (@dturchin). Learn More from Astound >
“I trained an AI on Mark Zuckerberg’s Facebook posts and it has thoughts about AI.” submitted by Max Woolf (@minimaxir). Learn More from Twitter >
“From Principles to Action: How do we Implement Tech Ethics?” by Industry Ethicists. Learn More from Noteworthy >
“Zuck on security: “The only hope is building AI systems that can either identify things…” submitted by Samiur Rahman (@samiur1204). Learn More from Twitter >
“Convolutional Neural Network on Oil Spills in Niger Delta” by Kehinde Ogunyale. Learn More from Noteworthy >
“So you are thinking of taking the AWS Certified Machine Learning Specialty exam” by Alberto Artasanchez. Learn More from Noteworthy >
First time reading Machine Learnings? Sign up to get an early version of the newsletter next Sunday evening. Get the newsletter >