Explaining AI
a16z

Published on Jan 16, 2020

From movie recommendations to medical diagnoses, people are increasingly comfortable with AI making recommendations, or even decisions. However, AI often inherits bias from the datasets used to train it, so how do we know we can trust it? Dr. Harry Shum, head of Microsoft's AI and Research Group, breaks down some of the biases in current AI models, then calls for us to open the "black box" in order to develop the transparency, fairness, and trust needed for continued AI adoption.

Highlights
The latest AI breakthroughs [0:24]
Xiaoice, the Chinese AI with EQ (as well as IQ) [2:42]
Why EQ leads to better digital assistants and chatbots [3:50]
How Japanese and Chinese businesses are using Xiaoice for sales and financial reports [4:51]
Gender bias in current AI models [6:22]
Mapping the gender bias with word pairings [8:33] (see the sketch after this list)
Harry Shum makes the case for transparent AI [12:21]
Three reasons why we need explainable AI [12:58]
The tradeoff between accuracy and explainability in AI models [14:20]

Pull Quote
"...with IQ, we're helping people to accomplish tasks. And with EQ, we have empathy, we have the social skills, and the understanding of human beings' feelings and the emotions."
