DeepMind x UCL | Deep Learning Lectures | 12/12 | Responsible Innovation

Published on Jul 09, 2020
What can we do to build algorithms that are safe, reliable and robust? And what are the responsibilities of technologists who work in this area? In this talk, Chongli Qin and Iason Gabriel explore these questions, connected through the lens of responsible innovation, in two parts. In the first part, Chongli explores why and how we can design algorithms that are safe, reliable and trustworthy from the perspective of specification-driven machine learning. In the second part, Iason looks more closely at the ethical dimensions of machine learning, at the responsibility of researchers, and at processes that can structure ethical deliberation in this domain. Taken together, they suggest that there are important measures we can, and should, put in place if we want to build systems that are beneficial to society.

Download the slides here:
https://storage.googleapis.com/deepmind-media/UCLxDeepMind_2020/L12%20-%20UCLxDeepMind%20DL2020.pdf

Find out more about how DeepMind increases access to science here:
https://deepmind.com/about#access_to_science

Speaker Bios:

Chongli Qin is a research scientist at DeepMind whose primary interest is in building safer, more reliable and more trustworthy machine learning algorithms. Over the past several years, she has contributed to developing algorithms that make neural networks more robust to noise. A key part of her research focuses on functional analysis: properties of neural networks that can naturally enhance robustness. She has also contributed to building mathematical frameworks for verifying and guaranteeing that certain properties hold for neural networks. Prior to DeepMind, Chongli studied at Cambridge, reading the mathematics tripos and scientific computing before completing a PhD in bioinformatics.

Iason Gabriel is a Senior Research Scientist at DeepMind where he works in the ethics research team. His work focuses on the applied ethics of artificial intelligence, human rights, and the question of how to align technology with human values. Before joining DeepMind, Iason was a Fellow in Politics at St John’s College, Oxford, and a member of the Centre for the Study of Social Justice (CSSJ). He holds a doctorate in Political Theory from the University of Oxford and spent a number of years working for the United Nations in post-conflict environments.

About the lecture series:

The Deep Learning Lecture Series is a collaboration between DeepMind and the UCL Centre for Artificial Intelligence. Over the past decade, Deep Learning has evolved into the leading artificial intelligence paradigm, providing us with the ability to learn complex functions from raw data with unprecedented accuracy and scale. Deep Learning has been applied to problems in object recognition, speech recognition, speech synthesis, forecasting, scientific computing, control and many more. The resulting applications are touching all of our lives in areas such as healthcare and medical research, human-computer interaction, communication, transport, conservation and manufacturing, among many other fields of human endeavour. In recognition of this huge impact, the 2019 Turing Award, the highest honour in computing, was awarded to pioneers of Deep Learning.

In this lecture series, research scientists from DeepMind, a leading AI research lab, deliver 12 lectures on an exciting selection of topics in Deep Learning, ranging from the fundamentals of training neural networks, through advanced ideas around memory, attention and generative modelling, to the important topic of responsible innovation.