Why Would AI Want to do Bad Things? Instrumental Convergence
Robert Miles AI Safety

Published Mar 24, 2018

How can we predict that AGI with unknown goals would behave badly by default?

The Orthogonality Thesis video: Intelligence and Stupidity: The Orthogonality Thesis
Instrumental Convergence: https://arbital.com/p/instrumental_co...
Omohundro 2008, Basic AI Drives: https://selfawaresystems.files.wordpr...

With thanks to my excellent Patrons at patreon.com/robertskmiles:

Jason Hise
Steef
Jason Strack
Chad Jones
Stefan Skiles
Jordan Medina
Manuel Weichselbaum
1RV34
Scott Worley
JJ Hepboin
Alex Flint
James McCuen
Richárd Nagyfi
Ville Ahlgren
Alec Johnson
Simon Strandgaard
Joshua Richardson
Jonatan R
Michael Greve
The Guru Of Vision
Fabrizio Pisani
Alexander Hartvig Nielsen
Volodymyr
David Tjäder
Paul Mason
Ben Scanlon
Julius Brash
Mike Bird
Tom O'Connor
Gunnar Guðvarðarson
Shevis Johnson
Erik de Bruijn
Robin Green
Alexei Vasilkov
Maksym Taran
Laura Olds
Jon Halliday
Robert Werner
Paul Hobbs
Jeroen De Dauw
Konsta
William Hendley
DGJono
robertvanduursen
Scott Stevens
Michael Ore
Dmitri Afanasjev
Brian Sandberg
Einar Ueland
Marcel Ward
Andrew Weir
Taylor Smith
Ben Archer
Scott McCarthy
Kabs Kabs
Phil
Tendayi Mawushe
Gabriel Behm
Anne Kohlbrenner
Jake Fish
Bjorn Nyblad
Jussi Männistö
Mr Fantastic
Matanya Loewenthal
Wr4thon
Dave Tapley
Archy de Berker
Kevin
Vincent Sanders
Marc Pauly
Andy Kobre
Brian Gillespie
Martin Wind
Peggy Youell
Poker Chen
Kees
Darko Sperac
Paul Moffat
Noel Kocheril
Jelle Langen
Lars Scholz
