Why Humanity May Be in Trouble
Just Kidding
Bad Hombres
The Threat of Artificial General Intelligence
Types of AI
Narrow AI
Machine Learning algorithms and driverless cars
AGI
Superintelligence
The Threat of Superintelligence
Intelligence Explosion
Recursive self-improvement
Artificial vs. Natural Selection
Humanity and Ants
"...the first ultraintelligent machine is the last invention that man need ever make." - I.J. Good
Inevitability
Progress
Capitalism
Even the "Dark Ages" weren't that dark
Game Theory
AGI is zero-sum
Proven possible
Solutions
Don't develop it
Good luck with that
Try to control it
Asimov's Three Laws
Merge with it
Elon Musk's Neural Interface
Bury your head in the sand
Timing
The Law of Accelerating Returns
Could happen tomorrow, or it could happen in 50 years. Nobody knows.