Wired's founding executive editor Kevin Kelly wrote a 5,000-word takedown of "the myth of a superhuman AI," challenging dire warnings from Bill Gates, Stephen Hawking, and Elon Musk about the potential extinction of humanity at the hands of superintelligent constructs. Slashdot reader mirandakatz calls it an "impeccably argued debunking of this pervasive myth." Kelly writes:
Buried in this scenario of a takeover of superhuman artificial intelligence are five assumptions which, when examined closely, are not based on any evidence...
1.) Artificial intelligence is already getting smarter than us, at an exponential rate.
2.) We'll make AIs into a general purpose intelligence, like our own.
3.) We can make human intelligence in silicon.
4.) Intelligence can be expanded without limit.
5.) Once we have exploding superintelligence it can solve most of our problems...
If the expectation of a superhuman AI takeover is built on five key assumptions that have no basis in evidence, then this idea is more akin to a religious belief -- a myth.
Kelly proposes "five heresies" that he says have more evidence to support them -- including the prediction that emulating human intelligence "will be constrained by cost" -- and he likens the effect of artificial intelligence on society to that of the physical power of machines.