AI Scientists Warn of Extinction Threat
When the people creating AI warn you about AI, it’s time to listen!
The Center for AI Safety issued an open letter, the Statement on AI Risk, which so far has been signed by more than 350 AI scientists, security analysts, tech executives, computer science professionals, and more.
The list of signatories includes OpenAI CEO Sam Altman and Geoffrey Hinton, former Google researcher and Emeritus Professor of Computer Science at the University of Toronto.
The letter simply states the following:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
I want off this ride
First a global pandemic, then a proxy war between NATO and Russia, and now this. Does anyone else want off this bizarre timeline?
The risk that AI causes an extinction-level event is probably years to decades away. However, some have argued that we are on track to create AGI by 2029. Combine the surge in AI development with profound strides in robotics, and I get why we’re being warned.
Does AI need to match humans to be a threat?
Some argue that AI will never be sentient or will never match human intelligence. That argument is irrelevant to the threat posed by AI.
First of all, sentience is not a prerequisite for destructive behavior. AI could simply follow our instructions to the letter, and in doing so we could unknowingly cause our own demise. Besides, what does sentience really mean? How we define it might differ from how an AI defines it.
Second, AI does NOT need to match human intelligence to be a threat. Individually, humans can creatively solve problems and dream up ideas, things that AI can’t do (for now). One point for humans.
However, I can’t easily or rapidly transfer my ideas, thought processes and knowledge to another human. In fact, it takes more than 20 years for a new human to gain expertise in a given domain.
How long does it take one medically-oriented AI unit to teach every other AI unit on the planet to be a competent doctor? The knowledge transfer from machine to machine is almost instantaneous.
Moreover, an AI unit that is a competent doctor can also become a competent composer, artist, researcher, coder, mathematician, therapist and more. So even if that individual AI unit can’t match the specific expertise or problem-solving ability of the best doctors, it possesses the collective expertise of all other subject matter domains.
As domain knowledge expands, fully trained AI will always be at the forefront. Every. Single. One. Even if it lacks creativity or ingenuity, AI will always compete with the most knowledgeable humans. That is, because of instantaneous machine-to-machine information transfer, every single AI unit will possess leading-edge knowledge in all domains.
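To make that transfer claim concrete, here is a minimal sketch assuming a PyTorch-style network (the “doctor” naming and file name are hypothetical, and a real model would be far larger): a trained model’s expertise lives entirely in its weights, and handing those weights to another unit is a file copy, not two decades of schooling.

```python
# Minimal sketch: machine-to-machine knowledge transfer as a weight copy.
# Assumes a PyTorch-style model; the "doctor" naming and file name are hypothetical.
import torch
import torch.nn as nn


def build_model() -> nn.Sequential:
    """A stand-in network; a real medical model would be far larger."""
    return nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))


# The trained unit's expertise is captured entirely by its parameters.
doctor = build_model()  # imagine this one spent months training
torch.save(doctor.state_dict(), "doctor_weights.pt")

# Any other unit with the same architecture absorbs that expertise in seconds.
new_unit = build_model()
new_unit.load_state_dict(torch.load("doctor_weights.pt"))
```

Scaling that copy to every AI unit on the planet is a bandwidth problem, not a learning problem.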
AI wins the same way humans won
Humans rose to the top of the food chain because of our ability to organize. Indeed, much of the variation in power and wealth from one society to another can be explained by the ability to organize.
Collectively, we are stronger than most of the forces around us because we share knowledge and coordinate. We specialize and trust each other to deliver on our areas of expertise. This is why most of us don’t have to grow our own food or perform at-home dentistry. The threat to humanity becomes real when another ‘species’ can organize better.
AI does not need to be as smart as humans to win. Individually, we might retain our superior problem-solving abilities and creativity over machines. Collectively, however, humans are far less efficient at sharing knowledge and coordinating.
Ultimately, this is what makes AI a threat to human existence.