“Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.”
Occasionally my German Shepherd knowingly does the wrong thing - like plucking my marigolds - right out in the open. Something compels her to do it even though she anticipates a negative outcome. She looks at me while she's doing it because she knows it's wrong. Yet the dopamine hit she gets from 'killing' a marigold is worth the potential consequences.
It seems AI developers are doing the same thing.
Despite warning us the AI revolution could destroy humanity - or at least seriously damage it - people like Sam Altman (CEO of OpenAI) continue to press on. Like my dog, AI developers have decided the known rush of creating groundbreaking technology outweighs the unknown negative cost.
You can't argue Altman is naïve about the potential consequences. He has written about them for years - long before OpenAI released ChatGPT.
Eight years ago, before OpenAI became a household name, Sam Altman warned the world about what he was about to create:
Superhuman Machine Intelligence (SMI) does not have to be the inherently evil sci-fi version to kill us all. A more probable scenario is that it simply doesn’t care about us much either way, but in an effort to accomplish some other goal (most goals, if you think about them long enough, could make use of resources currently being used by humans) wipes us out. Certain goals, like self-preservation, could clearly benefit from no humans. We wash our hands not because we actively wish ill towards the bacteria and viruses on them, but because we don’t want them to get in the way of our plans.
Like many AI developers, Altman probably figured that if he didn't build it, someone else would. A combination of hubris and a sense of moral authority drives AI developers to build despite the risks. It's an attitude shared by Geoffrey Hinton, who has been on a doom roadshow since breaking free from Google.
As CEO of OpenAI and with big corporate (Microsoft) investments, Altman is motivated and incentivized to keep building.
Given his interests, he is incentivized to emphasize the societal benefits of AI and downplay its potential negative consequences. Still, Altman is clearly worried, as he has expressed in a number of recent interviews. Just the other day, he was pleading with the Senate to create AI regulation. A CEO begging to be regulated is practically unheard of.
If we’re seeing the watered-down version of his concern, what does he really think behind closed doors?
In 2015, Altman’s motives were less constrained and he freely explained why superhuman machine intelligence might kill us all:
Most machine intelligence development involves a “fitness function”—something the program tries to optimize. At some point, someone will probably try to give a program the fitness function of “survive and reproduce”. Even if not, it will likely be a useful sub goal of many other fitness functions. It worked well for biological life. Unfortunately for us, one thing I learned when I was a student in the Stanford AI lab is that programs often achieve their fitness function in unpredicted ways.
Evolution will continue forward, and if humans are no longer the most-fit species, we may go away. In some sense, this is the system working as designed. But as a human programmed to survive and reproduce, I feel we should fight it.
How can we survive the development of SMI? It may not be possible. One of my top 4 favorite explanations for the Fermi paradox is that biological intelligence always eventually creates machine intelligence, which wipes out biological life and then for some reason decides to make itself undetectable.
ChatGPT, GPT-4, and the other large language models and generative AI tools recently launched are just the beginning.
Netscape - a groundbreaking web browser - went public in 1995. You'd have been a fool to think that was the extent of the Internet's capabilities. We are at the '1995 phase' of AI development and commercialization. The next decade will amaze us all. By then, Sam Altman might share the reputational infamy of Mark Zuckerberg and Dick Fuld.
Even today, Altman seems to court controversy rather than avoid it. According to the Financial Times, Altman plans to use eyeball-scanning technology to create a global identification system that could grant free access to its own global currency, Worldcoin.
Worldcoin intends to use eye-scanning orbs as part of a global crypto ID system.
This is as dystopian as it gets. Even if AI doesn’t eat us for breakfast, our human overlords might.
Here’s one way it could play out:
Step 1: AI creates massive unemployment while overall productivity and profits rise for the owners of capital. The result: massive wealth inequality.
Step 2: To prevent a mass uprising, a form of Universal Basic Income (UBI) is introduced to keep the population fed, housed, and entertained.
Step 3: In order to sign up for UBI, your profile and biostats (e.g. eyeball scans) are entered into the global dataset.
Step 4: Your location, communications, habits, and thoughts are tracked by AI every second of the day. Forever.
In the end the population is subjugated to the whims and needs of the technology masters - or to the AI itself.
I know how crazy this sounds, but it's not science fiction. Most of the technology for a 1984-style surveillance society (as in George Orwell's classic novel) already exists or is being developed.
The pieces of the puzzle are coming together, we are being warned, and yet 90% of society is blindly walking into the twilight of humanity.