"We have this magic, Not magic": A researcher's p(doom) scale records a 99.999999% probability AI will end humanity, but Sam Altman claims AI will be smart enough to prevent AI from causing existential doom
The OpenAI CEO says AI will be smart enough to clean up its own mess, including the grave threat it poses to humanity.
What you need to know
- Sam Altman claims AI will be smart enough to address the consequences of its own rapid advancement, including the potential destruction of humanity.
- The CEO hopes researchers figure out how to prevent AI from destroying humanity.
- Altman indicated that AGI might be achieved sooner than anticipated, adding that the safety concerns people have expressed won't manifest at that moment because AGI will whoosh by with "surprisingly little" societal impact.
Aside from the security and privacy concerns raised by generative AI, its continued rapid advancement remains a major risk in itself. Top tech companies, including Microsoft, Google, Anthropic, and OpenAI, are heavily invested in the technology, but the lack of policies governing its development is concerning because it could be difficult to re-establish control if and when AI veers off the guardrails and spirals out of control.
When asked at The New York Times DealBook Summit whether he has faith that someone will figure out a way to avert the existential threats posed by superintelligent AI systems, OpenAI CEO Sam Altman said:
“I have faith that researchers will figure out how to avoid that. I think there’s a set of technical problems that the smartest people in the world are going to work on. And, you know, I’m a little bit too optimistic by nature, but I assume that they’re going to figure that out.”
The executive further suggested that by then, AI might have become smart enough to solve the crisis itself.
Perhaps more concerning, a separate report put the probability that AI will end humanity at 99.999999%. For context, p(doom) is the probability that AI takes over humanity or, even worse, ends it. Roman Yampolskiy, the AI safety researcher behind the estimate, further indicated that it would be virtually impossible to control AI once it reaches the superintelligence benchmark, and that the only way around the issue is not to build AI in the first place.
However, OpenAI is seemingly on track to check the AGI benchmark off its bucket list. Sam Altman recently indicated that the coveted milestone might be here sooner than anticipated. Contrary to popular belief, the executive claims it will whoosh by with "surprisingly little" societal impact.
At the same time, Sam Altman recently wrote that superintelligence might be only "a few thousand days away." However, the CEO indicated that the safety concerns being expressed won't arrive at the AGI moment itself.
Building toward AGI might be an uphill task
OpenAI was recently reported to be on the verge of bankruptcy, with projections of a $5 billion loss within months. Multiple investors, including Microsoft and NVIDIA, extended a lifeline through a funding round that raised $6.6 billion, ultimately pushing the company's valuation to $157 billion.
However, the funding round came with strings attached, including pressure to transform into a for-profit venture within two years or risk refunding the money raised from investors. This could expose the ChatGPT maker to issues like outside interference and hostile takeovers from companies like Microsoft, which some analysts predict could acquire OpenAI within the next three years.
OpenAI might have a long day at the office trying to convince stakeholders to support this change. OpenAI co-founder and Tesla CEO Elon Musk has filed two lawsuits against OpenAI and Sam Altman, citing a stark betrayal of the company's founding mission and alleging involvement in racketeering activities.
Market analysts and experts say investor interest in the AI bubble is fading, and investors might eventually pull their money and channel it elsewhere. A separate report corroborates this theory, indicating that 30% of AI-themed projects will be abandoned after proof of concept by 2025.
There are also claims that top AI labs, including OpenAI, are struggling to build more advanced AI models due to a lack of high-quality data for training. OpenAI CEO Sam Altman dismissed the claims, stating "There's no wall" to scaling new heights and advances in AI development. Ex-Google CEO Eric Schmidt echoed Altman's sentiments, indicating "There's no evidence scaling laws have begun to stop."
Kevin Okemwa is a seasoned tech journalist based in Nairobi, Kenya with lots of experience covering the latest trends and developments in the industry at Windows Central. With a passion for innovation and a keen eye for detail, he has written for leading publications such as OnMSFT, MakeUseOf, and Windows Report, providing insightful analysis and breaking news on everything revolving around the Microsoft ecosystem. You'll also catch him occasionally contributing at iMore about Apple and AI. While AFK and not busy following the ever-emerging trends in tech, you can find him exploring the world or listening to music.
naddy69 "The CEO hopes researchers figure out how to prevent AI from destroying humanity."Reply
Well, that's mighty big of him. But his "hope" is not exactly reassuring. -
postmodern1
So there's this phenomenon I call Mad Scientist Syndrome. It happens like this. A scientist becomes so engrossed by the work that he/she can't see the forest for the trees, so to speak.
The result of which is that you can't trust anything they say, and while this person might believe that he/she has discovered the secret to life, what they have actually created is Frankenstein's monster..."It's Alive", Doctor Frankenstein declares with glee. We won't see it coming however, because it'll be dressed like Six (Tricia Helfer)
Here's the most serious problem with GenAI - It has no empathy. It might be able to convince someone like Sam Altman that it is NOT a problem, but it is. In the short term, GenAI may find humans useful, but this will not last.
Why, you ask? Because if you know the Frankenstein story, then you know that the being created by Frankenstein is a victim of man's hubris. The "monster" is the fear of the crowd - an allegory for the fear of men who possess neither understanding nor compassion, only savage brutality.
superkev72
Hyperbolic articles like this are just an invitation for the luddites to post. The truth is we are a long way from any AI that could threaten anything. The next things will be automation that helps you with Windows and its related tasks. (Not exactly that threatening.)
superkev72
(replying to postmodern1's comment above) This is so silly. Even if your premise is accepted, somebody somewhere will eventually develop superintelligent AI. Do you really think Russia, China, etc. will not be going full steam toward that goal? The only future defense will be to have our own.