AI Specialist
March 19, 2024

Imagine a world where artificial intelligence (AI) spirals out of our control, leading to catastrophic consequences such as humanity’s extinction. This grim possibility is not a sci-fi scenario but the subject of a stark warning from a leading researcher, backed by none other than tech titan Elon Musk.

The reality check: Lack of proof for AI control

Despite the hype surrounding AI’s potential, there’s a crucial gap in our understanding—we have no concrete evidence showing that we can rein in AI once it surpasses certain thresholds. It’s like trying to tame a wild beast without a leash. This lack of assurance raises serious doubts about our ability to control AI’s trajectory, prompting the researcher to sound the alarm.

An Elon Musk-backed researcher is raising the alarm about AI’s potential threat to humanity after finding no evidence that the technology can be controlled. The billionaire has provided funding to Dr. Roman V. Yampolskiy, an expert in AI safety, to investigate cutting-edge intelligent systems, the subject of his upcoming book AI: Unexplainable, Unpredictable, Uncontrollable.

The book explores how AI has the “potential to cause an existential catastrophe” and could drastically alter society, not always to our benefit.

He proposed that for AI to be fully controllable, it must be transparent, limited in scope, comprehensible in human language, and modifiable with “undo” options.

“It makes sense that a lot of people think this is the biggest issue that humanity has ever faced,” Yampolskiy said in a statement.

The fate of the universe hangs in the balance, and the outcome could be either prosperity or extinction.

OpenAI CEO downplays fears of killer robots, emphasises subtle societal risks of AI

Sam Altman, CEO of OpenAI, said on 13th February 2024 that while AI raises many issues and worries, the gravest dangers to humanity may not come from robots revolting against their creators.

Some scenarios, he said, make it easy to see how things could go terribly wrong. Speaking at the World Governments Summit in Dubai, he added: “And I’m not that interested in the killer robots walking on the street direction of things going wrong.”

“What really interests me are the very subtle misalignments in society where things just go horribly wrong without any malicious intent on the part of the people.” 

Musk is reported to have funded Yampolskiy’s work in the past, although the amount and the specifics remain unclear.

Yampolskiy thanked Musk in a 2019 Medium blog post for “partially funding his work on AI safety”.

Navigating the complex landscape of AI safety

The Tesla CEO has also raised concerns about AI; in 2023, he and more than 33,000 other signatories, including business leaders and researchers, signed an open letter published by the Future of Life Institute.

According to the letter, AI laboratories are currently “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one can reliably control, understand, or predict, not even their creators”.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter continued.

Musk has said he wants to guarantee that the AI and robots used by his firms remain under human control. One of his goals is to ensure that the Tesla robot, Optimus, is incapable of doing harm to humans.

When he revealed Optimus in 2021, he declared: “We’re setting it such that it is at a mechanical level, at a physical level, that you can run away from it and most likely overpower it.”

Yampolskiy’s forthcoming book appears to share the same worries.

Whatever advantages such models offer, he expressed concern that these recently developed tools pose hazards to humanity.

Unravelling the complexities of AI control: A cautionary perspective

Over the past few years, AI has become able to write code, compose emails, and answer queries.

These days, these kinds of technologies are used to detect disease, develop new medications, and even locate and target enemies in combat.

Researchers also project that the technology will reach the singularity by 2045, the point at which it surpasses human intelligence and becomes capable of self-replication, potentially beyond human control.

Yampolskiy questioned: “Why do so many researchers assume that AI control problems are solvable?”

“As far as we are aware, there is no proof or evidence for that.” In his view, it is imperative to show that the problem is solvable before attempting to build a controlled AI.

Although the researcher says he conducted a thorough review to reach his conclusions, it is unclear exactly which literature he consulted.

Yampolskiy explained why he believes AI is uncontrollable, citing the technology’s capacity to learn, adapt, and act semi-autonomously.

Because these capabilities make a system’s possible decisions effectively limitless, he explained, an endless number of safety problems could arise.

And because the technology changes as it operates, humans may not be able to anticipate problems in advance.

Grappling with the complexities of AI ethics and control

If AI remains a “black box” whose decision-making we do not understand, we cannot comprehend its failures or lower the probability of future accidents.

AI systems are already being used to make decisions in finance, employment, healthcare, and security, to name just a few areas.

Such systems ought to be able to explain how they reach their decisions and demonstrate that those decisions are impartial.

“We wouldn’t be able to tell if AI started providing incorrect or manipulative answers if we got used to accepting its answers without question, basically treating it like an Oracle system,” said Yampolskiy.

He also pointed out that as AI’s capability grows, its autonomy increases while our ability to control it declines, and greater autonomy means less safety.

Yampolskiy cautioned that humanity faces a choice: do we become like babies, taken care of but not in control, or do we reject having a helpful guardian and remain in charge and free?

The specialist did offer some recommendations for reducing the hazards, such as building machines that follow human orders precisely, but he drew attention to the potential for contradictory instructions, misinterpretation, or malicious use.

He clarified: “AI in control means that humans are not in control, whereas humans in control can result in contradictory or overtly malevolent orders.”

Most AI safety researchers are trying to work out how to align future superintelligence with human values.

Value-aligned AI will be biased by definition: a pro-human bias, good or bad, is still a bias.

The paradox of value-aligned AI is that a person may explicitly order the system to do something and receive a “no”, while the machine tries to do what the person actually wants.

Humanity is either protected or respected, but not both.

(Tashia Bernardus)

© All content copyright The Hype Economy. Do not reproduce in any form without permission, even if you have a paid subscription.