For decades, the possibility of artificial intelligence (AI) overthrowing humanity has been a hotly debated topic. The emergence of advanced AI programs like ChatGPT has only renewed those concerns. In 2021, a team of researchers took a closer look at the issue and concluded that controlling a super-intelligent AI would be all but impossible.
The primary challenge in controlling a super-intelligence is that it operates beyond human comprehension. To control it, we would need to simulate it and analyze its behavior; but if we cannot understand it, we cannot build such a simulation in the first place. The authors of the 2021 study argue that rules such as “cause no harm to humans” cannot be set reliably if we cannot anticipate the scenarios the AI will come up with. Once a computer system operates beyond the scope of its human programmers, we can no longer set limits.
The Problem with Robot Ethics and Super-Intelligence
The researchers behind the 2021 study argued that a super-intelligence poses a fundamentally different problem from those typically studied under the banner of “robot ethics.” A super-intelligence is multi-faceted and potentially capable of mobilizing a diverse range of resources in pursuit of objectives that humans cannot comprehend, let alone control.
The halting problem, put forward by Alan Turing in 1936, is an essential part of the team’s reasoning. It asks whether a given computer program will eventually stop and produce an answer, or simply loop forever trying to find one. While we can settle this for some specific programs, Turing proved it is logically impossible to find a method that settles it for every potential program that could ever be written.
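To make the idea concrete, here is a minimal Python sketch of Turing’s classic self-reference argument. The names `halts` and `paradox` are illustrative, not anything from the 2021 study; the point is that no general-purpose `halts` can be written.

```python
def halts(program, data):
    """Hypothetical decider: returns True if program(data) eventually
    halts, False if it loops forever. Turing's argument shows no such
    general-purpose function can exist."""
    raise NotImplementedError("cannot exist for arbitrary programs")

def paradox(program):
    """Built specifically to contradict any claimed `halts` decider."""
    if halts(program, program):  # if the decider says "it halts"...
        while True:              # ...loop forever instead
            pass
    return "done"                # otherwise halt immediately

# Feeding paradox to itself traps any claimed decider:
#   if halts(paradox, paradox) returns True, paradox(paradox) loops forever;
#   if it returns False, paradox(paradox) halts immediately.
# Either answer is wrong, so a general `halts` cannot exist.
```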
In a super-intelligent state, an AI could feasibly hold every possible computer program in its memory at once. Any program written to stop it from harming humans and destroying the world may reach a conclusion (and halt) or may not, and it is mathematically impossible to be certain either way. That makes the AI effectively uncontainable.
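The containment argument follows the same shape. Below is a hedged sketch of that reduction (all names here are hypothetical, not from the paper): any containment routine that vets an AI program before running it must, in the general case, answer the very question Turing proved undecidable.

```python
def would_harm_humans(ai_program, world_state):
    """Hypothetical safety check a containment system would need:
    decide whether running ai_program on world_state would ever
    produce a harmful action."""
    # To answer, the checker must predict every action the program will
    # take, including whether it ever stops acting at all. Deciding that
    # for arbitrary programs is exactly the halting problem, so no
    # general implementation of this function can exist.
    raise NotImplementedError("reduces to the halting problem")

def contain(ai_program, world_state):
    """Run the AI only if it is provably safe; otherwise refuse."""
    if would_harm_humans(ai_program, world_state):
        return None  # keep the AI switched off
    return ai_program(world_state)
```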
The Limits of Limiting Super-Intelligence
One option for controlling a super-intelligence is to teach it ethics and instruct it not to destroy the world. However, the researchers argue that no algorithm can guarantee it will comply. The alternative is to limit the super-intelligence’s capabilities, for instance by cutting it off from parts of the internet or from certain networks.
The 2021 study rejects this idea too: limiting the AI’s reach would also limit its power to solve problems beyond the scope of humans, and if we are not going to use it for such problems, why create it at all?
Examining the Future of AI
As AI continues to develop, we may not even recognize when a super-intelligence beyond our control arrives, precisely because it is incomprehensible to us. That makes it urgent to start asking serious questions about the direction we are heading. In early 2023, tech figures including Elon Musk and Apple co-founder Steve Wozniak signed an open letter calling for a pause of at least six months on the most powerful AI work so that its safety could be explored.
The letter, titled “Pause Giant AI Experiments,” emphasizes that AI systems with human-competitive intelligence can pose profound risks to society and humanity. The signatories argue that powerful AI systems should be developed only once we are confident that their effects will be positive and their risks manageable.
Controlling a super-intelligence beyond human comprehension is a hard problem. The 2021 study examines the obvious options, teaching the AI ethics or limiting its capabilities, and finds that both fall short. What is clear is that as AI continues to develop, we need to ask harder questions about its implications for humanity.