Question
In the movie The Terminator (1984), the U.S. put its nuclear arsenal under the control of the Skynet software program. This was done in order to remove erroneous or capricious human decision making from the system. Skynet began to learn at a rapid rate, became self-aware, and quickly concluded that humans were a threat to its existence. Government analysts detected this and tried to pull the plug on Skynet. In retaliation, Skynet launched nuclear missiles against Russia, correctly anticipating a nuclear counterattack. In the aftermath of the worldwide devastation, Skynet created an army of hunter-killer machines and other weaponry in order to round up and exterminate all humans. Of course, in the end, humans prevailed. An entertaining science fiction story. The operative word in that literary label is fiction, not science. Right?
AI practitioners work on knowledge representation and reasoning, machine learning, natural language processing, machine vision, robotics, and related areas. Progress in these areas has meant progress in automation and increased productivity. Of course, automation has always implied some lost employment, but otherwise the benefits to society from productivity increases are substantial.
Thus, you would think that AI has been a good thing, and that we should support its development. If so, you might be surprised to learn that many prominent scientists, for example Stephen Hawking and Elon Musk, have seen AI as a threat. They fear that humans may lose control of AI in the very long run. As time goes on, computers and networks will become faster. Machine learning will improve, giving rise to intelligence and knowledge that humans cannot attain. Computer systems will learn to improve their own software and hardware without human intervention. Individual computer systems could develop ways to connect and work together over large-scale networks. At some point, computer systems could see their own claim on the world's resources, and their own goals, as superior to human claims. At that point, our status, if not our existence, would be imperiled.
Musk suggests that a U.S. government agency study the growth of AI and propose rules governing its long-term development. Musk is an initial investor in the Future of Life Institute, which is dedicated to research on controlling AI development. The Institute's 2017 open letter stated: "The progress in AI research makes it timely to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI." Many thousands of scientists, and other prominent people, signed this open letter.
Musk and eight other investors have pledged to invest a billion dollars in the OpenAI company. The company's charter states: "OpenAI's mission is to ensure that artificial general intelligence (AGI), by which we mean highly autonomous systems that outperform humans at most economically valuable work, benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome." OpenAI scientists and Elon Musk believe in the democratization of AI. This means that the development of AI should not be centralized in the hands of a few large corporations. To counter centralization, OpenAI is developing AI software tools and releasing them to the world.
In his last book, published posthumously in October 2018, Stephen Hawking warned us about challenges that threaten mankind, including the long-term development of AI. He says the real risk with AI isn't malice but competence; that is, future AI-based systems will try to achieve their goals, which may not square with ours. Hawking is known for making his points with humor. He cites a typical objection to his fears. A person asked him, "Why are we so worried about AI? Surely humans are always able to pull the plug?" His response: "People asked a computer, 'Is there a God?' And the computer said, 'There is now,' and fused the plug."
Read the text and answer: Should We Be Afraid of AI?