By Jack Zhong
Edited by Hsin-Pei Toh

While we are nowhere close to recreating a human brain, we humans have successfully created artificial ones in the form of computers. This development took little more than half a century: during World War II, Alan Turing designed the electromechanical precursors to the first computers in order to break the Enigma encryption used in Nazi military communications. Since then, progress in computer technology has been exponential; the number of transistors on a chip has doubled roughly every two years, in accordance with Moore's law.

The next step in computer technology is creating Artificial Intelligence (AI). In fact, we already possess AI in a rudimentary sense. In video games, the computer operates units that behave intelligently in opposition to the player. While driving, we rely on GPS navigation to plan routes automatically based on our preferences. Systems like these, each specialized for a particular task such as calculation or repetitive action, are called "weak AI." The AI that scientists are working toward, by contrast, would have "general intelligence," meaning it could perform any task that humans can. Such machines, known as "strong AI," would far exceed the capabilities of weak AI.

Creating strong AI is a controversial topic. Many of the world's leading scientific and technological figures, including Stephen Hawking and Elon Musk, have expressed concerns about the effects of its creation. This concern is not new; films such as I, Robot have explored the notion of humanity's demise at the hands of AI. The argument is that strong AI may come to see humans as a threat to its survival. After all, we expect these machines to serve our purposes altruistically while we constantly roll out new models to replace the "outdated" ones. Hawking and Musk argue that it is entirely plausible for AI to wipe out humans or subject us to strict control.

As unpalatable as it sounds, the intelligence of strong AI could eventually exceed our own. Weak AI has already surpassed humans in many specialized tasks. The chess-playing computer Deep Blue, for instance, defeated Garry Kasparov, one of the greatest chess grandmasters of all time. A single match does not prove that a chess-playing machine will always trump its human counterpart, but it does show that machines are capable of such feats. Furthermore, evolution can occur at a much faster rate in machines than in humans. If an AI could combine the human abilities to learn, synthesize, and plan with a machine's raw processing power, its intelligence could in theory far exceed ours.

Of the many factors complicating human control over machines, their different way of "thinking" stands out as one of the main obstacles. Take the chess-playing computer. Typical chess software has the computer systematically analyze possible moves and their outcomes to determine the best one. A typical human player, by contrast, weighs a few appealing moves before choosing, with no certainty that any of them is the best available. Humans simply lack the processing power to consciously analyze every possible move and variation in a short time. An AI could be programmed to think in ways humans cannot, since computers process some tasks far faster, and its "thinking" strategy could be updated to fit its needs, as the sketch below illustrates.
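To make this brute-force style of machine "thinking" concrete, here is a minimal sketch of exhaustive game-tree search (minimax), the strategy described above. Chess itself has far too many positions to search exhaustively, so the sketch uses a hypothetical stand-in: a tiny take-away game in which players remove one to three stones and whoever takes the last stone wins.

```python
# A minimal sketch of exhaustive game-tree search (minimax): the machine
# scores every legal move by playing out every possible continuation to
# the end of the game. The take-away game below is a hypothetical
# stand-in; real chess engines must add pruning and heuristics because
# the full chess tree is astronomically large.

def minimax(stones, maximizing):
    """Value of the position for the maximizing player:
    +1 if that player can force a win, -1 otherwise."""
    if stones == 0:
        # The previous player took the last stone and won,
        # so whoever is to move now has lost.
        return -1 if maximizing else 1
    values = [minimax(stones - take, not maximizing)
              for take in (1, 2, 3) if take <= stones]
    return max(values) if maximizing else min(values)

def best_move(stones):
    """Exhaustively score every legal move and return the best one."""
    moves = [take for take in (1, 2, 3) if take <= stones]
    return max(moves, key=lambda take: minimax(stones - take, False))

if __name__ == "__main__":
    # With 10 stones, taking 2 leaves the opponent a losing position
    # (a multiple of 4), which exhaustive search discovers on its own.
    print(best_move(10))  # -> 2
```

Even in this toy game, the program "thinks" by brute enumeration of every line of play, something no human does consciously; a machine's strategy is whatever its code says it is, and it can be rewritten at will.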
Yet the thinking of an AI could be unpredictable or incomprehensible, especially in regard to artificial superintelligences (ASIs). For instance, an ASI programmed to protect humans might find human activity self-destructive and try to imprison us for our own safety, as the supercomputer does in I, Robot. The reasoning and solutions proposed by an ASI may be too complex for our understanding; we would be like a spider trying to understand who built the house it lives in. In such situations, it could be difficult or even impossible to ensure that an ASI's thinking and interests align with our own.

In addition, opponents of strong AI development point out other potential mistakes that could compromise our ability to control our creations. Bugs, unforeseen mistakes in the code that programmers write, appear frequently in modern software. While harmless in some instances, a bug can have serious ramifications. NASA's Mars Climate Orbiter, for instance, was lost because one piece of navigation software supplied thrust data in imperial units while another expected metric units. The vast amount of software necessary for strong AI would multiply the number of bugs, and hackers could exploit those bugs to alter an AI's code for ill purposes. In the worst case, a bug could undermine the very safety mechanisms programmers put in place to protect humans, or produce unexpected AI behavior. The development process thus needs tight regulation to avoid mistakes that could lead to disaster.

Strong AI poses a sizable danger and should be developed with the utmost caution. While I do not believe that creating strong AI would lead to certain extinction for humans, I do think its creation would profoundly alter life as we know it. There is no guarantee that AI could solve every problem at an acceptable cost. As we approach the creation of strong AI, it is more important to remain cautious and observant than to buy into one prediction or another; after all, many people did not foresee the rise of the Internet.

While much of the argument against strong AI focuses on the machines themselves, a mirror also needs to be placed in front of humans. With a strong AI as an aid, a person of either good or ill intent could magnify his or her reach. The outcome depends on humankind's ability to regulate itself as well as its creative process. We may not be able to stop the eventual creation of strong AI, or even superintelligent AI, but we can implement policies and regulations to ensure safe development and avoid regrettable, irreversible outcomes.