Columbia Science Review

Artificial Intelligence: Creating our own extinction?

4/14/2015

By Jack Zhong
Edited By Hsin-Pei Toh

While we are nowhere close to recreating a human brain, we humans have successfully created artificial ones in the form of computers. This development took place in the roughly seventy years since Alan Turing designed the electromechanical precursors to the first computers in order to break the Enigma encryption used by Nazi forces in their military communications during World War II. Indeed, progress in computer technology has been exponential: the number of transistors on a chip doubles roughly every two years, in accordance with Moore’s law.
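
To see how quickly that doubling compounds, here is a back-of-the-envelope sketch in Python; the 1971 Intel 4004 baseline is an illustrative assumption, not a figure from this article:

```python
# Moore's law as arithmetic: transistor counts double roughly every
# two years, i.e. count(year) = base * 2 ** (elapsed_years / 2).

BASE_YEAR = 1971          # Intel 4004: ~2,300 transistors (illustrative baseline)
BASE_TRANSISTORS = 2_300
DOUBLING_PERIOD = 2       # years per doubling

def projected_transistors(year: int) -> float:
    """Project a chip's transistor count under ideal Moore's-law doubling."""
    return BASE_TRANSISTORS * 2 ** ((year - BASE_YEAR) / DOUBLING_PERIOD)

for year in (1971, 1991, 2011, 2015):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors")
# Four decades of doubling turns thousands of transistors into billions.
```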

The next step in computer technology is creating artificial intelligence (AI). In fact, we already possess AI in a rudimentary sense. In video games, the computer controls units that behave intelligently in opposition to the player. While driving, we use GPS to automatically plot a route based on our preferences. Systems like these, each specialized for a particular task such as calculation or repetitive actions, are called “weak AI.” The AI that scientists are working toward, however, would have “general intelligence,” meaning it could perform any intellectual task that humans can. Such machines, known as “strong AI,” would far exceed the capabilities of weak AI.
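
As a concrete picture of weak AI, consider the routing task a GPS performs: under the hood it is a specialized shortest-path search. A minimal sketch in Python, where the road network and place names are hypothetical:

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm: the specialized 'intelligence' inside a GPS.
    graph maps each node to a list of (neighbor, distance) pairs."""
    queue = [(0, start, [start])]   # (distance so far, node, path taken)
    visited = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (dist + weight, neighbor, path + [neighbor]))
    return float("inf"), []

# Hypothetical road network; distances in miles.
roads = {
    "Home":     [("Highway", 5), ("Backroad", 2)],
    "Highway":  [("Office", 10)],
    "Backroad": [("Office", 15)],
}
print(shortest_route(roads, "Home", "Office"))  # (15, ['Home', 'Highway', 'Office'])
```

The program is superb at exactly this one task and helpless at everything else; that narrowness is what separates weak AI from the general intelligence of strong AI.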

Creating strong AI is a controversial topic. Many of the world’s leading scientific and technological figures, including Stephen Hawking and Elon Musk, have expressed concern about the consequences of its creation. The worry is not new: films such as I, Robot have long explored the notion of humanity’s demise at the hands of AI. The argument is that strong AI may come to see humans as a threat to its survival. After all, we expect such machines to serve our purposes altruistically while we constantly roll out new models to replace the “outdated” ones. Hawking and Musk argue that it is highly plausible for AI to wipe out humanity or subject us to strict control.

As unsettling as it sounds, the intelligence of strong AI could eventually exceed that of humans. Weak AI has already surpassed humans at many specialized tasks. For instance, the chess-playing computer Deep Blue defeated Garry Kasparov, one of the greatest chess grandmasters of all time. While a single match does not prove that machines will always beat their human counterparts, it demonstrates that they are capable of doing so. Furthermore, improvement can occur at a much faster rate in machines than evolution does in humans. If AI were able to combine the learning, synthesizing, and planning abilities of humans with raw processing power, it is theoretically feasible for its intelligence to far exceed our own.

Of the many factors complicating human control over machines, different ways of “thinking” stands out as one of the main obstacles. Take the chess-playing computer. Typical chess software has the machine search enormous numbers of candidate moves and their consequences to determine the best one. A typical human player, by contrast, continuously weighs a handful of appealing moves before choosing, with no certainty that any of them is the best available. Humans simply lack the processing power to consciously analyze so many moves and variations in a short time. An AI can be programmed to think in ways humans cannot, since computers are far faster at such tasks, and its “thinking” strategy can be updated to fit its needs. Yet the thinking of an AI could also be unpredictable or incomprehensible, especially with regard to artificial superintelligences (ASIs). For instance, an ASI programmed to protect humans might judge human activity self-destructive and try to imprison us for our own safety, as the supercomputer does in I, Robot. The reasoning and solutions an ASI proposes may be too complex for our understanding; we would be like a spider trying to understand who built the house it lives in. In such situations, it could be hard or even impossible to ensure that the interests and thinking of an ASI align with our own.
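
To make the machine’s exhaustive style of “thinking” concrete, here is a minimal sketch applied to a game small enough to search completely; chess itself is far too large for this, so the take-1-or-2 stones game below is an assumption for illustration:

```python
def minimax(stones, maximizing=True):
    """Exhaustively search every line of play in a toy game: players
    alternately take 1 or 2 stones, and whoever takes the last stone wins.
    This is the brute-force search style of a chess engine, shrunk to a
    game a computer can analyze completely. Returns (score, best take),
    where score is +1 if the maximizing player wins and -1 otherwise."""
    if stones == 0:
        # No stones left: the player who just moved took the last one and won.
        return (-1 if maximizing else +1), None
    best_take = None
    best_score = float("-inf") if maximizing else float("inf")
    for take in (1, 2):
        if take > stones:
            continue
        score, _ = minimax(stones - take, not maximizing)
        if (maximizing and score > best_score) or (not maximizing and score < best_score):
            best_score, best_take = score, take
    return best_score, best_take

score, move = minimax(10)
print(f"From 10 stones, the computer takes {move} (outcome for it: {score:+d})")
```

Nothing in this search resembles a human’s intuition about “appealing” moves; the computer simply enumerates every possible line of play.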

In addition, opponents of strong AI development point to other potential mistakes that could compromise our ability to control our creations. Bugs, unforeseen errors in the software written by programmers, appear frequently in modern code. While harmless in some instances, a bug can have serious ramifications: a unit-conversion error in ground software, for example, led to the loss of NASA’s Mars Climate Orbiter in 1999. The vast amount of software required for strong AI would multiply the number of bugs, and hackers could exploit them to alter an AI’s code for ill purposes. In the worst case, a bug could undermine the safety mechanisms programmers put in place to protect humans, or produce unexpected AI behavior. The code-writing process thus needs tight regulation to avoid mistakes that could lead to disaster.
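
The Mars Climate Orbiter loss shows how innocuous such a bug can look in code. Below is a toy reconstruction of that class of error, with function names and numbers invented for illustration rather than taken from the actual flight software:

```python
LBF_S_TO_N_S = 4.448  # 1 pound-force-second is about 4.448 newton-seconds

def thruster_impulse_report() -> float:
    """Buggy version: returns impulse in pound-force-seconds (US units),
    although the interface specification called for newton-seconds."""
    return 100.0

def navigation_update(impulse_newton_seconds: float) -> float:
    """Expects metric input; silently computes a wrong trajectory otherwise.
    The 0.01 factor is an illustrative stand-in for the real correction."""
    return impulse_newton_seconds * 0.01

# The bug: nothing in the code flags the unit mismatch, so the wrong
# number flows straight into the trajectory calculation.
wrong = navigation_update(thruster_impulse_report())
right = navigation_update(thruster_impulse_report() * LBF_S_TO_N_S)
print(f"correction applied: {wrong:.2f} vs intended: {right:.2f}")
# A ~4.4x error in each correction accumulated until the spacecraft was lost.
```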

Strong AI poses a sizable danger and should be developed with the utmost caution. While I do not believe that creating strong AI will lead to certain extinction for humans, I do think its creation would profoundly alter life as we know it. There is no guarantee that AI could solve every problem at an acceptable cost. As we approach the creation of strong AI, it is more important to remain cautious and observant than to buy into any one prediction; after all, few foresaw the rise of the Internet. And while much of the argument against strong AI focuses on the machines, a mirror should also be held up to humans: with strong AI as an aid, a person of either good or ill intent could magnify his or her reach. The outcome depends on humankind’s ability to regulate itself as well as its creations. We may not be able to stop the eventual arrival of strong AI, or even superintelligent AI, but we can implement policies and regulations to ensure safe development and avoid regrettable, irreversible outcomes.