Columbia Science Review

Integrated Information Theory: A Theory of Consciousness

2/25/2021

Illustrated by Rebecca Siegel
By Kevin Wang

What is consciousness? Philosophers have tackled this question for hundreds of years, yet have always fallen short of reaching an objective conclusion. With simple technology and a limited understanding of the human brain, they had trouble grasping the workings of our minds. Many thought that our selves and our consciousness were ethereal—stemming from a magical dimension, rather than rooted in a physiological source.

However, as technology has developed, neuroscientists have gained a better understanding of how the human brain works. With this research has come a better understanding of the mind, and the ability to tackle consciousness from a purely biological perspective. Many theories of consciousness have been proposed; one such theory is Integrated Information Theory, or IIT.


IIT is an attempt to build a theory of consciousness from the top down rather than the bottom up. In other words, rather than philosophically theorizing about what consciousness might be, it starts from consciousness as humans actually experience it and attempts to identify the qualities that define a conscious being.

IIT begins by listing a series of axioms: basic properties that any conscious experience must have. The theory posits five such axioms; let's begin by discussing two of the major ones.

The first axiom is intrinsic existence: consciousness exists, and your experiences are real. This seems blatantly obvious and hardly worth stating. But the axiom actually highlights an important fact about consciousness: it is the one thing we can be certain of. This harks back to the thinking of the philosopher René Descartes, who coined the phrase "I think, therefore I am." Descartes posited that we can question nearly any fact about the world: that the sky is blue, that the year is 2020, that 2 + 2 = 4. Our senses may lie to us, and we may make errors in logic. The one thing we cannot doubt, however, is that we are thinking, for to deny it would itself be an act of thought. Likewise, when we consider consciousness and experience, we cannot deny that they exist, for without consciousness we could not consider them in the first place.

The second relevant axiom is integration: our experience cannot be partitioned, or separated into independent parts. For example, suppose you look at a white cue ball. The experience of seeing a white cue ball is not the sum of an experience of whiteness and a separate experience of a ball; indeed, it's unclear what it would even mean to see a cue ball without any color. Rather, experience is integrated: any given experience is simply that, one experience, and cannot be divided into parts.

From these axioms, the theory derives certain postulates: more complex properties that a physical system must have in order to support consciousness. One major postulate concerns cause-effect relationships in an integrated system. The different parts of a conscious system do not operate separately; rather, they have causal power over one another, with each part directly affecting the others. As a result, we do not have multiple senses that are simply added up, but one unified experience.

This is best explained with the example of the human body. At first glance, it does not appear to be truly integrated. We have distinct sense organs: eyes, ears, mouth, and nose, each processing a different sense. What you hear does not seem to affect what you see. This appearance of separateness falls apart, however, once we take a closer look at the brain, specifically the cerebral cortex, where sensory information is processed. Information from the sense organs arrives there as signals from neurons, and the way these signals are processed is not separate. While there are different regions of the cerebral cortex for different senses, such as the visual cortex and the auditory cortex, they do not operate in isolation. Rather, each sense has a direct effect on the others: a cause-effect relationship.

The exact mechanism by which this occurs is not yet certain, but the effect can be verified empirically: the brain's responses to combined sensory inputs have been measured to be non-linear. In other words, the information from each sense is not simply added together; certain components are exaggerated or diminished depending on the mix of incoming signals. What we perceive is therefore created by an integration of our senses, not by a mere sum.
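As a toy illustration of what non-linear summation looks like (a sketch of my own, not a model of real cortical data), consider a neuron whose firing rate saturates with total input drive. Two weak inputs delivered together can evoke a response far greater than the sum of the responses to each input alone:

```python
import math

def firing_rate(drive):
    """Toy saturating response: a sigmoid of total input drive (spikes/s)."""
    return 100 / (1 + math.exp(-(drive - 5)))

visual, auditory = 3.0, 3.0                  # two weak, sub-threshold inputs
parts = firing_rate(visual) + firing_rate(auditory)
whole = firing_rate(visual + auditory)

print(round(parts, 1))   # ~23.8 spikes/s if the senses were simply added
print(round(whole, 1))   # ~73.1 spikes/s for the combined input
```

Because the response curve is non-linear, the combined response cannot be recovered by adding the individual ones: the mix of signals matters, which is exactly the signature of integration described above.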

The developers of IIT have even proposed a way to quantify the "consciousness" of a system with a non-negative value called Φ (phi). The calculations behind Φ are complex, but they essentially measure how much is lost when a system is partitioned. If the parts, taken separately, behave just as they did when connected, then the system isn't very integrated, and Φ is low. But if the parts stand in cause-effect relationships, each able to affect the others, then partitioning changes the system's behavior dramatically, yielding a high Φ.
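Real Φ calculations are far more involved, but the core idea, comparing how much a whole system "knows" about its own next state versus what its parts know separately, can be sketched for a two-node Boolean network. The `integration` measure below is a simplified stand-in of my own devising, not IIT's actual Φ:

```python
from collections import Counter
from itertools import product
from math import log2

def mutual_info(pairs):
    """I(X;Y) in bits, for a uniform distribution over the given (x, y) pairs."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def integration(step):
    """Whole-system predictive information minus the sum over its two parts.

    `step` maps a 2-bit state (a, b) to the next state; states are taken to be
    uniformly distributed. A positive value means the whole predicts its own
    future better than the parts can when considered in isolation.
    """
    states = list(product([0, 1], repeat=2))
    whole = mutual_info([(s, step(s)) for s in states])
    parts = sum(mutual_info([(s[i], step(s)[i]) for s in states]) for i in (0, 1))
    return whole - parts

# Two independent nodes, each copying its own previous state: nothing is lost
# by cutting the system apart, so integration is zero.
print(integration(lambda s: (s[0], s[1])))                 # 0.0

# Each node becomes the XOR of both previous states: neither part alone
# predicts anything, yet the whole is fully predictive.
print(integration(lambda s: (s[0] ^ s[1], s[0] ^ s[1])))   # 1.0
```

The XOR system scores high precisely because taking it apart destroys its behavior, while the copy system scores zero because its parts work just as well alone, mirroring the distinction Φ is designed to capture.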

This theory is revolutionary, not only for offering a biologically grounded account of consciousness, but for its implications for AI. If consciousness can be quantitatively measured, it seems plausible that it could be constructed in machines: machines that are increasingly advanced, with higher and higher levels of consciousness. Indeed, there is no reason why the human experience should be the pinnacle of consciousness. If consciousness lies on a sliding scale, we could conceivably create a machine that somehow experiences a higher level of reality than we do.

It is hard to picture what this would look like, or even mean, but the implications of IIT are striking. We may soon be navigating a future in which machines demonstrate human-like consciousness; indeed, they may already be beginning to. Most importantly, though, this theory suggests that our own consciousness is not a magical construct, but rather the product of the complex inner workings of our brains.


1 Comment
Grant Castillou
10/26/2021 11:22:45 am

It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean, can any particular theory be used to create a human adult level conscious machine. My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC at Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution, and that humans share with other conscious animals, and higher order consciousness, which came to only humans with the acquisition of language. A machine with primary consciousness will probably have to come first.

The thing I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990's and 2000's. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar's lab at UC Irvine, possibly. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461





Columbia Science Review
© COPYRIGHT 2022. ALL RIGHTS RESERVED.
Photos used under Creative Commons from driver Photographer, BrevisPhotography, digitalbob8, Rennett Stowe, Kristine Paulus