Columbia Science Review

This is Your Brain at the Movies

12/14/2012

By Emma Meyers

Neuroscientists these days know a lot about how we see things. They know that when you look at an object, the visual information entering your brain through your eyes must pass through many hierarchical levels of processing to go from light waves to electrical impulses to green coffee cup. They know that during the early stages of this processing in the primary visual cortex (V1), visual information's first cortical stop for decoding, our brains' representations of objects are coarse, as-yet-unidentified conglomerations of edges, luminance, shadow, and color.

What they don’t know, though, is how motion contributes to the rough picture starting to come together in V1.

About a year ago, neuroscientists at UC Berkeley took a major step toward answering this question. Using functional magnetic resonance imaging (fMRI), a technique that detects which areas of the brain are most active, they scanned the brains of subjects watching movies and, through complex computations, were able to reconstruct models of what moving images might look like at the V1 level of processing.
This is what they found:

[Embedded video: reconstructed movie clips shown alongside the original footage]

Pretty cool, huh?

In order to build these reconstructions, Professor Jack Gallant and his colleagues recorded the brain activity of subjects watching movies in an fMRI scanner and developed an algorithm for reconstruction, a sort of “key” or “dictionary” that could be used to translate the raw data from the fMRI into moving images. The subjects were then shown more movie clips, this time to test the reconstruction algorithm. The result is the video above. As we might expect from the V1 level, the images are coarse and rely heavily on edges, color, and contrast between light and shadow to create objects, but they’re surprisingly similar to the original clips.
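The actual pipeline Gallant and colleagues used was far more sophisticated, but the core idea described above — fit a model that predicts brain responses from movie features, then reconstruct a new clip by finding the library entry whose predicted response best matches the measured one — can be sketched in toy form. Everything below (the linear stimulus-to-voxel model, ridge regression, the nearest-match step, and all variable names) is an illustrative assumption, not the study's code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: each "clip" is a feature vector (think coarse edge/color
# energies); each clip evokes a pattern of voxel responses.
n_features, n_voxels = 20, 50

# The unknown "brain": a fixed linear mapping from clip features to voxels.
W_true = rng.normal(size=(n_features, n_voxels))

# Training phase: clips shown in the scanner and the (noisy) responses
# they evoke.
train_clips = rng.normal(size=(200, n_features))
train_resp = train_clips @ W_true + 0.1 * rng.normal(size=(200, n_voxels))

# Fit an encoding model via ridge regression -- the "key" that predicts a
# voxel pattern for any candidate clip.
lam = 1.0
W_hat = np.linalg.solve(
    train_clips.T @ train_clips + lam * np.eye(n_features),
    train_clips.T @ train_resp,
)

# Test phase: observe the response to one held-out clip, then "reconstruct"
# by picking the library clip whose predicted response matches it best.
library = rng.normal(size=(500, n_features))
target_idx = 123
test_resp = library[target_idx] @ W_true + 0.1 * rng.normal(size=n_voxels)

predicted = library @ W_hat                       # (500, n_voxels)
errors = np.linalg.norm(predicted - test_resp, axis=1)
best = int(np.argmin(errors))
print(best)
```

In the real study, the final reconstruction averaged the best-matching natural movie clips rather than returning a single winner, which is part of why the videos look so blurry and dreamlike.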

Excited about their findings, the Berkeley group suggests that their new technology could one day allow us to watch our own dreams or see into the minds of coma patients. Until then, though, we're a step closer to understanding how we process the world we move through every day.

You can read the full report of this study here, or watch an interesting video of the scientists explaining their work here.
