Decoding Face Information in Time, Frequency and Space from Direct Intracranial Recordings of the Human Brain
2008

Decoding Faces in the Human Brain

Sample size: 9 · Reading time: 10 minutes · Evidence: moderate

Author Information

Author(s): Naotsugu Tsuchiya, Hiroto Kawasaki, Hiroyuki Oya, Matthew A. Howard III, Ralph Adolphs

Primary Institution: California Institute of Technology

Hypothesis

The study investigates how invariant aspects of faces (such as identity) and changeable aspects (such as emotional expression) are represented in the ventral and lateral temporal cortex of the human brain.

Conclusion

The study found that both invariant and changeable aspects of faces are better represented in the ventral temporal cortex than in the lateral temporal cortex.

Supporting Evidence

  • Better representation of both invariant and changeable aspects of faces was found in the ventral temporal cortex.
  • Decoding performance was significantly higher for faces compared to checkerboard patterns.
  • Task-relevant attention improved decoding accuracy in the ventral temporal cortex.

Takeaway

Scientists recorded directly from the brains of patients watching faces and found that the underside of the temporal lobe carries more information, both about who a face belongs to and about what expression it shows, than the side of the temporal lobe does.

Methodology

The study used intracranial recordings from 9 neurosurgical patients while they viewed static and dynamic facial expressions, applying decoding analyses to the power spectrogram of electrocorticograms.
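The paper's own analysis pipeline is not reproduced in this summary. As a minimal sketch of the general idea, not the authors' code, the example below decodes two stimulus classes from band-power features of simulated ECoG-like trials with a leave-one-out nearest-centroid classifier; all signal parameters (sampling rate, trial counts, the 70 Hz component) are illustrative assumptions.

```python
# Minimal decoding sketch (illustrative, not the study's pipeline):
# classify trials by mean gamma-band power using nearest centroids.
import numpy as np

rng = np.random.default_rng(0)
fs = 1000                 # sampling rate in Hz (assumed)
n_trials, n_samp = 40, 512

def band_power(trial, fs, lo, hi):
    """Mean spectral power of one trial in the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(trial), d=1 / fs)
    psd = np.abs(np.fft.rfft(trial)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

def make_trial(label):
    """Simulate a trial: class 1 carries an extra 70 Hz oscillation."""
    x = rng.normal(0.0, 1.0, n_samp)
    if label == 1:
        t = np.arange(n_samp) / fs
        x += 0.8 * np.sin(2 * np.pi * 70 * t)
    return x

labels = np.array([0, 1] * (n_trials // 2))
feats = np.array([band_power(make_trial(y), fs, 40, 100) for y in labels])

# Leave-one-out cross-validated nearest-centroid decoding.
correct = 0
for i in range(n_trials):
    train = np.delete(np.arange(n_trials), i)
    c0 = feats[train][labels[train] == 0].mean()
    c1 = feats[train][labels[train] == 1].mean()
    pred = int(abs(feats[i] - c1) < abs(feats[i] - c0))
    correct += int(pred == labels[i])

accuracy = correct / n_trials
print(f"decoding accuracy: {accuracy:.2f}")
```

The study decoded richer information (stimulus category, identity, expression) from full time-frequency spectrograms across many electrodes; this sketch only illustrates the core step of turning neural power features into a cross-validated classification accuracy.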

Potential Biases

Potential biases include the small sample size (nine patients) and the specific clinical population studied (patients with intractable epilepsy), whose cortical physiology may differ from that of healthy individuals.

Limitations

The electrode placements varied across subjects, which may affect the generalizability of the findings.

Participant Demographics

Nine neurosurgical patients with medically intractable epilepsy.

Statistical Information

P-Value

p<0.05

Statistical Significance

p<0.05
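The summary does not state how the p < 0.05 threshold was obtained. A common way to assess whether a decoding accuracy beats chance, shown here as an assumption rather than the authors' documented procedure, is a label-permutation test; the trial count and 75% accuracy below are illustrative.

```python
# Hypothetical label-permutation test for decoding significance.
# All numbers are illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(1)
n_trials = 40
labels = np.array([0, 1] * (n_trials // 2))

# Pretend a decoder got 75% of trials right: flip a quarter of the labels.
preds = labels.copy()
flip = rng.choice(n_trials, n_trials // 4, replace=False)
preds[flip] = 1 - preds[flip]
observed = (preds == labels).mean()

# Null distribution: accuracy of the same predictions against shuffled labels.
null = np.array([
    (preds == rng.permutation(labels)).mean() for _ in range(2000)
])
# Add-one correction so the p-value is never exactly zero.
p_value = (np.sum(null >= observed) + 1) / (len(null) + 1)
print(f"observed accuracy = {observed:.2f}, permutation p = {p_value:.4f}")
```

Because shuffling the labels destroys any real stimulus-response relationship, the fraction of shuffles that match or exceed the observed accuracy estimates the probability of that accuracy arising by chance.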

Digital Object Identifier (DOI)

10.1371/journal.pone.0003892
