Brain-model neural similarity reveals abstractive summarization performance
2025

Understanding How Deep Language Models Mimic Human Brain Language Processing

Sample size: 13 · Evidence: high

Author Information

Author(s): Zhang Zhejun, Guo Shaoting, Zhou Wenqing, Luo Yingying, Zhu Yingqi, Zhang Lin, Li Lei

Primary Institution: Beijing University of Posts and Telecommunications

Hypothesis

Hidden layers whose representations are more similar to the human brain's activation patterns play a more critical role in the model's performance on the abstractive summarization task.

Conclusion

The study found that deeper hidden layers of language models encode text in ways increasingly similar to human brain activity, and that layers with higher brain similarity contribute more to summarization performance.

Supporting Evidence

  • As the depth of hidden layers increases, the models’ text encoding becomes increasingly similar to the human brain’s language representations.
  • Manipulating deeper layers leads to a more substantial decline in summarization performance than manipulating shallower layers (see the perturbation sketch after this list).
  • The correlation between hidden layers’ similarity to human brain activity patterns and their impact on model summarization performance was statistically significant.
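
The layer-manipulation result can be pictured in code. The snippet below is a hypothetical illustration of one way to perturb a single transformer block with a PyTorch forward hook; the paper's actual models, layer choice, and perturbation method are not specified here, so `gpt2`, `noise_std`, and `layer_idx` are placeholders, not the authors' setup.

```python
# Hypothetical layer-perturbation sketch (not the authors' code).
# Assumes a GPT-2-style Hugging Face model as a stand-in; the paper's
# models and exact manipulation may differ.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

def perturb_block(layer_idx, noise_std=1.0):
    """Attach a forward hook that adds Gaussian noise to one transformer
    block's hidden states, degrading that layer's representation."""
    block = model.transformer.h[layer_idx]
    def hook(module, inputs, output):
        hidden = output[0]
        noisy = hidden + noise_std * torch.randn_like(hidden)
        return (noisy,) + output[1:]
    return block.register_forward_hook(hook)

# Perturb a deep layer, evaluate summarization, then restore the model.
handle = perturb_block(layer_idx=10)
# ... generate summaries here and measure the performance decline ...
handle.remove()
```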

Takeaway

This study shows that the deeper parts of language models work more like our brains, helping them summarize text better.

Methodology

The study recorded EEG while participants read texts, then compared the evoked brain activity with the internal representations of language models using representational similarity analysis (RSA).
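
A minimal sketch of the RSA comparison this describes: build a representational dissimilarity matrix (RDM) for the EEG responses and for one hidden layer, then correlate the two. Array names and shapes are illustrative assumptions, not the authors' pipeline.

```python
# Minimal RSA sketch (illustrative assumptions, not the authors' pipeline).
# eeg_patterns: (n_texts, n_eeg_features) brain responses per text;
# layer_states: (n_texts, n_hidden) one layer's activations per text.
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    """Representational dissimilarity matrix in condensed form:
    1 - Pearson correlation between every pair of texts."""
    return pdist(patterns, metric="correlation")

def rsa_score(eeg_patterns, layer_states):
    """Spearman correlation between the brain RDM and a layer RDM;
    higher means the layer's representational geometry is more brain-like."""
    rho, _ = spearmanr(rdm(eeg_patterns), rdm(layer_states))
    return rho

# Scoring every hidden layer would then be:
# scores = [rsa_score(eeg_patterns, h) for h in hidden_states_per_layer]
```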

Limitations

The study did not isolate the effects of individual factors that vary across the different models, nor did it analyze changes at the level of individual hidden layers in depth.

Participant Demographics

13 participants, 9 males and 4 females, mean age 24.31 years, all right-handed with normal or corrected-to-normal vision.

Statistical Information

P-Value

p<0.05
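
As a hedged illustration of how a per-layer relationship could be tested against this threshold, the snippet below runs a Spearman correlation between brain-similarity scores and layer-manipulation impact; every number is invented for demonstration, not taken from the paper.

```python
# Hedged illustration of the significance test; all values below are
# invented for demonstration, not taken from the paper.
from scipy.stats import spearmanr

rsa_scores = [0.05, 0.08, 0.12, 0.18, 0.25, 0.31]  # per-layer brain similarity (hypothetical)
perf_drop  = [0.4, 0.9, 0.7, 1.9, 2.6, 3.2]        # per-layer summarization decline (hypothetical)

rho, p = spearmanr(rsa_scores, perf_drop)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")  # significant if p < 0.05
```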

Digital Object Identifier (DOI)

10.1038/s41598-024-84530-w
