Deep neural networks and humans both benefit from compositional language structure
2024

Sample size: 12

Publication evidence: high

Author Information

Author(s): Lukas Galke, Yoav Ram, Limor Raviv

Primary Institution: University of Southern Denmark

Hypothesis

Do deep neural networks show the same learning and generalization advantage as human adults when trained on more structured linguistic input?

Conclusion

The study demonstrates that deep neural networks, like humans, show a learnability advantage when trained on more compositionally structured languages.

Supporting Evidence

  • Neural networks exposed to more compositional languages show more systematic generalization.
  • Greater agreement between different agents was observed when the input was more compositional.
  • Neural networks trained on highly structured languages produced more transparent generalizations.

Takeaway

This study shows that both humans and deep neural networks learn and generalize languages better when those languages have clear compositional structure.

Methodology

The study compared the learning and generalization capabilities of deep neural networks and humans using ten input languages with varying degrees of compositional structure.
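The core manipulation, varying how compositional an artificial language is, can be illustrated with a toy example. A common measure of compositional structure in this literature is topographic similarity: the correlation between distances in meaning space and distances between the corresponding labels. The sketch below is purely illustrative; the meanings, morphemes, and distance metric are assumptions, not the paper's actual stimuli or procedure.

```python
import itertools
import random
from difflib import SequenceMatcher

random.seed(0)

# Toy meaning space: every combination of a shape and a color.
shapes = ["circle", "square", "triangle"]
colors = ["red", "blue", "green"]
meanings = list(itertools.product(shapes, colors))

# Compositional language: each feature maps to a fixed (hypothetical)
# morpheme, and labels are built by concatenation.
shape_morph = {"circle": "mo", "square": "ki", "triangle": "fu"}
color_morph = {"red": "ta", "blue": "pe", "green": "lu"}
compositional = {m: shape_morph[m[0]] + color_morph[m[1]] for m in meanings}

# Holistic language: an arbitrary, unrelated label per meaning.
syllables = ["ba", "do", "gi", "ku", "ne", "po", "ra", "su", "ti"]
holistic = {m: "".join(random.sample(syllables, 2)) for m in meanings}

def meaning_distance(m1, m2):
    # Hamming distance over the two meaning features.
    return sum(a != b for a, b in zip(m1, m2))

def string_distance(s1, s2):
    # 1 minus a normalized similarity ratio, as a cheap edit-distance proxy.
    return 1.0 - SequenceMatcher(None, s1, s2).ratio()

def structure_score(lang):
    # Pearson correlation between meaning distances and label distances
    # over all meaning pairs (a simple topographic-similarity score).
    pairs = list(itertools.combinations(meanings, 2))
    xs = [meaning_distance(a, b) for a, b in pairs]
    ys = [string_distance(lang[a], lang[b]) for a, b in pairs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# The score is high when label form mirrors meaning (compositional)
# and low when labels are arbitrary (holistic).
print(structure_score(compositional))
print(structure_score(holistic))
```

Languages sampled at different points along this structure continuum are what both the human learners and the networks would be trained on, with generalization then tested on held-out meanings.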

Participant Demographics

Adult human participants were involved in the study.

Statistical Information

Statistical significance: p < 0.001

Digital Object Identifier (DOI)

10.1038/s41467-024-55158-1
