Large language models for human-machine collaborative particle accelerator tuning through natural language
2025

Using Large Language Models for Tuning Particle Accelerators

Sample size: 14 · Reading time: 10 minutes · Evidence: moderate

Author Information

Author(s): Jan Kaiser, Anne Lauscher, Annika Eichler

Primary Institution: Deutsches Elektronen-Synchrotron DESY, Hamburg, Germany

Hypothesis

Can large language models (LLMs) effectively tune particle accelerators using natural language prompts?

Conclusion

Large language models can tune particle accelerators based on natural language prompts, but they are not yet competitive with state-of-the-art tuning algorithms.

Supporting Evidence

  • LLMs can tune an accelerator subsystem based on a natural language prompt.
  • Performance of LLMs was compared to state-of-the-art optimization algorithms.
  • LLMs showed potential for solving complex optimization tasks.

Takeaway

This study shows that large language models can help tune particle accelerators from plain-language instructions, but they still need to improve before they can match established tuning algorithms.

Methodology

The study evaluated 14 different LLMs on their ability to tune a particle accelerator subsystem from natural language prompts and compared their performance against state-of-the-art optimization algorithms.
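To illustrate the kind of evaluation loop this describes, here is a minimal sketch of prompt-based tuning: at each step, past settings and their measured losses are formatted into a natural-language prompt, a model proposes new settings, and the objective is re-measured. The objective function, prompt format, and `stub_llm` helper are all hypothetical stand-ins (the stub just perturbs the best settings so far); a real setup would query an actual LLM and measure a real beam.

```python
import random

def beam_objective(settings):
    # Toy stand-in for a beam measurement: squared distance of the
    # simulated settings from a hidden target (lower is better).
    target = [0.4, -0.2, 0.1]
    return sum((s - t) ** 2 for s, t in zip(settings, target))

def build_prompt(history):
    # Natural-language prompt: the tuning task plus previously tried
    # settings and their measured losses.
    lines = ["Tune the accelerator magnets to minimise beam deviation."]
    for settings, loss in history:
        lines.append(f"Settings {settings} gave loss {loss:.4f}.")
    lines.append("Suggest the next settings as three numbers in [-1, 1].")
    return "\n".join(lines)

def stub_llm(prompt, history, rng):
    # Hypothetical placeholder for a real LLM call: perturb the best
    # settings observed so far, clipped to the allowed range.
    best, _ = min(history, key=lambda h: h[1])
    return [max(-1.0, min(1.0, b + rng.uniform(-0.2, 0.2))) for b in best]

def tune(steps=30, seed=0):
    rng = random.Random(seed)
    settings = [0.0, 0.0, 0.0]
    history = [(settings, beam_objective(settings))]
    for _ in range(steps):
        prompt = build_prompt(history)   # would be sent to the LLM
        settings = stub_llm(prompt, history, rng)
        history.append((settings, beam_objective(settings)))
    return min(history, key=lambda h: h[1])

best_settings, best_loss = tune()
```

In the paper's actual comparison, the same interface (observations in, settings out) is what lets LLMs be benchmarked head-to-head against numerical optimizers on the identical tuning task.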

Limitations

LLMs are not yet competitive with state-of-the-art tuning algorithms and incur high computational costs.

Digital Object Identifier (DOI)

10.1126/sciadv.adr4173
