I am a Research Engineer at FAIR, Meta, researching the use of language models in mathematics and theoretical physics. I am also affiliated with Ecole des Ponts / CERMICS. I graduated from Ecole Polytechnique in 1987 and ENSAE (Ecole Nationale de la Statistique et de l'Administration Economique) in 1990, majoring in statistics. After a career in media, advertising, and software development (see my LinkedIn profile), I joined Meta as a Research Engineer, working on AI for mathematics and theoretical physics (Google Scholar).
I can be reached at fcharton@gmail.com. My recent scientific news can usually be found on my Twitter account.
Selected Publications
Full list on Google Scholar.
AI for Maths
- Global Lyapunov functions: a long-standing open problem in mathematics, with symbolic transformers, with Alberto Alfarano and Amaury Hayat, NeurIPS 2024.
- PatternBoost: Constructions in mathematics with a little help from AI, with Adam Wagner, Jordan Ellenberg and Geordie Williamson, generative models for combinatorics, 2024.
- Linear algebra with transformers, solo paper: transformers can learn linear algebra, matrix operations, and eigendecomposition, TMLR 2022.
- Learning advanced mathematical computations from examples, with Amaury Hayat and Guillaume Lample: learning properties of differential systems, convergence at a critical point (aka the Spectral Mapping Theorem), controllability of overparametrized systems, and integrability of some partial differential equations (code), ICLR 2021.
- Deep learning for symbolic mathematics, with Guillaume Lample: symbolic integration with transformers (code), ICLR 2020.
AI for physics
- Transforming the bootstrap: Using transformers to compute scattering amplitudes in planar N = 4 super Yang-Mills theory, with Tianji Cai, Garrett Merz, Niklas Nolte, Kyle Cranmer, Matthias Wilhelm and Lance Dixon, MLST 2024.
- Is Tokenization Needed for Masked Particle Modelling?, with Matthew Leigh, Samuel Klein, Tobias Golling, Lukas Heinrich, Michael Kagan, Inês Ochoa and Margarita Osadchy, 2024.
- Extrapolating Jet Radiation with Autoregressive Transformers, with Anja Butter, Javier Mariño Villadamigo, Ayodele Ore, Tilman Plehn and Jonas Spinner, 2024.
Maths for understanding AI
- Learning the greatest common divisor: explaining transformer predictions, solo paper, ICLR 2024 spotlight.
- Emergent properties with repeated examples, with Julia Kempe; won the Debunking Challenge at the NeurIPS 2024 workshop on scientific methods for understanding deep learning.
- A tale of tails: Model collapse as a change of scaling laws, with Elvis Dohmatob, Yunzhen Feng, Pu Yang and Julia Kempe, ICML 2024.
- Length generalization in arithmetic transformers, with Samy Jelassi, Stéphane d’Ascoli, Carles Domingo-Enrich, Yuhuai Wu, and Yuanzhi Li, 2023.
- What is my math transformer doing? Three results on interpretability and generalization, solo paper, 2022.
Symbolic regression
- End-to-end symbolic regression with transformers, with Pierre-Alexandre Kamienny, Stéphane d'Ascoli and Guillaume Lample: transformer-based symbolic regression, NeurIPS 2022.
- Deep Symbolic Regression for Recurrent Sequences, with Stéphane d’Ascoli, Pierre-Alexandre Kamienny and Guillaume Lample: recovering underlying recurrence relations from a sequence of numbers, ICML 2022.
Cryptanalysis
- SALSA: attacking lattice cryptography with transformers, with Emily Wenger, Mingjie Chen, and Kristin Lauter, NeurIPS 2022.
- SALSA PICANTE: a machine learning attack on LWE with binary secrets, with Cathy Li, Jana Sotakova, Mohamed Malhou, Evrard Garcelon, Emily Wenger and Kristin Lauter, CCS 2023.
- SALSA VERDE: a machine learning attack on Learning With Errors with sparse small secrets, with Cathy Li, Emily Wenger, Zeyuan Allen-Zhu, and Kristin Lauter, NeurIPS 2023.
Workshops and Programs I co-organized
- Mathematics and Machine Learning program, Harvard CMSA (Autumn 2024), with Michael Douglas (Harvard), Michael Freedman (Harvard), Geordie Williamson (Sydney Mathematical Research Institute), and Fabian Ruehle (Northeastern) (video).
- Maths for and by large language models, IHES, May 2024, with Michael Douglas (Harvard) and Yiannis Vlassopoulos (IHES).
Invited talks
- December 2024, Yukawa Institute, Kyoto, String Data 2024, Language models for amplitude bootstrap.
- November 2024, University of Wisconsin-Madison, Symbolic AI for science.
- November 2024, Collège de France, seminar of Prof. Timothy Gowers, How do language models learn arithmetic (video).
- October 2024, Aspen meeting on foundation models, Transformers meet Lyapunov.
- October 2024, Geneva Institute for Theoretical Sciences, Transformers meet Lyapunov.
- September 2024, IAIFI, MIT, Transformers meet Lyapunov (video).
- September 2024, Center for Mathematical Science and Applications, Harvard, Transformers meet Lyapunov (video).
- June 2024, Niels Bohr Institute, Copenhagen, Language models for amplitude bootstrap.
- June 2024, Institute for Advanced Study, Princeton, Amplitudes 2024, Language models for amplitude bootstrap (video).
- May 2024, Institut des Hautes Etudes Scientifiques, Math for and by LLM seminar, AI from generated data (video).
- April 2024, EU coalition for AI in fundamental physics, Amsterdam, plenary talk, Transformers for maths and physics.
- March 2024, 1st LIPS conference on language models in physics, DESY, Hamburg, AI from generated data.
- February 2024, Ecole des Ponts, Champs-sur-Marne, CERMICS seminar, Transformers for maths, maths for transformers.
- January 2024, Inter-experiment Machine Learning Workshop, CERN, Mathematics as a translation task.
- November 2023, Sydney Mathematical Research Institute, Mathematical challenges in AI, Transformers for maths, maths for transformers (video).
- November 2023, Nancy-Liège workshop on mathematical intuition, Learning mathematics from examples only.
- September 2023, Center for Mathematical Science and Applications, Harvard, Transformers for maths, maths for transformers (video).
- September 2023, Flatiron Institute, New York, Transformers for maths, maths for transformers.
- September 2023, Center for Data Science, Courant Institute, New York, Transformers for maths, maths for transformers.
- September 2023, University of Geneva, Transformers in mathematics and physics.
- June 2023, US National Academy of Sciences, Washington DC, Transformers in mathematics and physics.
- January 2023, Collège de France, seminar of Prof. Stanislas Dehaene, Can artificial intelligence model mathematical language? (video).
- December 2022, Math & AI workshop, NeurIPS 2022, New Orleans, What is my math transformer doing?
- October 2022, Institut Henri Poincaré, Paris, Maths with transformers.
- September 2022, Physics Meets ML, online seminar, Maths with transformers (video).
- August 2022, Weizmann Institute, Rehovot, Hammers and Nails workshop, Transformers for mathematics (and theoretical physics?).
- April 2022, Stanford Linear Accelerator (SLAC), Theory and ML group seminar, Maths with transformers.
- September 2020, Conference on Artificial Intelligence and Theorem Proving (AITP), Aussois, Deep Learning for Symbolic Mathematics.
- April 2020, Stanford EE380 Colloquium on Computer Systems, Deep Learning for Symbolic Mathematics (video).