
Enhancing next token prediction based pre-training for jet foundation models

Dec 3, 2025 · 8:42
hep-ph · Machine Learning · physics.data-an

Abstract

Next token prediction is an attractive pre-training task for jet foundation models, in that it is simulation-free and enables excellent generative capabilities that can transfer across datasets. Here we study multiple improvements to next token prediction, building on the initial work of OmniJet-α. First, instead of tokenizing particles and subsequently using only the token-ID as the model input for both the generative and the classification task, we adopt a hybrid setup that allows us to use continuous feature vectors as model input while using token-IDs only as the next token prediction target. Second, we explore a pre-training strategy that combines masked particle modeling and generative learning objectives. Taken together, these changes greatly improve the performance in downstream classification tasks without any loss in generative performance.
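
A minimal sketch of the hybrid setup described in the abstract, assuming PyTorch; this is illustrative only and not the authors' implementation. Continuous particle feature vectors are embedded and fed to a transformer backbone, the next token prediction target is the discrete token-ID of the following particle (assumed to come from a separately trained tokenizer), and a masked particle modeling loss on masked positions is added to the generative loss. All names (HybridJetBackbone, n_tokens, the ~15% masking rate) and the exact way the two objectives are combined are assumptions, not details from the paper.

import torch
import torch.nn as nn

class HybridJetBackbone(nn.Module):
    # Illustrative hyperparameters; not taken from the paper.
    def __init__(self, n_features=3, d_model=128, n_tokens=8192, n_layers=4, n_heads=8):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)                  # continuous features -> embeddings
        self.mask_token = nn.Parameter(torch.zeros(1, 1, d_model))   # learnable [MASK] embedding
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.next_token_head = nn.Linear(d_model, n_tokens)          # token-ID of the next particle
        self.mpm_head = nn.Linear(d_model, n_tokens)                 # token-ID of a masked particle

    def forward(self, feats, mpm_mask=None):
        x = self.embed(feats)                                        # (batch, n_particles, d_model)
        if mpm_mask is not None:                                     # replace masked particles with [MASK]
            x = torch.where(mpm_mask.unsqueeze(-1), self.mask_token.expand_as(x), x)
        causal = nn.Transformer.generate_square_subsequent_mask(x.size(1))
        h = self.encoder(x, mask=causal)                             # causal attention for generation
        return self.next_token_head(h), self.mpm_head(h)

# Toy batch: 2 jets, 16 particles, 3 features each (e.g. pT, eta, phi).
feats = torch.randn(2, 16, 3)
token_ids = torch.randint(0, 8192, (2, 16))    # token-IDs from a separately trained tokenizer (assumed)
mpm_mask = torch.rand(2, 16) < 0.15            # mask ~15% of particles for the MPM objective

model = HybridJetBackbone()
ntp_logits, mpm_logits = model(feats, mpm_mask)

# Next-token loss: position i predicts the token-ID of particle i+1.
ntp_loss = nn.functional.cross_entropy(
    ntp_logits[:, :-1].reshape(-1, 8192), token_ids[:, 1:].reshape(-1))
# Masked-particle-modeling loss: only masked positions contribute.
mpm_loss = nn.functional.cross_entropy(mpm_logits[mpm_mask], token_ids[mpm_mask])
loss = ntp_loss + mpm_loss                     # simple sum; the actual weighting is an assumption

The key point of the hybrid setup is visible in the sketch: the model never consumes token-IDs as input (only continuous features), yet both losses are discrete classification losses over the token vocabulary, which keeps pre-training simulation-free while preserving generative sampling over tokens.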

Links & Resources

Authors

Cite This Paper

Year: 2025
Category: hep-ph
APA

Birk, J., Hallin, A., Kasieczka, G., Madzharova, N., Pang, I., & Shih, D. (2025). Enhancing next token prediction based pre-training for jet foundation models. arXiv preprint arXiv:2512.04149.

MLA

Birk, Joschka, Anna Hallin, Gregor Kasieczka, Nikol Madzharova, Ian Pang, and David Shih. "Enhancing next token prediction based pre-training for jet foundation models." arXiv preprint arXiv:2512.04149 (2025).