Zador, Anthony; Benjamin, Ari; Liu, Tingkai (March 2025) Token-Level Uncertainty-Aware Objective for Language Model Post-Training. arXiv. ISSN 2331-8422 (Submitted)
Abstract
In this work, we connect token-level uncertainty in causal language modeling to two types of training objectives: 1) masked maximum likelihood estimation (MLE), and 2) self-distillation. We show that masked MLE is effective in reducing epistemic uncertainty and serves as an effective token-level automatic curriculum learning technique. However, masked MLE is prone to overfitting and requires self-distillation regularization to improve or maintain performance on out-of-distribution tasks. We demonstrate significant performance gains from the proposed training objective, which combines masked MLE and self-distillation, across multiple architectures (Gemma, LLaMA, Phi) and datasets (Alpaca, ShareGPT, GSM8K), mitigating overfitting while maintaining adaptability during post-training. Our findings suggest that uncertainty-aware training provides an effective mechanism for enhancing language model training.
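The objective described in the abstract can be sketched roughly as follows. This is a minimal illustrative implementation, not the paper's exact formulation: the entropy-based token mask, the `entropy_threshold` and `distill_weight` parameters, and the use of predictive entropy as the uncertainty proxy are all assumptions made here for clarity.

```python
import numpy as np

def uncertainty_aware_loss(student_logits, teacher_logits, targets,
                           entropy_threshold=2.0, distill_weight=0.5):
    """Sketch of a combined masked-MLE + self-distillation objective.

    Assumptions (not from the paper): tokens whose predictive entropy
    exceeds `entropy_threshold` contribute to the MLE term, and every
    token receives a KL self-distillation penalty against a frozen
    teacher copy of the model.
    """
    def softmax(x):
        z = x - x.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    p = softmax(student_logits)   # student predictive distribution, (B, S, V)
    q = softmax(teacher_logits)   # frozen teacher distribution, (B, S, V)

    B, S, V = student_logits.shape
    # Per-token negative log-likelihood of the target token (MLE term)
    nll = -np.log(p[np.arange(B)[:, None], np.arange(S)[None, :], targets] + 1e-9)
    # Token-level uncertainty proxy: entropy of the student's distribution
    entropy = -(p * np.log(p + 1e-9)).sum(axis=-1)
    mask = (entropy > entropy_threshold).astype(float)
    masked_mle = (nll * mask).sum() / max(mask.sum(), 1.0)
    # Self-distillation regularizer: per-token KL(teacher || student), averaged
    kl = (q * (np.log(q + 1e-9) - np.log(p + 1e-9))).sum(axis=-1).mean()
    return masked_mle + distill_weight * kl
```

In this sketch, masking concentrates the likelihood term on high-uncertainty tokens (the curriculum effect described in the abstract), while the KL term anchors the student to its pre-update distribution to limit overfitting.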
| Item Type: | Paper |
|---|---|
| Subjects: | neurobiology > neuroscience |
| CSHL Authors: | |
| Communities: | CSHL labs > Zador lab |
| SWORD Depositor: | CSHL Elements |
| Depositing User: | CSHL Elements |
| Date: | 15 March 2025 |
| Date Deposited: | 01 May 2026 15:01 |
| Last Modified: | 01 May 2026 15:01 |
| Related URLs: | |
| URI: | https://repository.cshl.edu/id/eprint/42186 |