Why Exposure Bias Matters: An Imitation Learning Perspective of Error Accumulation in Language Generation — proposes two metrics for directly observing the error-accumulation phenomenon in LMs 2025-06-23 study notes #LLM #KD
NOT ALL LLM-GENERATED DATA ARE EQUAL: RETHINKING DATA WEIGHTING IN TEXT CLASSIFICATION — attempts to mitigate the train-inference mismatch by introducing sample-wise loss weights 2025-06-23 study notes #LLM #KD
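The core idea in the second note — weighting each training sample's loss individually so low-quality LLM-generated examples contribute less — can be sketched as follows. This is a minimal illustration of sample-wise loss weighting in general, not the paper's actual weighting scheme; the function name and the hand-picked weights are assumptions.

```python
import math

def softmax(logits):
    # Numerically stable softmax over one sample's class logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def weighted_ce_loss(batch_logits, labels, sample_weights):
    """Weighted mean of per-sample cross-entropy losses.

    Each sample's CE loss is scaled by its own weight, so noisy
    synthetic samples (low weight) pull on the model less than
    trusted ones. The weights here are hypothetical placeholders.
    """
    total, wsum = 0.0, 0.0
    for logits, y, w in zip(batch_logits, labels, sample_weights):
        probs = softmax(logits)
        total += w * -math.log(probs[y])
        wsum += w
    return total / wsum

# Two samples: one trusted (weight 1.0), one noisy synthetic (0.2).
logits = [[2.0, 0.5, 0.1], [0.1, 0.1, 0.1]]
labels = [0, 2]
weights = [1.0, 0.2]
loss = weighted_ce_loss(logits, labels, weights)
```

With uniform weights this reduces to the ordinary mean cross-entropy; the interesting part of the paper is how the per-sample weights are chosen, which this sketch leaves as an input.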