25 articles in total
2025
Different Designs For LLM KD Loss
Importance-Aware Data Selection for Efficient LLM Instruction Tuning
Training-Inference Mismatch In LLM KD(II)
From Correction to Mastery: Reinforced Distillation of Large Language Model Agents
Merge-of-Thought Distillation
Delta Knowledge Distillation for Large Language Models
Massive Activations in Large Language Models
BOND: Aligning LLMs with Best-of-N distillation
Evaluating Position Bias in Large Language Model Recommendations
Dataset Distillation via Knowledge Distillation: Towards Efficient Self-Supervised Pretraining of Deep Networks