17 articles in total
2025
Scaling Up Dataset Distillation to ImageNet-1K with Constant Memory
Squeeze, Recover and Relabel: Dataset Condensation at ImageNet Scale From A New Perspective
From Correction to Mastery: Reinforced Distillation of Large Language Model Agents
Merge-of-Thought Distillation
Delta Knowledge Distillation for Large Language Models
Dataset Condensation for Recommendation
BOND: Aligning LLMs with Best-of-N distillation
Dataset Distillation via Knowledge Distillation: Towards Efficient Self-Supervised Pretraining of Deep Networks
Distilling the Knowledge in Data Pruning
DA-KD: Difficulty-Aware Knowledge Distillation for Efficient Large Language Models