Importance-Aware Data Selection for Efficient LLM Instruction Tuning — a data selection strategy for LLM fine-tuning 2025-11-17 Study notes #LLM
Scaling Up Dataset Distillation to ImageNet-1K with Constant Memory — optimizes MTT so it can scale to larger CV datasets 2025-10-13 Study notes #KD #Dataset_Condensation
Squeeze, Recover and Relabel: Dataset Condensation at ImageNet Scale From A New Perspective — decouples the bi-level optimization structure of traditional dataset distillation, achieving linear complexity 2025-10-13 Study notes #KD #Dataset_Condensation
From Correction to Mastery: Reinforced Distillation of Large Language Model Agents — while the student generates SGOs, the teacher intervenes when necessary, tightening the theoretical upper bound on error 2025-09-28 Study notes #LLM #KD