Sophilex's Blog

55 posts in total


2025

11-17  Different Designs For LLM KD Loss
11-17  Importance-Aware Data Selection for Efficient LLM Instruction Tuning
10-13  Scaling Up Dataset Distillation to ImageNet-1K with Constant Memory
10-13  Squeeze, Recover and Relabel: Dataset Condensation at ImageNet Scale From A New Perspective
10-11  Training-Inference Mismatch In LLM KD (II)
09-28  From Correction to Mastery: Reinforced Distillation of Large Language Model Agents
09-28  Merge-of-Thought Distillation
09-28  Delta Knowledge Distillation for Large Language Models
09-21  Massive Activations in Large Language Models
09-21  TD3: Tucker Decomposition Based Dataset Distillation Method for Sequential Recommendation

