Study Notes
DipSVD: Dual-importance Protected SVD for Efficient LLM Compression
SVD-LLM: Truncation-Aware Singular Value Decomposition for Large Language Model Compression
Language Model Compression with Weighted Low-Rank Factorization
ASVD: Activation-Aware Singular Value Decomposition for Compressing Large Language Models
Dual-Space Knowledge Distillation for Large Language Models
Why Exposure Bias Matters: An Imitation Learning Perspective of Error Accumulation in Language Generation
NOT ALL LLM-GENERATED DATA ARE EQUAL: RETHINKING DATA WEIGHTING IN TEXT CLASSIFICATION
On Floating-Point Storage Precision
LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation
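Several of the papers listed above (DipSVD, SVD-LLM, FWSVD, ASVD) build on the same core idea: replacing a weight matrix with a truncated SVD to trade accuracy for parameter count. As a minimal, generic sketch of that shared baseline (toy matrix sizes and rank are illustrative assumptions, not taken from any of the papers):

```python
import numpy as np

# Toy stand-in for an LLM linear-layer weight matrix (sizes are hypothetical).
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 48))

# Plain truncated SVD: keep only the top-k singular triplets.
U, S, Vt = np.linalg.svd(W, full_matrices=False)
k = 16
A = U[:, :k] * S[:k]   # shape (64, k): left factor, singular values folded in
B = Vt[:k, :]          # shape (k, 48): right factor

# Rank-k reconstruction; by Eckart–Young this is the best rank-k
# approximation of W in Frobenius norm.
W_k = A @ B

# Storage: 64*48 = 3072 parameters vs. k*(64+48) = 1792 for the two factors.
rel_err = np.linalg.norm(W - W_k) / np.linalg.norm(W)
```

The listed methods differ mainly in how they depart from this plain truncation, e.g. by weighting the decomposition with activation or importance statistics before truncating.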