SlimLlama: Large Language Models Compression via Low-Rank Feature Distillation
Department of Natural Language Processing & Knowledge Discovery
Preprint, working paper. Year: 2024


Yaya Sy
  • Role: Author
  • PersonId: 1479000
Christophe Cerisara
Irina Illina

Abstract

Current LLM structured pruning methods involve two steps: (1) compression with calibration data and (2) continued pretraining on billions of tokens to recover the lost performance. This costly second step is needed because the first significantly degrades performance. Previous studies have found that pretrained Transformer weights are not inherently low-rank, unlike their activations, which may explain this performance drop. Based on this observation, we introduce a one-shot compression method that locally distills low-rank weights. We accelerate convergence by initializing the low-rank weights with SVD and using a joint loss that combines teacher and student activations. We reduce memory requirements by applying local gradient updates only. Our approach can compress Mixtral-8x7B within minutes on a single A100 GPU, removing 10 billion parameters while maintaining over 95% of the original performance. Phi-2 (3B) can be compressed by 40% using only 13 million calibration tokens into a small model that competes with recent models of similar size. We show that our method generalizes well to non-Transformer architectures: we compress Mamba-3B by 20% while maintaining 99% of its performance.
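To make the idea concrete, below is a minimal, hypothetical PyTorch sketch of local low-rank distillation for a single linear layer: the factors are initialized with a truncated SVD of the original weight and then trained so the compressed layer reproduces the original layer's activations on a small set of calibration inputs. The function names (svd_init, distill_low_rank), hyperparameters, and the plain MSE objective are illustrative assumptions; the paper's joint loss combining teacher and student activations is not reproduced here.

    import torch

    def svd_init(W: torch.Tensor, rank: int):
        """Truncated SVD of W (out_dim x in_dim) into A (out_dim x rank) and B (rank x in_dim)."""
        U, S, Vh = torch.linalg.svd(W, full_matrices=False)
        sqrt_S = torch.sqrt(S[:rank])
        A = U[:, :rank] * sqrt_S            # scale the kept columns of U
        B = sqrt_S[:, None] * Vh[:rank]     # scale the kept rows of Vh
        return A, B

    def distill_low_rank(W, calib_inputs, rank, steps=100, lr=1e-3):
        """Locally fit A @ B so the low-rank layer matches the original layer's outputs.

        calib_inputs: (num_tokens, in_dim) activations feeding this layer,
        collected from a few calibration batches (illustrative setup).
        """
        A, B = svd_init(W, rank)
        A = A.detach().clone().requires_grad_(True)
        B = B.detach().clone().requires_grad_(True)
        opt = torch.optim.Adam([A, B], lr=lr)
        with torch.no_grad():
            teacher_out = calib_inputs @ W.T          # activations of the original (teacher) layer
        for _ in range(steps):
            student_out = calib_inputs @ B.T @ A.T    # activations of the low-rank (student) layer
            loss = torch.nn.functional.mse_loss(student_out, teacher_out)
            opt.zero_grad()
            loss.backward()                           # gradients stay local to A and B only
            opt.step()
        return A.detach(), B.detach()

In this sketch, the SVD initialization already gives a good starting point, so only a few gradient steps on a small calibration set are needed, and because gradients flow only into the local factors A and B, memory requirements stay low, which mirrors the design choices described in the abstract.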


Main file
acl_latex.pdf (1.49 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04838586, version 1 (17-12-2024)

Identifiers

  • HAL Id: hal-04838586, version 1

Cite

Yaya Sy, Christophe Cerisara, Irina Illina. SlimLlama: Large Language Models Compression via Low-Rank Feature Distillation. 2024. ⟨hal-04838586v1⟩
