Tempo14's Collections
Fine-Tuning
PockEngine: Sparse and Efficient Fine-tuning in a Pocket
Paper • 2310.17752 • Published • 12

S-LoRA: Serving Thousands of Concurrent LoRA Adapters
Paper • 2311.03285 • Published • 28
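
Several entries in this collection (S-LoRA, SiRA, MoRA, the LoRA-library paper, Trans-LoRA) build on LoRA's low-rank update, which leaves a pretrained weight W frozen and trains only a low-rank correction, W + (alpha/r)·BA. As a refresher only, here is a minimal sketch of a LoRA-style linear layer in PyTorch; the class name, rank, and initialization choices are illustrative assumptions, not taken from any of the listed papers.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Hypothetical minimal LoRA-style layer: y = xW^T + (alpha/r) * xA^T B^T."""

    def __init__(self, in_features: int, out_features: int,
                 rank: int = 8, alpha: float = 16.0):
        super().__init__()
        # Frozen pretrained weight W (stands in for a loaded checkpoint).
        self.weight = nn.Parameter(
            torch.empty(out_features, in_features), requires_grad=False)
        nn.init.kaiming_uniform_(self.weight)
        # Trainable low-rank factors: delta W = B @ A.
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        # B starts at zero so the model is unchanged before training.
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ self.weight.T + \
            self.scaling * (x @ self.lora_A.T) @ self.lora_B.T
```
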
Parameter-Efficient Orthogonal Finetuning via Butterfly Factorization
Paper • 2311.06243 • Published • 17

Fine-tuning Language Models for Factuality
Paper • 2311.08401 • Published • 28

SiRA: Sparse Mixture of Low Rank Adaptation
Paper • 2311.09179 • Published • 8

Adapters: A Unified Library for Parameter-Efficient and Modular Transfer Learning
Paper • 2311.11077 • Published • 24
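
The Adapters paper above introduces the open-source `adapters` library, which attaches parameter-efficient modules to Hugging Face Transformers models. Below is a usage sketch based on the library's documented high-level API; the adapter name, base model, and config string are illustrative choices, and details may differ across library versions.

```python
from transformers import AutoModelForSequenceClassification
import adapters

# Load a standard Hugging Face model and attach adapter support to it.
model = AutoModelForSequenceClassification.from_pretrained("roberta-base")
adapters.init(model)

# Add a bottleneck ("seq_bn") adapter and train only its parameters;
# the pretrained base weights stay frozen.
model.add_adapter("task_adapter", config="seq_bn")
model.train_adapter("task_adapter")
model.set_active_adapters("task_adapter")
```
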
LLaMA Pro: Progressive LLaMA with Block Expansion
Paper • 2401.02415 • Published • 53

BitDelta: Your Fine-Tune May Only Be Worth One Bit
Paper • 2402.10193 • Published • 19
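
BitDelta's core observation, per its title, is that the weight delta produced by fine-tuning is highly compressible: roughly one sign bit per parameter plus a per-matrix scale. A toy sketch of that idea follows; the function names are mine, and the paper additionally calibrates the scales by distillation, which is omitted here.

```python
import torch

def compress_delta(w_base: torch.Tensor, w_finetuned: torch.Tensor):
    """Quantize the fine-tune delta to sign bits plus one scalar scale."""
    delta = w_finetuned - w_base
    sign = torch.sign(delta)    # one bit of information per parameter
    scale = delta.abs().mean()  # L2-optimal scalar for sign quantization
    return sign, scale

def reconstruct(w_base: torch.Tensor, sign: torch.Tensor,
                scale: torch.Tensor) -> torch.Tensor:
    """Approximate the fine-tuned weights from the base plus 1-bit delta."""
    return w_base + scale * sign
```
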
How to Train Data-Efficient LLMs
Paper • 2402.09668 • Published • 40

A Survey on Data Selection for LLM Instruction Tuning
Paper • 2402.05123 • Published • 3

MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning
Paper • 2405.12130 • Published • 46

Towards Modular LLMs by Building and Reusing a Library of LoRAs
Paper • 2405.11157 • Published • 27

Trans-LoRA: towards data-free Transferable Parameter Efficient Finetuning
Paper • 2405.17258 • Published • 14

In-Context Editing: Learning Knowledge from Self-Induced Distributions
Paper • 2406.11194 • Published • 15

Iterative Length-Regularized Direct Preference Optimization: A Case Study on Improving 7B Language Models to GPT-4 Level
Paper • 2406.11817 • Published • 12
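
For context on the entry above: standard DPO optimizes the policy pi_theta against a frozen reference pi_ref on preference pairs (x, y_w, y_l); the paper's iterative, length-regularized variant repeats this on fresh preference data with an added penalty on response length, which is omitted below. The standard objective it builds on is:

```latex
\mathcal{L}_{\mathrm{DPO}}
  = -\,\mathbb{E}_{(x,\,y_w,\,y_l)}\!\left[
      \log \sigma\!\left(
        \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
        - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
      \right)
    \right]
```
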
Ferret: Federated Full-Parameter Tuning at Scale for Large Language Models
Paper • 2409.06277 • Published • 14

RobustFT: Robust Supervised Fine-tuning for Large Language Models under Noisy Response
Paper • 2412.14922 • Published • 83