LoRA-TMLR-2024's Collections:
- Instruction Finetuning - Code (Magicoder-Evol-Instruct-110K)
- Continued Pretraining - Code (StarCoder-Python)
- Instruction Finetuning - Math (MetaMathQA)
- Continued Pretraining - Math (OpenWebMath)
Instruction Finetuning - Math (MetaMathQA)
Updated Sep 25, 2024

Model and LoRA adapter checkpoints for Llama-2-7B finetuned on MetaMathQA. A minimal loading sketch follows the checkpoint list below.
- LoRA-TMLR-2024/metamath-lora-rank-16-alpha-32 • Updated Sep 25, 2024
- LoRA-TMLR-2024/metamath-lora-rank-256-alpha-512 • Updated Sep 25, 2024
- LoRA-TMLR-2024/metamath-lora-rank-64-alpha-128 • Updated Sep 25, 2024
- LoRA-TMLR-2024/metamath-full-finetuning-lr-1e-05 • Updated Sep 27, 2024
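The LoRA repos can be applied on top of the Llama-2-7B base model with the PEFT library. Below is a minimal sketch, assuming the base checkpoint is meta-llama/Llama-2-7b-hf (gated; requires accepting the license) and that the adapter repos load directly with PeftModel.from_pretrained; the exact base revision used for these checkpoints is not stated on this page.

```python
# Minimal sketch: apply one of the MetaMathQA LoRA adapters to Llama-2-7B.
# Assumptions (not confirmed by this page): base model id and adapter layout.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"  # assumed base checkpoint
adapter_id = "LoRA-TMLR-2024/metamath-lora-rank-16-alpha-32"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Attach the low-rank adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "Question: What is 15% of 240?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Calling model.merge_and_unload() after loading would fold the low-rank deltas into the base weights, giving a plain transformers model for inference. The metamath-full-finetuning-lr-1e-05 repo appears to be a fully finetuned model rather than an adapter, so it would presumably load directly with AutoModelForCausalLM.from_pretrained.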