This is a fine-tuning of the LLaMA 13B model in the style of the Alpaca dataset and training setup, but using LoRA (low-rank adaptation).

For details of the data and hyperparameters, see https://crfm.stanford.edu/2023/03/13/alpaca.html

This repo only contains the LoRA weights, not the original LLaMA weights, which are released for research use only.
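Below is a minimal sketch of how the LoRA weights might be applied on top of separately obtained LLaMA 13B weights using the Hugging Face `transformers` and `peft` libraries. The paths and the prompt template are placeholders, not part of this repo; substitute your own local copies and the exact Alpaca-style prompt format used during training.

```python
# Sketch: load base LLaMA 13B weights (obtained separately) and attach
# the LoRA adapter weights from this repo. Paths below are placeholders.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base_model_path = "path/to/llama-13b-hf"   # base LLaMA weights (research license)
lora_adapter_path = "path/to/this-repo"    # LoRA weights from this repo

tokenizer = LlamaTokenizer.from_pretrained(base_model_path)
model = LlamaForCausalLM.from_pretrained(
    base_model_path,
    torch_dtype=torch.float16,
    device_map="auto",
)
# Attach the LoRA adapter to the frozen base model
model = PeftModel.from_pretrained(model, lora_adapter_path)

# Placeholder Alpaca-style prompt; adjust to the template used in training
prompt = (
    "Below is an instruction that describes a task.\n\n"
    "### Instruction:\nExplain LoRA in one sentence.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```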