---
license: mit
---
|
## Fine-tuned Model for My Thesis: Design and Implementation of an Adaptive Virtual Intelligent Teaching Assistant Based on Supervised Fine-tuning of a Pre-trained Large Language Model
|
### Model Name: CodeOptimus, an adaptive supervised instruction fine-tune of [Mistral 7B Instruct](https://mistral.ai/news/announcing-mistral-7b/) using QLoRA.
|
|
|
## Prerequisites For Reproduction |
|
1. **GPU**: Requires powerful GPUs; I used 7 NVIDIA A100s.
|
2. **Training Time**: About 1 week.
|
3. **RAG Module**: Updates the model's knowledge base in real time with adaptive features learned from conversations with the model over time.
|
4. **Python Packages**: Install the packages listed in `requirements.txt`.
|
5. **Dataset**: Download [code_instructions_122k_alpaca_style](https://huggingface.co/datasets/TokenBender/code_instructions_122k_alpaca_style), plus a custom curated dataset.
|
6. **Mistral-7B-Instruct-v0.1**: Download the [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) PyTorch `.bin` weights.
|
7. **Realistic 3D Intelligent Persona/Avatar (Optional)**: For this I'm using Soul Machines' digital humans.
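Since the dataset above is in Alpaca style (instruction/input/output) while Mistral 7B Instruct expects its `[INST] ... [/INST]` chat template, each record needs to be reformatted before fine-tuning. Below is a minimal sketch of that conversion; the helper name `format_alpaca_for_mistral` is my own illustration, not part of the dataset or Mistral tooling.

```python
# Hypothetical helper: convert one Alpaca-style record into the
# Mistral-7B-Instruct prompt format used for supervised fine-tuning.
def format_alpaca_for_mistral(record: dict) -> str:
    instruction = record["instruction"]
    context = record.get("input", "")
    # Fold the optional "input" field into the instruction when present.
    prompt = instruction if not context else f"{instruction}\n\n{context}"
    # Mistral instruct template: <s>[INST] prompt [/INST] answer</s>
    return f"<s>[INST] {prompt} [/INST] {record['output']}</s>"


sample = {
    "instruction": "Write a Python function that adds two numbers.",
    "input": "",
    "output": "def add(a, b):\n    return a + b",
}
text = format_alpaca_for_mistral(sample)
print(text)
```

In practice you would map this function over the whole dataset (e.g. with `datasets.Dataset.map`) before tokenization.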
|
|
|
![image/png](/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F6438a9027de34e8ea7e4b257%2FUJtAiKejhrmUPN5EiA59E.png%3C%2Fspan%3E)%3C!-- HTML_TAG_END --> |
|
|