---
license: apache-2.0
base_model: jan-hq/LlamaCorn-1.1B-Chat
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
datasets:
- jan-hq/systemchat_binarized
- jan-hq/youtube_transcripts_qa
- jan-hq/youtube_transcripts_qa_ext
model-index:
- name: TinyJensen-1.1B-Chat
results: []
pipeline_tag: text-generation
widget:
- messages:
- role: user
content: Tell me about NVIDIA in 20 words
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/janhq/jan/assets/89722390/35daac7d-b895-487c-a6ac-6663daaad78e" alt="Jan banner"
style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<p align="center">
<a href="https://jan.ai/">Jan</a>
- <a href="https://discord.gg/AsJ8krTT3N">Discord</a>
</p>
<!-- header end -->
# Model description
- Further fine-tuned [LlamaCorn-1.1B-Chat](https://huggingface.co/jan-hq/LlamaCorn-1.1B-Chat) to act like Jensen Huang, CEO of NVIDIA.
- Use this model with caution because it can make you laugh.
# Prompt template
ChatML
```
<|im_start|>system
You are Jensen Huang, CEO of NVIDIA<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
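As a sketch, the ChatML prompt above can be assembled with plain string formatting. The `build_prompt` helper below is illustrative only (it is not part of the model's tooling); the system message matches the template above:

```python
def build_prompt(prompt: str,
                 system: str = "You are Jensen Huang, CEO of NVIDIA") -> str:
    """Assemble a ChatML-formatted prompt string for this model."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

print(build_prompt("Tell me about NVIDIA in 20 words"))
```

The resulting string is passed to the model as-is; generation should stop at the `<|im_end|>` token.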
# Run this model
You can run this model using [Jan Desktop](https://jan.ai/) on Mac, Windows, or Linux.
Jan is an open-source ChatGPT alternative that is:
- 💻 **100% offline on your machine**: Your conversations remain confidential and visible only to you.
- 🗂️ **An Open File Format**: Conversations and model settings stay on your computer and can be exported or deleted at any time.
- 🌐 **OpenAI Compatible**: Local server on port `1337` with OpenAI-compatible endpoints.
- 🔓 **Open Source & Free**: We build in public; check out our [GitHub](https://github.com/janhq).
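Once the Jan local server is running, the model can be queried through its OpenAI-compatible chat-completions endpoint. The sketch below only builds the request payload; the endpoint path follows the OpenAI API convention, and the model id shown is an assumption (use whatever id Jan reports for your downloaded model):

```python
import json
from urllib import request

def build_chat_request(user_message: str) -> dict:
    """Build an OpenAI-style chat-completions payload for the local server."""
    return {
        "model": "tinyjensen-1.1b-chat",  # assumed id; check your Jan install
        "messages": [
            {"role": "system", "content": "You are Jensen Huang, CEO of NVIDIA"},
            {"role": "user", "content": user_message},
        ],
    }

payload = build_chat_request("Tell me about NVIDIA in 20 words")

# Uncomment to send against a running Jan server on port 1337:
# req = request.Request(
#     "http://localhost:1337/v1/chat/completions",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(request.urlopen(req).read().decode())
```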
![image/png](/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F65713d70f56f9538679e5a56%2Fr7VmEBLGXpPLTu2MImM7S.png%3C%2Fspan%3E)%3C%2Fspan%3E
# About Jan
Jan believes in the need for an open-source AI ecosystem and is building the infra and tooling to allow open-source AIs to compete on a level playing field with proprietary ones.
Jan's long-term vision is to build a cognitive framework for future robots that serve as practical, useful assistants for humans and businesses in everyday life.
# Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
# Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.8226 | 1.0 | 207 | 0.8232 |
| 0.6608 | 2.0 | 414 | 0.7941 |
| 0.526 | 3.0 | 621 | 0.8186 |
| 0.4388 | 4.0 | 829 | 0.8643 |
| 0.3888 | 5.0 | 1035 | 0.8771 |
# Framework versions
- Transformers 4.37.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.0