Update README.md
README.md
@@ -21,7 +21,7 @@ model-index:
 
 # TinyStarCoderPy
 
-This is a 159M parameters model with
+This is a 159M parameters model with the same architecture as [StarCoder](https://huggingface.co/bigcode/starcoder) (8k context length, MQA & FIM). It was trained on the Python data from [StarCoderData](https://huggingface.co/datasets/bigcode/starcoderdata)
 for ~6 epochs which amounts to 100B tokens.
 
 
@@ -58,10 +58,6 @@ outputs = model.generate(inputs)
 print(tokenizer.decode(outputs[0]))
 ```
 
-# Limitations
-
-The model has been trained on source code from 80+ programming languages. The predominant natural language in source code is English although other languages are also present. As such the model is capable of generating code snippets provided some context but the generated code is not guaranteed to work as intended. It can be inefficient, contain bugs or exploits. See [the paper](https://drive.google.com/file/d/1cN-b9GnWtHzQRoE7M7gAEyivY0kl4BYs/view) for an in-depth discussion of the model limitations.
-
 # Training
 
 ## Model
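Since the updated description calls out FIM (fill-in-the-middle) support, a minimal FIM sketch follows. The `bigcode/tiny_starcoder_py` checkpoint id and the `<fim_prefix>`/`<fim_suffix>`/`<fim_middle>` sentinel tokens are assumptions carried over from the StarCoder family, not confirmed by this diff:

```python
# Minimal fill-in-the-middle (FIM) sketch. The checkpoint id and the FIM
# sentinel tokens below are assumed from the StarCoder family; check the
# model's tokenizer config before relying on them.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/tiny_starcoder_py"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# The prefix and suffix surround the hole the model should fill; the
# completion for the middle is generated after the <fim_middle> sentinel.
prompt = "<fim_prefix>def print_hello_world():\n    <fim_suffix>\n<fim_middle>"
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```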