serdarcaglar committed
Commit 225b7a5 · Parent: 3c31c29

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -26,7 +26,7 @@ This model is a powerful natural language processing model trained on Turkish sc
 
  - **Tokenizer**: The model utilizes a BPE (Byte Pair Encoding) tokenizer to process the data effectively, breaking down the text into subword tokens.
 
- - **Training Details**: The model was trained on a large dataset of Turkish sentences. The training spanned 10 epochs 5M Steps, totaling 240 hours, and the model was built from scratch. No fine-tuning was applied.
+ - **Training Details**: The model was trained on a large dataset of Turkish sentences. The training spanned 2M Steps, totaling 3+ days, and the model was built from scratch. No fine-tuning was applied.
 
  ## Usage
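
The **Tokenizer** bullet in the diff above describes BPE subword tokenization. As a minimal sketch (not part of this commit), the tokenizer could be loaded with the Hugging Face `transformers` library to inspect how it splits a Turkish sentence into subword tokens; the repository ID `serdarcaglar/<model-name>` is a placeholder, since the actual model name is not shown in this diff.

```python
# Minimal sketch (not from the commit): load a BPE tokenizer from the Hub
# and inspect how it splits Turkish text into subword tokens.
# "serdarcaglar/<model-name>" is a placeholder -- the real repository ID
# is not visible in this diff.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("serdarcaglar/<model-name>")

text = "Bu model Türkçe bilimsel metinler üzerinde eğitildi."
tokens = tokenizer.tokenize(text)               # BPE subword pieces
ids = tokenizer.convert_tokens_to_ids(tokens)   # matching vocabulary ids

print(tokens)
print(ids)
```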