Update README.md
README.md CHANGED
@@ -32,8 +32,8 @@ fine-tuned for multi-variate forecasts with just 5% of the training data to be c
**TTM-R2 comprises TTM variants pre-trained on larger pretraining datasets (~700M samples).** We have another set of TTM models released under `TTM-R1` trained on ~250M samples
-which can be accessed from [here](https://huggingface.co/ibm-granite/granite-timeseries-ttm-r1) In general, `TTM-R2` models perform better than `TTM-R1` models as they are
-trained on larger pretraining dataset. However, the choice of R1 vs R2 depends on your target data distribution. Hence requesting users to try both
+which can be accessed from [here](https://huggingface.co/ibm-granite/granite-timeseries-ttm-r1). In general, `TTM-R2` models perform better than `TTM-R1` models, as they are
+trained on a larger pretraining dataset. In standard benchmarks, TTM-R2 outperforms TTM-R1 by over 15%. However, the choice of R1 vs. R2 depends on your target data distribution, so we request users to try both
R1 and R2 variants and pick the best for your data.
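Since the better variant depends on your target data distribution, the quickest way to choose is to score both checkpoints zero-shot on a held-out slice of your series and keep the one with lower error. Below is a minimal sketch of that comparison, assuming the `tsfm_public` toolkit from the granite-tsfm repository; the batch shapes and the MSE helper are illustrative placeholders for your own data and metric:

```python
# Minimal sketch: pick between TTM-R1 and TTM-R2 zero-shot on your own data.
# Assumes the tsfm_public toolkit (granite-tsfm); the tensors below are random
# placeholders -- substitute a held-out slice of your own series.
import torch
from tsfm_public import TinyTimeMixerForPrediction

CHECKPOINTS = [
    "ibm-granite/granite-timeseries-ttm-r1",
    "ibm-granite/granite-timeseries-ttm-r2",
]

def zero_shot_mse(model, past_values, future_values):
    """MSE of zero-shot forecasts.

    past_values:   (batch, context_length, num_channels) float tensor
    future_values: (batch, prediction_length, num_channels) float tensor
    """
    model.eval()
    with torch.no_grad():
        # prediction_outputs holds the point forecasts in this toolkit's
        # output dataclass (an assumption worth verifying for your version).
        preds = model(past_values=past_values).prediction_outputs
    return torch.mean((preds - future_values) ** 2).item()

# Placeholder tensors matching the default 512-context / 96-horizon checkpoints.
past = torch.randn(8, 512, 1)
future = torch.randn(8, 96, 1)

scores = {
    ckpt: zero_shot_mse(TinyTimeMixerForPrediction.from_pretrained(ckpt), past, future)
    for ckpt in CHECKPOINTS
}
best = min(scores, key=scores.get)
print(scores)
print(f"Better variant for this data: {best}")
```

With real data in place of the random tensors, whichever checkpoint yields the lower validation error is the one to keep (or to fine-tune further).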