Update README.md
README.md CHANGED
@@ -132,9 +132,9 @@ The below model scripts can be used for any of the above TTM models. Please upda
 TTM outperforms popular benchmarks such as TimesFM, Moirai, Chronos, Lag-Llama, Moment, GPT4TS, TimeLLM, LLMTime in zero/fewshot forecasting while reducing computational requirements significantly.
 Moreover, TTMs are lightweight and can be executed even on CPU-only machines, enhancing usability and fostering wider
 adoption in resource-constrained environments. For more details, refer to our [paper](https://arxiv.org/pdf/2401.03955.pdf).
-- TTM-B referred in the paper maps to the
-- TTM-E referred in the paper maps to the
-- TTM-A referred in the paper maps to the
+- TTM-B referred in the paper maps to the 512 context models.
+- TTM-E referred in the paper maps to the 1024 context models.
+- TTM-A referred in the paper maps to the 1536 context models.
 
 Please note that the Granite TTM models are pre-trained exclusively on datasets
 with clear commercial-use licenses that are approved by our legal team. As a result, the pre-training dataset used in this release differs slightly from the one used in the research
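As a practical note on the variant-to-context-length mapping added above, the sketch below shows how a matching checkpoint might be selected by context length. It assumes the `tsfm_public` package from this repository, and the Hugging Face model ID and revision labels used (`ibm-granite/granite-timeseries-ttm-r2`, `512-96-r2`, etc.) are illustrative assumptions; check the model card for the actual identifiers.

```python
# Minimal sketch: load a TTM checkpoint by context length (history window).
# The model ID and revision labels are assumptions for illustration only;
# verify the real names on the Hugging Face model card.
from tsfm_public.models.tinytimemixer import TinyTimeMixerForPrediction

# Variant-to-context-length mapping from the README text above:
# TTM-B -> 512, TTM-E -> 1024, TTM-A -> 1536 input time points.
CONTEXT_TO_REVISION = {
    512: "512-96-r2",    # hypothetical revision names
    1024: "1024-96-r2",
    1536: "1536-96-r2",
}

def load_ttm(context_length: int) -> TinyTimeMixerForPrediction:
    """Return the TTM variant whose context length matches the request."""
    return TinyTimeMixerForPrediction.from_pretrained(
        "ibm-granite/granite-timeseries-ttm-r2",  # assumed model ID
        revision=CONTEXT_TO_REVISION[context_length],
    )

model = load_ttm(512)  # runs on CPU-only machines, as the README notes
```

Keying the lookup on context length keeps the paper's TTM-B/E/A naming decoupled from whatever checkpoint names are published on the hub.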