vijaye12 committed on
Commit 341a8dd · verified · 1 Parent(s): 0104059

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -132,9 +132,9 @@ The below model scripts can be used for any of the above TTM models. Please upda
  TTM outperforms popular benchmarks such as TimesFM, Moirai, Chronos, Lag-Llama, Moment, GPT4TS, TimeLLM, LLMTime in zero/fewshot forecasting while reducing computational requirements significantly.
  Moreover, TTMs are lightweight and can be executed even on CPU-only machines, enhancing usability and fostering wider
  adoption in resource-constrained environments. For more details, refer to our [paper](https://arxiv.org/pdf/2401.03955.pdf).
- - TTM-B referred in the paper maps to the `512-96-r2` model.
- - TTM-E referred in the paper maps to the `1024-96-r2` model.
- - TTM-A referred in the paper maps to the `1536-96-r2` model
+ - TTM-B referred in the paper maps to the 512 context models.
+ - TTM-E referred in the paper maps to the 1024 context models.
+ - TTM-A referred in the paper maps to the 1536 context models.

  Please note that the Granite TTM models are pre-trained exclusively on datasets
  with clear commercial-use licenses that are approved by our legal team. As a result, the pre-training dataset used in this release differs slightly from the one used in the research
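
For context, the `512-96-r2`-style names in the removed lines read like per-variant revision tags keyed by context and forecast length. Below is a minimal sketch of how the paper's TTM-B/E/A names could be mapped onto such revisions; the repo id `ibm-granite/granite-timeseries-ttm-r2` and the `download_ttm` helper are assumptions for illustration, not part of this commit.

```python
# Sketch: map the paper's model names (TTM-B/E/A) to TTM context lengths and
# fetch a matching checkpoint revision. The repo id is an assumption for
# illustration; the revision naming follows the "<context>-<forecast>-r2"
# pattern seen in the removed README lines (e.g. "512-96-r2").
from huggingface_hub import snapshot_download

PAPER_NAME_TO_CONTEXT = {
    "TTM-B": 512,   # paper's TTM-B -> 512-timestep context models
    "TTM-E": 1024,  # paper's TTM-E -> 1024-timestep context models
    "TTM-A": 1536,  # paper's TTM-A -> 1536-timestep context models
}

def download_ttm(paper_name: str, forecast_len: int = 96) -> str:
    """Hypothetical helper: download the TTM weights matching a paper model name."""
    context_len = PAPER_NAME_TO_CONTEXT[paper_name]
    revision = f"{context_len}-{forecast_len}-r2"
    return snapshot_download(
        repo_id="ibm-granite/granite-timeseries-ttm-r2",  # assumed repo id
        revision=revision,
    )

if __name__ == "__main__":
    # Downloads the 512-96-r2 revision and prints the local snapshot path.
    print(download_ttm("TTM-B"))
```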