---
license: cc-by-4.0
---

# Dataset Repository

This repository includes several datasets: **Houston Crime Dataset**, **Tourism in Australia**, **Prison in Australia**, and **M5**. These datasets consist of time series data representing various metrics across different categories and groups.

## Dataset Structure

Each dataset is divided into training and prediction sets, with features such as groups, indices, and time series data. Below is a general overview of the dataset structure:

### Training Data

The training data contains time series with the following structure:

- **x_values**: List of time steps.
- **groups_idx**: Indices representing different group categories (e.g., Crime, Beat, Street, ZIP for Houston Crime).
- **groups_n**: Number of unique values in each group category.
- **groups_names**: Names corresponding to the group indices.
- **n**: Number of time series.
- **s**: Length of each time series.
- **n_series_idx**: Indices of the time series.
- **n_series**: Indices for each series.
- **g_number**: Number of group categories.
- **data**: Matrix of time series data.
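
As a quick sanity check, the relationships between these fields can be illustrated with a small mock example. Only the field names come from the description above; the group names, array values, and the `(s, n)` orientation of `data` are illustrative assumptions you should verify against your copy of the dataset:

```python
import numpy as np

# Hypothetical miniature "train" dict mirroring the documented fields
# (values are made up, not taken from the real datasets).
train = {
    "x_values": list(range(8)),                         # 8 time steps
    "groups_idx": {"State": np.array([0, 0, 1]),
                   "Gender": np.array([0, 1, 1])},
    "groups_n": {"State": 2, "Gender": 2},
    "groups_names": {"State": np.array(["NSW", "VIC"]),
                     "Gender": np.array(["F", "M"])},
    "n": 3,                                             # number of series
    "s": 8,                                             # length of each series
    "g_number": 2,                                      # number of group categories
    "data": np.random.rand(8, 3),                       # assumed s x n matrix
}

# Consistency checks implied by the field descriptions
assert train["data"].shape == (train["s"], train["n"])
assert len(train["groups_idx"]) == train["g_number"]
for g, idx in train["groups_idx"].items():
    assert len(idx) == train["n"]               # one group index per series
    assert idx.max() < train["groups_n"][g]     # indices stay in range
    print(g, "->", train["groups_names"][g][idx])
```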

### Prediction Data

The prediction data has the same structure as the training data and is used for forecasting.

**Note:** The prediction set contains the complete series, covering both the training period and the forecast horizon.
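
Because the prediction set spans the full series, one way to recover the out-of-sample portion is to slice off the last `h` steps. A minimal sketch, assuming the time steps run along the first axis of `data` (verify this against your copy; the array here is a stand-in):

```python
import numpy as np

h = 4                                   # forecast horizon (from the metadata)
full = np.arange(24).reshape(12, 2)     # stand-in for predict_data["data"]: 12 steps, 2 series

in_sample = full[:-h]                   # training period
out_of_sample = full[-h:]               # forecast horizon

print(in_sample.shape)      # (8, 2)
print(out_of_sample.shape)  # (4, 2)
```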

### Additional Metadata

- **seasonality**: Seasonality of the data.
- **h**: Forecast horizon.
- **dates**: Timestamps corresponding to the time steps.

## Example Usage

Below is an example of how to load and use the datasets with Python's `pickle` module:

```python
import pickle

def load_pickle(file_path):
    """Load a pickled dataset from disk."""
    with open(file_path, 'rb') as file:
        data = pickle.load(file)
    return data

# Paths to your datasets
m5_path = 'path/to/m5.pkl'
police_path = 'path/to/police.pkl'
prison_path = 'path/to/prison.pkl'
tourism_path = 'path/to/tourism.pkl'

m5_data = load_pickle(m5_path)
police_data = load_pickle(police_path)
prison_data = load_pickle(prison_path)
tourism_data = load_pickle(tourism_path)

# Example: Accessing specific data from the datasets
print("M5 Data:", m5_data)
print("Police Data:", police_data)
print("Prison Data:", prison_data)
print("Tourism Data:", tourism_data)

# Access the training data
train_data = prison_data["train"]

# Access the prediction data
predict_data = prison_data["predict"]

# Example: Extracting x_values and the data matrix
x_values = train_data["x_values"]
data = train_data["data"]

print(f"x_values: {x_values}")
print(f"data shape: {data.shape}")
```

### Steps to Follow

1. **Clone the Repository:**
   ```sh
   git clone https://huggingface.co/datasets/zaai-ai/hierarchical_datasets.git
   cd hierarchical_datasets
   ```
2. **Update the File Paths:**
   - Ensure the paths to the `.pkl` files in your Python script are correct.
3. **Load the Datasets:**
   - Use the `pickle` module in Python to load the `.pkl` files.
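
Once a dataset is loaded, the group indices can be used to select the series belonging to a single group value. A hedged sketch following the field names above (the group name `"Crime"`, the values, and the column-per-series layout of `data` are illustrative assumptions):

```python
import numpy as np

# Mock training dict with 4 series of length 6 (illustrative values only)
train = {
    "data": np.random.rand(6, 4),
    "groups_idx": {"Crime": np.array([0, 1, 0, 1])},
    "groups_names": {"Crime": np.array(["Theft", "Assault"])},
}

# Select all series whose "Crime" group is "Theft"
names = train["groups_names"]["Crime"]
target = int(np.where(names == "Theft")[0][0])    # index of the group value
mask = train["groups_idx"]["Crime"] == target     # boolean mask over series
theft_series = train["data"][:, mask]             # assumes columns are series

print(theft_series.shape)  # (6, 2)
```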