EpicPinkPenguin committed
Commit d2c8ed4 · verified · 1 Parent(s): 4e63083

Update README.md

Files changed (1)
  1. README.md +17 -1
README.md CHANGED
@@ -50,6 +50,22 @@ size_categories:
# Procgen Benchmark - Bigfish
This dataset contains trajectories generated by a [PPO](https://arxiv.org/abs/1707.06347) reinforcement learning agent trained on the Bigfish environment from the [Procgen Benchmark](https://openai.com/index/procgen-benchmark/). The agent has been trained for 50M steps and the final evaluation performance is `32.33`.

+ ## Dataset Usage
+
+ Regular usage:
+ ```python
+ from datasets import load_dataset
+ train_dataset = load_dataset("EpicPinkPenguin/procgen_bigfish", split="train")
+ test_dataset = load_dataset("EpicPinkPenguin/procgen_bigfish", split="test")
+ ```
+
+ Usage with PyTorch:
+ ```python
+ from datasets import load_dataset
+ train_dataset = load_dataset("EpicPinkPenguin/procgen_bigfish", split="train").with_format("torch")
+ test_dataset = load_dataset("EpicPinkPenguin/procgen_bigfish", split="test").with_format("torch")
+ ```
+
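A minimal sketch of batching the torch-formatted split, assuming the standard `torch.utils.data.DataLoader` API and that the dataset's columns collate under the default collate function:

```python
from datasets import load_dataset
from torch.utils.data import DataLoader

# Batch the torch-formatted train split; each batch is a dict of tensors
# keyed by column name.
train_dataset = load_dataset("EpicPinkPenguin/procgen_bigfish", split="train").with_format("torch")
train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)

batch = next(iter(train_loader))
```
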
## Dataset Structure
### Data Instances
  Each data instance represents a single step consisting of tuples of the form (observation, action, reward, done, truncated) = (o_t, a_t, r_{t+1}, done_{t+1}, trunc_{t+1}).
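
To make the tuple layout concrete, a minimal sketch of reading one step; the column names `observation`, `action`, `reward`, `done`, and `truncated` are assumed from the tuple description above:

```python
from datasets import load_dataset

# Inspect a single step (o_t, a_t, r_{t+1}, done_{t+1}, trunc_{t+1}).
# Column names are assumed from the tuple description and may differ.
ds = load_dataset("EpicPinkPenguin/procgen_bigfish", split="train")
step = ds[0]
print(step["action"], step["reward"], step["done"], step["truncated"])
```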
 
@@ -98,7 +114,7 @@ Each data instance represents a single step consisting of tuples of the form (ob
The dataset is divided into a `train` (90%) and `test` (10%) split

## Dataset Creation
- The dataset was created by training an RL agent with [PPO](https://arxiv.org/abs/1707.06347) for 50M steps on the Procgen Bigfish environment. The agent obtained a final performance of `32.33`. The trajectories where generated by taking the argmax action at each step, corresponding to taking the mode of the action distribtution.
+ The dataset was created by training an RL agent with [PPO](https://arxiv.org/abs/1707.06347) for 50M steps on the Procgen Bigfish environment. The agent obtained a final performance of `32.33`. The trajectories were generated by taking the argmax action at each step, corresponding to taking the mode of the action distribution.
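
A small illustration of that last point, assuming a hypothetical categorical policy head over Procgen's 15 discrete actions (not the author's training code): the argmax over the logits is exactly the mode of the action distribution.

```python
import torch

# Hypothetical policy output for one observation; Procgen uses a
# 15-action discrete space.
logits = torch.randn(15)
dist = torch.distributions.Categorical(logits=logits)

sampled = dist.sample()   # stochastic action, as used during PPO training
greedy = logits.argmax()  # argmax action = mode of the action distribution
assert greedy == dist.probs.argmax()
```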

## Procgen Benchmark
  The [Procgen Benchmark](https://openai.com/index/procgen-benchmark/), released by OpenAI, consists of 16 procedurally-generated environments designed to measure how quickly reinforcement learning (RL) agents learn generalizable skills. It emphasizes experimental convenience, high diversity within and across environments, and is ideal for evaluating both sample efficiency and generalization. The benchmark allows for distinct training and test sets in each environment, making it a standard research platform for the OpenAI RL team. It aims to address the need for more diverse RL benchmarks compared to complex environments like Dota and StarCraft.