The CLRS Algorithmic Reasoning Benchmark
Learning representations of algorithms is an emerging area of machine learning, seeking to bridge concepts from neural networks with classical algorithms. The CLRS Algorithmic Reasoning Benchmark (CLRS) consolidates and extends previous work toward evaluating algorithmic reasoning by providing a suite of implementations of classical algorithms. These algorithms have been selected from the third edition of the standard Introduction to Algorithms by Cormen, Leiserson, Rivest and Stein.
Algorithms as graphs
CLRS implements the selected algorithms in an idiomatic way, aligned as closely as possible with the original CLRS 3ed pseudocode. By controlling the input data distribution to conform to the preconditions, we can automatically generate input/output pairs. We additionally provide trajectories of "hints" that expose the internal state of each algorithm, both to optionally simplify the learning challenge and to distinguish between different algorithms that solve the same overall task (e.g. sorting).
In the most generic sense, algorithms can be seen as manipulating sets of objects, along with any relations between them (which can themselves be decomposed into binary relations). Accordingly, we study all of the algorithms in this benchmark using a graph representation. Where objects obey a stricter ordered structure (e.g. arrays or rooted trees), we impose this ordering through the inclusion of predecessor links.
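As a concrete illustration, the following is a minimal sketch in plain NumPy of how an array's ordering might be encoded as predecessor links. It is illustrative only, and not the benchmark's internal representation.

import numpy as np

def array_as_graph(values):
  # One node per array element; the node features hold the values.
  node_features = np.asarray(values, dtype=float)
  # Predecessor links impose the left-to-right ordering: node i points
  # to node i - 1, and the first node points to itself.
  pred = np.maximum(np.arange(len(values)) - 1, 0)
  return node_features, pred

feats, pred = array_as_graph([5, 2, 9, 1])
print(pred)  # -> [0 0 1 2]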
How it works
For each algorithm, we provide a canonical set of train, eval and test trajectories for benchmarking out-of-distribution generalization.
        Trajectories       Problem Size
Train   1000               16
Eval    32 x multiplier    16
Test    32 x multiplier    64
Here, "problem size" refers to e.g. the length of an array or number of nodes in a graph, depending on the algorithm. "multiplier" is an algorithm-specific factor that increases the number of available eval and test trajectories to compensate for paucity of evaluation signals. "multiplier" is 1 for all algorithms except:
Maximum subarray (Kadane), for which "multiplier" is 32.
Quick select, minimum, binary search, string matchers (both naive and KMP), and segment intersection, for which "multiplier" is 64.
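To make the sizing rule concrete, here is a small sketch of how the eval/test trajectory counts combine with the multiplier. The dictionary keys are illustrative guesses at algorithm names, not the package's canonical identifiers; the authoritative values live inside the clrs package.

# Illustrative only: keys are assumed names, not canonical identifiers.
MULTIPLIERS = {
    'find_maximum_subarray_kadane': 32,
    'quickselect': 64,
    'minimum': 64,
    'binary_search': 64,
    'naive_string_matcher': 64,
    'kmp_matcher': 64,
    'segments_intersect': 64,
}

def num_eval_or_test_trajectories(algorithm):
  # 32 base trajectories, scaled by the algorithm-specific multiplier
  # (1 for every algorithm not listed above).
  return 32 * MULTIPLIERS.get(algorithm, 1)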
The trajectories can be used like so:
import clrs

train_ds, num_samples, spec = clrs.create_dataset(
    folder='/tmp/CLRS30', algorithm='bfs',
    split='train', batch_size=32)

for i, feedback in enumerate(train_ds.as_numpy_iterator()):
  if i == 0:
    model.init(feedback.features, initial_seed)
  loss = model.feedback(rng_key, feedback)
Here, feedback is a namedtuple with the following structure:
Feedback = collections.namedtuple('Feedback', ['features', 'outputs'])
Features = collections.namedtuple('Features', ['inputs', 'hints', 'lengths'])
where the content of Features can be used for training and outputs is reserved for evaluation. Each field of the tuple is an ndarray with a leading batch dimension. Because hints are provided for the full algorithm trajectory, these contain an additional time dimension padded up to the maximum length max(T) of any trajectory within the dataset. The lengths field specifies the true length t <= max(T) for each trajectory, which can be used e.g. for loss masking.
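For example, below is a minimal NumPy sketch of such loss masking. The function name and the [batch, max_T] per-step loss layout are assumptions for illustration, not part of the dataset API.

import numpy as np

def masked_hint_loss(per_step_loss, lengths):
  # per_step_loss: [batch, max_T] array of per-step losses.
  # lengths: [batch] array of true trajectory lengths t <= max_T.
  batch, max_t = per_step_loss.shape
  # valid[b, t] is True for the first lengths[b] steps of trajectory b.
  valid = np.arange(max_t)[None, :] < lengths[:, None]
  # Average the loss over valid steps only, ignoring the padding.
  return (per_step_loss * valid).sum() / valid.sum()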