# GRAPH-AWARE ISOMORPHIC ATTENTION FOR ADAPTIVE DYNAMICS IN TRANSFORMERS
A PREPRINT
Markus J. Buehler*, Laboratory for Atomistic and Molecular Mechanics (LAMM), MIT, Cambridge, MA 02139, USA, [email protected] (*Corresponding author)
January 9, 2025
### ABSTRACT
We present an approach to modifying Transformer architectures by integrating graph-aware relational reasoning into the attention mechanism, merging concepts from graph neural networks and language modeling. Building on the inherent connection between attention and graph theory, we reformulate the Transformer's attention mechanism as a graph operation and propose Graph-Aware Isomorphic Attention. This method leverages advanced graph modeling strategies, including Graph Isomorphism Networks (GIN), to enrich the representation of relational structures. Our approach improves the model's ability to capture complex dependencies and generalize across tasks, as evidenced by a reduced generalization gap and improved learning performance. We expand the concept of graph-aware attention to introduce Sparse-GIN-Attention, a fine-tuning approach that enhances the adaptability of pre-trained foundational models with minimal computational overhead, endowing them with graph-aware capabilities. We show that the Sparse-GIN-Attention framework leverages compositional principles from category theory to align relational reasoning with sparsified graph structures, while modeling hierarchical representation learning that bridges local interactions and global task objectives across diverse domains. Our results demonstrate that graph-aware attention mechanisms outperform traditional attention in both training efficiency and validation performance. These insights bridge graph theory and Transformer architectures and uncover latent graph-like structures within traditional attention mechanisms, offering a new lens through which Transformers can be optimized. By evolving Transformers as hierarchical GIN models, we reveal their implicit capacity for graph-level relational reasoning with profound implications for foundational model development and applications in bioinformatics, materials science, language modeling, and beyond, setting the stage for interpretable and generalizable modeling strategies.
Keywords Transformer · Graph Theory · Networks · Graph Neural Networks · Category Theory · Hierarchical Systems · Language Modeling · Artificial Intelligence · Science · Engineering · Materiomics
### 1 Introduction
The evolution of attention mechanisms in neural networks has significantly influenced the field of artificial intelligence and machine learning, from early work [1, 2, 3, 4, 5] to more recent developments [6, 7, 8, 9, 10, 11, 12, 13, 14]. Originally vaguely inspired by cognitive processes in humans, attention mechanisms have become integral to modern neural network architectures like the Transformer [5]. These mechanisms dynamically allocate computational resources to the most relevant parts of input data, optimizing the processing of information and reducing computational redundancy.
Decode-only Transformers [6], designed specifically for autoregressive tasks, have emerged as an efficient subclass of the Transformer architecture, focusing on multimodal tasks like text generation, image modeling, audio modeling,
language modeling, and sequential data prediction, among many other modalities (Figure 1). These architectures utilize an embedding layer to convert discrete tokens (which can represent diverse types of data, like text, chemical structures, images/pixels, symbols, and others) into dense vector representations, enabling the model to process flexible inputs. The core of the architecture is the self-attention mechanism, which operates causally to ensure that each token attends only to its past context, maintaining the autoregressive property essential for tasks like generative AI. Multi-head self-attention enhances the model's ability to capture diverse relationships between tokens by allowing parallel attention computations.
In addition to attention, decode-only Transformers such as those used in LLama foundation models [10] integrate feedforward layers (FF), usually implemented via a multi-layer perceptron (MLP), following the attention mechanism. These layers expand and transform the attention outputs, introducing non-linearity and enabling the model to learn complex patterns. The architecture also employs several residual connections and layer normalization, ensuring stability during training and facilitating the flow of gradients. Positional encodings are incorporated to inject sequence order information into the embeddings [5, 15], addressing the lack of inherent order in self-attention computations. Finally, the autoregressive nature of these models ensures that outputs are generated token-by-token, leveraging causal masking to prevent access to future tokens during training and inference. Together, these components create a robust and scalable framework for generative tasks.
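To make the components above concrete, the following minimal PyTorch sketch assembles one decoder block with causal multi-head self-attention, a feed-forward MLP, residual connections, and normalization; module choices and dimensions are illustrative assumptions, not the exact configuration used later in this paper.

```python
import torch
import torch.nn as nn


class DecoderBlock(nn.Module):
    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(                      # feed-forward MLP after attention
            nn.Linear(d_model, 4 * d_model), nn.SiLU(), nn.Linear(4 * d_model, d_model)
        )

    def forward(self, x):                             # x: (batch, seq, d_model)
        T = x.size(1)
        # Causal mask: True marks future positions a token may not attend to.
        mask = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h, attn_mask=mask)
        x = x + attn_out                              # residual around attention
        x = x + self.ff(self.norm2(x))                # residual around the MLP
        return x


print(DecoderBlock()(torch.randn(2, 10, 64)).shape)   # torch.Size([2, 10, 64])
```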
Graph Neural Networks (GNNs) [16, 17, 18, 19] represent another influential class of models that have effectively incorporated attention mechanisms, primarily to enhance their capacity for learning on graph-structured data. Attention in GNNs has been extensively utilized to assign varying levels of importance to nodes, edges, or subgraphs, enabling models to focus on the most relevant components for a given task [18, 19]. However, these approaches are predominantly designed for known or static graph structures, where the relationships and topology remain fixed throughout the reasoning process. These methods often lack the flexibility to dynamically adapt and reason over evolving relationships or integrate seamlessly into general-purpose Transformer architectures. Our work attempts to explore possibilities at this intersection.
### 1.1 Invoking graph theory and category theory
Category theory [20, 21], with its abstract structures and relationships, provides a powerful lens through which we can understand the generalization capabilities of Transformers. By interpreting Transformer models as functors that map input representations to output representations, we can appreciate how these models capture abstract relationships and generalization of concepts in disparate fields. Some recent work has demonstrated universal feature activation in large models, providing evidence for such features in existing pre-trained models [22, 23, 24, 25]. In particular, functors preserve structural consistency while allowing transformations across different categories, akin to how Transformers maintain contextual relevance while adapting to varied tasks and domains. This perspective sheds light on the inherent modularity of Transformers, where layers and substructures can be seen as morphisms within a category, facilitating compositional reasoning and hierarchical abstraction. Here, the attention mechanism itself can be viewed as a mapping that defines and strengthens relationships within a category, dynamically adjusting based on contextual importance, which can be interpreted as a graph as shown in Figure 2. The integration of such graph-based modeling enhances this mapping by incorporating relational structures explicitly, allowing for richer representations that align with categorical transformations and hence better generalizability and predictive power, especially for scientific applications. In the context of the attention mechanism depicted in Figure 1, the interpretation of attention as a graph-forming operator produces a causal adjacency matrix A ∈ R^{N_seq×N_seq} at each layer in the model. The core graph operation in the attention mechanism is, however, linear in nature.
The importance of isomorphic mappings as a framework for scientific generalization can be exemplified in an example of a study of biomaterials, such as spider silk, and its relation to music, drawing parallels in hierarchical structure and transformation across domains [26, 27, 28, 29, 30]. Spider silk's remarkable mechanical properties arise from its hierarchical organization, spanning molecular arrangements to macroscopic fibers. In category-theoretic terms, the molecular and macroscopic structures can be viewed as objects within distinct categories, with the Transformer acting as a functor that maps molecular-level features (e.g., amino acid sequences) to macroscopic properties (e.g., tensile strength or elasticity) while preserving key relationships. Similarly, music composition involves hierarchical structures, where sequences of notes form motifs, motifs form phrases, and phrases form complete compositions. Here, a Transformer functor can map abstract patterns in one domain, such as the hierarchical structures of biomaterials, to analogous patterns in another, such as the rhythmic or melodic structures in music. The attention mechanism, by dynamically constructing adjacency matrices, captures critical intra- and inter-level relationships, whether between molecular components in silk or between notes in a melody. Integrating graph-based neural networks into this framework enhances the mapping by explicitly propagating relational structures, enabling the discovery of universal principles underlying
Figure 1: Decoder-only transformer architecture (panel A), adapted here by using a GNN-based self-attention mechanism with a graph neural network (Figure 3 shows how GNN-Attention is constructed for the specific case of GIN-Attention). Thereby Q and K values are used to construct a per-head adjacency matrix, which is then used to define a causal graph. Whereas in standard transformer models the multiplication with V corresponds to a summation aggregation via a single linear layer, in GNN-Attention we conduct more complex graph operations, including the designation of a GIN and PNA variant. As another variant (panel B) suitable for fine-tuning a pre-trained model akin to a LoRA model, we introduce another option where we retain the adjacency matrix predicted by the pretrained model but instead use it to construct a sparse adjacency matrix. A Sparse GIN is defined based on this, and the signal from the original attention mechanism and the GIN output are added, where the GIN signal is scaled by a trainable scale parameter. In this variant, the pre-trained Transformer architecture is kept intact except for the addition of the Sparse GIN block.
Figure 2: Visualization of adjacency matrices and interpretation of corresponding causal graphs. Panel A: Visual representation of an adjacency matrix for one specific layer and one head, extracted from a pretrained model. Panel B, left shows a large-scale adjacency matrix, where interaction strengths are color-coded, with annotations highlighting specific points of interest. Panel B, right displays the corresponding causal graph, illustrating directional relationships between nodes based on the adjacency matrix. These visualizations provide insights into the structural and causal relationships encoded in the adjacency matrices.
both biomaterials and music. This perspective highlights the power of category theory in unifying seemingly disparate domains through abstract structural consistency and transformation.
In this interpretation, graph theory bridges these categorical insights into practical implementations. Attention mechanisms can be seen as dynamically evolving adjacency matrices, where learned relationships between input elements correspond to edge weights in a graph (Figure 2). Once the adjacency matrix A is constructed, it is used to perform a weighted sum over the value matrix V ∈ R^{N_seq×d_v}, where each row of V represents the value vector corresponding to a token. Leveraging this insight, one can hypothesize that enhancing these adjacency matrices with powerful Graph Neural Networks (GNNs), such as Graph Isomorphism Networks (GIN) [31] or Principal Neighborhood Aggregation (PNA) models [32], can result in improved learning, abstraction and generalization capabilities. By strengthening the ability to capture graph isomorphism properties, GNNs can align relational abstractions within the Transformer framework, further solidifying contextual relationships through the lens of isomorphic relational discoveries. This approach hypothesizes that enhancing attention's capacity to capture and manipulate structural relationships will enable models to generalize better across domains and tasks. That is, unlike traditional graph-attention methods, which typically focus on local neighborhoods or hierarchical scales (and known graph structures), our approach integrates a fine-tuning strategy using sparse graph neural networks. By interpreting dynamically generated attention matrices as adjacency graphs and enabling their dynamic adjustment during training, we achieve a versatile framework that bridges sequential and relational data. Furthermore, our theoretical framing of Transformers as implicitly graph-based models offers novel insights into their generalization capabilities, paving the way for more interpretable and adaptable architectures that extend beyond the constraints of existing graph-attention models.
This perspective not only deepens our theoretical understanding of Transformers but also suggests pathways for structural innovation in how this class of neural networks can be adapted for scientific applications. For example, leveraging categorical constructs such as natural transformations could inform the design of more robust attention mechanisms, enabling seamless adaptation across increasingly complex data landscapes. The core idea behind this paper is to use concepts inspired by category theory with a practical implementation to synthesize an improved framework to train and fine-tune Transformer models. Within this scope, this work focuses on architectural details tested on a variety of sample training sets, incorporating graph-based relational structures to enhance the generalization and abstraction capabilities of the Transformer architecture.
### 1.2 Outline
We explore the theoretical foundations of attention mechanisms and propose a series of alterations to the conventional architecture. We interpret the Transformer's attention mechanism as a graph-generating operator and enhance its structure by introducing more complex graph neural networks such as the Graph Isomorphism Network (GIN). Additionally, we develop a novel fine-tuning technique that can serve as a replacement or complement to LoRA with graph neural networks, interpreting the adjacency matrix predicted by conventional attention as an input to a graph neural network. This approach enhances the model's ability to learn from both sequential data and structured graph data. Through these enhancements, our aim is to provide a robust framework for leveraging graph structures in transformer-based models, offering valuable insights into enhancing attention mechanisms.
### 2 Results and Discussion
#### 2.1 Theoretical foundations: Graph-aware attention
We first review details of the conventional attention mechanism and then describe our revised approach to introduce graph-aware or GNN-Attention in various flavors.
#### 2.1.1 Interpreting Attention as a linear GNN
The attention mechanism can be interpreted as a form of message passing on a fully connected graph, where each token in the input sequence corresponds to a node, and the learned attention scores define the weighted edges. Mathematically, the adjacency matrix A ∈ R^{N_seq×N_seq} is computed using scaled dot-product attention:
$$A_{ij}=\text{softmax}\left(\frac{Q_{i}\cdot K_{j}^{\top}}{\sqrt{d_{k}}}\right),\tag{1}$$
where Q, K ∈ R^{N_seq×d_k} are the query and key matrices, derived from the input embeddings X as Q = XW_Q and K = XW_K, with W_Q, W_K ∈ R^{d_model×d_k} being learnable projection matrices. The softmax ensures that the rows of A sum to 1, effectively normalizing the attention scores into a probability distribution. The value matrix V ∈ R^{N_seq×d_v} is then aggregated using A:
$$Y=A V\,,$$
where Y ∈ R^{N_seq×d_v} represents the output of the attention layer. This aggregation via V can be viewed as a graph neural network operation, where A_ij serves as the weight of the edge from node i to node j, and V_j is the feature vector of node j. The output Y_i for each node i is a weighted sum of its neighbors' features, aligned with the aggregation strategy used in graph neural networks.
In the context of multi-head attention, multiple attention heads independently compute adjacency matrices A_h and corresponding outputs Y_h = A_h V_h, where h ∈ {1, ..., H} denotes the attention head and V_h = XW_{V,h}. That is, each attention head computes an independent adjacency matrix A_h and performs its aggregation independently. The resulting embeddings are concatenated across heads and linearly transformed to produce the final output, allowing each head to capture diverse relational patterns while maintaining computational efficiency.
The outputs from all heads are then concatenated and linearly transformed over all heads:
$$Y_{\mathrm{multi-head}}=\left(\bigoplus_{i=1}^{H}Y_{i}\right)W_{O},$$
with W_O ∈ R^{(H·d_v)×d_model} as the output projection matrix. The per-head mechanism allows the model to attend to different representations of the input data simultaneously. Specifically, this interpretation of attention as a graph operation highlights how the learned adjacency matrix A dynamically adjusts to capture task-specific relationships. By incorporating multi-head attention, the mechanism effectively performs diverse message-passing computations, enriching the representation of node (token) features.
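The following short PyTorch sketch illustrates this graph reading of Eq. (1) for a single head: the causally masked softmax scores form an adjacency matrix A whose rows sum to one, and Y = AV is a weighted aggregation of predecessor value vectors. All shapes and weights here are illustrative.

```python
import math
import torch

N_seq, d_model, d_k = 6, 16, 8
X = torch.randn(N_seq, d_model)                        # token embeddings (one per node)
W_Q, W_K, W_V = (torch.randn(d_model, d_k) for _ in range(3))

Q, K, V = X @ W_Q, X @ W_K, X @ W_V
scores = Q @ K.T / math.sqrt(d_k)
# Causal mask: token i may only attend to tokens j <= i.
causal = torch.triu(torch.ones(N_seq, N_seq, dtype=torch.bool), diagonal=1)
A = torch.softmax(scores.masked_fill(causal, float("-inf")), dim=-1)  # rows sum to 1
Y = A @ V                                              # graph aggregation over value features
print(A.shape, Y.shape)                                # (N_seq, N_seq), (N_seq, d_k)
```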
#### 2.1.2 Expressive graph attention through a Graph Isomorphism Network (GIN): GIN-Attention
We now build on this concept and propose an enhanced attention mechanism that expands the traditional scaled dot-product attention by replacing the linear aggregation of value vectors V with a Graph Isomorphism Network (GIN)-based process. Additionally, this mechanism introduces a sharpening parameter to the attention computation and allows for the use of alternative activation functions beyond Softmax. The detailed steps of the algorithm are as follows.
In its most basic form, the adjacency matrix A is computed using a modified Softmax function, which incorporates a learnable sharpening parameter α (we can view the sharpening parameter as a dial on a microscope to focus in on key relationships):
$$A_{ij}=\text{Softmax}\left(\frac{\alpha\cdot Q_{i}\cdot K_{j}^{\top}}{\sqrt{d_{k}}}\right),\tag{2}$$
where:
$$Q=X W_{Q},\quad K=X W_{K},$$
and W_Q, W_K ∈ R^{d_model×d_k} are learnable projection matrices. The sharpening parameter α dynamically adjusts the focus of the attention distribution. A higher α leads to sharper attention, while a lower α produces a smoother distribution. Alternatively, the Softmax operation can be replaced with other activations, such as element-wise sigmoids:
$$\mathrm{Sigmoid:}\ A_{i j}=\sigma\left(\beta\cdot(Q_{i}\cdot K_{j}^{\top}-\tau)\right),$$
or the Sharp Softplus function:
Sharp Softplus: $A_{ij}=\dfrac{\log\left(1+e^{\beta\cdot\left(Q_i\cdot K_j^\top-\tau\right)}\right)}{\beta}$,
where β controls the sharpness and τ represents a learnable threshold.
The adjacency matrix A can be further refined based on various modes, such as applying thresholding, sparsification (e.g., top-k), or additional activations. These transformations adapt A to better represent the graph structure before aggregation. For instance:
$$A^{\prime}=\mathrm{ReLU}(A-\tau),\quad A^{\prime}=\mathrm{top-}k(A,k).$$
Causality is enforced by applying a lower triangular mask to ensure that each node only attends to itself and its predecessors.
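A compact sketch of these refinements follows: a sharpening factor α inside the Softmax, followed by optional thresholding and top-k sparsification, with causality enforced by a lower triangular mask. The parameter values (α, τ, k) are illustrative assumptions.

```python
import math
import torch

def refine_adjacency(Q, K, alpha=2.0, tau=0.05, top_k=None):
    """Sharpened, causally masked Softmax scores with optional threshold / top-k pruning."""
    N, d_k = Q.shape[0], Q.shape[-1]
    scores = alpha * (Q @ K.T) / math.sqrt(d_k)               # sharpened logits
    causal = torch.triu(torch.ones(N, N, dtype=torch.bool), diagonal=1)
    A = torch.softmax(scores.masked_fill(causal, float("-inf")), dim=-1)
    A = torch.relu(A - tau)                                   # A' = ReLU(A - tau)
    if top_k is not None:                                     # A' = top-k(A, k)
        kth = A.topk(top_k, dim=-1).values[..., -1:]          # k-th largest value per row
        A = A * (A >= kth)
    return A

Q, K = torch.randn(6, 8), torch.randn(6, 8)
print(refine_adjacency(Q, K, top_k=3).shape)                  # torch.Size([6, 6])
```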
Once the adjacency matrix A' is computed, the GIN aggregation is applied. The updated node embeddings X' are computed as:
$$X^{\prime}=\mathbf{MLP}\left(\epsilon\cdot X+A^{\prime}X\right),$$
where ε is a learnable parameter that scales the contribution of the current node embeddings X, and A'X aggregates information from the neighboring nodes.
The original GIN formulation [31] updates node embeddings as:
$$X^{\prime}=\mathbf{MLP}\left((1+\epsilon)\cdot X+A X\right),$$
where 1 · X explicitly includes the self-loop contribution, and ε is a learnable parameter controlling its importance. Unlike traditional GIN formulations, the term 1 · X is omitted here because the skip connection around the attention block inherently captures this self-loop (see, e.g., Figure 1). The MLP consists of two linear transformations, a non-linear activation (e.g., SiLU), and optional dropout and normalization layers (details below).
In the multi-head setting used here, each head h computes its own adjacency matrix A'_h and performs GIN-based aggregation independently:
$$X_{h}^{\prime}=\mathrm{MLP}_{h}\left(\epsilon_{h}\cdot X_{h}+A_{h}^{\prime}X_{h}\right).$$
Figure 3: Construction of the GIN-Attention mechanism. The flowchart shows how the hidden states in each layer of the transformer are used via self-attention to construct the attention matrix. The output is processed further before aggregation and GIN-MLP application. The alternative PNA processing discussed in the paper is done in a conceptually similar way, except that we use query, key and value projections followed by developing up to four distinct aggregations that are concatenated and then projected back into the hidden dimension via an MLP.
The outputs from all heads are concatenated and ultimately, linearly transformed:
$$Y=\mathrm{Concat}(X_{1}^{\prime},X_{2}^{\prime},\ldots,X_{H}^{\prime})W_{O},$$
where W_O ∈ R^{(H·d_k)×d_model} is an optional learnable output projection matrix (if not used, the aggregated output is used without further transformation). This design allows each head to focus on different relational patterns within the input, enhancing the overall expressiveness of the attention mechanism.
The GIN-aggregated outputs X' are combined with the residual connection and normalized using RMSNorm.
The overall framework is visualized schematically in Figure 3. It systematically integrates graph-based modeling into the attention mechanism, enabling the Transformer to capture richer relational structures and achieve better generalization.
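The sketch below assembles one GIN-Attention layer along these lines: per-head sharpened, causally masked adjacency matrices A'_h, a per-head GIN update MLP_h(ε_h X_h + A'_h X_h), concatenation, and an optional output projection. Whether the per-head features X_h are the value projections or head-split hidden states is an implementation detail; this sketch assumes value projections, and the simplified per-head MLPs stand in for the fuller design described below.

```python
import math
import torch
import torch.nn as nn


class GINAttention(nn.Module):
    def __init__(self, d_model=32, n_heads=4, mlp_mult=0.5):
        super().__init__()
        self.h, self.d_k = n_heads, d_model // n_heads
        self.W_q = nn.Linear(d_model, d_model, bias=False)
        self.W_k = nn.Linear(d_model, d_model, bias=False)
        self.W_v = nn.Linear(d_model, d_model, bias=False)
        self.alpha = nn.Parameter(torch.ones(n_heads))    # per-head sharpening parameters
        self.eps = nn.Parameter(torch.zeros(n_heads))     # per-head GIN self-loop scales
        d_h = max(1, int(mlp_mult * self.d_k))            # GIN-MLP hidden width
        self.mlps = nn.ModuleList(
            [nn.Sequential(nn.Linear(self.d_k, d_h), nn.SiLU(), nn.Linear(d_h, self.d_k))
             for _ in range(n_heads)]
        )
        self.W_o = nn.Linear(d_model, d_model, bias=False)  # optional o_proj

    def forward(self, x):                                   # x: (T, d_model)
        T = x.size(0)
        def split(z):                                       # -> (H, T, d_k)
            return z.view(T, self.h, self.d_k).transpose(0, 1)
        Q, K, V = split(self.W_q(x)), split(self.W_k(x)), split(self.W_v(x))
        scores = self.alpha[:, None, None] * (Q @ K.transpose(-2, -1)) / math.sqrt(self.d_k)
        causal = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
        A = torch.softmax(scores.masked_fill(causal, float("-inf")), dim=-1)  # per-head A'_h
        heads = [self.mlps[h](self.eps[h] * V[h] + A[h] @ V[h]) for h in range(self.h)]
        return self.W_o(torch.cat(heads, dim=-1))           # concatenate heads, then project


print(GINAttention()(torch.randn(10, 32)).shape)            # torch.Size([10, 32])
```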
Details on GIN layer design The Graph Isomorphism Network (GIN) in this work is constructed to enhance relational modeling capabilities. The GIN aggregates node features X by applying the adjacency matrix A' (defined for each head, but this detail is omitted here since we introduce the general GIN layer architecture) to perform a summation aggregation from neighboring nodes and updates the embeddings as:
$$X^{\prime}=\operatorname{MLP}(\epsilon\cdot X+A^{\prime}X),$$
where ε is a learnable parameter controlling the self-loop contribution. The MLP is designed with two linear layers and intermediate transformations, implemented as:
$\text{MLP}(z)=\text{Linear}_{2}(\text{Dropout}(\text{SiLU}(\text{Norm}(\text{Linear}_{1}(z)))))$,
where:
$\text{Linear}_{1}:\mathbb{R}^{d_{\text{in}}}\to\mathbb{R}^{d_{h}}$, $\text{Linear}_{2}:\mathbb{R}^{d_{h}}\to\mathbb{R}^{d_{\text{in}}}$,
and d_h = γ · d_in, with γ being a configurable multiplier that determines the intermediate hidden dimension of the MLP layer. This multiplier allows for control over the capacity of the MLP, enabling adjustments to the model's expressiveness based on task complexity. Norm refers to RMSNorm, which stabilizes the feature representations, while SiLU provides non-linearity. Dropout is applied after activation to regularize the model and prevent overfitting.
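A minimal sketch of this GIN MLP follows (Linear, RMSNorm, SiLU, Dropout, Linear); it relies on nn.RMSNorm from recent PyTorch releases, and the multiplier and dropout rate are illustrative assumptions.

```python
import torch
import torch.nn as nn


class GINMLP(nn.Module):
    """Linear -> RMSNorm -> SiLU -> Dropout -> Linear, with hidden width gamma * d_in."""

    def __init__(self, d_in, gamma=0.5, dropout=0.1):
        super().__init__()
        d_h = max(1, int(gamma * d_in))
        self.linear1 = nn.Linear(d_in, d_h)
        self.norm = nn.RMSNorm(d_h)        # requires a recent PyTorch release
        self.act = nn.SiLU()
        self.drop = nn.Dropout(dropout)
        self.linear2 = nn.Linear(d_h, d_in)

    def forward(self, z):
        return self.linear2(self.drop(self.act(self.norm(self.linear1(z)))))


print(GINMLP(d_in=16)(torch.randn(4, 16)).shape)   # torch.Size([4, 16])
```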
We note that one could, in principle, feed an external adjacency matrix to be added to the values identified by the attention mechanism. While this is not explored here, it could be the subject of further research, along with trainable methods to construct adjacency matrices that could replace or complement the attention mechanism focused on in this work.
#### 2.1.3 Extension to Principal Neighborhood Aggregation (PNA)-Attention
The attention mechanism can be extended by replacing the conventional value aggregation with more complex neighborhood aggregation strategies, inspired by Principal Neighborhood Aggregation (PNA) [32]. This approach introduces multiple aggregators, such as sum, mean, max, and variance, to capture diverse relational patterns within the input graph structure.
As before, the adjacency matrix A is first computed using scaled dot-product attention, with the option to refine it via thresholding, sparsification, or alternative activation functions. Once the adjacency matrix A is processed, the aggregators compute the characteristics of the neighborhood of each node. For example:
$$\text{Sum Aggregator: sum\_agg}=AX,$$
$$\text{Mean Aggregator: mean\_agg}=\frac{AX}{\text{deg}},$$
where deg = A1 is the degree of each node, and 1 is the all-ones vector. Additional aggregators include:
$$\text{Max Aggregator: max\_agg}_{i}=\max_{j\in\text{neighbors}(i)}X_{j},$$
$$\text{Variance Aggregator: var\_agg}=\text{mean\_of\_squares}-(\text{mean\_agg})^{2}.$$
The outputs of these aggregators are concatenated for each node to form a feature vector that combines information across multiple aggregation modes:
combined = Concat(sum_agg, mean_agg, max_agg, var_agg).
This combined feature vector is then passed through a Multi-Layer Perceptron (MLP) to produce the updated node embeddings. The MLP is defined as:
$\text{MLP}(z)=\text{Linear}_{2}(\text{SiLU}(\text{Linear}_{1}(z)))$,
where Linear_1 projects the combined features to a hidden dimension, and Linear_2 maps it back to the original dimension.
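The sketch below implements these four aggregators for a given (causal) adjacency matrix and concatenates them before the MLP; the masked-max handling and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

def pna_aggregate(A, X, mlp):
    """Sum, mean, masked-max, and variance aggregation over a weighted adjacency A."""
    deg = A.sum(dim=-1, keepdim=True).clamp(min=1e-6)       # node degrees, deg = A 1
    sum_agg = A @ X
    mean_agg = sum_agg / deg
    # Masked max over neighbors: entries with zero edge weight are ignored.
    neigh = X.unsqueeze(0).expand(A.size(0), -1, -1)        # neigh[i, j] = X[j]
    max_agg = neigh.masked_fill((A == 0).unsqueeze(-1), float("-inf")).amax(dim=1)
    max_agg = torch.where(torch.isinf(max_agg), torch.zeros_like(max_agg), max_agg)
    mean_sq = (A @ X.pow(2)) / deg
    var_agg = (mean_sq - mean_agg.pow(2)).clamp(min=0)      # variance aggregator
    combined = torch.cat([sum_agg, mean_agg, max_agg, var_agg], dim=-1)
    return mlp(combined)                                    # MLP(z) = Linear2(SiLU(Linear1(z)))

d = 8
mlp = nn.Sequential(nn.Linear(4 * d, 2 * d), nn.SiLU(), nn.Linear(2 * d, d))
A = torch.tril(torch.rand(5, 5))                            # causal (lower-triangular) adjacency
print(pna_aggregate(A, torch.randn(5, d), mlp).shape)       # torch.Size([5, 8])
```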
As in the GIN-Attention case, in the multi-head setting each head h computes its own adjacency matrix A_h and applies the aggregation independently. The outputs from all heads are concatenated and linearly transformed:
$$Y=\mathrm{Concat}(X_{1}^{\prime},X_{2}^{\prime},\ldots,X_{H}^{\prime})W_{O},$$
where W_O is the learnable output projection matrix. This approach enables each head to learn distinct relational patterns, enriching the overall representation.
To ensure causality, a lower triangular mask is applied to A, ensuring that each node only attends to itself and its predecessors. Additionally, the mechanism allows for blending the original attention output with the aggregated output using a learnable residual blending ratio:
$$Y=(1-\lambda)\cdot Y_{\mathrm{original}}+\lambda\cdot Y_{\mathrm{aggregated}},$$
where λ ∈ [0, 1] is a learnable blending parameter.
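A minimal sketch of this blending step follows; storing λ as an unconstrained parameter and squashing it with a sigmoid to keep it in [0, 1] is an implementation assumption, not specified in the text.

```python
import torch
import torch.nn as nn

blend_logit = nn.Parameter(torch.zeros(1))       # unconstrained trainable parameter
lam = torch.sigmoid(blend_logit)                 # lambda constrained to [0, 1]
Y_original = torch.randn(6, 16)                  # output of the standard attention path
Y_aggregated = torch.randn(6, 16)                # output of the PNA aggregation path
Y = (1 - lam) * Y_original + lam * Y_aggregated  # learnable residual blending
print(Y.shape)                                   # torch.Size([6, 16])
```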
This extended framework systematically integrates multiple neighborhood aggregation strategies, enhancing the flexibility and expressiveness of the attention mechanism while maintaining computational efficiency.
As before, the adjacency matrix A can undergo flexible processing to adapt to the desired relational structure. If conventional softmax normalization is not applied, alternatives such as ReLU can be used directly on the attention scores,
enabling sparsity while preserving non-negative weights. Further, the algorithm allows for advanced transformations, including thresholding, sharp softplus activation, sigmoid-based scaling, or top-k sparsification. These options allow the adjacency matrix to dynamically adapt to various tasks while retaining causality through the application of a lower triangular mask.
As before we could in principle feed an external adjacency matrix to be added to the values identified by the attention mechanisms.
### 2.2 Experimental results: Graph-aware attention
We present several case studies to test the performance of GIN-Attention and PNA-Attention versus standard linear attention. For all tests we use a consistent model size of 25M parameters, the same training data, and the same training hyperparameters.
Figure 4 shows the performance of the regular transformer model (identified as the "Reference" model) and the GIN-Attention model. The figure shows training loss over training epochs for the regular transformer and the GIN model, along with validation perplexity over training epochs for both models. It can be seen that the GIN-Attention model clearly outperforms the standard attention model: the minimum validation loss is lower for GIN-Attention, and the model shows significantly less overfitting. The lowest validation loss is found in epoch 5 for the regular transformer model, and in epoch 8 for the GIN model.
Among the graph-aware attention Transformer models, the best performance was found for GIN-Attention with Softmax activation and the use of a trainable per-layer sharpening parameter α_i. The final distribution of the sharpening parameter across the transformer layers is shown in Figure 5, revealing an interesting pattern of how sharpening is strategically distributed across the model layers.
Figure 6 shows the minimum training loss and minimum validation perplexity across a variety of experiments for a systematic comparison. As above, the case identified as "Reference" is a regular transformer architecture. In addition to GIN-Attention, cases considered include PNA-Attention, and variations within each model architecture. We find that the best performing model with the lowest validation perplexity is GIN-Attention with Softmax and an MLP multiplier γ of 0.5, with a trainable sharpening parameter and no o_proj layer.
We also find that except for one case, all GIN model architectures perform better than the reference standard attention. None of the PNA architectures improves upon the reference case, suggesting that this architectural concept is not viable (at least for the cases explored). We present the results here for comparison and to leave this direction of research for further investigation.
Next, Figure 7 depicts the generalization gap and the ratio of the lowest training loss to the lowest validation loss for select cases that perform well overall. We find that the reference model shows the highest generalization gap, indicating overfitting. Models using GIN-Attention with Softmax and varying MLP multipliers demonstrate reduced generalization gaps, with the GIN configuration using a multiplier γ of 0.5 and sharpening, without o_proj, achieving the best performance, consistent with earlier results. We note that GIN-Attention with SharpSoftplus activation and a fixed threshold also exhibits improved generalization compared to the reference, but falls short compared to the other cases. This comparison highlights the effect of architectural choices on model generalization and gives us insights into which specific approach works best.
Looking at the results, we observe an important effect of the GIN MLP multiplier ratio γ. To further analyze this behavior systematically, Figure 8 shows the minimum validation perplexity as a function of γ. We find that the relationship between γ and validation perplexity for various configurations follows a trend, indicating an increase in validation perplexity as the GIN MLP multiplier ratio grows. Configurations with lower MLP ratios (specifically, MLP_mult=0.5) exhibit better validation perplexity, suggesting a trade-off between multiplier ratio and generalization. We suspect that this choice has a regularization effect on the training performance.
The results demonstrate a clear advantage of using GIN-Attention as a strategy to revise the conventional attention mechanisms in pretraining strategies, here demonstrated for protein modeling tasks.
### 2.3 Sparse graph-aware attention as a fine-tuning strategy
The earlier discussion showed how expanding linear attention to include more expressive graph operations such as GIN-Attention can increase performance. However, training Transformer models from scratch can be a time-consuming and expensive process. Here we explore how the general concept can be used to develop a fine-tuning strategy that allows us to use a pre-trained transformer model and adapt it to a particular set of training data.
Figure 4: Training and validation performance of the regular transformer model (identified as the "Reference" model) and the GIN model. A, Training loss comparing the regular transformer and GIN model, over training epochs. B, Validation perplexity comparing the regular transformer and GIN model, over training epochs. C, Minimum validation loss measured across all epochs. The minimum validation loss is found in epoch 5 for the regular transformer model, and in epoch 8 for the GIN model.
In this method, a Graph Neural Network (GNN) is integrated into the Transformer architecture immediately after the standard self-attention mechanism, enhancing the model's ability to capture relational structures within the data, but geared towards a fine-tuning approach similar to LoRA that adds shallow layers to a model [33, 34, 35]. The adjacency matrix A is derived from the attention weights computed during the standard self-attention step. However, it is refined based on specific configurations, such as thresholding to achieve sparsification and yield efficient computational strategies.
We experimented with a variety of aggregation strategies and found summation mechanisms to work well overall, while being easy to implement. First, we sum all per-head adjacency matrices A_h, clamping values at one, providing an aggregated view of all relationships identified by the attention mechanism. The effective aggregated adjacency matrix A_agg is computed by summing the per-head adjacency matrices A_h and applying a clamping function to ensure the values remain within a specified range:
$$A_{\rm agg}=\min\left(1,\sum_{h=1}^{H}A_{h}\right),\tag{3}$$
transforming H individual adjacency matrices into a single, aggregated adjacency matrix A_agg. A sharpening function is then applied:
$$A^{\prime}=\mathrm{sigmoid}\left(\alpha(A_{\mathrm{agg}}-\tau)\right),$$
where α controls the sharpness of the sigmoid and τ is a threshold that allows for further fine-tuning of the mapping function.
Figure 5: The distribution of the sharpening parameter α_i across all layers i in the GIN model at the end of training. The sharpening parameter α_i controls the focus of the attention mechanism by scaling the logits before applying the softmax function. A value of α_i = 1.0 corresponds to the standard softmax behavior, where no additional sharpening or smoothing is applied. The variation of α_i indicates how different layers adjust their focus during training. Layers with α_i > 1.0 exhibit sharper attention distributions, focusing more strongly on specific tokens, while layers with α_i < 1.0 produce smoother attention distributions, allowing a more even consideration of all tokens. This behavior reflects the adaptive nature of the GIN model in optimizing attention mechanisms for different layers to improve overall performance. All models are constructed to have approximately the same number of parameters, 25M.
Figure 6: Minimum training loss and minimum validation perplexity measured across a variety of cases. The case identified as "Reference" is a regular transformer architecture. Cases considered include PNA attention, GIN attention, and variations within each scenario. The best performing model with the lowest validation perplexity is GIN attention with softmax and an MLP multiplier γ of 0.5, with a trainable sharpening parameter. Except for one case, all GIN model architectures perform better than the reference standard attention. None of the PNA architectures improves upon the reference case, suggesting that this architectural concept is not viable.
Figure 7: Further training dynamics analysis. Panel A: Generalization gap for selected cases that perform well overall, measured after 9 training epochs. The reference model shows the highest generalization gap, indicating overfitting. Models using GIN (Graph Isomorphism Network) with Softmax and varying MLP multipliers demonstrate reduced generalization gaps, with the GIN configuration using a multiplier γ of 0.5 and sharpening, without o_proj, achieving one of the lowest gaps. The PNA configuration with SharpSoftplus activation and a fixed threshold also exhibits improved generalization compared to the reference. This comparison highlights the effect of architectural choices on model generalization. Panel B: Ratio of lowest training loss to lowest validation loss achieved. The lowest ratio is also found for the GIN model using a multiplier γ of 0.5 and sharpening, without o_proj.
Figure 8: Minimum validation perplexity as a function of the GIN MLP multiplier ratio γ. The plot demonstrates the relationship between the GIN MLP multiplier γ and validation perplexity for various configurations: MLP_mult=0.5 with sharpening, MLP_mult=0.5, MLP_mult=1, and MLP_mult=4. The data points are fitted with a power law trend line, indicating an increase in validation perplexity as the GIN MLP multiplier ratio grows. Configurations with lower MLP ratios (e.g., MLP_mult=0.5) exhibit better validation perplexity, suggesting a trade-off between multiplier ratio and generalization.
Afterwards, discrete thresholding is applied:
$$A^{\prime}=\begin{cases}A_{i j}^{\prime},&\text{if}A_{i j}^{\prime}>\epsilon\\ 0,&\text{otherwise}\end{cases}$$
Alternatively, other sparsification strategies like top-k selection may be used. To ensure causality, a lower triangular mask is applied, restricting each node to attend only to itself and its predecessors.
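The steps above can be sketched as follows: sum the per-head attention maps and clamp at one (Eq. (3)), sharpen with a sigmoid, threshold, and apply the causal mask. The numeric values of α, τ, and the threshold are illustrative.

```python
import torch

def build_sparse_adjacency(attn, alpha=10.0, tau=0.2, eps=0.2):
    """attn: (H, T, T) per-head attention weights from the frozen model."""
    A_agg = attn.sum(dim=0).clamp(max=1.0)            # Eq. (3): sum heads, clamp at 1
    A = torch.sigmoid(alpha * (A_agg - tau))          # sigmoid sharpening
    A = torch.where(A > eps, A, torch.zeros_like(A))  # discrete thresholding
    T = A.size(-1)
    return A * torch.tril(torch.ones(T, T))           # causal (lower-triangular) mask

attn = torch.softmax(torch.randn(4, 6, 6), dim=-1)    # 4 heads, 6 tokens (illustrative)
A_sparse = build_sparse_adjacency(attn)
print((A_sparse > 0).float().mean())                  # fraction of retained edges
```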
This approach systematically combines the expressiveness of the GNN with the contextual representations learned by the attention mechanism.
The refined adjacency matrix A' is passed along with the transformer hidden states X into the GNN. The updated embeddings are computed as:
$$X^{\prime}=X+\operatorname{GNN}(X,A^{\prime})\cdot\lambda,$$
where λ is a learnable scaling parameter that modulates the contribution of the GNN output to the final embeddings. This scale ensures the GNN's impact is dynamically adjusted during training, allowing for effective blending of relational features with the transformer hidden states. We typically chose initial values for λ as a linear curve over model layers, from 0 to a finite value, e.g. 0.5 in the last layer.
After the GNN aggregation, the output X' is combined with the residual from the input hidden states. Additionally, a normalization step is applied to ensure numerical stability and consistent scaling of the features:
$$X^{\prime\prime}=\operatorname{Norm}(X^{\prime}),$$
where Norm refers to RMSNorm or another normalization technique.
The final output of the layer is the normalized, GNN-augmented hidden states:
$$Y=X^{\prime\prime}.$$
By integrating the GIN immediately after the self-attention step, this approach refines the relational representations learned by the attention mechanism before the signal is fed to the feed-forward (FF) MLP block. The explicit inclusion of λ ensures that the GNN's impact can be adaptively controlled, providing a robust and flexible mechanism for capturing complex dependencies within the input data.
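A minimal sketch of this insertion point is shown below: the attention output X is augmented by λ · GIN(X, A') and re-normalized. The GIN module, normalization choice, and initial λ are illustrative stand-ins for the fuller layer described next.

```python
import torch
import torch.nn as nn


class SparseGINAdapter(nn.Module):
    def __init__(self, d_model, lambda_init=0.25, gamma=0.5):
        super().__init__()
        d_h = max(1, int(gamma * d_model))
        self.eps = nn.Parameter(torch.zeros(1))                 # GIN self-loop scale
        self.mlp = nn.Sequential(nn.Linear(d_model, d_h), nn.ReLU(), nn.Linear(d_h, d_model))
        self.scale = nn.Parameter(torch.tensor(lambda_init))    # trainable lambda
        self.norm = nn.LayerNorm(d_model)                       # stand-in for RMSNorm

    def forward(self, X, A_sparse):                             # X: (T, d), A_sparse: (T, T)
        gin_out = self.mlp(self.eps * X + A_sparse @ X)         # sum aggregation over neighbors
        return self.norm(X + self.scale * gin_out)              # residual + scaled GIN + norm


adapter = SparseGINAdapter(d_model=16)
print(adapter(torch.randn(6, 16), torch.tril(torch.rand(6, 6))).shape)  # torch.Size([6, 16])
```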
GIN layer implementation The Sparse GIN framework implements a Graph Isomorphism Network layer to refine node representations using causal graph structures. Consistent with the reference approach, we use sum aggregation, which is both efficient and expressive in capturing neighbor information. The GIN layer updates node embeddings X' by combining the self-loop features X and aggregated neighbor features using the adjacency matrix A:
$X^{\prime}=\epsilon\cdot X+$ Aggregate($X_{\rm neighbors}$),
where ε is a learnable parameter that scales the self-loop contribution and Aggregate(X_neighbors) ≡ AX, using the sum aggregation strategy (as before, the identity term is removed due to the residual connection that already captures it).
To enhance the representation power, the aggregated features are processed through an MLP:
MLP(z) = Linear2(Activation(Norm(Linear1(z)))),
where Linear1 and Linear2 are fully connected layers, with Linear1 projecting the input to a hidden dimension controlled by a multiplier γ and Linear2 mapping it back to the output dimension; Norm applies normalization (e.g., LayerNorm), and a ReLU activation function introduces non-linearity.
We enforce causality by applying a mask to edges such that only edges (i, j) where i ≤ j within the same graph are retained:
Causal Mask: edge_index_altered = {(i, j) | i ≤ j, and batch[i] = batch[j]}.
This ensures that nodes only receive information from preceding nodes, maintaining a directed acyclic graph structure.
The final output of the GIN layer is computed by combining the transformed node features with a residual connection:
$$X_{\mathrm{residual}}=\mathrm{Linear}(X),$$
$$X_{\mathrm{final}}=\operatorname{MLP}(X^{\prime})+X_{\mathrm{residual}}.$$
This residual connection ensures smooth gradient flow and stabilizes training, particularly when the input and output dimensions differ.
The use of MLPs with normalization and activation enhances the representation learning capabilities, while residual connections ensure robust integration of the GIN layer into the overall architecture. By simplifying the aggregation strategy and emphasizing causality, the approach efficiently captures relational dependencies while adhering to computational constraints.
When used as a fine-tuning strategy, only parameters in this newly added GIN are trainable and the rest of the model is frozen per its pre-trained weights.
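In code, this freezing step can be sketched as below, where `model` stands for a pre-trained decoder-only Transformer with Sparse-GIN adapters already attached; the adapter parameter naming is an assumption for illustration.

```python
import torch.nn as nn

def freeze_except_sparse_gin(model: nn.Module, adapter_keyword: str = "sparse_gin") -> int:
    """Freeze all pre-trained weights; leave only Sparse-GIN adapter parameters trainable."""
    trainable = 0
    for name, param in model.named_parameters():
        param.requires_grad = adapter_keyword in name   # train only adapter parameters
        if param.requires_grad:
            trainable += param.numel()
    return trainable  # number of trainable (adapter) parameters
```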
Figure 9 shows the performance of LoRA fine-tuning (Figure 9A) and sparse GIN fine-tuning. In sparse GIN fine-tuning, we interpret the attention matrix computed by the pre-trained model as an adjacency matrix. Here, we sum attention matrices across all heads and clamp at 1.0, sparsify the adjacency matrix, and then use it as an input to a GIN model (for details, see Materials and Methods). Both LoRA and sparse GIN feature the same number of trainable parameters. Figure 9A shows training loss over epochs for LoRA and sparse GIN.
We find that sparse GIN demonstrates improved convergence and lower final training loss compared to LoRA, indicating improved optimization efficiency and training dynamics. Figure 9B visualizes validation perplexity over epochs for LoRA and sparse GIN. We also observe that sparse GIN achieves lower perplexity across all epochs, suggesting better generalization to unseen data. These results are in general agreement with the data obtained for GIN-Attention per the previous section, suggesting that the graph-based approach yields better generalization and less overfitting. Especially the finding that sparse GIN fine-tuning is easy to implement with few parameters, yet with significantly improved training dynamics, offers interesting future opportunities.
Another interesting aspect is that we can feed an external adjacency matrix to be added to the values identified by the attention mechanism; if this is included during training, it can provide a straightforward way to incorporate structured graph information into a pre-trained model that was not originally designed for this objective.
A key parameter in the sparse GIN model is the use of the trainable scale parameter that delineates how the results of GIN updates are merged with the original signal. Figure 10 depicts the trainable scale parameter λ over all layers in the model, plotted over all training epochs. Early in training, higher layers exhibit stronger scaling values, indicating a higher reliance on sparse GIN adjustments. As training progresses, the scaling values stabilize, demonstrating convergence in the relative importance of the sparse GIN contributions across layers. The color gradient reflects the magnitude of the scale parameter, with warmer colors (red) indicating higher values and cooler colors (blue) indicating lower values. This visualization provides insights into the adaptive behavior of λ in each layer over the course of training.
Figure 9: Performance of LoRA fine-tuning (panel A) and sparse GIN fine-tuning. In sparse GIN fine-tuning, we interpret the attention matrix computed by the pre-trained model as an adjacency matrix. Here, we sum attention matrices across all heads and clamp at 1.0, and then use it as an input to a GIN model. Only adjacency matrix values above a threshold of 0.2 are considered, introducing sparsity and computational efficiency. Both LoRA and sparse GIN feature the same number of trainable parameters. Panel A: Training loss over epochs for LoRA and sparse GIN. Sparse GIN demonstrates faster convergence and lower final training loss compared to LoRA, indicating improved optimization efficiency. Panel B: Validation perplexity over epochs for LoRA and sparse GIN. Sparse GIN achieves lower perplexity across all epochs, suggesting better generalization to unseen data.
For a more global perspective on how λ behaves during training, Figure 11 shows two analyses. Figure 11A depicts the average trainable scale parameter over training steps, revealing a rapid decline in the average scale parameter during the initial stages of training, indicating early adaptation of the sparse GIN contributions. After the initial drop, the scale stabilizes and gradually increases slightly, suggesting the model fine-tunes the integration of sparse GIN as training progresses. Figure 11B displays the trainable scale parameter for each layer after the last training epoch. The scale parameter exhibits an increasing trend from lower to higher layers, reflecting the progressively stronger reliance on sparse GIN in deeper layers of the model. We note that this layer-wise scaling suggests that deeper layers benefit more from the structural adjustments provided by sparse GIN (this observation informed our choice for the initial depth scaling of the parameter as defined above).
In another experiment we trained a Sparse-GIN model on a symbolic reasoning dataset to test how general the performance improvements are, along with a few additional variations. The results of the validation perplexity analysis for the various model configurations are presented in Figure 12. Figure 12A highlights the performance differences between GIN, GIN with fixed λ, GIN with fixed λ and a smaller GNN, and LoRA. As evident from the results, the Sparse-GIN model with learnable λ achieves the lowest validation perplexity (2.856), outperforming all other configurations. This indicates that the GIN approach effectively captures the necessary relationships in the data. The Sparse-GIN with fixed λ configuration slightly increases the perplexity, suggesting that fixing λ limits the flexibility of the model to adapt to data. Using a smaller GNN in the GIN with fixed λ configuration (GIN, fixed λ, small GNN) results in slightly worse performance. This increase implies that the smaller GNN size might reduce the model's capacity to fully capture intricate dependencies, thus leading to a minor degradation in performance. The LoRA model exhibits the highest perplexity across all cases, which may indicate that while LoRA introduces additional parameter-efficient mechanisms, it does not optimize as effectively for this specific task. The error bars in the figure represent the standard deviations across multiple runs, confirming the statistical significance of the observed differences. These findings underscore the superior performance of the GIN model in minimizing validation perplexity and highlight the trade-offs introduced by modifying λ or reducing the GNN size. Such insights are valuable for designing future architectures to balance efficiency and performance. Figure 12B shows the trainable scale parameter λ. Similar to the earlier results, the parameter is found to be smallest in earlier layers, and largest in deep layers.
### 3 Conclusion
We explored theoretical and practical adaptations of the Transformer architecture through the lens of graph-based models and category theory. By interpreting and formulating Transformers as Graph Isomorphism Networks (GIN), we established a novel perspective on their attention mechanisms, demonstrating structural equivalence between the
Figure 10: Trainable scale parameter λ over all k layers in the model, plotted over all epochs. The trainable scale parameter delineates the relative importance of the sparse GIN output as it is added to the original signal. The plot illustrates how the scale parameter evolves over both the layer index and the epoch fraction. Early in training, higher layers exhibit stronger scaling values, indicating a higher reliance on sparse GIN adjustments. As training progresses, the scaling values stabilize, suggesting convergence in the relative importance of the sparse GIN contributions across layers. The color gradient reflects the magnitude of the scale parameter, with warmer colors (red) indicating higher values and cooler colors (blue) indicating lower values. This visualization provides insights into the adaptive behavior of the trainable scale parameter over the course of training.
aggregation processes in graph neural networks and the focus mechanisms in attention layers. This equivalence provides a foundation for extending Transformers with explicit graph reasoning capabilities that manifests itself in better generalization performance.
We further explored the use of the Principal Neighborhood Aggregation (PNA) method, which augments attention by integrating multiple aggregation techniques per attention head, testing more diverse representations. While performance did not exceed that of the GIN model, this enhancement, coupled with the flexibility of graph neural networks, could offer an alternative framework for handling structured and relational data in domains such as bioinformatics, materials science, and other areas.
From a theoretical standpoint, we incorporated insights from category theory to frame Transformers as functors that map input to output representations while preserving structural relationships. This abstraction highlights their inherent capacity for compositional reasoning and contextual adaptation, bridging concepts from information theory, statistical physics, and graph theory. Additionally, the categorical perspective inspires future directions, such as exploring natural transformations for designing more robust and adaptive attention mechanisms that incorporate powerful graph modeling techniques like GINs or PNAs.
Our proposed sparse GIN fine-tuning approach can be used as an alternative or complement to LoRA [33], using sparse adjacency matrices derived from attention mechanisms as inputs to graph neural networks. This method significantly enhances the model's ability to learn from mathematical training data, demonstrating improved performance and flexibility. We find that Sparse-GIN with learnable λ achieves the lowest validation perplexity, demonstrating superior adaptability and performance, while fixed λ, smaller GNN size, and LoRA configurations show trade-offs in flexibility and capacity (Figure 12). These deeper insights provide additional evidence for the findings, especially since they hold for a different dataset.
Figure 11: Global dynamics of the trainable scale parameter λ during training and across model layers k. Panel A visualizes the average trainable scale parameter over training steps. The plot illustrates a rapid decline in the average scale parameter during the initial stages of training, indicating early adaptation of the sparse GIN contributions. After the initial drop, the scale stabilizes and gradually increases slightly, suggesting the model fine-tunes the integration of sparse GIN as training progresses. Panel B displays the trainable scale parameter for each layer at the last epoch. The scale parameter exhibits an increasing trend from lower to higher layers, reflecting the progressively stronger reliance on sparse GIN in deeper layers of the model. This layer-wise scaling suggests that deeper layers benefit more from the structural adjustments provided by sparse GIN.
The findings and methods presented in this work not only deepen the understanding of attention mechanisms but also pave the way for innovative applications of Transformers integrated with graph-based reasoning. By synthesizing theoretical rigor with practical innovation, this work contributes to the ongoing evolution of neural network architectures, advancing their potential to address increasingly complex challenges in science and technology.
#### 3.1 Transformers are secretly GIN-like graph reasoning models
The discussion in this section explains how a standard transformer architecture can be viewed as a GIN-type graph neural network. We posit that vanilla Transformer architectures (as shown in Figure 1A) are, in fact, GIN-like graph models. This interpretation motivates our modeling strategy, a "GIN-within-GIN" mechanism, which nests per-head GINs inside a global GIN, yielding a hierarchical model for graph-structured data.
A transformer layer consists of multi-head self-attention. For each head h = 1, 2, ... , H, we compute:
$$\mathbf{A}_{h}=\mathrm{softmax}\!\left({\frac{\mathbf{Q}_{h}\,\mathbf{K}_{h}^{\top}}{\sqrt{d_{k}}}}\right)\!,\quad\mathbf{O}_{h}=\mathbf{A}_{h}\,\mathbf{V}_{h},$$
where
- Q_h, K_h, V_h ∈ R^{T×d_k} are the query, key, and value matrices for head h;
- T is the (sequence or node) dimension;
- d_k is the per-head feature dimension.
These quantities are computed as follows:
$$\mathbf{Q}_{h}=\mathbf{H}\mathbf{W}_{h}^{Q},\quad\mathbf{K}_{h}=\mathbf{H}\mathbf{W}_{h}^{K},\quad\mathbf{V}_{h}=\mathbf{H}\mathbf{W}_{h}^{V},$$
where:
- H ∈ R^{T×d}: the input hidden states, where T is the sequence length (or number of nodes) and d is the hidden dimension.
- W_h^Q, W_h^K, W_h^V ∈ R^{d×d_k}: learnable weight matrices specific to head h, projecting the input hidden states into the query, key, and value spaces, respectively.
Figure 12: Validation perplexity comparison between different model configurations. Panel A: The bar plot illustrates the validation perplexity values for GIN, GIN with fixed λ, GIN with fixed λ and a smaller GNN, and LoRA. Measured values and error bars represent the standard deviation of the measured perplexity in the last training epoch. GIN achieves the lowest perplexity, while LoRA exhibits the highest perplexity. Panel B shows the trainable scale parameter ). Similar to the earlier results, the parameter is found to ultimately be smallest in earlier layers, and largest in deep layers.
- Q_h, K_h, V_h ∈ R^{T×d_k}: the query, key, and value matrices for head h, where d_k = d/H is the feature dimension for each head.
These projections transform the input into three distinct spaces:
- The query matrix Q_h captures the representation of the tokens in terms of the features they query.
- The key matrix K_h represents the features available for attention.
- The value matrix V_h provides the features that will be aggregated based on the attention weights.
Each head's output O_h ∈ R^{T×d_k} is concatenated along the feature dimension:
$$\mathbf{H}_{\mathrm{attn}}=\bigoplus_{h=1}^{H}\mathbf{O}_{h},$$
resulting in an R^{T×(H·d_k)} matrix. A shared projection reduces this to the original hidden dimension d:
$$\mathbf{H}_{\mathrm{attn}}^{(k)}=\mathbf{H}_{\mathrm{attn}}\,\mathbf{W}^{O},$$
where W^O ∈ R^{(H·d_k)×d} is learnable.
Each attention head's matrix A_h ∈ R^{T×T} serves as an adjacency matrix, specifying how each "node" (token) attends to others. This adjacency-guided aggregation is central to the connection between transformers and GINs.
In a Graph Isomorphism Network (GIN), the layer update for a node v at the k-th layer is:
$${\bf h}_{v}^{(k)}=\mathrm{MLP}^{(k)}\!\Big((1+\epsilon)\,{\bf h}_{v}^{(k-1)}\ +\ \sum_{u\,\in\,{\mathcal{N}}(v)}\ {\bf h}_{u}^{(k-1)}\Big),$$
where ε is a scalar controlling the self-contribution, and MLP is a multi-layer perceptron for feature transformation. Similarly, in a transformer, the self-attention mechanism computes:
$$\mathbf{H}_{\mathrm{attn}}^{(k)}=\mathbf{H}^{(k)}+\Big(\sum_{h=1}^{H}\mathbf{A}_{h}\,\mathbf{H}^{(k)}\mathbf{W}_{h}^{V}\Big)\mathbf{W}^{(k),O},$$
where A_h is the adjacency matrix from attention, W^{(k),O} is the output projection matrix, and the residual connection ensures self-contribution.
The resemblance becomes complete with the application of the feed-forward network, which contains an MLP:
$$\mathbf{H}^{(k)}=\text{MLP}\big(\mathbf{H}^{(k)}_{\text{attn}}\big)=\text{MLP}\Big(\mathbf{H}^{(k)}+\Big(\sum_{h=1}^{H}\mathbf{A}_{h}\,\mathbf{H}^{(k)}\mathbf{W}_{h}^{V}\Big)\mathbf{W}^{(k),O}\Big).\tag{4}$$
Here:
- The attention mechanism aggregates information from neighbors, here through a multi-headed process (like the sum over u ∈ N(v) in GINs).
- The FFN applies a non-linear transformation, analogous to the MLP in GINs.
- The residual connection corresponds to the (1 + ε) term in GINs, with ε = 0 in transformers.
Transformers hence resemble a GIN-like process, where attention handles neighbor aggregation, and the feed-forward MLP handles nonlinear feature transformation. There are, of course, notable distinctions. For instance, Transformers concatenate the outputs of multiple heads and then apply a single projection and feed-forward layer, rather than summing neighbor features and applying an MLP per node. In GINs, the MLP directly follows the aggregated representation of each node and its neighbors (scaled by (1 + ε)). Transformers, however, first mix token embeddings across multiple attention heads (via concatenation) before applying a global linear projection W^O. Only after this step does the transformer's feed-forward network come into play. These differences in adjacency construction, concatenation versus summation, and MLP placement mean that transformers and GINs, while sharing certain high-level motifs, are architecturally distinct and exhibit different inductive biases. Our GIN-within-GIN model addresses these differences and formally adds hierarchical GINs for more expressive graph processing with proper injective properties.
Our GIN-Attention architecture adds a hierarchical structure by nesting per-head GINs inside a global GIN-like algorithm. Let $\mathbf{H}^{(k)} \in \mathbb{R}^{T \times d}$ be the input node-feature matrix for the $k$-th layer. We define:
$$\mathbf{H}^{(k)}=\text{MLP}^{(k)}_{\text{global}}\Big{(}\mathbf{H}^{(k)}+\Big{(}\bigoplus_{h=1}^{H}\text{MLP}^{(k)}_{h}\Big{(}(1+\epsilon)\mathbf{H}^{(k)}+\mathbf{A}^{(k)}_{h}\,\mathbf{H}^{(k)}\Big{)}\Big{)}\,\mathbf{W}^{(k)}_{O}\Big{)}\tag{5}$$
where:
- $\text{MLP}^{(k)}_{h}(\cdot)$ is an independent GIN graph neural network for head $h$.
- $\mathbf{A}^{(k)}_{h}$ is the adjacency (attention) matrix from head $h$.
- $\mathbf{W}^{(k)}_{O}$ is an optional learnable projection matrix applied after concatenation across heads and global GIN aggregation.
- $\text{MLP}^{(k)}_{\text{global}}(\cdot)$ integrates the outputs of all per-head aggregated features to produce a global representation.
Each head specializes in capturing localized patterns using its own GIN, and the global GIN integrates these representations for the next layer.
By nesting GINs within each attention head, we obtain a more powerful hierarchical model for graph relational modeling. As our results showed, this increases performance at the same parameter count, in particular through improved generalization and reduced overfitting.
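A compact sketch of the GIN-Attention block in Eq. (5) follows: each head applies its own GIN MLP to $(1+\epsilon)\mathbf{H} + \mathbf{A}_h \mathbf{H}$, the head outputs are concatenated, optionally projected by $\mathbf{W}_O$, added residually, and passed through the global MLP. Layer sizes, activations, and module names are assumptions for illustration; the repository implementation may differ in detail.

```
# Sketch of the hierarchical GIN-Attention update of Eq. (5); illustrative only.
import torch
import torch.nn as nn

class GINAttentionBlock(nn.Module):
    def __init__(self, d: int, n_heads: int, eps: float = 0.0):
        super().__init__()
        self.dk, self.eps = d // n_heads, eps
        self.head_mlps = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, self.dk), nn.GELU(), nn.Linear(self.dk, self.dk))
             for _ in range(n_heads)])
        self.W_O = nn.Linear(n_heads * self.dk, d, bias=False)   # optional projection
        self.global_mlp = nn.Sequential(nn.Linear(d, 4 * d), nn.GELU(), nn.Linear(4 * d, d))

    def forward(self, H, A):
        # H: (T, d) node features; A: (n_heads, T, T) per-head attention adjacencies
        per_head = [mlp((1.0 + self.eps) * H + A[h] @ H)     # per-head GIN aggregation
                    for h, mlp in enumerate(self.head_mlps)]
        H_heads = torch.cat(per_head, dim=-1)                 # concatenate across heads
        return self.global_mlp(H + self.W_O(H_heads))         # residual + global GIN MLP
```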
These and other insights complement other works in which researchers have elucidated important principles and strategies implicitly or tacitly captured in the original transformer architecture, providing increasing explanation for the broad usability and impact the model architecture has been able to achieve [36, 37, 38, 39, 40, 41, 42, 43, 44]. We hope that the initial results and strategy presented here can be explored in other models, with other data sets and tasks, and tested in other settings.
### 3.2 Theoretical foundations of sparsification in GNN-Based transformers
Sparsification in GNN-based Transformers introduces a principled approach to improving scalability and interpretability while enhancing the relational reasoning capabilities of attention mechanisms. At its core, sparsification reflects a foundational shift from dense, exhaustive relational modeling to selective, task-relevant graph operations. This aligns with real-world data distributions, which are often sparse and structured (e.g., social networks, molecular graphs, dependency trees in language).
Graph-based neural networks and attention mechanisms are inherently complementary: Attention dynamically determines relevance among tokens, while GNNs encode structural relationships through adjacency matrices. Sparsification introduces an explicit relational inductive bias by filtering connections in the graph, focusing on task-relevant edges while discarding noise. This selective process reflects principles from graph sparsity theory, where retaining high-weight edges preserves the graph's expressive power [31, 32, 45].
The aggregated, sparsified matrix $\mathbf{A}'$ defined in Section 2.3 retains the most significant relationships across attention heads, aligning the computation with the graph sparsity principles observed in real-world relational data. This focus on sparse, high-weight edges not only improves computational efficiency but also introduces regularization, mitigating overfitting in high-dimensional spaces.
The integration of sparsification into multi-head attention transforms each Transformer layer into a hierarchical graph reasoning module. By aggregating information across all attention heads into a single effective adjacency matrix $\mathbf{A}'^{(k)}$ at layer $k$, the framework models complex relationships while preserving scalability. Node embeddings $\mathbf{H}^{(k)}$ are updated as:
$$H^{(k)}=\text{MLP}^{(k)}_{\text{global}}\left(H^{(k-1)}+\lambda^{(k)}\text{MLP}^{(k)}_{\text{Sparse-GIN}}\left((1+\epsilon)H^{(k-1)}+A^{\prime(k)}H^{(k-1)}\right)\right),\tag{6}$$
where $\epsilon$ controls the self-loop contribution and $\lambda^{(k)}$ is the trainable scale parameter. This hierarchical reasoning enables each layer to capture global and local dependencies while adapting to sparse relational structures.
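The following sketch shows how Eq. (6) can be realized as a lightweight adapter around a frozen Transformer layer: the sparsified adjacency $\mathbf{A}'^{(k)}$ drives a small GIN MLP whose output is scaled by the trainable $\lambda^{(k)}$ and added residually to the hidden states, while the host layer's own feed-forward MLP plays the role of $\text{MLP}^{(k)}_{\text{global}}$. Dimensions and module names are illustrative assumptions, not the repository code.

```
# Sketch of a Sparse-GIN adapter applied to frozen hidden states (Eq. 6).
import torch
import torch.nn as nn

class SparseGINAdapter(nn.Module):
    def __init__(self, d: int, d_gin: int = 155, eps: float = 0.0):
        super().__init__()
        self.eps = eps
        self.lam = nn.Parameter(torch.tensor(0.1))      # trainable scale lambda^(k)
        self.gin_mlp = nn.Sequential(nn.Linear(d, d_gin), nn.GELU(), nn.Linear(d_gin, d))

    def forward(self, H_prev, A_sparse):
        # H_prev: (T, d) hidden states from the frozen layer; A_sparse: (T, T)
        # The host layer's feed-forward MLP (not shown) acts as MLP_global.
        return H_prev + self.lam * self.gin_mlp(
            (1.0 + self.eps) * H_prev + A_sparse @ H_prev)
```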
Compositionality in Category Theory The Sparse GIN framework can be interpreted as a compositional system for relational reasoning, rooted in the principles of category theory [20, 21, 46, 47]. In this view, the Transformer is modeled as a functor $F : \mathcal{C} \rightarrow \mathcal{D}$, mapping objects and morphisms from an input category $\mathcal{C}$ (e.g., sequences or graphs) to an output category $\mathcal{D}$ (e.g., embeddings or predictions), while preserving their structural relationships. Sparsification introduces a natural transformation $\eta : F \rightarrow F'$, where $F'$ represents the sparsified counterpart of the Transformer. Formally, for each input $x \in \mathcal{C}$, the natural transformation $\eta$ satisfies:
$$\eta_{x}:F(x)\to F^{\prime}(x),$$
where $\eta_x$ modifies the adjacency structure dynamically by transitioning from dense attention to sparsified graph representations. This transition ensures that essential relational information is retained while introducing computational and structural efficiency. Furthermore, the sparsified adjacency matrix $\mathbf{A}'^{(k)}$ can be viewed as an enriched morphism in $\mathcal{C}$, capturing weighted relationships between nodes. By applying $\mathbf{A}'^{(k)} = f\big(\sum_{h=1}^{H}\mathbf{A}_{h}^{(k)}\big)$, the framework dynamically aligns relational reasoning with the task requirements, ensuring compositional consistency across layers. This perspective establishes Sparse-GIN as a modular and mathematically principled extension of Transformers for graph-based reasoning.
Information Bottleneck Theory Sparse GIN can be framed through the lens of Information Bottleneck Theory [48, 49], which posits that a model should retain only the most relevant information from the input while discarding redundant or noisy details. In this context, sparsification introduces an information bottleneck by filtering low-weight edges in the adjacency matrix $\mathbf{A}'^{(k)}$, ensuring that only task-critical relationships are preserved. Formally, the sparsified matrix $\mathbf{A}'^{(k)}$ can be seen as an intermediate representation $Z$ that minimizes the mutual information $I(X; Z)$ between the input $X$ and the sparsified representation, subject to a constraint on preserving the predictive information $I(Z; Y)$ necessary for the target output $Y$:
$$\min\; I(X;Z)\quad\text{subject to}\quad I(Z;Y)\geq\text{threshold}.$$
This trade-off may enable Sparse-GIN to reduce overfitting by discarding irrelevant features while maintaining expressive power. By modeling sparsification as an information bottleneck, the framework achieves robust generalization across diverse tasks while aligning with real-world graph sparsity observed in domains like molecular interactions, mechanistic principles in physics, or social networks.
Hierarchical Representation Learning Sparse GIN may naturally support hierarchical representation learning by aggregating information across local and global relational structures within graphs [50]. Each Transformer layer $k$ models node-level interactions using the sparsified adjacency matrix $\mathbf{A}'^{(k)}$, while progressively building higher-order abstractions through inter-layer aggregation. As defined in equation (6), the node embeddings $\mathbf{H}^{(k)}$ in each layer are updated to selectively retain the most significant relationships, ensuring task-relevant features are propagated. To capture global hierarchical dependencies, the final representation aggregates features across layers using the FF MLP. This hierarchical structure enables Sparse-GIN to model complex dependencies at multiple levels of abstraction, bridging local interactions and global task objectives. This multi-level reasoning aligns with cognitive processes and supports applications requiring deep relational insights, such as molecular property prediction and symbolic reasoning.
Summary of implications The combination of sparsification and hierarchical graph reasoning provides a framework that bridges structured and sequential data modeling, and could serve as a scalable and interpretable foundation for AI systems. These principles lay the groundwork for future advancements in sparsity-aware attention mechanisms and their applications to diverse domains such as bioinformatics, symbolic reasoning, and multi-modal learning. We find that Sparse-GIN demonstrates robust generalization across diverse fine-tuning experiments, including mathematical reasoning tasks and symbolic logic datasets, achieving strong performance across all domains.
The sparsification of adjacency matrices in GIN-based Transformers carries foundational implications for structured and sequential data modeling:
- Scalability: By reducing computational complexity, sparsification extends the applicability of Transformers to large graphs and long sequences.
- Robustness and Regularization: Sparsity acts as an implicit regularizer, improving generalization and mitigating overfitting in high-dimensional spaces.
- Alignment with Real-World Data: Many real-world graphs, such as molecular structures or dependency graphs, exhibit sparsity. By modeling these distributions explicitly, the framework naturally aligns with the underlying structure of the data.
- Expressive Power: Sparsified graph reasoning retains essential structural information while discarding redundant or noisy edges, maintaining expressivity with reduced complexity.
- Auxiliary Control with External Adjacency Matrices: The framework supports the integration of external adjacency matrices during training and inference, enabling auxiliary control over the relational structure. For example, domain-specific knowledge encoded as a predefined adjacency matrix can guide the model to focus on specific dependencies, such as known molecular interactions or pre-defined dependency structures. Future experiments could explore scenarios where external adjacency matrices are introduced only during inference to adapt the model dynamically to new contexts.
- New Foundation Model Development and Training Strategy: This approach, essentially implemented in our fine-tuning experiments, suggests a novel training paradigm: first, pretrain and even fine-tune a standard Transformer on the target task or a related dataset, and then augment it with Sparse-GIN as an additional strengthening mechanism. By retaining the pretrained Transformer's core capabilities and introducing sparsified graph reasoning in a second stage, this method combines the benefits of dense pretraining with sparse fine-tuning, enabling task-specific enhancements without retraining the entire model. As an alternative, one could also train a GIN-Attention model and then add a Sparse-GIN layer on top of it.
### 3.3 Broader perspective and outlook
A key objective of AI for science is the advancement of discovery via generalization, moving from fitting to training data towards more expressive, generalizable AI systems that can reason over never-before-seen data and problems. When considered in the context of biological problems, for instance, the concept of GIN-Attention, which integrates Graph Isomorphism Networks (GINs) into Transformer attention mechanisms, aligns seamlessly with the principles
of materiomics [51, 52, 53, 54, 55, 16], the holistic study of material properties and behaviors through their intrinsic structures and interactions. In materiomics, understanding the hierarchical and relational nature of molecular or microstructural patterns is crucial, as these relationships dictate material performance across scales. These are powerful principles in biological materials and living organisms [56, 57], and are key to complex problem solving in biological agentic systems [58].
GIN-Attention models these relationships explicitly by interpreting attention weights as graph adjacency matrices, enabling the Transformer to capture intricate dependencies inherent in materials science. For example, in studying protein folding, peptide assembly, the structure and mechanics of polymer chains, or poly-/crystalline lattices, GIN-Attention can enhance the representation of how individual components interact within a material's architecture, in distinct and varied feature representations and abstractions that emerge naturally at each head and layer (Figure 13). By leveraging the graph isomorphism property, it ensures that structurally equivalent elements yield consistent predictions, regardless of orientation or representation, a critical aspect in analyzing materials with symmetrical or repetitive motifs and generalizable principles. This approach opens avenues for more interpretable and precise modeling in materiomics, enabling the design of bioinspired or architected materials by uncovering hidden correlations between microstructure and macroscopic properties. Thus, GIN-Attention bridges advanced machine learning techniques with the foundational challenges of materials science, paving the way for transformative discoveries in the field. Another interesting direction would be to explore how GIN-Attention based models could be integrated into multi-agent systems to further their emergent capabilities and especially scientific discovery [13].
There are many other fields where such approaches can be useful, in diverse areas ranging from music (where an orchestral piece can be modeled as a complex graph of interactions between notes, instruments, themes, arrangements, and so on, at different scales and resolutions) to protein modeling, to disease, and perhaps even human creativity. We believe that achieving these integrative goals requires bridging graph theory and Transformer architectures, providing inductive biases aligned with the implicit graph-like nature of attention mechanisms, and introducing novel methods to enhance their capabilities to relate, reason, and develop functors across domains. The integration of Graph-Aware Isomorphic Attention and sparse graph neural network fine-tuning opens new pathways for improving relational reasoning and generalization in Transformers.
Looking forward, several opportunities emerge:
- 1. Multi-Modal and Cross-Domain Applications: The proposed methods are particularly promising for fields like bioinformatics, materials science, genomics, ecosystems, or systems biology, where both relational and sequential data are prevalent. Extending this approach to multi-modal datasets (e.g., combining images and text) could further enhance its versatility.
- 2. Interpretable AI: By making the relational reasoning in attention explicit, these methods can pave the way for more interpretable AI models, aiding domains where transparency is critical, such as healthcare, environmental studies, and scientific discovery.
- 3. Efficient Fine-Tuning Techniques: Sparse GNN fine-tuning, with its minimal computational footprint, offers a scalable alternative for adapting large pre-trained models to specialized tasks. Exploring its applications in low-resource settings or edge computing could be impactful, along with additional experimentation to test model performance in real-world applications like materiomic reasoning.
- 4. Advanced Graph Reasoning: Building on the theoretical insights, integrating more complex graph reasoning techniques, such as deeper message passing neural networks or hierarchical graph structures, could yield even more expressive models.
- 5. Foundational AI Research: The reinterpretation of Transformers as hierarchical GIN models may be a nucleus for deeper exploration into their theoretical underpinnings. This perspective could lead to new architectures that unify graph neural networks and Transformers into a cohesive framework.
- 6. Application to Intersections between Natural and Social Systems: Beyond technical advancements, these methods hold potential for studying natural systems (e.g., protein folding or ecological networks) and social systems (e.g., social media dynamics or economic modeling), where relational structures are paramount, and where universal patterns could be critical to achieve generalization and the extraction of new knowledge.
It may be worthwhile to explore how the focus on graphs may also help develop not only better performing models but also provide pathways for improving reasoning models that focus on inference compute scaling [59, 60, 12, 61, 62, 41, 63, 25, 42]. For instance, we could feed relations modeled by the GIN-Attention mechanisms (or Sparse-GIN) back into the reasoning process and train models with increased mechanistic awareness. This additional signal could serve as powerful cues in the development of reasoning models. A specific idea is to utilize graph representations, perhaps processed or sparsified relationships, to inform, train and evolve the reasoning process in hidden dimensions without
Figure 13: Dynamic Graph Representation Learning with GIN-Attention in Transformers This schematic illustrates the iterative process of GIN-Attention in a Transformer architecture, applied to material microstructures, here conceptually shown for a model with two layers and two heads. Starting with raw microstructural data (e.g., proteins or polymers), an initial graph representation is constructed. At each layer, multiple attention heads dynamically build and refine graph structures by updating adjacency matrices based on learned attention scores. The outputs of all heads are merged to produce updated graph representations, which are iteratively refined across layers. The final learned representation integrates structural and relational insights, enabling the model to predict material properties, uncover structure-property relationships, and design novel materials. This framework highlights the simultaneous graph construction and feature learning facilitated by GIN-Attention.
ever decoding to discrete tokens. Such self-reflection in embedding space has been explored in earlier work [34, 12] and could be a strategy to scale inference performance with relevant reward signals during fine-tuning.
By synthesizing graph-based reasoning with the proven capabilities of Transformers, we may build a new generation of models that are more flexible, interpretable, and capable of tackling complex, real-world challenges. While much further research is necessary, this work offers a new perspective on attention in Transformer architectures by integrating graph isomorphic reasoning and dynamic graph-based fine-tuning, enabling unprecedented adaptability, generalization, and interpretability for tackling complex relational data across diverse domains. The ease of adapting sparse GIN models into any existing Transformer architecture may be a particularly appealing way to explore such opportunities.
Other opportunities may exist when graph reasoning in attention as explored in this paper is combined with coarse-grained (CG) modeling, a method widely used in multiscale simulation of physical and other systems [65, 66, 51, 67]. Thereby, groups of tokens could be mapped to coarser representations and further strengthen capabilities in hierarchical reasoning. Figure 14 shows two possible scenarios. Figure 14A visualizes a method by which tokens are mapped into different levels of coarser representations, in which graph operations are then conducted. For decoding to higher-resolution space one can either find suitable methods to decode coarse tokens (in a way that respects causality) or decode tokens at the fine level during inference and task solution as proposed in [64]. Recent related work, such as Byte Latent Transformer models [68], shows that an effective coarse-grained representation can be learned at the byte level. In another approach outlined in Figure 14B, we propose scaled coarse-grained attention, which may simplify the computation of
Figure 14: Exploration of future alternative graph-based attention and Transformer models that engage heavily with coarse-grained representations. Panel A: Tokens are mapped into different levels of coarser representations, in which graph operations are then conducted. For decoding, one can either find suitable methods to decode coarse tokens (in a way that respects causality) or decode tokens at the fine level during inference and task solution as done in [64]. Panel B: Scaled coarse-grained attention, where we simplify the computation of adjacency matrices by conducting their construction in a coarse representation space. For computing graph operations, these coarse adjacency matrices are upscaled (using bilinear, nearest-neighbor, cubic, or similar interpolation algorithms) to the full-scale resolution, where they are used to conduct graph operations. The advantage of the latter approach is that it remains fully causal.
adjacency matrices for graph reasoning. The core idea is to construct adjacency matrices for graph reasoning in a learned coarse representation space, taking advantage of related or concerted interaction between tokens. For computing graph operations at the fine level, the coarse-level adjacency matrices can be upscaled (using bilinear, nearest-neighbor, cubic, or similar interpolation algorithms) to the full-scale resolution, where they are used to conduct graph operations. The advantage of the latter approach is that it easily remains fully causal since all attention computations are applied in the high-resolution space.
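A minimal sketch of this scaled coarse-grained attention idea is given below: token features are pooled into coarse tokens, an adjacency is computed among the coarse tokens, and the result is upsampled back to full resolution for use in graph aggregation. The pooling factor, the use of raw features instead of learned query/key projections, and the interpolation mode are assumptions made for brevity; causal masking is not shown.

```
# Sketch of scaled coarse-grained attention (Figure 14B); illustrative only.
import torch
import torch.nn.functional as F

def coarse_adjacency_upscaled(H, pool: int = 4):
    # H: (T, d) token features; requires T divisible by `pool`
    T, d = H.shape
    H_coarse = H.view(T // pool, pool, d).mean(dim=1)          # (T/pool, d) coarse tokens
    scores = H_coarse @ H_coarse.T / d**0.5                    # learned projections omitted
    A_coarse = torch.softmax(scores, dim=-1)                   # coarse adjacency
    # upsample the coarse adjacency back to (T, T) via bilinear interpolation
    A_full = F.interpolate(A_coarse[None, None], size=(T, T),
                           mode="bilinear", align_corners=False)[0, 0]
    return A_full                                              # use with graph aggregation A_full @ H
```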
### 4 Materials and Methods
Much of the theoretical and methodological details have been provided in the main paper. Here we include additional aspects and practical steps. We refer to the GitHub repository source code for additional specifics in the algorithmic steps.
### 4.1 Graph-aware attention: Training task, custom tokenizer and model construction
To assess performance of the graph-aware attention approaches, we develop and train a custom model with the same training set from scratch. We use lamm-mit/protein_secondary_structure_from_PDB [69] for all Protein Data Bank (PDB) proteins (https://www.rcsb.org/) with up to 256 amino acid sequence length, creating a task to predict dominating secondary structure features from the sequence. We predict both the overall dominating secondary structure type and the second-ranked secondary structure type, using the following format:
<|begin_of_text|><|sequence|>[SEQUENCE]<|/sequence|><|Primary_SS_Type|><|Secondary_SS_Type|><|eot_id|>

Samples from the training set are:
### Data set sample
```
<|begin_of_text|><|sequence|>PLIVPYNLPLPGGVV . . .LNEISKLGISGDIDLTSASYTMI<|/sequence|><|BS|><|UNSTRUCTURED|><|eot_id|>
<|begin_of_text|><|sequence|>ADDIVFKAKNGDVKFPHKA...GCHEEWKKGPTKCGECHKK<|/sequence|><|T|><|T|><|eot_id|>
<|begin_of_text|><|sequence|>SLQDPFLMALRRERVPVSI ...QWYKHAISTVVPSRPVSH<|/sequence|><|BS|><|UNSTRUCTURED|><|eot_id|>
<|begin_of_text|><|sequence|>SMEQVAMELRLTELTRLLRSVLD...SIGLE<|/sequence|><|AH|><|UNSTRUCTURED|><|eot_id|>
```
We split the dataset into 90% training and 10% test data.
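A hedged sketch of how a training sample could be assembled from this dataset is shown below; the column names ("Sequence", "Primary_SS_Type", "Secondary_SS_Type") are assumptions about the dataset schema and may differ from the actual field names.

```
# Illustrative sketch: building a training string from the PDB secondary-structure dataset.
# Column names are assumptions, not the verified schema.
from datasets import load_dataset

ds = load_dataset("lamm-mit/protein_secondary_structure_from_PDB", split="train")

def to_training_text(row):
    return ("<|begin_of_text|><|sequence|>" + row["Sequence"] + "<|/sequence|>"
            + row["Primary_SS_Type"] + row["Secondary_SS_Type"] + "<|eot_id|>")
```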
We train a custom tokenizer with special tokens for each of the amino acids, secondary structures and others:
```
# Special tokens
"<|pad|>", "<|eot_id|>", "<|begin_of_text|>", "<|unk|>", "<|mask|>", "<|sequence|>", "<|/sequence|>",
# Single-letter amino acid codes
"A", "R", "N", "D", "C", "E", "Q", "G", "H", "I", "L", "K", "M", "F", "P", "S", "T", "W", "Y", "V",
# Additional special words
"<|AH|>", "<|BS|>", "<|UNSTRUCTURED|>", "<|BEND|>", "<|PHIHELIX|>", "<|310HELIX|>", "<|BETABRIDGE|>", "<|T|>"
```
The tokenizer has a vocabulary size of 271 unique tokens.
The model architectures all feature 6 layers with 8 heads each, following the Llama 3 architecture and base parameters (like dropout, RMS norm, etc.), as introduced in meta-llama/Meta-Llama-3-8B-Instruct. The models all feature the same standard feed-forward MLP and other architectural components, except for the attention module, which is constructed differently. The hidden dimension is adjusted in each model so that the resulting final parameter count is around 25 M parameters for either model choice. For instance, in the best performing GIN-Attention model the hidden dimension is 528, whereas it is 512 in the standard transformer variant (the GIN-Attention case still performs better than the reference case even if the hidden dimension is identical, even though it has fewer parameters in that case). We use key-value grouping equal to one in all cases. We also experimented with a case where we only adapted the hidden dimension in the attention mechanism but left the inner, expanded dimension in the Transformer FF MLP the same. This case, along with removing the o_proj layer after the GIN MLP, yielded the best overall performance.
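For reference, a configuration along these lines could be expressed with the Hugging Face LlamaConfig as sketched below; apart from the values quoted above (6 layers, 8 heads, hidden dimension 528 vs. 512, vocabulary of 271 tokens, key-value grouping of one), all remaining settings are assumptions and the actual repository configuration may differ.

```
# Hedged sketch of a ~25M-parameter model configuration following the Llama 3 base architecture.
from transformers import LlamaConfig

gin_attention_config = LlamaConfig(
    vocab_size=271,              # custom tokenizer vocabulary
    hidden_size=528,             # 512 for the reference transformer variant
    num_hidden_layers=6,
    num_attention_heads=8,
    num_key_value_heads=8,       # key-value grouping equal to one
    max_position_embeddings=256, # sequences up to 256 amino acids (assumption)
)
```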
A summary of other training and hyperparameters is provided in Table 1.
| Hyperparameter | Value |
| --- | --- |
| Learning Rate | 1 × 10- |
| Per Device Train Batch Size | 8 |
| Per Device Eval Batch Size | 4 |
| Gradient Accumulation Steps | 4 |
| Number of Training Epochs | 9 |
| Weight Decay | 0.01 |
| Learning Rate Scheduler Type | Constant |
| Warmup Steps | 250 |
| Packing | False |
| Max Gradient Norm | 1 |
Table 1: Hyperparameters for model training (reference, GIN-Attention, PNA-Attention, and all variants), implemented with the SFTTrainer that is part of the Hugging Face Transformer Reinforcement Learning (TRL) package (https://huggingface.co/docs/trl/en/index). Train and test loss is computed every 100 steps.
#### 4.2 Sparse GIN fine-tuning model
We use the mlabonne/orca-math-word-problems-80k dataset [70] for training, and create instruction prompts using this format:
### Data set formatting
### User: [QUESTION]<|eot_id|>### Assistant: [ANSWER]<|eot_id|>
One example is:
Figure 15: Sharpening and thresholding techniques implemented to sparsify the attention matrix for use in the fine-tuning algorithm. This plot shows the results of sharpening (τ and α) and thresholding. As one can see, sharpening yields sharper distributions, which are then sparsified using the discrete threshold to yield close to binary adjacency matrices. As reference, for the Sparse GIN fine-tuning model, we use τ = 0.1, a sharpening value of α = 10.0, and a sparsification threshold of 0.6.
### Data set example
### User: Macey saves to buy herself a shirt. She was able to save $1.50 already. She needs to save for 3 more weeks at $0.50 week to have enough for the shirt. How much does the shirt cost?<|eot_id|>### Assistant: Macey has already saved $1.50. She plans to save for 3 more weeks at a rate of $0.50 per week. \n\nThe total amount she will save in the next 3 weeks is:\n3 weeks × $0.50/week = $1.50\n\nAdding this to the amount she has already saved:\n$1.50 (already saved) + $1.50 (to be saved) = $3.00\n\nTherefore, the shirt costs $3.00.<|eot_id|>
We split the dataset into 90% training and 10% test data.
As pre-trained model we use the meta-llama/Llama-3.2-3B-Instruct foundation model. For the LoRA variant, we create adapters for q_proj, k_proj, v_proj, and o_proj with rank r = 50 and α = 50. For the Sparse GIN model, we use τ = 0.1 and a sharpening value of α = 10.0. We use a sparsification threshold of 0.6. These parameters are not trainable in our experiment. The effect of the various processing techniques implemented is shown in Figure 15, for a randomly generated causal set of values. The hidden dimension of the GIN layers is 155. Both variants have approximately 28 M trainable parameters (the pre-trained model has around 3B parameters, which are frozen).
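A sketch of a sharpening-and-thresholding step consistent with these settings is shown below; the exact functional form used in the repository may differ, so this should be read as an illustrative approximation rather than the implemented operator.

```
# Illustrative sketch of attention sparsification by sharpening and thresholding (Figure 15).
import torch

def sparsify_attention(A, tau: float = 0.1, alpha: float = 10.0, threshold: float = 0.6):
    # A: (T, T) row-stochastic attention matrix
    A_sharp = torch.softmax(A / tau, dim=-1)               # temperature sharpening
    A_soft = torch.sigmoid(alpha * (A_sharp - threshold))  # soft step around the threshold
    return (A_soft > 0.5).float() * A_soft                 # near-binary sparsified adjacency
```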
A set of other training and hyperparameters is summarized in Table 2.
| Hyperparameter | Value |
| --- | --- |
| Learning Rate | 2×10 |
| Per Device Train Batch Size | 1 |
| Per Device Eval Batch Size | 2 |
| Gradient Accumulation Steps | 4 |
| Number of Training Epochs | 3 |
| Weight Decay | 0.01 |
| Learning Rate Scheduler Type | Constant |
| Warmup Steps | 50 |
| Packing | False |
| Max Gradient Norm | 0.5 |
Table 2: Hyperparameters for LoRA and Sparse-GIN model training, implemented with the SFTTrainer (https://huggingface.co/docs/trl/en/index).
In the second example, we add LoRA layers to all linear layers in the model except for the embedding and head layers. The model is trained with the lamm-mit/SAGI-1-SYMBOLIC_DATA_PLUS_REASONING_DATA_V1_100K dataset, consisting of 100K logic and reasoning questions.
We further trained a Sparse-GIN model on the lamm-mit/bio-silk-mech-mix-q-a-35K dataset.
### Data set example # 1
### User: Even if a crime that has been committed by computer is discovered and reported, the odds of being both arrested and convicted greatly favor the criminal. Each of the following, if true, supports the claim above EXCEPT: A: The priorities of local police departments, under whose jurisdiction most computer crime falls, are weighted toward visible street crime that communities perceive as threatening. B: Computer criminals have rarely been sentenced to serve time in prison, because prisons are overcrowded with violent criminals and drug offenders. C: The preparation of computer-fraud cases takes much more time than is required for average fraud cases, and the productivity of prosecutors is evaluated by the number of good cases made. D: The many police officers who are untrained in computers often inadvertently destroy the physical evidence of computer crime.<|eot_id|>### Assistant: B<|eot_id|>
### Data set example # 2
```
### User: Why would a person go to school?<|eot_id|>### Assistant: Define: Person(x): x is a person GoToSchool(x): x goes to
school SuccessfulInLife(x): x is successful in life Given: ∀ x(Person(x) → (GoToSchool(x) → SuccessfulInLife(x))) Implies:
Person(x) → (GoToSchool(x) → SuccessfulInLife(x)) Explanation: This statement implies that for all values of 'x', if 'x' is a
person, then if 'x' goes to school, they will be successful in life. Therefore, a person goes to school in order to be
successful in life.
```
# Code and data
All codes and data are available at https://github.com/lamm-mit/Graph-Aware-Transformers.
## Acknowledgments
The author acknowledges support from the MIT Generative AI initiative.
# References
- [1] Hopfield, J. J. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences 79, 2554–2558 (1982).
- [2] Bahdanau, D., Cho, K. & Bengio, Y. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 (2014).
- [3] Graves, A., Wayne, G. & Danihelka, I. Neural turing machines. In arXiv preprint arXiv:1410.5401 (2014).
- [4] Hochreiter, S. & Schmidhuber, J. Long short-term memory. Neural Computation 9, 1735-1780 (1997).
- [5] Vaswani, A. et al. Attention is All you Need (2017). URL https://papers.nips.cc/paper/7181-attention-is-all-you-need.
- [6] Alec Radford, Karthik Narasimhan, Tim Salimans & Ilya Sutskever. Improving Language Understanding by Generative Pre-Training URL https://gluebenchmark.com/leaderboard.
- [7] Xue, L. et al. ByT5: Towards a token-free future with pre-trained byte-to-byte models. Transactions of the Association for Computational Linguistics 10, 291-306 (2021). URL https://arxiv.org/abs/2105. 13626v3.
- [8] Jiang, A. Q. et al. Mistral 7B (2023). URL http://arxiv.org/abs/2310.06825.
- [9] Phi-2: The surprising power of small language models Microsoft Research. URL https://www.microsoft. com/en-us/research/blog/phi-2-the-surprising-power-of-small-language-models/.
- [10] Dubey, A. et al. The Llama 3 Herd of Models (2024). URL https://arxiv.org/abs/2407.21783. 2407. 21783.
- [11] Brown, T. B. et al. Language Models are Few-Shot Learners (2020).
- [12] Buehler, M. J. PRefLexOR: Preference-based recursive language modeling for exploratory optimization of reasoning and agentic thinking (2024). URL https://arxiv.org/abs/2410.12375. 2410.12375.
- [13] Ghafarollahi, A. & Buehler, M. J. SciAgents: Automating scientific discovery through multi-agent intelligent graph reasoning (2024). URL https://arxiv.org/abs/2409.05556. 2409.05556.
- [14] Qiu, R., Xu, Z., Bao, W. & Tong, H. Ask, and it shall be given: Turing completeness of prompting (2024). URL https://arxiv.org/abs/2411.01992.2411.01992.
- [15] Su, J. et al. RoFormer: Enhanced transformer with rotary position embedding. arXiv preprint arXiv:2104.09864 (2021).
- [16] Reiser, P. et al. Graph neural networks for materials science and chemistry. Communications Materials 3. Article number 93 (2022). URL https://doi.org/10.1038/s43246-022-00279-w. Open Access.
- [17] Zhang, M., Cui, Z., Neumann, M. & Chen, Y. Hierarchical graph attention network for semi-supervised node classification (2019). URL https://arxiv.org/abs/1902.06667.
- [18] Yun, S., Jeong, M., Kim, R., Kang, J. & Kim, H. J. Graph transformer networks. In Advances in Neural Information Processing Systems (NeurIPS) (2019). URL https://arxiv.org/abs/1911.06455.
- [19] Veličković, P. et al. Graph attention networks (2018). URL https://arxiv.org/abs/1710.10903.
- [20] Eilenberg, S. & MacLane, S. Group Extensions and Homology. Annals of Mathematics 43, 757-831 (1942). URL https://www.jstor.org/stable/1968966.
- [21] Eilenberg, S. & Mac Lane, S. General theory of natural equivalences. Transactions of the American Mathematical Society 58, 247 (1945). URL https://www.ams.org/journals/tran/1945-058-00/S0002-9947-1945-0013131-6/S0002-9947-1945-0013131-6.pdf.
- [22] Cunningham, H., Ewart, A., Riggs, L., Huben, R. & Sharkey, L. Sparse autoencoders find highly interpretable features in language models (2023). URL https://arxiv.org/abs/2309.08600.2309.08600.
- [23] Templeton, A. et al. Scaling monosemanticity: Extracting interpretable features from claude 3 sonnet. Transformer Circuits (2024). URL https://transformer-circuits.pub/2024/scaling-monosemanticity/. Core Contributor; Correspondence to [email protected]; Author contributions statement below.
- [24] Simon, E. & Zou, J. Interplm: Discovering interpretable features in protein language models via sparse autoencoders. bioRxiv(2024). URL https://www.biorxiv.org/content/early/2024/11/15/2024.11.14. 623630. https://www.biorxiv.org/content/early/2024/11/15/2024.11.14.623630.full.pdf.
- [25] Mischler, G., Li, Y. A., Bickel, S. et al. Contextual feature extraction hierarchies converge in large language models and the brain. Nature Machine Intelligence 6, 1467–1477 (2024). URL https://doi.org/10.1038/ s42256-024-00925-4. Received 30 January 2024; Accepted 16 October 2024; Published 26 November 2024.
- [26] Giesa, T., Spivak, D. & Buehler, M. Reoccurring Patterns in Hierarchical Protein Materials and Music: The Power of Analogies. BioNanoScience 1 (2011).
- [27] Tokareva, O., Jacobsen, M., Buehler, M., Wong, J. & Kaplan, D. L. Structure-function-property-design interplay in biopolymers: Spider silk (2014).
- [28] Lu, W., Kaplan, D. L. & Buehler, M. J. Generative modeling, design, and analysis of spider silk protein sequences for enhanced mechanical properties. Advanced Functional Materials (2023). URL https://doi.org/10.1002/ adfm. 202311324. First published: 02 December 2023.
- [29] Wong, J. et al. Materials by design: Merging proteins and music. Nano Today 7 (2012).
- [30] Abbott, V. & Zardini, G. Flashattention on a napkin: A diagrammatic approach to deep learning io-awareness (2024). URL https://arxiv.org/abs/2412.03317. 2412.03317.
- [31] Xu, K., Hu, W., Leskovec, J. & Jegelka, S. How powerful are graph neural networks? In International Conference on Learning Representations (2019). URL https://arxiv.org/abs/1810.00826.
- [32] Corso, G., Cavalleri, L., Beaini, D., Lio, P. & Velickovic, P. Principal neighborhood aggregation for graph nets. In Proceedings of the 34th Conference on Neural Information Processing Systems (NeurIPS) (2020). URL https://arxiv.org/abs/2004.05718.
- [33] Hu, E. J. et al. LoRA: Low-Rank Adaptation of Large Language Models (2021). URL https://arxiv.org/ abs/2106.09685v2.
- [34] Buehler, E. L. & Buehler, M. J. X-LoRA: Mixture of Low-Rank Adapter Experts, a Flexible Framework for Large Language Models with Applications in Protein Mechanics and Design (2024). URL https://arxiv.org/abs/ 2402.07148v1.
- [35] Mao, Y. et al. A survey on LoRA of large language models. Frontiers of Computer Science 19, 197605 (2025). URL https://doi.org/10.1007/s11704-024-40663-9. Open Access.
- [36] Teo, R. S. Y. & Nguyen, T. M. Unveiling the hidden structure of self-attention via kernel principal component analysis (2024). URL https://arxiv.org/abs/2406.13762. 2406.13762.
- [37] Razzhigaev, A. et al. Your Transformer is Secretly Linear (2024). URL https://arxiv.org/abs/2405.12250. 2405.12250.
- [38] Oren, M., Hassid, M., Yarden, N., Adi, Y. & Schwartz, R. Transformers are Multi-state RNNs (2024). URL https://arxiv.org/abs/2401.06104.2401.06104.
- [39] Pfau, J., Merrill, W. & Bowman, S. R. Let's Think Dot by Dot: Hidden Computation in Transformer Language Models (2024). URL https://arxiv.org/abs/2404.15758. 2404.15758.
- [40] Sanford, C. et al. Understanding transformer reasoning capabilities via graph algorithms (2024). URL https: //arxiv.org/abs/2405.18512. 2405.18512.
- [41] Wei, J. et al. Chain-of-thought prompting elicits reasoning in large language models (2023). URL https://arxiv.org/abs/2201.11903. 2201.11903.
- [42] Li, Z., Liu, H., Zhou, D. & Ma, T. Chain of thought empowers transformers to solve inherently serial problems (2024). URL https://arxiv.org/abs/2402.12875. 2402.12875.
- [43] Zhang, R., Yu, Q., Zang, M., Eickhoff, C. & Pavlick, E. The same but different: Structural similarities and differences in multilingual language modeling (2024). URL https://arxiv.org/abs/2410.09223. 2410. 09223.
- [44] Ruoss, A. et al. Amortized planning with large-scale transformers: A case study on chess (2024). URL https://arxiv.org/abs/2402.04494.2402.04494.
- [45] Spielman, D. A. & Srivastava, N. Graph sparsification by effective resistances. SIAM Journal on Computing 40, 1913-1926 (2011).
- [46] Giesa, T., Spivak, D. & Buehler, M. Category theory based solution for the building block replacement problem in materials design. Advanced Engineering Materials 14 (2012).
- [47] Spivak, D., Giesa, T., Wood, E. & Buehler, M. Category theoretic analysis of hierarchical protein materials and social networks. PLoS ONE 6 (2011).
- [48] Tishby, N., Pereira, F. C. & Bialek, W. The information bottleneck method. arXiv preprint physics/0004057 (2000). URL https://arxiv.org/abs/physics/0004057.
- [49] Saxe, A. M. et al. On the information bottleneck theory of deep learning. Journal of Statistical Mechanics: Theory and Experiment 2019, 124020 (2019). URL https://doi.org/10.1088/1742-5468/ab3985.
- [50] Hamilton, W. L., Ying, R. & Leskovec, J. Representation learning on graphs: Methods and applications (2018). URL https://arxiv.org/abs/1709.05584. 1709.05584.
- [51] Cranford, S. W. & Buehler, M. J. Biomateriomics (Springer Netherlands, 2012).
- [52] Shen, S. C. et al. Computational Design and Manufacturing of Sustainable Materials through First-Principles and Materiomics. Chemical Reviews (2022). URL https://pubs.acs.org/doi/full/10.1021/acs.chemrev.2c00479.
- [53] Buehler, M. J. A computational building block approach towards multiscale architected materials analysis and design with application to hierarchical metal metamaterials. Modelling and Simulation in Materials Science and Engineering 31,054001 (2023). URL https://dx.doi.org/10.1088/1361-651X/accfb5.
- [54] Guo, K. & Buehler, M. A semi-supervised approach to architected materials design using graph neural networks. Extreme Mechanics Letters 41 (2020).
- [55] Guo, K. & Buehler, M. J. Rapid prediction of protein natural frequencies using graph neural networks. Digital Discovery (2022). URL https://pubs.rsc.org/en/content/articlelanding/2022/dd/d1dd00007a.
- [56] Buehler, M. J. Generative retrieval-augmented ontologic graph and multi-agent strategies for interpretive large language model-based materials design. ACS Engineering Au (2023). URL https://doi.org/10.1021/acsengineeringau.3c00058.
- [57] Arevalo, S. E. & Buehler, M. J. Learning from nature by leveraging integrative biomateriomics modeling toward adaptive and functional materials. MRS Bulletin 2023 1–14 (2023). URL https://link.springer.com/ article/10.1557/s43577-023-00610-8.
- [58] Dreyer, T. et al. Comparing cooperative geometric puzzle solving in ants versus humans. Proceedings of the National Academy of Sciences 122, e2414274121 (2025). URL https://www.pnas.org/doi/abs/10.1073/pnas.2414274121.
- [59] OpenAI. OpenAI o1 System Card. https://cdn.openai.com/o1-system-card-20241205.pdf (2024).
- [60] Plaat, A. et al. Reasoning with large language models, a survey (2024). URL https://arxiv.org/abs/2407. 11511. 2407.11511.
- [61] Zelikman, E., Wu, Y., Mu, J. & Goodman, N. D. Star: Bootstrapping reasoning with reasoning (2022). URL https://arxiv.org/abs/2203.14465. 2203.14465.
- [62] Zelikman, E. et al. Quiet-STaR: Language models can teach themselves to think before speaking (2024). URL https://arxiv.org/abs/2403.09629. 2403.09629.
- [63] Kojima, T., Gu, S. S., Reid, M., Matsuo, Y. & Iwasawa, Y. Large language models are zero-shot reasoners (2023). URL https://arxiv.org/abs/2205.11916. 2205.11916.
- [64] Hawthorne, C. et al. General-purpose, long-context autoregressive modeling with perceiver ar (2022). URL https://arxiv.org/abs/2202.07765.2202.07765.
- [65] Tarakanova, A., Ozsvar, J., Weiss, A. S. & Buehler, M. J. Coarse-grained model of tropoelastin self-assembly into nascent fibrils. Materials Today Bio 3, 100016 (2019).
- [66] Wang, J. et al. Machine Learning of Coarse-Grained Molecular Dynamics Force Fields. ACS Central Science 5, 755-767 (2019).
- [67] Yeo, J. et al. Multiscale modeling of keratin, collagen, elastin and related human diseases: Perspectives from atomistic to coarse-grained molecular dynamics simulations. Extreme Mechanics Letters 20 (2018).
- [68] Pagnoni, A. et al. Byte latent transformer: Patches scale better than tokens (2024). URL https://arxiv.org/ abs/2412.09871. 2412.09871.
- [69] Yu, C. H. et al. End-to-End Deep Learning Model to Predict and Design Secondary Structure Content of Structural Proteins. ACS Biomaterials Science & Engineering 8, 1156-1165 (2022). URL https://pubmed.ncbi.nlm.nih.gov/35129957/.
- [70] Mitra, A., Khanpour, H., Rosset, C. & Awadallah, A. Orca-math: Unlocking the potential of SLMs in grade school math (2024). URL https://arxiv.org/abs/2402.14830. 2402.14830.
|
# In-situ GRAPH REASONING AND KNOWLEDGE EXPANSION USING GRAPH-PREFLEXOR
A PREPRINT
@ Markus J. Buehler* Laboratory for Atomistic and Molecular Mechanics MIT Cambridge, MA 02139, USA [email protected]
January 14, 2025
### ABSTRACT
The pursuit of automated scientific discovery has fueled progress from symbolic logic to modern AI, forging new frontiers in reasoning and pattern recognition. Transformers function as potential systems, where every possible relationship remains latent potentiality until tasks impose constraints, akin to measurement. Yet, refining their sampling requires more than probabilistic selection: solutions must conform to specific structures or rules, ensuring consistency and the invocation of general principles. We present Graph-PReFLexOR (Graph-based Preference-based Recursive Language Modeling for Exploratory Optimization of Reasoning), a framework that combines graph reasoning with symbolic abstraction to dynamically expand domain knowledge. Inspired by reinforcement learning, Graph-PReFLexOR defines reasoning as a structured mapping M : T -> (G, P, A), where tasks yield knowledge graphs G, abstract patterns P, and final answers A. Inspired by category theory, it encodes concepts as nodes and their relationships as edges, supporting hierarchical inference and adaptive learning through isomorphic representations. Demonstrations include hypothesis generation, materials design, and creative reasoning, such as discovering relationships between mythological concepts like "thin places" with materials science. We propose a "knowledge garden growth" strategy that integrates insights across domains, promoting interdisciplinary connections. Results with a 3-billion-parameter Graph-PReFLexOR model show superior reasoning depth and adaptability, underscoring the potential for transparent, multidisciplinary AI-driven discovery. It lays the groundwork for general autonomous reasoning solutions.
Keywords Artificial Intelligence · Science · Graph Theory · Category Theory · Materials Science · Materiomics · Language Modeling · Reasoning · Isomorphisms · Engineering
#### 1 Introduction
Discovery is driven by the ability to think deeply, reflect, and iteratively refine ideas before arriving at conclusions. Emergent artificial intelligence (AI) methods have achieved success in domains such as natural language processing [1, 2, 3, 4, 5, 6, 7], materials science, and molecular biology, including protein folding [8], and others. However, much of these advances rely on models that produce outputs directly, without an explicit intermediate reasoning process akin to scientific thinking, and hence generally lack self-awareness and the capacity to reflect. For AI to ultimately accelerate discovery, we must develop systems capable of reasoning in relational and symbolic manners [9, 10, 11, 12, 13], reflecting critically, and explaining their answers in an iterative process that mirrors scientific methods of exploration.
When we solve a problem by identifying how things are connected and flow together, we often discover a template that works far beyond the original situation, serving as a model that can be understood in a different context or applied to a new scenario to extrapolate. For example, the hierarchical structure of bone, combining stiffness and toughness
<sup>*</sup>Corresponding author.
Figure 1: Visualization of generalization via abstraction. Top: Example, where a phenomenon in an original domain (here: protein materials fracture, specifically flaw-tolerance in alpha-helical protein meshes [14]) is modeled as relational abstract patterns, and then used to describe distinct phenomena in other domains. The diagram shows how structural patterns in protein materials can be abstracted and applied across domains through categorical mappings and graph-based relationships. The three-level hierarchy demonstrates functorial relationships between source domain concepts (protein materials), abstract pattern recognition, and diverse applications in networks, social systems, biological systems, and musical composition. Bottom: Flowchart for visualizing the process from a task to a graph representation (with shared relational descriptions such as IS-A, RELATES-TO, and INFLUENCES), symbolic abstraction, hypothesis generation, and the final answer. The vertical dashed line with mathematical symbols (α, β, δ, ->) represents the shared representation of all problems in tokenized form, where the model learns to generalize representations across domains.
across scales, has inspired the design of lightweight, durable materials for aerospace and architecture [15, 16, 17]. Category theory [18, 19, 10, 11, 9], a branch of mathematics that emphasizes how objects relate rather than their internal detail, can be a powerful tool to construct models of complex phenomena through a lens on relational aspects. At its core, category theory emphasizes morphisms: arrows that represent interactions or mappings between objects. Such an approach can reveal hidden patterns, where the same structures keep appearing in nature, human organizations, technology, and science. By mapping out these connections and flows, we can unlock powerful solutions that have worked across many fields. We can view this like discovering a universal language that helps translate successful ideas from one domain to another, letting us recognize and apply useful, and transferable, patterns wherever they might be useful (Figure 1). In the example depicted in the figure, we show how flaw-tolerance in alpha-helical protein meshes can be modeled as relational abstract patterns, and then used to describe phenomena in other domains [14, 11, 20, 9].
A central concept is that of an isomorphism, a special type of morphism that establishes a structural equivalence between objects, enabling insights from one domain to be applied to another. For example, consider Newton's second law of motion, F = ma, which describes the proportionality between force F, mass m, and acceleration a. This relationship is isomorphic to Ohm's law in electrical circuits, V = IR, where voltage V plays the role of force, current I corresponds to acceleration, and resistance R serves as the proportionality factor akin to mass. Both laws can be abstracted into a general form, y = kx, where y represents the driving factor, x the response, and k the proportionality constant. This abstraction reveals that the structural relationships governing these systems are fundamentally equivalent, even though the physical quantities differ. By identifying such isomorphisms, category theory allows scientists to transfer insights across domains, uncovering deep connections between seemingly unrelated phenomena. This approach not only simplifies complex systems but also provides a systematic framework for discovering universal laws, as it emphasizes relational properties that transcend specific instances. Conversely, if we can discover isomorphisms directly from data, it opens the possibility of uncovering new theories and equations by identifying deep structural parallels across disparate domains, revealing universal principles that might otherwise remain hidden.
## 1.1 Modeling isomorphisms for generalization
To model isomorphisms computationally, we require a neural network architecture capable of capturing the structural equivalences between relational systems. Graph Isomorphism Networks (GINs) are particularly well-suited for this purpose. GINs are designed to operate on graph-structured data, where nodes represent objects and edges capture relationships or interactions. Unlike traditional graph neural networks (GNNs), which may struggle to distinguish between non-isomorphic graphs, GINs achieve maximum expressiveness for distinguishing graph structures by leveraging a theoretically grounded update rule that closely resembles the Weisfeiler-Lehman graph isomorphism test [21]. In a GIN, the node update function aggregates features from neighboring nodes using a weighted sum and applies a learned multi-layer perceptron (MLP) to the result, ensuring the network can capture higher-order structural properties. For example, to model the isomorphism between Newton's second law, F = m × a, and Ohm's law, V = I × R, a GIN can represent the relationships between variables and operators (e.g., force, mass, acceleration, =, ×) as nodes and their interactions as edges. By learning the graph embeddings, the GIN can identify the shared relational structure underlying these systems, demonstrating their equivalence. After processing, the embeddings for both equations would be nearly identical in the latent space, reflecting their structural similarity and isomorphic relationship as they can be mapped to the same representation (see Section S1 and Figure S3 for an example). This capability to recognize and model isomorphisms makes GINs a powerful tool for tasks that involve reasoning over relational data, such as predicting properties of molecules, designing robust materials, or uncovering universal scientific laws.
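The following toy construction illustrates the argument with a few lines of NumPy: two equation graphs with the same structure but different node labelings (standing in for F = m × a and V = I × R) are mapped by a small GIN with structure-only node features to identical sum-pooled embeddings. This is an illustrative sketch, not the experiment referenced in Section S1.

```
# Toy sketch: a minimal GIN maps isomorphic (relabeled) equation graphs to the same embedding.
import numpy as np

def gin_embed(A, X, eps=0.0, layers=2, seed=0):
    rng = np.random.default_rng(seed)
    W = [rng.standard_normal((X.shape[1], X.shape[1])) for _ in range(layers)]
    H = X
    for Wk in W:
        H = np.tanh(((1 + eps) * H + A @ H) @ Wk)   # GIN update with a one-layer MLP
    return H.sum(axis=0)                            # sum-pool to a graph-level embedding

# Graph 1: result = node 0, operands = nodes 1, 2 (row i lists the nodes that i aggregates)
A1 = np.array([[0, 1, 1],
               [0, 0, 0],
               [0, 0, 0]], dtype=float)
# Graph 2: same structure with a permuted labeling (result = node 2)
A2 = np.array([[0, 0, 0],
               [0, 0, 0],
               [1, 1, 0]], dtype=float)
X = np.ones((3, 4))                                 # structure-only node features

emb_newton = gin_embed(A1, X)   # F = m * a
emb_ohm = gin_embed(A2, X)      # V = I * R (isomorphic, relabeled)
assert np.allclose(emb_newton, emb_ohm)             # identical embeddings
```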
How does this relate to Transformer-based LLMs? Recent work has shown that Transformers implicitly function as graph isomorphism neural networks [13], which provides a powerful lens for explicitly integrating graph-based reasoning into these architectures. Whereas [13] focused on theoretically grounded advancements in the Transformer architecture itself to strengthen its expressive capacity, in this paper we focus on utilizing a standard Transformer interpreted with the expressive power of a GIN, and endow it with explicit capabilities to conduct in-situ graph reasoning and symbolic deduction. We hypothesize that explicitly leveraging graph theory within Transformers can unlock new levels of performance, generalization, and interpretability, accelerating progress in various scientific domains. Moreover, modeling complex principles can be inspired by biology, specifically the use of hierarchically organized systems that utilize some of the core ideas of category theory in materials and scientific exploration, where natural systems are found to often reuse relational patterns and mechanisms such as amino acids, DNA, and others. Building on these ideas, it has been suggested that inspiration for such advances may come from biology, such as emergent hierarchical intelligence based on agentic systems and other strategies that mimic biological materials and design principles [22, 23, 24, 25, 26]. Related principles include concerted problem solving as illustrated in a recent study of ant dynamics [27], providing ample examples for latent opportunities to discover mechanisms that can be translated across domains.
Figure 2: Overview of the approach used in this paper, presenting the concept of multi-step reflection (panel a), graph-based modeling of context and tasks (panel b), abstract pattern formulation (panel c), and finally, integrated in the multi-stage reasoning mechanisms (panel d).
Large Language Models (LLMs) have demonstrated capabilities in generating human-like text and extracting insights from scientific literature. A grand challenge in the application of AI models in science in particular remains to develop algorithms that yield more sophisticated reasoning mechanisms [25, 28, 29, 30, 31, 32, 33, 34, 35]. These models often excel in fluency and pattern recognition, yet typically lack the ability to engage in structured reasoning. Addressing this limitation, earlier work like PReFLexOR (abbreviation of: Preference-based Recursive Language Modeling for Exploratory Optimization of Reasoning) [36] introduced a framework where models were developed that innately develop a thinking and reflection strategy before answering, leveraging reinforcement learning (RL) to refine reasoning through iterative reflection and dynamic feedback loops, inspired by earlier research [37, 38, 39, 40]. A key goal of the work reported in this paper is to develop a strategy that allows AI models to not simply retrieve information but to learn underlying abstractions and relational motives, and then use these insights to generalize across new domains and ultimately to expand its understanding and capabilities to generalize.
Building on this foundation, we expand the concept of "thinking before answering" to incorporate in situ graph-based reasoning. Graphs provide a powerful framework for representing relationships, causality, and structures inherent in scientific problems (Figure 2). From molecular interactions, dynamics, to material properties, many scientific phenomena are naturally represented as graphs. By enabling models to autonomously construct and manipulate these symbolic graph representations, we create a model that mimics the reflective, iterative reasoning processes integral to scientific inquiry. In Transformers, knowledge is represented as a superposition of potentialities that collapse into specific outputs when conditioned on tasks. This interpretation offers a framework for enhancing both reasoning and creativity. We show how, by balancing structured coherence with divergent exploration, models can traverse new ideas, generating insights while maintaining rigorous, task-aligned solutions.
#### 1.2 Foundations of in-situ graph reasoning
In this work, we introduce a novel approach that unifies the linguistic fluency of LLMs with the relational reasoning capabilities that have been quite successful in architectures such as Graph Neural Networks (GNNs). Our model extends the reflective capabilities introduced in the original PReFLexOR model, enabling the construction of graph-based intermediate reasoning mechanisms, by taking advantage of the flexible capabilities of the transformer architecture. Through these symbolic representations via special tokens, the model engages in a "thinking phase," reasoning over the graph to refine its understanding before generating an answer. This approach not only enhances interpretability but also improves the model's ability to solve complex scientific problems requiring relational and structural reasoning. We achieve this by formalizing reasoning as a structured mapping:
$$\mathcal{M}:\mathcal{T}\rightarrow(\mathcal{G},\mathcal{P},\mathcal{A}),\tag{1}$$
where a task $\mathcal{T}$ produces a knowledge graph $\mathcal{G}$, abstract patterns $\mathcal{P}$, and final answers $\mathcal{A}$. The knowledge graph
$${\cal G}=(V,E)\tag{2}$$
encodes concepts as nodes V and relationships as directed edges E.
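As a minimal illustration of this output format, the sketch below builds such a knowledge graph with typed, directed edges (relation names follow Figure 1); the specific triples are invented for illustration only.

```
# Minimal sketch of a knowledge graph G = (V, E) with typed, directed relations.
import networkx as nx

G = nx.DiGraph()
triples = [
    ("alpha-helical protein mesh", "IS-A", "hierarchical material"),
    ("hierarchical material", "RELATES-TO", "flaw tolerance"),
    ("flaw tolerance", "INFLUENCES", "fracture resistance"),
]
for head, relation, tail in triples:
    G.add_edge(head, tail, relation=relation)   # concepts as nodes, relations as edge labels

print(list(G.edges(data=True)))
```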
By explicitly constructing and abstracting relational graphs, the model can encode structural information that standard next-token prediction often overlooks or treats only implicitly. In this setup, each entity and its relations become first-class objects in the learned representation, enabling the network to detect and exploit common subgraph motifs and isomorphisms that recur across different inputs. Unlike pure sequence-based transformers, which must infer latent structure solely from token order and distributional cues, a graph-driven approach anchors learning in explicitly linked entities, preserving both local and global connectivity. As a result, it is not only easier for the model to discover universal features (such as repeated subgraphs or underlying algebraic forms) but also more straightforward to apply symbolic abstractions, since the graph representation makes these recurring structures and higher-level patterns more salient and amenable to consistent transformations.
Our model thereby offers opportunities for diverse use cases in AI for science and beyond. Tasks such as multi-step reasoning, hypothesis generation, and causal inference become more robust and explainable. This work bridges connectionist and symbolic paradigms, pushing the boundaries of what AI can achieve in scientific domains [41, 42, 43]. By creating models that have a more explicit process of relational deliberation, both symbolically and structurally, before answering, we take an important step toward AI systems capable of true scientific reasoning and discovery.
#### 1.3 Outline of this paper
The plan of the paper is as follows. First, we briefly review the PReFLexOR architecture published in earlier work [36], then describe the training process of developing Graph-PReFLexOR, and then review several case studies of how the method can be applied. We conclude with final thoughts and a discussion on future opportunities.
### 2 Results and Discussion
We report the results of a series of experiments and analyses conducted based on the model. We start with a review of the PReFLexOR framework [36], how we adapted it for graph reasoning capability, and then move on to the results of the experiments conducted. We conclude with a discussion of the results, interpretations, and an outlook to future opportunities.
#### 2.1 Review of PReFLexOR
PRefLexOR, Preference-based Recursive Language Modeling for Exploratory Optimization of Reasoning [36], is a framework that enhances the reasoning capabilities of language models by integrating preference optimization and recursive reasoning (Figure 3). The approach relies on the introduction of special tokens such as <|thinking|>..<|/thinking|> and <|reflect|>..<|/reflect|>, which explicitly mark distinct phases of reasoning, enabling structured generation and iterative refinement. During training, the model undergoes two stages: Structured Thought Integration, where reasoning processes are guided using special tokens and optimized through Odds Ratio Preference Optimization (ORPO), and Independent Reasoning Development, which leverages masking of thinking tokens and applies Efficient Exact Optimization (EXO) [44] to align final outputs with preferred reasoning patterns without prescribing the specific reasoning steps. Thinking tokens allow the model to simulate intermediate reasoning steps, while masking ensures that the model infers the best reasoning pathways without direct supervision. We note that EXO's reverse Kullback-Leibler divergence [45] objective promotes mode-seeking behavior, emphasizing dominant reasoning strategies over diluted alternatives. By dynamically generating datasets and employing recursive feedback loops, PRefLexOR enables models to self-teach, iteratively refining their reasoning through on-the-fly task generation and reflection. This combination of token-driven structure, recursive refinement, and preference alignment makes PRefLexOR a suitable foundation for reasoning-intensive applications.

Figure 3: Overview of the PRefLexOR framework as reported in [36], presented here for completeness. The training process involves two stages: (1) Structured Thought Integration Training, focusing on incorporating structured reasoning components, and (2) Independent Reasoning Development, aimed at fostering model autonomy in reasoning. During inference, the Recursive Reasoning Algorithm is employed to iteratively refine responses. Below, the role of reasoning components is depicted in the two training phases, showing transitions from unmasked to masked reasoning.
As was shown in the original paper [36], with a trained model the reasoning process can be iteratively refined at inference time, where the model generates intermediate reasoning (thinking phase), the critic evaluates it (reflection phase), and then produces improved responses. Formally, the reasoning at step i, $\mathbf{R}_i$, is represented as:

$${\bf R}_{i+1}=f_{\rm critic}({\bf R}_{i},{\bf F}_{i}),\tag{3}$$

where $f_{\rm critic}$ applies feedback $\mathbf{F}_i$ to refine the intermediate reasoning. At each step, reflection is guided by feedback on gaps or inconsistencies in $\mathbf{R}_i$, enabling the next iteration to improve alignment with the desired output. The final response $\mathbf{A}$ is derived after N refinement steps as:
$${\bf A}=g({\bf R}_{N}),\tag{4}$$
where g extracts the synthesized final answer. During training, masking is applied to tokens embedded within <|thinking|>..<|/thinking|>, requiring the model to infer reasoning paths indirectly while optimizing the final output. This iterative combination of thinking and reflection, structured via tokens and recursive refinement, ensures that the model autonomously enhances its reasoning capabilities, providing accurate and well-aligned outputs.
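As a minimal sketch of the masking step described above (not the paper's actual implementation), the labels of tokens falling inside the <|thinking|>..<|/thinking|> span can be set to an ignore index so that they do not contribute to the training loss; the function and token lists below are illustrative.

```python
# Sketch: exclude the reasoning span from the loss by setting its labels to -100,
# the ignore index commonly used by cross-entropy losses in PyTorch-based trainers.
def mask_thinking_tokens(token_strings, labels, ignore_index=-100):
    masked = list(labels)
    inside = False
    for i, tok in enumerate(token_strings):
        if tok == "<|thinking|>":
            inside = True
        if inside:
            masked[i] = ignore_index
        if tok == "<|/thinking|>":
            inside = False
    return masked

tokens = ["Task:", "...", "<|thinking|>", "[GRAPH]", "[PATTERNS]", "<|/thinking|>", "Answer"]
labels = list(range(len(tokens)))             # placeholder label ids
print(mask_thinking_tokens(tokens, labels))   # thinking span is masked out
```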
Here is an example:
| Basic structure of the reasoning strategy using a thinking phase before answering. |
| --- |
| User: [User question or task] |
| Assistant: |
| <\|thinking\|> |
| [GRAPH] |
| [ABSTRACT PATTERNS] |
| [ADDITIONAL THINKING STEPS] |
| <\|/thinking\|> |
| [Answer] |
#### 2.2 Graph-PReFLexOR algorithm design and training
Here we use the original PReFLexOR framework but create a different function to develop the structured training data by teaching the model to explicitly construct graphs and symbolic representations in its thinking phase. Details are included in the Materials and Methods section, but we review highlights here. All training is done on top of the meta-llama/Llama-3.2-3B-Instruct model, a small but performant base LLM that serves as the foundational platform for development.
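As a minimal sketch, assuming the Hugging Face transformers API, the base model and tokenizer could be prepared with the special reasoning tokens as follows; the exact configuration used for training may differ.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch only: load the base model and register the reasoning tokens.
base_id = "meta-llama/Llama-3.2-3B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

special_tokens = ["<|thinking|>", "<|/thinking|>", "<|reflect|>", "<|/reflect|>"]
tokenizer.add_special_tokens({"additional_special_tokens": special_tokens})
model.resize_token_embeddings(len(tokenizer))  # make room for the new tokens
```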
Structured reasoning is generated on-the-fly during training as described in the Materials and Methods section. At its core, the method constructs dynamic knowledge graphs in-situ, $G = (V, E)$, where nodes $V$ represent key concepts extracted from enriched context, and edges $E$ encode relationships such as IS-A, RELATES-TO, and INFLUENCES, a common and shared set of relational descriptions. These relationships are quantified using a semantic scoring function $f(c_i, c_j; r)$, which evaluates the significance of relationship $r$ between concepts $c_i$ and $c_j$. Retrieval-Augmented Generation (RAG) is employed to enrich the input context and identify abstract patterns and dependencies within the graph, enabling higher-order reasoning.
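A minimal sketch of this in-situ graph construction is shown below; the scoring function, threshold, and concept list are illustrative assumptions standing in for the semantic scoring described above, not the actual implementation.

```python
import itertools
import networkx as nx

RELATIONS = ["IS-A", "RELATES-TO", "INFLUENCES"]

def score(c_i: str, c_j: str, r: str) -> float:
    """Placeholder for the semantic scoring function f(c_i, c_j; r); a real
    system would score relationship significance from model/RAG context."""
    return 0.9 if (c_i, r, c_j) == ("Music", "IS-A", "Audio Signal") else 0.1

def build_graph(concepts, threshold=0.5):
    G = nx.DiGraph()
    G.add_nodes_from(concepts)
    for c_i, c_j in itertools.permutations(concepts, 2):
        for r in RELATIONS:
            if score(c_i, c_j, r) >= threshold:   # keep only significant relations
                G.add_edge(c_i, c_j, relation=r)
    return G

G = build_graph(["Music", "Audio Signal", "Frequency Spectrum"])
print(list(G.edges(data=True)))
```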
The thinking section organizes reasoning into structured headings, including 'Core Concepts and Relationships', 'Reasoning Steps', and 'Abstract Patterns', providing a framework for logical progression and a shared token representation to encourage generation (see Figure 1 and the discussion in Section S1). For example, the 'Core Concepts and Relationships' heading defines the key nodes and their connections in the knowledge graph, while 'Reasoning Steps' outlines step-by-step deductions using the generalized formalisms. The 'Abstract Patterns' section summarizes these relationships into symbolic transformations, such as α → β → γ → δ → ε, with proportional dependencies such as α ∝ ε indicating critical links. These structured headings not only guide the model's reasoning but also ensure interpretability and coherence in outputs, and help boost extrapolation capacity in recursive reasoning steps.
Special tokens delineate the reasoning phases, facilitating recursive refinement of the knowledge graph and symbolic abstractions during training and inference. By uniting graph-based reasoning, symbolic methods, and structured thinking sections, Graph-PReFLexOR achieves robust, interpretable performance in generating high-quality responses for reasoning-intensive tasks.
We train the model using the same multi-stage training process as done in the original model [36], featuring first ORPO and then DPO-EXO. The training dataset is the same as in the original paper, consisting of raw text of around 1,000 papers in the realm of biological and bio-inspired materials [30].
#### 2.3 Detailed analysis of example responses: Single task-response pair
In the first example, we challenge the model with a question that partially involves a materials-specific task but also invokes a domain that was not included in the training data, to test how well the model can extrapolate its structured reasoning to domains outside the training set. The task is:
### User task
User: Propose new idea to relate music and materials
The model responds first by thinking about the problem by outputting a series of "thoughts" enclosed within the <|thinking|> and <|/thinking|> tokens. The response closely follows the structured approach developed during training. First, the model provides a knowledge graph:
Knowledge graph
```
**Music** -[IS-A]-> **Audio Signal**
**Material** -[IS-A]-> **Physical Substance**
**Music** -[RELATES-TO]-> **Material's Mechanical Properties**
**Music** -[INFLUENCES]-> **Frequency Spectrum**
**Frequency Spectrum** -[RELATES-TO]-> **Music**
**Frequency Spectrum** -[INFLUENCES]-> **Nonlinear Dynamic Response**
**Nonlinear Dynamic Response** -[INFLUENCES]-> **Material's Mechanical Properties**
**Material's Mechanical Properties** -[RELATES-TO]-> **Material**
```
Next, the model develops an abstract reasoning pattern as follows:
Figure 4: Overview of the data generated in response to the question: Propose a new idea to relate music and materials. The knowledge graph (top) illustrates the relationships between core concepts: music (blue), material properties (purple), and frequency spectrum (green). Key relationships include IS-A hierarchies (e.g., Music is an Audio Signal) and influence paths through nonlinear dynamic responses. The abstract pattern (bottom) formalizes these interactions through a triple system (α, β, γ) with proportional influence (α ∝ β) and feedback loop (γ → α). The integration between graph and pattern manifests in multiple ways: the music-to-material influence path in the graph maps to α → β in the pattern; the material's mechanical properties feedback in the graph corresponds to the essential condition γ → α; and the frequency spectrum mediation shown in the graph provides the physical mechanism for the proportional influence (α ∝ β) in the pattern. Dotted lines explicitly map concrete elements to their abstract counterparts, demonstrating how the theoretical framework emerges from and guides the practical implementation. This dual representation captures both the detailed mechanisms of music-material interaction and its fundamental mathematical structure.
Abstract pattern

**Abstract Pattern:** α → β → γ
**Key Transformation Rule:** α ∝ β
**Essential Condition:** γ → α
**Explanation:** α represents **Music**, β represents **Material**, and γ represents **Material's Mechanical Properties**.

The abstract pattern α → β → γ shows the direct relationship between the concepts. The key transformation rule α ∝ β indicates that Music (α) has a proportional influence on Material's Mechanical Properties. The essential condition γ → α signifies that Material's Mechanical Properties (γ) are influenced by Music (α), creating a feedback loop in the system.

**Pattern Context:** Inspire a new method to tune and control the mechanical properties of materials by analyzing and manipulating the frequency spectrum of music. This method leverages the dynamic responses of materials to music.
The model then proceeds to develop a series of additional structured mechanisms (reasoning steps, relevant materials, design principles, material properties, hypothesis, background), building on these initial steps. These are:
### Additional thinking steps

**Reasoning Steps**:
1. Music has dynamic responses on materials, similar to how it affects human emotions.
2. Analyze the frequency spectrum of music to understand its dynamic responses.
3. Use the frequency spectrum to tune and control the mechanical properties of materials.
4. This method leverages the relationship between music's frequency spectrum and material responses.

**Relevant Materials or Concepts**:
- Nonlinear Dynamic Response
- Frequency Spectrum
- Material Selection
- Tuning and Control of Mechanical Properties

**Design Principles**:
- **Frequency Spectrum Analysis**: Study the dynamic responses of materials to different frequencies.
- **Material Selection**: Choose materials with nonlinear dynamic responses to optimize mechanical property tuning.
- **Frequency Spectrum Manipulation**: Modify the frequency spectrum of music to influence material responses.
- **Tuning and Control**: Use the manipulated frequency spectrum to tune and control the mechanical properties of selected materials.

**Material Properties**:
- **Nonlinear Dynamic Response**: Materials exhibit nonlinear responses to external stimuli, such as vibrations or frequencies.
- **Frequency Dependence**: Material responses vary with different frequencies, allowing for tuning and control.
- **Mechanical Property Tuning**: Materials' mechanical properties can be adjusted by manipulating their nonlinear dynamic responses to specific frequencies.

**Hypothesis**: "Music's frequency spectrum can tune and control the nonlinear dynamic responses of materials, mimicking the effects of music on materials."

**Additional Background**: The relationship between music and materials involves analyzing and manipulating the dynamic responses of materials to different frequencies. This can be achieved by studying vibrations or frequencies. By understanding how materials react to various frequencies, it's possible to tune and control their mechanical properties. This approach can be applied to materials used in various applications, from engineering to biomedical devices. The key is to select materials that exhibit significant nonlinear responses to specific frequencies, allowing for targeted manipulation of their properties.
All of the above content was enclosed within thinking tokens. Once these are completed, the model produces the final answer, shown in Text Box 1.
We can see that the output features rich content that we can analyze. First, Figure 4 depicts a visualization of the knowledge graph and abstract reasoning pattern. We find that the relationship between music and materials is formalized through an abstract pattern derived from their fundamental interactions, where the model proposes the triple system (α, β, γ), in which α represents music as an audio signal, β represents the material as a physical substance, and γ represents the material's mechanical properties. The core relationship follows the pattern α → β → γ with the key transformation rule α ∝ β and essential condition γ → α, proposing a closed feedback loop.
This system manifests through nonlinear dynamic responses of materials to different frequencies, where the frequency spectrum of music directly influences material properties. The relationship leverages the dynamic responses of materials to different frequencies, mimicking the effects of music on materials. Through frequency spectrum analysis and manipulation, the proposed method enables tuning and control of mechanical properties such as stiffness and damping capacity. This approach opens possibilities for non-destructive material testing, property tuning, and biomedical applications, where the nonlinear dynamic response provides a mechanism for controlled material modification through specific frequency interactions.
Thinking about the idea more deeply, the proposal's main novelty lies in its symmetrical conceptualization of music-material interaction as a feedback loop. Rather than just treating music as an input that affects materials (which would
be more obvious), it proposes a cyclic system where material properties can also influence the musical response. The proposal relies heavily on well-known concepts like nonlinear dynamic responses and frequency analysis, but the practical implementation focuses on fairly standard approaches (frequency spectrum analysis, material selection). The proposed methodology presents notable innovations in relating music and materials, particularly through its biomimetic foundation that draws parallels between material responses and human emotional reactions to music (α → response). The approach extends beyond conventional frequency-response studies by establishing a comprehensive framework that encompasses specific technical implementations through dynamic mechanical analysis (DMA), concrete property targets (damping, stiffness), and notably, potential biomedical applications for cancer and neurological disorders. The methodology's novelty lies not only in its theoretical framework but in its practical manifestation through non-contact and non-destructive testing methods. The proposal demonstrates depth in addressing implementation challenges, particularly in material selection optimization and frequency spectrum manipulation. Critically, it acknowledges and provides direction for key technical hurdles, including the scalability of the method to larger structures, the complexity of identifying materials with appropriate nonlinear dynamic responses, and the need for robust theoretical models to predict material behavior under musical frequency stimulation. The potential impact is amplified by its extension into biomedical applications, suggesting a bridge between materials science and medical diagnostics through music-inspired techniques. This comprehensive treatment of both theoretical foundations and practical challenges, coupled with the biomimetic inspiration and medical applications, establishes the proposal as a significant innovation in material property tuning methodologies, transcending traditional approaches to frequency-based material manipulation.
The innovation's fundamental strength emerges from its adherence to the abstract pattern α → β → γ → α, where the cyclic relationship enables a unique feedback system. The proportionality rule α ∝ β manifests in the biomimetic response mechanism, while the essential condition γ → α is realized through the material's frequency-dependent behavior. This creates a dynamic system where music (α) influences material properties (γ) through material response (β), while the resulting property changes create new response patterns, forming a continuous adaptive loop. As an interpretation, the relationship can be viewed as a coupled system where each component's influence is nonlinearly dependent on the others:
$${\frac{\partial\gamma}{\partial t}}=f(\alpha,\beta)\qquad{\mathrm{and}}\qquad{\frac{\partial\beta}{\partial t}}=g(\gamma,\alpha),$$
capturing both the immediate response and long-term evolution of material properties under musical influence. This mathematical framework elegantly supports the proposal's practical applications while maintaining the fundamental symmetry of the original abstract pattern.
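As an illustration only, a forward-Euler sketch of such a coupled system is shown below; the functional forms of f and g, the parameters, and the driving signal are arbitrary assumptions, since the text does not specify them.

```python
import numpy as np

# Illustrative only: arbitrary forms for f and g; alpha(t) is a driving "music" signal.
def f(alpha, beta):            # d(gamma)/dt = f(alpha, beta)
    return 0.5 * alpha * beta - 0.1 * beta

def g(gamma, alpha):           # d(beta)/dt = g(gamma, alpha)
    return 0.2 * gamma + 0.05 * alpha - 0.1 * gamma**2

dt, steps = 0.01, 1000
beta, gamma = 1.0, 0.0         # initial material response / property state
for n in range(steps):
    alpha = np.sin(2 * np.pi * n * dt)   # single driving frequency component
    gamma += dt * f(alpha, beta)
    beta  += dt * g(gamma, alpha)
print(f"beta={beta:.3f}, gamma={gamma:.3f} after {steps} steps")
```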
#### 2.4 In-situ graph generation and recursive reasoning through multi-agent modeling
The earlier example showed that the model was able to expand its capabilities beyond the materials-science focused training data and was able to successfully integrate different domains into the structured reasoning paradigm. In the next experiment, we task the model as follows:
### User task
User: Integrate a snowflake and ant behavior to design a new tough material made from protein.
However, unlike before, where the model responds in a single shot, here we use recursive reasoning via a two-agent setup in which the thinking steps are critiqued, improved, and fed back to the model. Ultimately, an integrated response is developed by the model that incorporates the various ideas, concepts, and details developed during the recursive process. Figure 5 shows a flowchart of the process used. In the example discussed below, we use N = 3 iterations until the final answer is produced through multiple refinement steps.
Each iteration in the algorithm produces an intermediate result that is critiqued and improved as delineated in Figure 5. The final answer after iteration N = 3 is shown in Text Box 2.
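A schematic sketch of this two-agent loop is given below; reasoning_model, critic_model, and the prompt-assembly logic are placeholder callables standing in for the Graph Reasoning model and the general-purpose Critic in Figure 5, not actual APIs.

```python
# Schematic sketch of the two-agent recursive loop (cf. Figure 5); all callables
# are placeholders for the reasoning and critic models.
def reasoning_model(prompt: str) -> str:
    return f"<|thinking|>graph + patterns for: {prompt}<|/thinking|> draft answer"

def critic_model(thinking: str) -> str:
    return f"reflection on gaps in: {thinking[:40]}..."

def extract_thinking(response: str) -> str:
    start = response.find("<|thinking|>") + len("<|thinking|>")
    end = response.find("<|/thinking|>")
    return response[start:end]

def recursive_reasoning(task: str, n_iter: int = 3) -> str:
    responses, prompt = [], task
    for _ in range(n_iter):
        response = reasoning_model(prompt)                     # R_i
        feedback = critic_model(extract_thinking(response))    # F_i
        prompt = f"{task}\nImprove the thinking using: {feedback}"  # -> R_{i+1}
        responses.append(response)
    # optional final step: integrate all intermediate responses into one answer
    return "INTEGRATED: " + " | ".join(responses)

print(recursive_reasoning("Integrate a snowflake and ant behavior ...", n_iter=3))
```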
Once the maximum number of iterations has been reached, we develop an integrated answer that uses the responses from all three iterations to develop the final response to the task. For the example discussed here, the final, integrated answer is:
Figure 5: Graph-PRefLexOR Recursive Reasoning Algorithm, using graph reasoning and abstract representations of relational mechanics, using a multi-agent system with Agent #1 being the Graph Reasoning model, and Agent #2 being a general-purpose critic model. The reflection is generated using the Critic agent and then used to improve the thinking process. This resembles an iterative approach leveraging the Reasoning Model and a general-purpose Critic Model to generate, refine, and optionally integrate responses. As before, the process ultimately involves generating initial responses, extracting reflections, improving thinking processes, and creating new responses based on refined thinking, with an optional final integration step. The algorithm relies on extracting thinking processes (indicated via <|thinking|> .. <|/thinking|>) and reflection processes generated by the Critic. The sampled responses can either be used in their final state or integrated into an amalgamated response that shows very rich facets in the scientific process.
Figure 6: Overview of the graph data generated in response to the question: Integrate a snowflake and ant behavior to design a new tough material made from protein. The figure depicts both the knowledge graph alone (panel a) and an integration of the knowledge graph with the abstract pattern representation (panel b). Results after the first iteration, i = 1, are shown.
### Final integrated answer from recursive Graph-PReFLexOR
Designing a new tough material made from protein inspired by snowflake and ant behavior involves creating a protein-based material for use in diverse applications. The key components of this design are:
1. **Fractal Patterns (Snowflake Inspiration):** Incorporate fractal patterns and self-similarity to create intricate, complex structures. Use branching patterns and scaling to enhance the material's toughness and adaptability.
2. **Self-Organization (Ant Inspiration):** Design the protein material to mimic the self-organization and hierarchical structures found in ant colonies. Encourage self-organization through hierarchical structures at multiple length scales. This will promote adaptability and diversity in the material.
3. **Hierarchical Structure:** Incorporate multiple length scales to create complex, tough materials. This will enhance the material's toughness and adaptability.

The design process involves:
1. **Design Protein Material:** Create a protein-based material with fractal patterns and hierarchical structures. Incorporate self-organization and hierarchical structures at multiple length scales.
2. **Test and Refine:** Test the protein material's toughness, adaptability, and diversity in properties. Refine the design based on the test results to enhance the material's performance.

The material properties of the designed protein-based material are:
- **Fractal Pattern Mimicry:** Inspired by snowflakes, the protein material exhibits fractal patterns and self-similarity.
- **Hierarchical Structure:** Mimics ant behavior by creating hierarchical structures at multiple length scales.
- **Self-Organization:** Encourages self-organization, similar to ants, to enhance toughness and adaptability.
- **Toughness Enhancement:** Combines fractal patterns and hierarchical structures to create tough, protein-based materials.
- **Adaptability:** Designed for diverse applications by mimicking snowflake and ant patterns.

The potential applications of the designed protein-based material are:
- **Biomedical Applications:** Inspired by snowflake and ant patterns, protein-based materials can be used in biomedical applications such as tissue engineering and wound dressings.
- **Biodegradable Materials:** Protein-based materials can be designed to be biodegradable, providing a sustainable alternative to traditional materials.
- **Soft Robotics:** Inspired by ant behavior, protein-based materials can be used in soft robotics, enabling more flexible, adaptable robotic systems.
- **Biomineralization:** Protein-based materials can mimic natural biomineralization processes, providing insights into bone and tooth formation.

The challenges and opportunities associated with designing protein-based materials inspired by snowflake and ant behavior are:
- **Scalability:** Maintaining fractal patterns and hierarchical structures at multiple length scales while scaling up the material.
- **Self-Organization:** Encouraging self-organization in protein materials to enhance toughness and adaptability.
- **Biocompatibility:** Ensuring the protein material is biocompatible and non-toxic for potential applications.
- **Synthesis Methods:** Developing efficient and scalable methods for synthesizing protein-based materials with fractal patterns and hierarchical structures.

Future research directions include:
- **Fractal Geometry of Protein Materials:** Investigating the fractal geometry of protein materials and its role in enhancing toughness and adaptability.
- **Ant-Inspired Protein Materials:** Developing protein-based materials that mimic ant behavior, such as self-organization and hierarchical structures.
- **Biodegradable Protein Materials:** Designing protein-based materials that are biodegradable and sustainable.
- **Synthesis Methods for Fractal Protein Materials:** Developing methods for synthesizing protein-based materials with fractal patterns and hierarchical structures.

In conclusion, designing protein-based materials inspired by snowflake and ant behavior offers a promising approach to creating tough, adaptable, and diverse materials. By mimicking fractal patterns, hierarchical structures, and self-organization, protein materials can be engineered to enhance toughness and adaptability. However, challenges such as scalability, self-organization, biocompatibility, and synthesis methods need to be addressed. Future research should focus on these areas to unlock the full potential of protein-based materials.
For a comparison, the answer from a standard (non-fine-tuned) model is shown in Text Box S1. We note that this response does not include any thinking section (and hence, no graph reasoning or abstraction), and solely consists of the text as provided.
We first compare the results from the first iteration of the graph reasoning model with the standard model. From the standpoint of a scientist at a high technical level, the graph reasoning based response demonstrates superior intellectual depth and reasoning quality. The response provides a highly systematic framework for material design, detailing explicit parameters (e.g., protein selection, fractal pattern generation, hierarchical structure formation) and an iterative validation process that reflects rigorous scientific methodology and clear delineation in graph form (Figure 6). This structured approach aligns well with the expectations of graduate-level research, as it includes clear constraints, performance criteria, and testing protocols akin to what one might find in peer-reviewed scientific literature. By contrast, the response from the standard model showcases a more conceptual, broader-stroke approach and lacks the comprehensive methodological granularity seen in the other response. While it does incorporate topics such as self-healing peptides and adaptive camouflage, its overall elaboration remains more conceptual than procedural. Thus the graph reasoning result more closely meets the benchmarks of high rigor by systematically addressing design parameters, experimental validation, and scalability concerns.
We compare the two responses using GPT-4o as evaluator, with results shown in Table 1. Graph-PReFLexOR systematically addresses the design principles (fractals, hierarchical structures, self-organization) and walks through potential challenges (scalability, synthesis methods, biocompatibility) with a more methodical cause-and-effect flow, and hence offers deeper theoretical reasoning overall. In contrast, the response from the non-fine-tuned model is
much shorter and more general. It does not dig as deeply into the theoretical underpinnings, deep materials-specific reasoning, or longer-term challenges. The graph-based response offers more robust, multi-layered reasoning.
| Criteria | Graph reasoning response (Response 1) | Reference model (Response 2) | Explanation |
| --- | --- | --- | --- |
| Intellectual Depth | 10/10 | 7/10 | Response 1 combines text and the graph, providing a multi-layered, systematic approach to design. Response 2 lacks visual support and focuses on textual descriptions. |
| Reasoning Quality | 10/10 | 8/10 | The graph in Response 1 supports robust reasoning, connecting abstract concepts to practical design principles. Response 2 demonstrates logical reasoning but without visual reinforcement. |
| Logic | 10/10 | 7/10 | Response 1's graph enhances the logical connections between abstract patterns, protein structures, and material properties. Response 2 lacks a structured framework to visually link ideas. |
| Use of Academic Measures | 10/10 | 6/10 | Response 1 leverages the graph to convey systematic relationships and integrates validation criteria effectively. Response 2 is descriptive and less rigorous in methodology. |
| Clarity and Organization | 8/10 | 9/10 | While Response 1 provides depth, the graph may overwhelm casual readers. Response 2 is simpler and more accessible but less comprehensive. |
| Potential Applicability | 10/10 | 8/10 | Response 1 explicitly ties abstract and biological principles to practical applications, supported by the graph. Response 2 mentions potential applications but without detailed pathways. |
| Overall Integration of Graph | 10/10 | 0/10 | The graph in Response 1 enhances intellectual depth, reasoning, and academic rigor. Response 2 lacks a graph entirely. |
| Overall Score | 9.7/10 | 6.4/10 | Response 1 excels in intellectual depth, reasoning, and academic rigor due to the graph and systematic methodology. Response 2, while clear and concise, falls short in depth and structure. |
Table 1: Comparison of responses, as obtained using GPT-4o as a judge. Graph-PReFLexOR responses are delineated as 'Response 1', and the reference model as 'Response 2'. The graph reasoning model performs much better overall, with an average score of 9.7 vs. 6.4.
#### 2.5 Growing a knowledge garden via in-situ graph generation and knowledge expansion
In this next experiment we use the capability of the graph reasoning model to generate graphs that describe phenomena in a relational manner and apply it to explore ever-expanding graph structures by repeatedly prompting the model. This allows us to recursively grow a graph, starting from an initial concept or response to a task, and to examine whether the model can blend creative and analytical reasoning. The initial task is:
### User task
User: Write a song by incorporating snow flakes and flower petals.
### Knowledge graph
```
Knowledge Graph

**Snow Flakes**
-[IS-A]-> **Natural Phenomenon**
-[RELATES-TO]-> **Winter**
-[INFLUENCES]-> **Mood** (e.g., Serene, Calming)
-[INFLUENCES]-> **Nature's Beauty**
-[INFLUENCES]-> **Seasonal Changes**

**Flower Petals**
-[IS-A]-> **Natural Phenomenon**
-[RELATES-TO]-> **Spring**
-[INFLUENCES]-> **Mood** (e.g., Delicate, Whimsical)
-[INFLUENCES]-> **Nature's Beauty**
-[INFLUENCES]-> **Seasonal Changes**
```
The abstract patterns are:
Abstract Patterns

**Abstract Pattern:**
α → δ, ε, ς
β → δ, ε, ς

**Key Transformation Rule:** If α → δ and β → δ, then δ → ε

**Essential Condition:** α ≠ β

**Pattern Context:**
The song incorporates snowflakes and flower petals to represent the contrasting yet harmonious aspects of nature's beauty and the renewal of spring. Together, they highlight the dynamic interplay between these elements, creating a balanced and evocative musical composition.
This task results in the graph as depicted in Figure 7. There are other elements in the response, as reflected here:
### Reasoning steps

**Reasoning Steps**:
1. Snowflakes and flower petals are both natural phenomena.
2. Snowflakes are associated with winter, while flower petals are associated with spring.
3. Snowflakes evoke a sense of serenity and uniqueness, while flower petals symbolize delicacy and renewal.
4. Combining snowflakes and flower petals in a song can create a harmonious representation of nature's beauty and seasonal changes.
5. This combination can inspire a balanced and evocative musical composition.

**Relevant Materials or Concepts**:
- Snowflakes
- Flower Petals
- Nature's Beauty
- Seasonal Changes
- Serenity
- Uniqueness
- Delicacy
- Renewal

**Design Principles**:
- **Contrast**: Snowflakes and flower petals represent opposing seasons, creating a striking contrast.
- **Harmony**: Together, they create a harmonious representation of nature's beauty and seasonal changes.

**Material Properties**:
- **Snowflakes**:
  - **Appearance**: Delicate, intricate, crystalline structure
  - **Sound**: Soft, gentle, repetitive
  - **Emotional Connotation**: Serene, calming, unique
- **Flower Petals**:
  - **Appearance**: Delicate, colorful, soft
  - **Sound**: Light, airy, whimsical
  - **Emotional Connotation**: Delicate, whimsical, renewal

**Hypothesis**:
"Incorporating snowflakes and flower petals in a song will create a harmonious representation of nature's beauty and seasonal changes, evoking distinct emotions and sensations."

**Additional Background**:
To incorporate snowflakes and flower petals into a song, one could consider the following elements:
1. **Imagery**: Use vivid and evocative language to describe the intricate patterns and beauty of snowflakes and flower petals. This can be achieved through lyrics or even instrumental descriptions.
2. **Metaphors**: Draw parallels between the delicate nature of snowflakes and flower petals and the human experience. For instance, snowflakes could represent individuality, while flower petals could symbolize renewal.
3. **Seasonal themes**: Explore the contrast between winter (snowflakes) and spring (flower petals) to create a narrative that highlights the cyclical nature of life and the beauty in each season.
4. **Musical motifs**: Use specific musical motifs or melodies to evoke the feeling of snowflakes (e.g., soft, repetitive patterns) and flower petals (e.g., light, airy, whimsical melodies).
5. **Collaboration**: Consider co-writing with a poet or lyricist to craft lyrics that incorporate these natural elements in a meaningful and impactful way.

[...]
The final response after the thinking section is:
Figure 7: Resulting graph in response to the task: Write a song by incorporating snow flakes and flower petals. The visualization depicts the knowledge graph integrated with the abstract pattern representation.
### Final response

```
**Song: "Snowflakes and Flower Petals"**

**Verse 1**
In winter's chill
Each one unique, a delicate call
Dance upon the breeze, a fleeting sight
A moment's beauty, lost in the night

**Chorus**
But even in the cold, there's warmth to find
In the petals of the flower, left behind
Delicate, soft to the touch, colorful
A symbol of renewal, in the winter

**Verse 2**
In spring's awakening, where petals unfold
A tapestry of colors, a story to be told
Whimsical, light, a gentle caress
A promise of new life, in every tender dress

**Chorus**
But even in the warmth, there's serenity to find
In the snowflakes that fall, in the stillness
Unique, a work of art, a fleeting sight
A symbol of peace, in the quiet of the night

**Bridge**
In the contrast, we find harmony
A balance of opposites, a symphony
Snowflakes and flower petals,
A celebration of life, in all its seasons' rhyme

Snowflakes and flower petals,
A reflection of ourselves, in nature's design
A symbol of hope, in the beauty that's always there
```
The answer also includes additional facets and analyses, as follows. These additional details and rationales are critical for the proposed approach, as they provide a traceable, detailed, and interpretable account of how the answers were developed.
### Design principles and background

**Design Principles**:
- **Contrast**: Snowflakes and flower petals represent opposing seasons, creating a striking contrast.
- **Harmony**: Together, they create a harmonious representation of nature's beauty and seasonal changes.
- **Evocative**: Each element evokes distinct emotions and sensations, inspiring a balanced and evocative musical composition.

**Material Properties**:
- **Snowflakes**:
  - **Appearance**: Delicate, intricate, crystalline structure
  - **Sound**: Soft, gentle, repetitive
  - **Emotional Connotation**: Serene, calming, unique
- **Flower Petals**:
  - **Appearance**: Delicate, colorful, soft
  - **Sound**: Light, airy, whimsical
  - **Emotional Connotation**: Delicate, whimsical, renewal

**Hypothesis**:
Incorporating snowflakes and flower petals in a song will create a harmonious representation of nature's beauty and seasonal changes, evoking distinct emotions and sensations.

**Additional Background**:
To incorporate snowflakes and flower petals into a song, consider using metaphors, imagery, and musical motifs to evoke the emotions and sensations associated with these natural elements. You could also explore the contrast between the two elements to create a narrative that highlights the cyclical nature of life, and use seasonal themes and lyrics that reflect the unique characteristics of each element to create a rich and evocative musical composition.
While this section focuses on growing the graph iteratively, the depth of the response merits further analysis as it sheds light on the capabilities of the model. First, we note that the model's approach to the musical composition task demonstrates sophisticated pattern recognition capabilities through multiple analytical layers, and a capability to blend creative and analytical reasoning. For a deeper analysis, let M represent the model's mapping function, where M : Natural Elements → Musical Structure. The model proposes that the primary recognition pattern can be formalized as P(α, β) = {(α → x) ⇔ (β → x) | x ∈ Influences}, where α represents snowflakes and β represents flower petals. This symmetrical mapping demonstrates the model's ability to identify and preserve parallel structures while maintaining essential distinctions.
We can see that the model's pattern recognition operates on multiple levels: C1 (Categorical): {Natural Phenomenon}, C2 (Temporal): {Winter, Spring}, and C3 (Influence): {Mood, Nature's Beauty, Seasonal Changes}. The abstraction process generated a formal framework where ∀x ∈ C3 : (α → x) ∧ (β → x), while maintaining the essential condition α ≠ β. This demonstrates sophisticated categorical reasoning while preserving compositional possibilities.
The response reveals remarkable symmetrical structures between winter and spring phenomena, characterized by parallel relationships that can be formalized as S = {(s_w, s_p) ∈ Phenomena × Seasons}, where snowflakes (s_w) and flower petals (s_p) exhibit isomorphic influence patterns. Each element maintains identical categorical relationships, with s_w, s_p ∈ Natural Phenomenon, while preserving distinct seasonal associations: s_w ↦ Winter, s_p ↦ Spring. The influence structure demonstrates perfect symmetry across three domains, expressible as ∀x ∈ {Mood, Nature's Beauty, Seasonal Changes} : (s_w → x) ⇔ (s_p → x). This symmetry manifests despite the temporal opposition of their respective seasons, suggesting a fundamental balance in nature's organizational structure. The relationship can be further abstracted into a commutative diagram where both elements, while maintaining s_w ≠ s_p, exhibit identical transformative properties: T : {s_w, s_p} × {Mood, Nature's Beauty, Seasonal Changes} → {0, 1}. This structural symmetry underlies the aesthetic and phenomenological balance observed in seasonal transitions.
As shown in Figure 7, the responses introduce an abstract pattern framework P = {α, β, δ, ε, ς} where α, β represent distinct natural phenomena (α ≠ β) that exhibit parallel transformative properties. The system is characterized by a set of mappings {α, β} → {δ, ε, ς}, with each mapping representing a direct transformative relationship. The key transformation rule (α → δ) ∧ (β → δ) ⇒ (δ → ε) establishes a conditional cascade effect. This creates a unique convergent structure where two distinct source elements {α, β} maintain symmetric relationships with target elements while preserving their fundamental inequality. The pattern demonstrates both first-order transformations T : {α, β} × {δ, ε, ς} → {0, 1} and second-order conditional transformations T₂ : δ × ε → {0, 1} under specific antecedent conditions, forming a hierarchical transformation framework.
The model prioritized structural understanding over domain-specific musical application, evidenced by its construction of the abstract framework P = {α, β, δ, ε, ς} before considering musical implementation. This approach demonstrates strong logical reasoning but suggests potential limitations in capturing the temporal and emotional nuances essential to musical composition. The symmetrical framework, while mathematically elegant (∀x ∈ C3 : s_w → x ⇔ s_p → x), might benefit from additional parameters to capture the dynamic nature of musical expression.
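To illustrate the symmetry condition discussed above, the sketch below checks whether two source nodes share identical influence targets in a toy graph; the node names follow the example, but the check itself is an illustrative assumption of how one might verify the pattern programmatically.

```python
import networkx as nx

# Toy graph reproducing the parallel influence structure discussed above.
G = nx.DiGraph()
targets = ["Mood", "Nature's Beauty", "Seasonal Changes"]
for source in ["Snow Flakes", "Flower Petals"]:
    for t in targets:
        G.add_edge(source, t, relation="INFLUENCES")

def influence_set(G, node):
    return {v for _, v, d in G.out_edges(node, data=True)
            if d.get("relation") == "INFLUENCES"}

# Symmetry condition: for all x, (s_w -> x) <=> (s_p -> x)
symmetric = influence_set(G, "Snow Flakes") == influence_set(G, "Flower Petals")
print("Isomorphic influence patterns:", symmetric)   # True
```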
#### 2.5.1 Knowledge expansion growth phase
An interesting aspect of the capability of our graph reasoning model is its ability to grow graphs dynamically to extract, generate and design new knowledge. To do this we iteratively task the model to generate graphs, where new prompts are developed based on earlier ones to expand the answer in new directions. Since the model produces a graph at every iteration, all these graphs can ultimately be integrated into one large graph that spans diverse topics. We use the following series of prompts as visualized in Figure 8.
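A minimal sketch of the merging step is shown below, assuming each prompt's extracted graph is available as a networkx graph; nx.compose_all keeps shared node labels, which is how overlapping concepts connect the sub-graphs.

```python
import networkx as nx

# One graph per prompt; shared concept names become shared nodes when composed.
g1 = nx.DiGraph([("Snow Flakes", "Nature's Beauty"), ("Flower Petals", "Nature's Beauty")])
g2 = nx.DiGraph([("Nature's Beauty", "Music"), ("Music", "Seasonal Changes")])
g3 = nx.DiGraph([("Nanopatterning", "Flower Petals")])

integrated = nx.compose_all([g1, g2, g3])   # union of nodes and edges
print(integrated.number_of_nodes(), "nodes,", integrated.number_of_edges(), "edges")
```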
Figures 9 and S1 present complementary visualizations of the concatenated knowledge graphs derived from the series of prompts outlined in Figure 8. Figure 9 organizes the knowledge graphs by individual prompts, emphasizing the structure and interconnections within each sub-graph. This visualization highlights the local relationships and the specific thematic focus of each prompt while illustrating how these ideas are interconnected across different sub-graphs. In contrast, Figure S1 integrates the same data into a unified knowledge graph, providing a holistic view of the interplay among all the prompts. By merging the sub-graphs into a single structure, this representation reveals overarching connections and emergent patterns that may not be immediately apparent when examining the sub-graphs in isolation. Together, these figures offer a layered perspective on the relationships within the dataset, from prompt-specific insights to a system-level understanding.
Figure 10 depicts the graph in more abstract representations, visualized using distinct layouts and different node properties. Data are shown for node degree, PageRank [46], and bridging coefficient. These are fundamental measures in network analysis, each capturing different aspects of a node's role within a graph, and the analyses point to the powerful insights that can be generated from the model output. Node degree represents the number of connections a node has, providing a simple yet powerful indicator of local importance; in directed graphs, this can be further divided into in-degree (incoming connections) and out-degree (outgoing connections).
As an alternative, PageRank offers a more global perspective, measuring a node's importance based on the significance of its neighbors. Originally developed for ranking web pages in search engines, PageRank emphasizes the quality of connections, assigning higher scores to nodes linked by other highly ranked nodes. The algorithm works iteratively, where a node's PageRank is proportional to the sum of the PageRank scores of its neighbors, weighted by the number of links those neighbors have. This approach captures both the quantity and quality of connections, emphasizing nodes that are linked by highly influential entities. A damping factor is typically introduced to account for random jumps, ensuring the scores converge and the network can handle dangling nodes (nodes with no outgoing links). In the context of knowledge graphs, PageRank is particularly relevant for identifying the most semantically important entities or concepts. For instance, in a knowledge graph where nodes represent entities and edges represent relationships, PageRank can highlight nodes that are central to the graph's structure, such as frequently referenced concepts or pivotal entities connecting different subdomains. This can aid in prioritizing entities for tasks like question answering, reasoning, or graph traversal. Moreover, PageRank can uncover influential nodes that might serve as anchors for linking related knowledge, enhancing both the efficiency of queries and the understanding of the graph's overall topology.

Figure 8: Logical sequential flow of prompts used to grow the knowledge graph successively through recursive reasoning. This diagram illustrates the progression of ideas, starting with creative tasks (e.g., composing a song using snowflakes and flower petals) and advancing through scientific and philosophical inquiries, such as molecular symmetry, seasonal patterns, and nanopatterning. The sequence of questions can either be generated by human collaborators or by AI. In the latter case, an 'infinite' loop of expanding the knowledge graph can be constructed. The example prompt trajectory shown here was developed via human-AI collaboration.
The bridging coefficient highlights a node's role in connecting otherwise unconnected or loosely connected regions of the graph, emphasizing its importance for maintaining structural integrity and facilitating information flow. Nodes with high bridging coefficients serve as critical links between clusters, enabling structural integration across the network. In the context of knowledge graphs, such nodes are often critical for linking distinct clusters of knowledge, such as different domains or subgraphs, and play a crucial role in enabling cross-domain reasoning and ensuring the graph remains well-connected. For example, in a knowledge graph representing scientific research, a bridging node might connect distinct areas like biology and materials science, fostering interdisciplinary insights. By identifying nodes with high bridging coefficients, one can uncover concepts or entities that act as gateways to new knowledge, prioritize areas for graph enrichment, or ensure robustness in query pathways. This measure is particularly useful for tasks like ontology alignment, where structural gaps between subdomains need to be bridged effectively. Together with node degree and PageRank, these metrics provide complementary insights into the structure and dynamics of complex networks.
Domain prestige is a measure of how accessible a node is within a directed graph, calculated as the fraction of all nodes that can reach a given node either directly or indirectly. It captures a node's global influence by accounting for indirect paths, providing a more comprehensive view of its accessibility. In the context of knowledge graphs, this measure highlights key entities that are reachable by a large portion of the graph, making them central for reasoning, inference, or information propagation. Nodes with high domain prestige act as crucial hubs or reference points, while nodes with low values are likely isolated or peripheral. This metric is particularly valuable for identifying entities that facilitate connectivity and play pivotal roles in the overall graph structure. Table 2 lists the top 5 nodes for each of these measures, complementing the visual representation in Figure 10 to indicate the most significant nodes for each metric. For alternative layouts, Figure 11 depicts various layout choices for comparison.

Figure 9: Visualization of the resulting integrated knowledge graph that emerges after the series of prompts delineated in Figure 8. To start, we tasked the model to Write a song by incorporating snowflakes and flower petals. The resulting graphs are organized by prompts, but we show the connections that emerge between the sub-graphs.

Figure 10: Visualization of the integrated knowledge graph created based on the series of prompts delineated in Figure 8, starting with: Write a song by incorporating snowflakes and flower petals. The data is the same as shown in Figure 9, but organized here as an integrated graph rather than by prompt, and laid out using the Fruchterman-Reingold layout algorithm [47]. Panel a, node size by node degree. Panel b, node size by PageRank. Panel c, node size by bridging coefficient. Panel d, node size by domain prestige (defined as the fraction of nodes within a network that directly or indirectly point to a given node). Table 2 lists the top 5 nodes for each of these measures.
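As a sketch of how the four node measures reported in Table 2 could be computed on an extracted knowledge graph (here a random stand-in graph), the snippet below uses networkx; the bridging-coefficient and domain-prestige implementations follow the definitions given in the text and common usage, and should be read as one possible realization rather than the exact procedure used for the figures.

```python
import networkx as nx

def bridging_coefficient(G):
    """One common definition: a node's inverse degree divided by the sum of the
    inverse degrees of its neighbors (computed on the undirected view)."""
    U = G.to_undirected()
    bc = {}
    for v in U:
        denom = sum(1.0 / U.degree(u) for u in U[v])
        bc[v] = (1.0 / U.degree(v)) / denom if U.degree(v) > 0 and denom > 0 else 0.0
    return bc

def domain_prestige(G):
    """Fraction of other nodes that can reach a given node, directly or indirectly."""
    n = G.number_of_nodes()
    return {v: len(nx.ancestors(G, v)) / (n - 1) for v in G}

def top5(scores):
    return sorted(scores, key=scores.get, reverse=True)[:5]

# Random directed graph as a stand-in for the extracted knowledge graph.
G = nx.gnp_random_graph(30, 0.1, seed=1, directed=True)
print("Node degree:     ", top5(dict(G.degree())))
print("PageRank:        ", top5(nx.pagerank(G)))
print("Bridging coeff.: ", top5(bridging_coefficient(G)))
print("Domain prestige: ", top5(domain_prestige(G)))
```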
| Node Degree | PageRank | Bridging Coefficient | Domain Prestige |
| --- | --- | --- | --- |
| Music | Cyclical Pattern | Spring | Plant Health |
| Self-Recognition | Seasonal Changes | Mood-Delicate | Mechanical Phenomenon |
| Flower Petals | Attachment and Detachment | Surface Modification Technique | Cyclical Pattern |
| Glass Beads | Biological Process | Fracture Dynamics | Material Vibration |
| Nanopatterning | Music | Music Characteristics | Attachment and Detachment |
Table 2: Comparison of top nodes in the generated knowledge graph for different measures, following the four panels in Figure 10.
Figure 11: Visualization of the integrated knowledge graph created based on the series of prompts delineated in Figure 8 (starting with: Write a song by incorporating snowflakes and flower petals.). The underlying data is the same as shown in Figure 9, but organized here via a circular layout (panel a), a radial axis layout (panel b), and the Yifan Hu layout [48] (panel c). Color and size of each node are scaled by node degree. The layout in panel c visualizes the great distance traversed in the integrated graph, illustrating the far-ranging relationships developed by the model.
The knowledge expansion process showcased in these repeated prompts demonstrates the ability of Graph-PReFLexOR to iteratively generate and connect knowledge graphs based on a series of related prompts. Beginning with a creative task (writing a song incorporating snowflakes and flower petals), the model progressively expands its knowledge base by addressing scientific and philosophical inquiries stemming from the initial prompt. Some unique insights/ideas generated during the recursive knowledge expansion process include:
- Symmetrical structures in nature: The model identifies symmetrical influence patterns between snowflakes (associated with winter) and flower petals (associated with spring). Despite their temporal opposition, both exhibit isomorphic influence on mood, nature's beauty, and seasonal changes. This reveals a fundamental balance in nature's organization across different seasons.
- Abstraction of natural phenomena: The model constructs an abstract pattern framework representing the relationships between distinct natural phenomena. This framework highlights both first-order and second-order conditional transformations, demonstrating the model's capacity for high-level reasoning and abstraction.
- Integration of diverse concepts: The knowledge graph expands to incorporate concepts like molecular symmetry principles, nanopatterning in flower petals, and the philosophical idea of impermanence and renewal, showcasing the ability to connect ideas across diverse disciplines from materials science to music to literature and philosophy [49, 50].
- Emergent patterns: By visualizing the integrated knowledge graph, overarching connections and patterns emerge that might not be apparent when examining individual sub-graphs. This highlights the system's ability to synthesize knowledge and reveal hidden relationships.
The knowledge expansion process exemplifies the strength of Graph-PReFLexOR in generating a network of interconnected ideas, moving beyond isolated responses to build a complex and evolving knowledge base. This capability holds significant potential for interdisciplinary research and exploration.
#### 2.6 Autonomously growing knowledge garden
For a fully autonomous generation of graphs without human input for prompt generation, the system can be modified easily by using an LLM to develop new prompts autonomously based on previous results, to create an ever-expanding abstraction of relationships. Figure 13 shows the results of an autonomously grown knowledge graph. The upper panel shows the graph, and the lower panel depicts a selection of interesting paths identified in the integrated graph. The starting prompt, the only one generated by a human, was chosen to be:
## User task
User: Discuss an interesting idea in bio-inspired materials science.
The model then iterated for N = 12 iterations to grow the graph. Figure 12 shows the series of prompts developed and used by the model to grow the graph (we emphasize that only the first prompt was provided by the human user).
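A schematic sketch of this autonomous growth loop is shown below; graph_reasoner and prompt_generator are placeholder callables standing in for the graph reasoning model and the prompt-generating LLM, not actual APIs.

```python
import networkx as nx

# Schematic sketch of autonomous knowledge-garden growth; all callables are placeholders.
def graph_reasoner(prompt: str) -> nx.DiGraph:
    g = nx.DiGraph()
    g.add_edge(prompt[:24], "Concept")   # stand-in for the extracted knowledge graph
    return g

def prompt_generator(previous_prompt: str, garden: nx.DiGraph) -> str:
    # A real system would ask a general-purpose LLM to propose the next task based on
    # previous results, optionally steering toward a dissimilar knowledge area.
    return f"Explore a new direction building on: {previous_prompt[:30]}"

def grow_knowledge_garden(seed_prompt: str, n_iter: int = 12) -> nx.DiGraph:
    garden, prompt = nx.DiGraph(), seed_prompt
    for _ in range(n_iter):
        garden = nx.compose(garden, graph_reasoner(prompt))  # merge newly generated graph
        prompt = prompt_generator(prompt, garden)            # develop the next prompt
    return garden

garden = grow_knowledge_garden("Discuss an interesting idea in bio-inspired materials science.")
print(garden.number_of_nodes(), "nodes after growth")
```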
The analysis presented in Figure 14 provides a comprehensive understanding of the graph's structure and dynamics. The degree distribution (Top Left) highlights the presence of a scale-free topology, characterized by a few highly connected hubs that dominate the graph's connectivity, alongside a majority of low-degree nodes. This structure suggests robustness to random node failures but potential vulnerability to targeted attacks on the hubs. The visualization of the largest connected component (Top Right) reveals its modular nature, with densely connected subregions surrounded by sparsely connected nodes. This organization suggests a hierarchical topology, where clusters are linked by intermediary nodes, enabling efficient navigation through the network. The clustering coefficient distribution (Middle Left) emphasizes the variation in local connectivity. While most nodes exhibit low clustering coefficients, indicating sparse local neighborhoods, the few nodes with high clustering coefficients are likely critical to maintaining the coherence of tightly-knit subcommunities. The betweenness centrality distribution (Middle Right) further illustrates the graph's structural dependencies. A small subset of nodes exhibits significantly higher centrality, underscoring their role as key connectors that facilitate information flow between otherwise disjoint regions of the graph. These nodes act as bridges, ensuring overall network connectivity and efficiency. The shortest path length distribution (Bottom Left) demonstrates that the graph exhibits small-world characteristics, with most nodes separated by only a few hops. This property enables rapid information transfer across the graph, a hallmark of efficient networks in both natural and engineered systems. The community size distribution (Bottom Right) provides insights into the modular organization of the graph. Communities vary widely in size, with larger communities potentially playing a dominant role in global connectivity. Central nodes within each community, annotated at the base of the bars, represent the most connected nodes within their respective communities. These nodes likely serve as local hubs, facilitating intra-community interactions and linking smaller, peripheral nodes. This analysis reveals a graph that is highly efficient, modular, and robust, with a clear
interplay between global connectivity and local structure. The presence of scale-free properties, small-world behavior, and modular organization suggests that the grown graph's topology is optimized for both stability and adaptability.
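A sketch of the structural analyses summarized in Figure 14, computed with networkx on a stand-in scale-free graph, is given below; the community-detection step uses greedy modularity maximization as an assumed choice, since the text does not specify the algorithm used.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Stand-in for the grown knowledge graph (undirected view for these measures).
G = nx.barabasi_albert_graph(200, 2, seed=3)

degrees = [d for _, d in G.degree()]               # degree distribution
clustering = nx.clustering(G)                      # local clustering coefficients
betweenness = nx.betweenness_centrality(G)         # key connector nodes
giant = G.subgraph(max(nx.connected_components(G), key=len))
avg_path = nx.average_shortest_path_length(giant)  # small-world indicator
communities = greedy_modularity_communities(G)     # modular organization

print("max degree:", max(degrees))
print("mean clustering:", sum(clustering.values()) / len(clustering))
print("top connector:", max(betweenness, key=betweenness.get))
print("avg shortest path (largest component):", round(avg_path, 2))
print("community sizes:", sorted((len(c) for c in communities), reverse=True)[:5])
```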
In the next experiment, we prompt the model with the initial prompt below, directing the algorithm to develop new tasks at the intersection of distinct knowledge areas to foster the exploration of dissimilar ideas, such as art and science (see the Materials and Methods section):
# User task
User: Discuss protein design based on collagen and elastin.
The algorithm develops a series of prompts as shown in Figure 15, resulting in the integrated graph depicted in Figure 16. A deeper analysis of the resulting graph is provided in Figure S2.
We highlight one unique insight developed by the model, relating the concept of 'thin places' to 'biological materials' and 'art'. 'Thin places' is a concept drawn from various mythological and spiritual traditions, notably Celtic lore, referring to sites or moments where the boundary between the physical world and a transcendent realm is perceived to be exceptionally thin or permeable. In our integrated graph, these thin places are not only connected to site-specific, mythological art installations but also echo the conceptual blurring of boundaries integral to protein design and biomaterial engineering. Such a connection is unusual because, at first glance, mythic realms and advanced biotechnology inhabit entirely distinct spheres. Yet the same drive to challenge and dissolve traditional boundaries, whether between art and nature or between synthetic and organic matter, underlies both phenomena. Consequently, the metaphor of thin places illuminates how immersive, boundary-blurring experiences in mythological art might inform innovative approaches to tissue engineering and bioluminescent biomaterials, bridging realms often seen as incompatible. This underscores the novel insight that creative, myth-inspired concepts can resonate meaningfully with the cutting edges of scientific inquiry.
Naturally, the output of these reasoning steps and the larger graph generated can be used for further analysis. While a broader expansion is beyond the scope of this paper, we show one example here. For instance, we experimented with charging the o1 model [51] to propose a new theory or concept that incorporates the results produced by Graph-PReFLexOR. The results are shown in Text Box 3, demonstrating how other reasoning models can utilize the graph reasoning output and produce a concise delineation of ideas.
### 3 Conclusion
This work introduced Graph-PReFLexOR, a framework that integrates in-situ graph reasoning, symbolic abstraction, and recursive reflection into the generative modeling paradigm (Figures 1 and 2). By embedding graph-based intermediate representations within LLMs, Graph-PReFLexOR advances beyond the limitations of purely linguistic systems to tackle complex tasks requiring relational reasoning, multi-step deduction, and adaptive knowledge synthesis. Our experiments were designed to assess particularly how well the model could generalize, as the training data consisted of 1,000 papers on biological materials [30], whereas the tasks were constructed at the interface of science and other disciplines including design, music, and philosophy to probe the model's generalization capability beyond the technical-focused training data. We find exceptional performance throughout and see that the model could generalize well, follow the structured reasoning it learned during training (see Figure 1), and even construct highly complex graphs that interface a myriad of disciplines (see, e.g., Figures 9 and S1), among other results.
A particularly compelling application was the development of the knowledge garden concept, building on the ability of the model to grow knowledge graphs dynamically and iteratively, by adding new relational insights and abstractions. This resulted in a series of experiments where we expanded graphs upon an initial simple task, yielding complex graph structures that themselves could be the basis of further research, inquiry and reasoning (e.g., Figures 10 and 11). Much future work can be conducted based on this method. Some initial experimentation on growing graphs autonomously yielded interesting results, such as shown in Figures 13 and 14, where we identified a capability of the model to search and expand topics and connect complex ideas.
By explicitly constructing and abstracting relational graphs, the Graph-PReFLexOR approach provides a more structured foundation than standard sequence-oriented Transformer training. In particular, preserving connectivity among entities and relations makes it easier to detect and exploit universal features such as isomorphisms and recurrent subgraph patterns. Unlike pure next-token objectives, which largely rely on distributional cues hidden in token sequences, graph abstractions bring structural commonalities to the forefront, enabling more systematic identification of shared algebraic forms, relational templates, and higher-order symmetries. As a result, symbolic rewriting and generalization become more direct, since the underlying topology is explicitly represented rather than merely inferred. Consequently, the
resulting embeddings capture deeper, domain-invariant regularities that might otherwise remain implicit or fragmentary in a purely sequence-based approach.
#### 3.1 Mathematical and Logical Framework
Graph-PReFLexOR establishes a pipeline of thinking and a unified framework that bridges symbolic reasoning and dynamic graph-based abstraction to tackle the complexity of scientific inquiry. At its core, the framework formalizes reasoning as a multi-layered mapping as introduced in equation 1 where a task T produces a knowledge graph G, abstract patterns P, and final answers A. The knowledge graph G = (V, E) encodes concepts as nodes V and relationships as directed edges E, such as IS-A, RELATES-TO, and INFLUENCES. The system derives abstract patterns P by identifying higher-order dependencies, structured as transformations:
$$\alpha\rightarrow\beta\rightarrow\gamma\rightarrow\delta\rightarrow\epsilon,$$
with proportional relationships such as:
$$\alpha \propto \epsilon,$$
indicating how the initial state $\alpha$ contributes to the final transformation $\epsilon$. Recursive reflection as shown in Figure 5 refines these outputs iteratively, producing an optimized answer A.
Inspired by category theory, Graph-PReFLexOR emphasizes relational over intrinsic properties, allowing the abstraction of domain-specific patterns into reusable, transferable frameworks. For example, graph symmetries enable translational reasoning across materials science, bioengineering, and philosophy, breaking traditional siloed boundaries. The recursive refinement process ensures these abstractions remain grounded and interpretable, aligning outputs with task specific goals while introducing new hierarchies of understanding, specifically creating shared embedding representations with multidimensional mappings.
Another key aspect of the framework lies in its iterative, knowledge-expanding capability. By incorporating feedback-driven graph updates and dynamically integrating symbolic patterns, Graph-PReFLexOR demonstrates the potential for in-situ knowledge growth. This positions it uniquely to address challenges in hypothesis generation, interdisciplinary exploration, and adaptive learning, tasks that require not just computational power but conceptual flexibility. Ultimately, Graph-PReFLexOR presents opportunities to expand the paradigm of AI-driven reasoning by fostering transparent, interpretable models of discovery, laying the groundwork for a new era of autonomous scientific and creative inquiry.
#### 3.2 Advances Over Existing Approaches
Compared to traditional LLMs that rely on unstructured token-level generation, Graph-PReFLexOR introduces:
- Structured Intermediate Representations: Unlike models that output responses directly, Graph-PReFLexOR generates a knowledge graph G to explicitly represent relationships and dependencies. This improves interpretability and ensures reasoning consistency, which can be adapted easily to other scientific, technical or other domains, and formalizes shared representations.
- Symbolic and Connectionist Integration: While most transformer-based models do not focus specifically on symbolic reasoning capabilities, Graph-PReFLexOR bridges the gap by combining linguistic fluency with graph-based reasoning. This aligns with hybrid approaches but demonstrates superior adaptability to novel tasks.
- Recursive Refinement: Recursive reasoning enables multi-step improvement by iteratively refining knowledge graphs and abstract patterns. This dynamic process enhances response quality and ensures alignment with task objectives.
- Scalability and Adaptability: Graph-PReFLexOR extends traditional methods by dynamically expanding its knowledge graph to adapt to evolving prompts, facilitating knowledge transfer across domains.
Our experimental results demonstrate that Graph-PReFLexOR outperforms baseline models in reasoning depth, knowledge transfer, and adaptability. For instance, the recursive reasoning mechanism (equation 3) achieves higher reasoning depth and adaptability compared to static methods, as we demonstrated in Table 1. For scientific applications, and special cases like mathematics, the critic function $f_{\mathrm{critic}}$ could involve specific fact-checking or consistency assessments, or even executing simulations to inject new data or physical insights. These can be incorporated during training or inference and ensure that such a recursive reasoning system produces accurate and consistent results. This flexibility offers significant potential for scientific applications.
#### 3.3 Quantum-Inspired Metaphor for Transformers and Graph-Based Reasoning
The Transformer architecture, at its core, mirrors certain fundamental principles of quantum mechanics, particularly in its representation and refinement of knowledge. While the analogy is not perfect and it is crucial to recognize that this remains a metaphor and not an actual quantum mechanical process, it allows us to explore some new associations. In a Transformer, before a specific task or question is posed, the system exists in a state analogous to quantum superposition, where all possible answers coexist as potential outcomes (Figure 17). Formally, let the system's latent state be represented as a superposition of knowledge embeddings:
$$|\Psi_{0}\rangle=\sum_{k}c_{k}|k\rangle,$$
where each basis state $|k\rangle$ corresponds to a potential reasoning path or answer, and $c_k$ reflects the weight or amplitude of that path based on prior knowledge. The multi-head attention mechanism introduces a form of entanglement, binding token representations through a context-sensitive tensor product:
$$|\Psi\rangle=|\psi_{1}\rangle\otimes|\psi_{2}\rangle\otimes\cdots\otimes|\psi_{N}\rangle,$$
where tokens dynamically interact to refine the system's focus. This process can be seen as reducing the system's uncertainty, akin to minimizing entropy in quantum systems.
In the case of Graph-PReFLexOR, graph-based reasoning emerges as one expression of this general quantum-like behavior. Here, the knowledge graph serves as a concrete structure for representing interdependencies between concepts, evolving iteratively through recursive refinement as shown in Figure 5. Each refinement step is analogous to the application of a quantum operator, governed by reasoning rules encoded in a Hamiltonian:
$$|\psi_{i+1}\rangle={\hat{H}}|\psi_{i}\rangle,$$
where $|\psi_i\rangle$ represents the graph's state at step $i$, and $\hat{H}$ encodes the principles of relational optimization (e.g., maximizing coherence or minimizing contradictions). The final knowledge graph, after sufficient refinement, reflects a collapsed eigenstate, optimized for the given question.
Principally, this analogy extends to the entire Transformer framework. The recursive refinement within Graph-PReFLexOR (e.g., Figures 1, 2 and 5) exemplifies how Transformers resolve complex queries by dynamically collapsing potential answers into structured outputs through an iterative process. The abstraction mechanisms used here are just one specific instantiation of this broader capability, where relationships between nodes represent entangled dependencies that become more expressive, and accessible, through shared representations that encourage the model to think in isomorphisms. In more abstract terms, the system applies interference patterns via attention weights to amplify relevant paths and suppress contradictory ones:
$$c_{k}^{(i+1)}=\sum_{j}{\mathcal{A}}_{k j}c_{j}^{(i)},$$
where $\mathcal{A}_{kj}$ are attention weights that act analogously to quantum amplitudes, modulating the influence of each path.
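To make the analogy concrete, the update $c^{(i+1)}_k = \sum_j \mathcal{A}_{kj} c^{(i)}_j$ is simply repeated multiplication of an amplitude vector by a row-stochastic attention matrix, as in the small numerical sketch below; the values are arbitrary and purely illustrative.

```python
import numpy as np

def refine_amplitudes(c, scores, steps=3):
    """Iteratively reweight 'path amplitudes' c with a softmax attention matrix built from scores."""
    A = np.exp(scores)
    A = A / A.sum(axis=1, keepdims=True)   # row-stochastic attention weights
    for _ in range(steps):
        c = A @ c                           # amplify consistent paths, damp contradictory ones
        c = c / c.sum()                     # renormalize, loosely mirroring a probability amplitude
    return c

# Example with three candidate reasoning paths and arbitrary pairwise scores
c0 = np.array([0.4, 0.35, 0.25])
scores = np.array([[2.0, 0.5, 0.1],
                   [0.5, 1.0, 0.2],
                   [0.1, 0.2, 0.5]])
print(refine_amplitudes(c0, scores))
```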
By connecting graph reasoning to these foundational quantum-like behaviors, Graph-PReFLexOR illustrates how Transformers can model tasks requiring multi-step abstraction, recursive refinement, and contextual reasoning. The graph serves as a symbolic representation of the underlying quantum-inspired mechanisms, offering interpretability and structure while adhering to the general principles governing the model's operation.
The reasoning process can be viewed as evolving from a state of superposition $|\Psi_0\rangle = \sum_k c_k |k\rangle$, where potential reasoning paths are explored. Through recursive refinement, the framework updates the reasoning state iteratively, $R_{i+1} = f_{\mathrm{critic}}(R_i, F_i)$, analogous to quantum state evolution governed by external feedback $F_i$. The integration of reasoning paths into a unified representation aligns with the mapping defined in equation 1, where a task $T$ produces a knowledge graph $G$, abstract patterns $P$, and final answers $A$. Just as the quantum system collapses into a measurable eigenstate, Graph-PReFLexOR resolves these intermediate abstractions into a final response, $A = g(R_N)$, providing a coherent and interpretable output.
Transformers are unique in developing this analogy, as unlike traditional neural networks, Transformers explicitly model relationships between inputs through their self-attention mechanism, creating a dynamic superposition of weighted dependencies across all tokens in a sequence. This mechanism reflects quantum superposition, where all possible relationships coexist as potentialities before being resolved. Multi-headed attention further enhances this analogy by enabling independent exploration of diverse patterns, akin to quantum entanglement, where multiple states are interdependent. Additionally, the Transformer's decoding process parallels wavefunction collapse, as the model resolves probabilistic attention distributions into specific outputs. This explicit modeling of interdependencies and
dynamic evolution of representations makes Transformers inherently suited to the quantum analogy, more so than other architectures like convolutional or recurrent neural networks, which rely on static filters or sequential processing, respectively. Dense neural networks rely on fixed weight matrices to transform inputs in a static, layer-by-layer manner, without explicitly capturing interdependencies between different elements of the input. Each transformation is local to the input vector as a whole, akin to a single deterministic operation, rather than a superposition of potential relationships. Unlike transformers, dense networks do not employ mechanisms like self-attention to dynamically assign weights to input relationships, making it difficult to draw analogies to quantum phenomena such as superposition or entanglement. Furthermore, dense networks lack the iterative refinement of representations seen in transformers, where relational dependencies evolve layer by layer, mirroring quantum state evolution.
While the quantum analogy offers conceptual insights, it does not literally apply to Transformers, which operate on real-valued vectors rather than complex Hilbert spaces and do not follow quantum mechanical laws such as unitarity or the Born rule. Consequently, terms like "superposition," "entanglement," and "collapse" are used figuratively, reflecting iterative reasoning rather than actual quantum processes. This can be misleading if taken literally, as Transformers remain classical computational systems. Still, these discussions give us insights into how Transformers can be viewed, and potentially improved, to better model physical processes.
### 3.4 Challenges and Future Opportunities
Future work could focus on scaling the framework to larger datasets and more complex models, addressing challenges such as interpretability in deeply interconnected graphs. Integrating Graph-PReFLexOR with state-of-the-art architectures, such as multi-modal transformers and graph neural networks, will further expand its applicability to domains like bioinformatics, materials science, and hypothesis-driven discovery. Graph-PReFLexOR represents an advancement in reasoning frameworks, combining symbolic and connectionist paradigms. By embedding explicit graph reasoning and recursive reflection into LLMs, it sets a new benchmark for scientific discovery, unlocking opportunities for transformative research across disciplines.
The ability to construct structured and symbolically focused reasoning strategies within the flexible framework of LLMs is appealing as an alternative to conventional symbolic mechanisms and strategies. It also allows for the construction of powerful agentic frameworks. For instance, the approach taken in this paper demonstrated a strategy to create iterative prompting to expand knowledge graphs; this can be done via human-AI collaboration or via AI-based reasoning only. It can also incorporate specific targets, such as expanding the knowledge graph towards specific directions (e.g. introducing art, music, specific technologies, etc.) to drive discovery and mimic some mechanisms seen in directed diffusion. Other strategies may invoke more complex approaches to avoid existing known relationships and drive the model towards unexplored domains. Other aspects could involve adding retrieval-augmented assessments against the literature (e.g. using Semantic Scholar) or the internet, and even include assessments of feasibility as done in other agentic frameworks that operate at the interface of AI and physics [26, 29, 52]. This helps us to expand the use of AI tools as powerful assistants to connect ideas and explore new frontiers of knowledge that were previously not accessible. Broad access to such tools is essential, especially democratizing access to AI for broad audiences, as this holds a key to vast uncharted knowledge. AI is no longer merely a tool but a partner in the new creative pursuits of humans.
### 4 Materials and Methods
We describe key materials and methods developed in this section.
### 4.1 Graph-PReFLexOR model development
### 4.1.1 Knowledge Graph Generation and Question-Answering Framework
This section details the key algorithms developed for generating a knowledge graph, enhancing context, and producing structured question-answering outputs during the training phase. The training dataset is the raw text of around 1,000 scientific papers, as detailed in earlier papers [30, 35].
During in-situ dataset generation at training time [36], the knowledge graph generation algorithm creates a focused graph that includes key concepts and their relationships, such as classification (IS-A), influence (INFLUENCES), and connections (RELATES-TO). Retrieval-Augmented Generation (RAG) is used to identify abstract patterns and dependencies, enhancing the conceptual representation, using the original dataset of raw scientific papers. As discussed in the original paper, the use of RAG is significant since it provides a direct, deep, and structured connection to related concepts in the training corpus to ensure full and complete reasoning paths are developed during training.
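A minimal sketch of this enrichment step is shown below, using a generic embedding function `embed` (a placeholder for whatever embedding model backs the knowledge index, and not part of the released code) and cosine similarity to pull in the most related chunks from the corpus.

```python
import numpy as np

def rag_enrich(context, corpus_chunks, embed, top_k=3):
    """Retrieve the corpus chunks most similar to the current context and append them.

    embed(text) -> 1-D numpy array is a placeholder for the embedding model behind the knowledge index.
    """
    q = embed(context)
    chunk_vecs = np.stack([embed(c) for c in corpus_chunks])
    sims = chunk_vecs @ q / (np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q) + 1e-8)
    top = np.argsort(-sims)[:top_k]
    return context + "\n\n" + "\n\n".join(corpus_chunks[i] for i in top)
```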
The process proceeds in a systematic fashion via distinct phases of construction.
Question-and-Answer Generation Algorithm This algorithm synthesizes knowledge graph generation, enriched context, and question-answer generation into a unified process. It produces challenging questions, detailed correct answers, and rejected answers for evaluation.
An abstract representation of the workflow is as follows:
1. Retrieve a randomly selected context $T$ (from all raw data) as a sequence of tokens $T = \{t_1, t_2, \ldots, t_n\}$ from the knowledge index.
$$T=\bigcup_{i=1}^{k}\mathrm{{Text}}(n_{i}).$$
2. Generate a question Q based on T by maximizing relevance:
$$Q={\underset{q}{\operatorname{argmax}}}\operatorname{Relevance}(q,T).$$
3. Enrich T using RAG, yielding T' = T + RAG(T). Enrichment algorithms use RAG to add supplementary insights to the retrieved context, to connect specific text chunks used for generating the question with the entire dataset, ensuring that the answer and reasoning steps incorporate a global perspective.
4. Construct a knowledge graph G from T'.
5. Extract reasoning steps S and generate an answer A as:
$$A={\underset{a}{\operatorname{argmax}}}\operatorname{Quality}(a|Q,S,G).$$
6. Generate a rejected answer A' by perturbing A or through direct model prompts.
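A compact sketch of steps 1-6 above is shown below; `retrieve_context`, `rag_enrich`, `llm`, and `build_graph` stand in for the retrieval index, the RAG step, the language model, and the graph constructor, and are hypothetical names rather than the actual implementation.

```python
def generate_training_sample(retrieve_context, rag_enrich, llm, build_graph):
    """One pass of the question-answer generation workflow (steps 1-6 above), as a sketch."""
    T = retrieve_context()                                               # 1. random raw-text context
    Q = llm(f"Formulate a challenging question answerable from:\n{T}")   # 2. question generation
    T_enriched = T + rag_enrich(T)                                       # 3. RAG enrichment
    G = build_graph(T_enriched)                                          # 4. knowledge graph construction
    S = llm(f"List the reasoning steps to answer '{Q}' using:\n{G}")     # 5a. reasoning steps
    A = llm(f"Answer '{Q}' using the reasoning steps:\n{S}")             # 5b. correct answer
    A_rejected = llm(f"Answer '{Q}' briefly, without the reasoning steps.")  # 6. rejected answer
    return Q, A, A_rejected, G
```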
Details on Graph Construction: Given a set of extracted concepts $C = \{c_1, c_2, \ldots, c_n\}$ and relationships $R = \{r_1, r_2, \ldots, r_m\}$, the knowledge graph $G = (V, E)$ is constructed as:
- $V$: Nodes representing the concepts $C$.
- $E$: Directed edges $(c_i, c_j, r_k)$, where $r_k$ specifies the relationship type between $c_i$ and $c_j$.
The relationship types $r_k$ are encouraged to be within a set of categories (e.g., IS-A, RELATES-TO, INFLUENCES), albeit the training process may yield alternative relationships in the data.
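A minimal sketch of this construction with networkx is shown below; the triples are illustrative examples, not extracted data.

```python
import networkx as nx

# Build G = (V, E) from extracted (concept_i, relation, concept_j) triples (illustrative only)
triples = [
    ("Collagen", "IS-A", "Structural Protein"),
    ("Collagen", "RELATES-TO", "Elastin"),
    ("Hierarchical Structure", "INFLUENCES", "Toughness"),
]

G = nx.MultiDiGraph()               # directed; multiple typed edges between the same concepts are allowed
for c_i, r_k, c_j in triples:
    G.add_edge(c_i, c_j, relation=r_k)

print(G.number_of_nodes(), G.number_of_edges())
print([(u, v, d["relation"]) for u, v, d in G.edges(data=True)])
```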
Abstract Pattern Generation Using the Generative Framework Abstract patterns are derived using RAG by identifying higher-order dependencies and summarizing them as:
$$P(C,R)\sim\sum_{i,j}f(c_{i},c_{j};r_{k}),$$
where $f$ quantifies the semantic strength of the relationship $r_k$ between $c_i$ and $c_j$. Abstract patterns are constructed by analyzing hierarchical dependencies within the enriched context and summarizing key relationships into symbolic representations. The process begins with a knowledge graph $G = (V, E)$, where $V$ represents concepts and $E$ defines relationships. The abstract pattern generation algorithm operates as follows:
First, we identify core concepts $\{c_1, c_2, \ldots, c_n\}$ from $V$. Second, we evaluate the relevance of the relationships $r_k \in E$ based on semantic weights:
$$r_{k}=\arg\max_{r} f(c_{i},c_{j};r),$$
where $f$ is a scoring function quantifying the significance of $r$ between concepts $c_i$ and $c_j$. Third, we construct a directed sequence of abstract states $\alpha, \beta, \gamma, \delta, \epsilon$ representing transformations, where:
$$\alpha\rightarrow\beta\rightarrow\gamma\rightarrow\delta\rightarrow\epsilon.$$
Finally, we derive proportional relationships between states, such as:
$$\alpha \propto \epsilon,$$
indicating the dependence of the initial state $\alpha$ on the final transformation $\epsilon$.
This framework synthesizes complex relationships into interpretable symbolic patterns, enabling high-level reasoning and abstraction. The above steps are conducted by repeatedly prompting an LLM, with RAG, to construct data in the delineated form based on raw data.
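Under the scoring formulation above, pattern extraction can be sketched as picking, for each concept pair, the relation with the highest semantic weight and chaining the top-ranked concepts into an abstract state sequence; the scoring function `f` below is a stand-in for whatever LLM- or embedding-based score is used in practice.

```python
from itertools import islice

GREEK = ["alpha", "beta", "gamma", "delta", "epsilon"]

def abstract_pattern(concepts, relations, f, max_states=5):
    """Select the highest-scoring relation per concept pair and map concepts to abstract states.

    f(c_i, c_j, r) -> float is a hypothetical semantic scoring function.
    relations is a list of candidate relation labels, e.g. ["IS-A", "RELATES-TO", "INFLUENCES"].
    """
    best = {}
    for i, c_i in enumerate(concepts):
        for c_j in concepts[i + 1:]:
            r_best = max(relations, key=lambda r: f(c_i, c_j, r))
            best[(c_i, c_j)] = (r_best, f(c_i, c_j, r_best))
    # Chain the top-ranked concepts into a directed sequence of abstract states alpha -> ... -> epsilon
    ranked = sorted(concepts, key=lambda c: -sum(s for (a, b), (_, s) in best.items() if c in (a, b)))
    states = dict(zip(GREEK, islice(ranked, max_states)))
    return best, states
```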
#### 4.1.2 Integrated Dataset Generation Algorithm
Using the above steps, this algorithm creates datasets for training or evaluation by generating question-answer pairs and applying optional filtering based on specific criteria.
Abstract Workflow:
1. Iterate over topics $T_k$ or randomly sample contexts.
2. For each $T_k$: generate a question $Q_k$, correct answer $A_k$, and rejected answer $A_k'$. Optionally filter pairs where $A_k$ does not meet predefined correctness criteria.
3. Formulate the dataset as:
$\mathcal{D}=\{(Q_{k},A_{k},A_{k}^{\prime})\mid k=1,...,N\}$.
4. Apply post-processing P if specified, yielding the final dataset:
$${\mathcal{D}}^{\prime}={\mathcal{P}}({\mathcal{D}}).$$
The dataset generation algorithm supports training and evaluation by producing structured data that combines knowledge graph reasoning with contrastive answer evaluation.
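The resulting dataset maps directly onto the prompt/chosen/rejected format commonly used by preference-optimization trainers; a minimal sketch, with a hypothetical correctness filter and optional post-processing step, follows.

```python
def build_preference_dataset(samples, keep=lambda q, a, a_rej: True, postprocess=None):
    """Assemble D = {(Q_k, A_k, A_k')} and apply optional filtering and post-processing P.

    samples: iterable of (question, correct_answer, rejected_answer) tuples.
    keep: hypothetical correctness filter; postprocess: optional callable P applied to the full dataset.
    """
    dataset = [
        {"prompt": q, "chosen": a, "rejected": a_rej}
        for q, a, a_rej in samples
        if keep(q, a, a_rej)
    ]
    return postprocess(dataset) if postprocess else dataset
```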
#### 4.1.3 Example of Resulting in-situ generated Data for Training
The in-situ generated data consists of structured text representing user queries, reasoning steps, knowledge graphs, and responses. Below is a snippet of the resulting data.
First, the question generated is:
### Training data: Question

What is the primary advantage of using Origami and Kirigami Engineering (OKN) for fabricating microscale 3D objects compared to traditional top-down approaches?
The answer is developed first within a thinking section, followed by the actual answer. The thinking section begins with the generation of a knowledge graph and reasoning, like so:
### Training data: Knowledge graph/reasoning (excerpt)

- Concepts and Relationships: Origami Engineering, Kirigami Engineering (IS-A)
- Traditional Top-Down Approaches (overcomes limitations); Dynamic Transitions (RELATES-TO, from 2D patterns to 3D constructs)
- ... enable the creation of 3D structures from 2D ...
- ... overcoming the limitations of ... geometry changes with unique properties ...
The abstract pattern identified is:
### Training data example: Abstract pattern
```
Abstract Pattern

Key Transformation Rule:
... of microscale 3D objects ...
... reconfigurability influences OKN ...
OKN influences traditional top-down approaches (δ)
OKN enables dynamic transitions (ε)

Essential Condition:
The more dynamic transitions (ε) OKN enables, the more advantageous it is for fabricating microscale 3D objects (α)

Pattern Context: The primary advantage of using Origami and Kirigami Engineering (OKN) for fabricating microscale 3D objects is
its ability to create highly reconfigurable structures ..., overcoming the limitations of traditional ... reversible geometry changes,
...
```
As an interpretation, the model identifies that the effectiveness of OKN in fabricating microscale 3D objects (α) is directly proportional to its capability to enable dynamic transitions (ε).
The final response after the <|thinking|>...<|/thinking|> section is:
### Training data: Final answer
| The primary advantage of using Origami and Kirigami Engineering (OKN) for fabricating microscale 3D objects compared to |
| --- |
| traditional top-down approaches lies in their ability to create highly reconfigurable structures with a wide range of |
| properties. Here's a breakdown of this advantage: |
| 1. Overcoming traditional limitations: OKNs allow for the design of materials with a limitless variety of patterns and |
| composites, overcoming the limitations of traditional methods that are constrained by factors such as miscibility, |
| compatibility, and temperature stability. This versatility enables the creation of microscale 3D objects with tailored |
| properties for specific applications. |
| 2. Dynamic transitions: Unlike static metamaterials, OKNs offer dynamic transitions from two-dimensional patterns to |
| three-dimensional constructs. This enables large and reversible geometry changes, allowing for adaptability and |
| multifunctionality in the resulting microscale 3D objects. |
| 3. Unique property combinations: OKNs facilitate the engineering of materials with unique combinations of properties that are |
| typically contradictory. For example, OKN structures can exhibit high conductivity and stretchability simultaneously, which is |
| challenging to achieve with traditional methods. This enables the creation of microscale 3D objects with unprecedented |
| functionalities. |
| 4. Macroscopic deformability: OKNs provide macroscopic deformability far beyond the strain limits of the constituent materials. |
| This allows for programmable and complex responses to strain, making them particularly suitable for applications requiring |
| adaptability and multifunctionality. For instance, OKN-based biomedical devices can conform to the shape of organs or tissues, |
| reducing tissue damage and improving integration. |
| 5. Predictability and reproducibility: OKN structures can be manufactured using established additive or subtractive techniques, |
| such as lithography or laser cutting. This ensures predictability and reproducibility in their mechanical, optical, and |
| electrical properties, making them reliable for various applications. |
This snippet illustrates how the generated data combines user input, structured reasoning, knowledge graph insights, and a comprehensive response to create a cohesive dataset for downstream tasks. Figure 18 shows a more detailed analysis of the graph generated for this training sample, as well as a visual analysis of the overall answer.
The system effectively translates user queries into structured reasoning steps and detailed responses, leveraging the knowledge graph to identify and elaborate on key relationships and insights. The abstract pattern captures the hierarchical influence of OKN principles, emphasizing dynamic transitions as a pivotal advantage. The inclusion of dynamic transitions and unique property combinations highlights the capability of the system to synthesize complex ideas. Finally, the response integrates theoretical concepts with practical applications, ensuring relevance and depth.
#### 4.1.4 Training strategy
The training strategy is identical to that described in [36], using a variant of ORPO [53] and DPO, specifically the EXO variant [54, 44].
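For orientation, the core idea of the ORPO objective [53] can be sketched in a few lines from sequence-level average token log-probabilities; this is a simplified illustration, not the exact implementation used here (which follows [36] and the EXO variant [54]), and λ is a placeholder hyperparameter.

```python
import torch
import torch.nn.functional as F

def orpo_loss(logp_chosen, logp_rejected, lam=0.1):
    """Sketch of the ORPO objective from mean per-token log-probabilities of chosen/rejected answers."""
    # odds(p) = p / (1 - p), computed in log space for numerical stability
    log_odds_chosen = logp_chosen - torch.log1p(-torch.exp(logp_chosen))
    log_odds_rejected = logp_rejected - torch.log1p(-torch.exp(logp_rejected))
    ratio = F.logsigmoid(log_odds_chosen - log_odds_rejected)   # prefer chosen over rejected
    return -logp_chosen - lam * ratio                           # NLL on chosen + odds-ratio penalty
```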
#### 4.2 Recursive reasoning implementation
We delineate details and prompts used in the recursive reasoning implementation visualized in Figure 5. The Graph-PReFLexOR recursive reasoning algorithm iteratively refines graph representations of relational mechanics using a multi-agent system, with Agent #1 being the Graph Reasoning model and Agent #2 being the non-fine-tuned meta-llama/Llama-3.2-3B-Instruct model.
This prompt is used for Agent #2 to critique the thought process generated in the past round, generated by Agent #1:
### Critic prompting strategy

I will show you a question and a thought process. Read the thought process and provide suggestions to improve it in a well-reasoned manner.

Question: {question}

Thought process: {think}
The prompt to improve the previous thinking section is as follows:
### Improvement prompting strategy

I will show you a thought process and feedback. Carefully implement the feedback and improve the thought process by addressing it.

Thought process: {think}

Feedback: {reflect}

Provide the improved thought process and nothing else. The revised thought process is:
This prompt is used to integrate earlier responses into an integrated, final answer.
```
Integration prompting strategy
I will show you a question and several possible answers.
QUESTION: {question}
ANSWER #0: {answer_0}
ANSWER #1: {answer_1}
Carefully incorporate all ideas presented in the answer candidates into a very detailed, final answer
Do not repeat the question. You directly begin your response with the final answer to the question
The answer is:
```
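Putting the three prompt templates together, the recursive loop of Figure 5 can be sketched as below; `graph_reasoner` (Agent #1) and `critic` (Agent #2) are stand-ins for the fine-tuned model and the base Llama-3.2-3B-Instruct model, and the exact interfaces, as well as which agent applies the improvement and integration prompts, are assumptions of this sketch.

```python
def recursive_reasoning(question, graph_reasoner, critic, critic_prompt,
                        improve_prompt, integrate_prompt, n_rounds=3):
    """Iteratively refine the thinking section with a second agent, then integrate the answers."""
    answers, think = [], None
    for _ in range(n_rounds):
        if think is None:
            think, answer = graph_reasoner(question)        # hypothetical interface: (thinking, answer)
        else:
            feedback = critic(critic_prompt.format(question=question, think=think))   # Agent #2 critique
            think = critic(improve_prompt.format(think=think, reflect=feedback))      # improved thinking
            answer = graph_reasoner(question, thinking=think)[1]   # regenerate answer from new thinking
        answers.append(answer)
    merged = integrate_prompt.format(question=question,
                                     answer_0=answers[0], answer_1=answers[-1])
    return critic(merged)                                    # integrated final answer
```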
#### 4.3 Knowledge garden prompting strategy through agentic modeling
Below is an example of how the knowledge graph is expanded by providing the set of earlier graph structures (graph_str) and asking another LLM agent to develop a new question, to which Graph-PReFLexOR then responds, generating a new graph. Since the prompt includes the earlier keywords/nodes, as instructed, the algorithm results in a well-connected graph that expands in new directions. The prompt can be constructed in particular ways to emphasize certain aspects. For this example, we seek to create multi-disciplinary outputs, especially at the interface with art.
```
Prompting strategy to develop new questions to grow the knowledge garden
Consider this list of topics/keywords.
Formulate a concise follow-up creative and highly unusual question to ask about a related but totally different
Your question should include at least one of the original topics/keywords marked as '...' but expand to new dissimilar fields
such as philosophy or art.
Original list of topics/keywords:
{graph str}
The new question is:
```
An example of the resulting new questions generated by this particular prompt is shown in Figure 15.
### 4.4 Graph visualization
We use mermaid.js to visualize graphs, where the output of the graph reasoning model is converted to the scripting language using Claude 3.5 Sonnet, then exported using the web editor (https://mermaid.live/). Additional visualizations and analyses are conducted using Gephi [55] (https://gephi.org/) and various plugins therein.
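For completeness, a typed knowledge graph in the form used above can also be serialized to mermaid.js flowchart syntax with a few lines, as sketched below; in this work, the conversion was performed with Claude 3.5 Sonnet rather than a script.

```python
import networkx as nx

def to_mermaid(G):
    """Serialize a typed knowledge graph to mermaid.js flowchart syntax."""
    lines = ["graph TD"]
    ids = {node: f"n{i}" for i, node in enumerate(G.nodes)}      # mermaid-safe node identifiers
    for u, v, data in G.edges(data=True):
        label = data.get("relation", "")
        lines.append(f'    {ids[u]}["{u}"] -- {label} --> {ids[v]}["{v}"]')
    return "\n".join(lines)

G = nx.MultiDiGraph()
G.add_edge("Collagen", "Toughness", relation="INFLUENCES")       # illustrative edge
print(to_mermaid(G))
```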
#### 4.5 Glossary and key terms
Table 3 shows a summary of key terms and definitions used in this paper.
# Code, data and model weights availability
Codes, model weights and additional materials are available at https://huggingface.co/lamm-mit and https://github.com/lamm-mit/PRefLexOR. The model is available at lamm-mit/Graph-Preflexor_01062025.
# Acknowledgments
The author acknowledges support from the MIT Generative AI initiative.
# References
- [1] Vaswani, A. et al. Attention is All you Need (2017). URL https://papers.nips.cc/paper/ 7181-attention-is-all-you-need.
- [2] Alec Radford, Karthik Narasimhan, Tim Salimans & Ilya Sutskever. Improving Language Understanding by Generative Pre-Training URL https://gluebenchmark.com/leaderboard.
- [3] Xue, L. et al. ByT5: Towards a token-free future with pre-trained byte-to-byte models. Transactions of the Association for Computational Linguistics 10, 291-306 (2021). URL https://arxiv.org/abs/2105. 13626v3.
- [4] Jiang, A. Q. et al. Mistral 7B (2023). URL http://arxiv.org/abs/2310.06825.
| Term | Definition |
| --- | --- |
| Graph-PReFLexOR (Preference-based Recursive Language Modeling for Exploratory Optimization of Reasoning) | A generative framework that performs dynamic graph reasoning and iteratively expands domain knowledge using a reinforcement-learning-inspired approach. It leverages detailed knowledge graphs and abstract representations, allowing for hierarchical reasoning and adaptive learning, following the approach detailed in [36]. |
| PReFLexOR | A framework that enhances the reasoning capabilities of language models by integrating preference optimization and recursive reasoning. |
| Knowledge Graph (KG) | A graph representation of knowledge where nodes (V) represent concepts and edges (E) represent relationships between concepts. In Graph-PReFLexOR, relationships include "IS-A," "RELATES-TO," and "INFLUENCES." |
| Abstract Pattern (P) | Symbolic representations that summarize key relationships and dependencies identified within a knowledge graph, often represented as a series of transformations. |
| Recursive Reasoning | An iterative process of refining reasoning by applying feedback to improve alignment with the desired output. |
| Thinking Phase | A stage where the model simulates intermediate reasoning steps, marked by the tokens <\|thinking\|> and <\|/thinking\|>. |
| Reflection Phase | A stage where the model evaluates the intermediate reasoning generated during the thinking phase and provides feedback for improvement, here achieved via a second agent. |
| Retrieval-Augmented Generation (RAG) | A technique used to enrich the input context by retrieving relevant information from a knowledge index, enabling higher-order reasoning. |
| Semantic Scoring Function (f) | A function that evaluates the significance of relationships between concepts in a knowledge graph. |
| Base LLM, here: meta-llama/Llama-3.2-3B-Instruct | The foundational large language model (LLM) used to train Graph-PReFLexOR. |
| Odds Ratio Preference Optimization (ORPO) | An optimization method used during the training of PReFLexOR [36] to guide reasoning processes using special tokens. |
| Independent Reasoning Development | A stage in PReFLexOR training that leverages masking of thinking tokens to ensure the model infers reasoning pathways without direct supervision. |
| Efficient Exact Optimization (EXO) | An optimization method used in PReFLexOR to align final outputs with preferred reasoning patterns. |
Table 3: Summary of key terms used in this study.
- [5] Phi-2: The surprising power of small language models Microsoft Research. URL https://www.microsoft. com/en-us/research/blog/phi-2-the-surprising-power-of-small-language-models/.
- [6] Dubey, A. et al. The llama 3 herd of models (2024). URL https://arxiv.org/abs/2407.21783. 2407. 21783.
- [7] Brown, T. B. et al. Language Models are Few-Shot Learners (2020).
- [8] Jumper, J. et al. Highly accurate protein structure prediction with AlphaFold. Nature 1-12 (2021).
- [9] Spivak, D., Giesa, T., Wood, E. & Buehler, M. Category theoretic analysis of hierarchical protein materials and social networks. PLoS ONE 6 (2011).
- [10] Giesa, T., Spivak, D. I. & Buehler, M. J. Reoccurring patterns in hierarchical protein materials and music: The power of analogies. BioNanoScience 1 (2011).
- [11] Giesa, T., Spivak, D. & Buehler, M. Category theory based solution for the building block replacement problem in materials design. Advanced Engineering Materials 14 (2012).
- [12] Abbott, V. & Zardini, G. Flashattention on a napkin: A diagrammatic approach to deep learning io-awareness (2024). URL https://arxiv.org/abs/2412.03317. 2412.03317.
- [13] Buehler, M. J. Graph-aware isomorphic attention for adaptive dynamics in transformers (2025). URL https://arxiv.org/abs/2501.02393. 2501.02393.
- [14] Ackbarow, T., Sen, D., Thaulow, C. & Buehler, M. Alpha-helical protein networks are self-protective and flaw-tolerant. PLoS ONE 4 (2009).
- [15] Nepal, D. et al. Hierarchically structured bioinspired nanocomposites. Nature Materials 2022 1–18 (2022). URL https://www.nature.com/articles/s41563-022-01384-1.
- [16] Ritchie, R., Buehler, M. & Hansma, P. Plasticity and toughness in bone. Physics Today 62 (2009).
- [17] Launey, M., Buehler, M. & Ritchie, R. On the mechanistic origins of toughness in bone, vol. 40 (2010).
- [18] Eilenberg, S. & MacLane, S. Group Extensions and Homology. Annals of Mathematics 43, 757-831 (1942). URL https://www.jstor.org/stable/1968966.
- [19] Eilenberg, S. & Mac Lane, S. General theory of natural equivalences. Transactions of the American Mathematical Society 58, 247 (1945). URL https://www.ams.org/journals/tran/1945-058-00/ S0002-9947-1945-0013131-6/S0002-9947-1945-0013131-6.pdf.
- [20] Giesa, T., Jagadeesan, R., Spivak, D. I. & Buehler, M. J. Matriarch: A Python Library for Materials Architecture. ACS Biomaterials Science & Engineering 150901085625000 (2015). URL http://pubs.acs.org/doi/10. 1021/acsbiomaterials.5b00251.
- [21] Weisfeiler, B. & Leman, A. A. Reduction of a graph to a canonical form and an algebra arising during this reduction. Nauchno-Technicheskaya Informatsia 2, 12–16 (1968).
- [22] Cranford, S. W. & Buehler, M. J. Biomateriomics (Springer Netherlands, 2012).
- [23] Groen, N., Cranford, S., de Boer, J., Buehler, M. & Van Blitterswijk, C. Introducing materiomics (2011).
- [24] Arevalo, S. E. & Buehler, M. J. Learning from nature by leveraging integrative biomateriomics modeling toward adaptive and functional materials. MRS Bulletin 2023 1–14 (2023). URL https://link.springer.com/ article/10.1557/s43577-023-00610-8.
- [25] Buehler, M. J. MeLM, a generative pretrained language modeling framework that solves forward and inverse mechanics problems. Journal of the Mechanics and Physics of Solids 105454 (2023). URL https://linkinghub. elsevier.com/retrieve/pii/S0022509623002582.
- [26] Ghafarollahi, A. & Buehler, M. J. Sciagents: Automating scientific discovery through multi-agent intelligent graph reasoning (2024). URL https://arxiv.org/abs/2409.05556. 2409.05556.
- [27] Dreyer, T. et al. Comparing cooperative geometric puzzle solving in ants versus humans. Proceedings of the National Academy of Sciences 122, e2414274121 (2025). URL https://www.pnas.org/doi/abs/10.1073/ pnas.2414274121. https://www.pnas.org/doi/pdf/10.1073/pnas.2414274121.
- [28] Singhal, K. et al. Large language models encode clinical knowledge. Nature (2023). URL https://www. nature.com/articles/s41586-023-06048-6. Open access.
- [29] Ghafarollahi, A. & Buehler, M. J. Protagents: protein discovery via large language model multi-agent collaborations combining physics and machine learning. Digital Discovery 3, 1389-1409 (2024). URL https://doi.org/10.1039/D4DD00013G. †.
- [30] Luu, R. K. & Buehler, M. J. BioinspiredLLM: Conversational Large Language Model for the Mechanics of Biological and Bio-inspired Materials. Adv. Science. (2023). URL https://doi.org/10.1002/advs. 202306724.
- [31] Yu, S., Ran, N. & Liu, J. Large-language models: The game-changers for materials science research. AI in Chemical Engineering 100076 (2024). URL https://doi.org/10.1016/j.aichem.2024.100076. Under a Creative Commons license.
- [32] Hu, Y. & Buehler, M. J. Deep language models for interpretative and predictive materials science. APL Machine Learning 1,010901 (2023). URL https://aip.scitation.org/doi/abs/10.1063/5.0134317.
- [33] Buehler, E. L. & Buehler, M. J. X-LoRA: Mixture of Low-Rank Adapter Experts, a Flexible Framework for Large Language Models with Applications in Protein Mechanics and Design (2024). URL https://arxiv.org/abs/ 2402.07148v1.
- [34] Buehler, M. J. MechGPT, a language-based strategy for mechanics and materials modeling that connects knowledge across scales, disciplines and modalities. Appl. Mech. Rev. (2023). URL https://doi.org/10. 1115/1.4063843.
- [35] Lu, W., Luu, R. K. & Buehler, M. J. Fine-tuning large language models for domain adaptation: Exploration of training strategies, scaling, model merging and synergistic capabilities (2024). URL https://arxiv.org/abs/ 2409.03444. 2409.03444.
- [36] Buehler, M. J. Preflexor: Preference-based recursive language modeling for exploratory optimization of reasoning and agentic thinking (2024). URL https://arxiv.org/abs/2410.12375. 2410.12375.
- [37] Zelikman, E., Wu, Y., Mu, J. & Goodman, N. D. Star: Bootstrapping reasoning with reasoning (2022). URL https://arxiv.org/abs/2203.14465. 2203.14465.
- [38] Zelikman, E. et al. Quiet-star: Language models can teach themselves to think before speaking (2024). URL https://arxiv.org/abs/2403.09629. 2403.09629.
- [39] Wei, J. et al. Chain-of-thought prompting elicits reasoning in large language models (2023). URL https://arxiv.org/abs/2201.11903. 2201.11903.
- [40] Kojima, T., Gu, S. S., Reid, M., Matsuo, Y. & Iwasawa, Y. Large language models are zero-shot reasoners (2023). URL https://arxiv.org/abs/2205.11916. 2205.11916.
- [41] Rumelhart, D. E., Hinton, G. E. & Williams, R. J. Learning representations by back-propagating errors. In Nature, vol. 323, 533-536 (Macmillan, 1986).
- [42] Newell, A. & Simon, H. A. Human problem solving (Prentice-Hall, 1972).
- [43] McCarthy, J. Programs with common sense. RLE and MIT Computation Center Cambridge, MA 1, 77-84 (1960).
- [44] Ji, H. et al. Towards efficient exact optimization of language model alignment (2024). URL https://arxiv. org/abs/2402.00856. 2402.00856.
- [45] Kullback, S. & Leibler, R. A. On information and sufficiency. Annals of Mathematical Statistics 22, 79–86 (1951).
- [46] Brin, S. & Page, L. The anatomy of a large-scale hypertextual web search engine. In Proceedings of the Seventh International Conference on the World Wide Web (WWW 1998), 107-117 (Elsevier Science Publishers B.V., Brisbane, Australia, 1998).
- [47] Fruchterman, T. M. J. & Reingold, E. M. Graph drawing by force-directed placement. Software: Practice and Experience 21, 1129-1164 (1991).
- [48] Hu, Y. F. Efficient and high quality force-directed graph drawing. The Mathematica Journal 10, 37–71 (2005).
- [49] Hesse, H. The Glass Bead Game (Vintage Classics, 2000).
- [50] Ziolkowski, T. The Novels of Hermann Hesse: A Study in Theme and Structure (Princeton University Press, Princeton, NJ, 1965). Includes analysis of the Glass Bead Game and its conceptual influence.
- [51] OpenAI et al. Openai o1 system card (2024). URL https://arxiv.org/abs/2412.16720. 2412.16720.
- [52] Ghafarollahi, A. & Buehler, M. J. Atomagents: Alloy design and discovery through physics-aware multi-modal multi-agent artificial intelligence (2024). URL https://arxiv.org/abs/2407.10022. 2407.10022.
- [53] Hong, J., Lee, N. & Thorne, J. Orpo: Monolithic preference optimization without reference model (2024). URL https://arxiv.org/abs/2403.07691. 2403.07691.
- [54] Rafailov, R. et al. Direct preference optimization: Your language model is secretly a reward model (2024). URL https://arxiv.org/abs/2305.18290. 2305.18290.
- [55] Bastian, M., Heymann, S. & Jacomy, M. Gephi: An open source software for exploring and manipulating networks (2009). URL http://www.aaai.org/ocs/index.php/ICWSM/09/paper/view/154.
- [56] Xu, K., Hu, W., Leskovec, J. & Jegelka, S. How powerful are graph neural networks? In International Conference on Learning Representations (2019). URL https://arxiv.org/abs/1810.00826.
Figure 12: Logical sequential flow of prompts exploring bio-inspired materials. The progression begins with a broad question (prescribed by the human: Discuss an interesting idea in bio-inspired materials science.) and transitions to increasingly detailed and interconnected inquiries into adaptability, self-healing, and their roles in mitigating climate change and enhancing ecosystem resilience. This structure exemplifies recursive reasoning. These autonomously generated prompts are used for constructing knowledge graphs that are then assembled into an integrated graph.
Figure 13: An autonomously grown knowledge graph, where a multi-agent AI system creates questions and answers, each with a new knowledge subgraph, which is then concatenated into an integrated graph and converted into GraphML format using GPT-4o for analysis using networkx and Gephi. The upper panel shows the graph, and the lower panel a selection of interesting paths identified in the overall graph. The starting prompt was chosen to be: Discuss an interesting idea in bio-inspired materials science. The model then iterated for N = 12 iterations to grow the graph. The nodes with the highest degrees are 'Sustainable Materials', 'Self-healing and Adaptable Materials', and 'Ecosystems'.
Figure 14: Visualization of the graph analysis of the autonomously grown graph depicted in Figure 13. (Top Left) Degree distribution shows a heterogeneous structure with prominent hubs and many low-degree nodes. (Top Right) Visualization of the largest connected component reveals its modular structure. (Middle Left) Clustering coefficient distribution highlights sparse local connectivity, with some nodes in dense clusters. (Middle Right) Betweenness centrality distribution indicates a small subset of critical nodes facilitating connectivity. (Bottom Left) Shortest path length distribution suggests a small-world structure with most nodes separated by a few hops. (Bottom Right) Community size distribution, with central nodes labeled, reflects varying sizes and roles within the graph. The analysis collectively underscores the complex topology of the graph, including heterogeneity, modularity, and small-world characteristics.
Figure 15: Logical sequential flow of prompts exploring protein design and bioluminescent biomaterials. The progression begins with protein design and expands into speculative questions about bioluminescent biomaterials, their artistic and ecological applications, and their integration into immersive art installations. These prompts illustrate how the intersection of biology, technology, and art can be explored recursively to generate novel insights and creative concepts.
Figure 16: Grown knowledge graph based on the prompt Discuss protein design based on collagen and elastin. Here, the agentic system is specifically charged to develop new questions (see, Figure 15) that integrate dissimilar fields such as philosophy or art. The visualization depicts nodes sized and colored by page rank, with top nodes being 'Bioluminescent Biomaterial' (the most significant node also per node degree), 'Protein', 'Installation', 'Elastin' and 'Concept'. A unique feature is the incorporation of the concept of 'thin places', a concept drawn from various mythological and spiritual traditions that invokes sites or moments where the boundary between the physical world and a transcendent realm is perceived to be exceptionally thin or permeable. The model identifies this as an interesting association for this particular task, which is particularly noteworthy as a way to create a new concept that incorporates highly multidisciplinary relationships.
Figure 17: A conceptual illustration of how a Transformer-based system starts with a latent superposition of potentiality that encompasses all potential answers, in a loose analogy to quantum systems. When a task is posed, it "measures" these possibilities to produce an output. In Graph-PReFLexOR, we leverage graph-based reflection and reasoning to converge on an optimal answer by imposing structured mechanisms of reasoning.
Figure 18: Visualization of graph construction in training set development. The figure depicts a visualization of a graph and response to the question: What is the primary advantage of using Origami and Kirigami Engineering (OKN) for fabricating microscale 3D objects compared to traditional top-down approaches?, during on-the-fly training set development from random raw data.
Text Box 1: Final answer (after thinking) to the task Propose a new idea to relate music and materials.
| Final answer (after thinking) |
| --- |
| **Proposed Idea: "Music-Inspired Material Tuning"** |
| Inspire a new method to tune and control the mechanical properties of materials by analyzing and manipulating the frequency |
| spectrum of music. This method leverages the dynamic responses of materials to different frequencies, mimicking the effects of |
| music on materials. |
| **Key Concepts:** |
| 1. **Nonlinear Dynamic Response**: Materials exhibit nonlinear responses to external stimuli, such as vibrations or frequencies. |
| 2. **Frequency Dependence**: Material responses vary with different frequencies, allowing for tuning and control. |
| 3. **Frequency Spectrum Analysis**: Study the dynamic responses of materials to different frequencies by analyzing their |
| frequency response curves or using techniques like dynamic mechanical analysis (DMA). |
| 4. **Material Selection**: Choose materials with nonlinear dynamic responses to specific frequencies, such as those exhibiting |
| high damping or stiffness at particular frequencies. |
| 5. **Music Frequency Spectrum Manipulation**: Modify the frequency spectrum of music to influence material responses. This can |
| be achieved by filtering or modifying the frequency content of the music to match the target material's nonlinear dynamic |
| responses. |
| 6. **Tuning and Control**: Use the manipulated frequency spectrum to tune and control the mechanical properties of selected |
| materials, such as damping, stiffness, or damping capacity. |
| **Design Principles: ** |
| 1. **Frequency Spectrum Analysis**: Study the dynamic responses of materials to different frequencies to understand their |
| nonlinear dynamic responses. |
| 2. **Material Selection**: Choose materials with nonlinear dynamic responses to specific frequencies, such as those exhibiting |
| high damping or stiffness at particular frequencies. |
| 3. **Frequency Spectrum Manipulation**: Modify the frequency spectrum of music to influence material responses, using techniques |
| like filtering or modifying the frequency content of the music. |
| 4. **Tuning and Control**: Use the manipulated frequency spectrum to tune and control the mechanical properties of selected |
| materials, such as damping, stiffness, or damping capacity. |
| **Potential Applications:** |
| 1. **Material Tuning**: Develop a method to tune and control the mechanical properties of materials, such as damping or |
| stiffness, by manipulating the frequency spectrum of music. |
| 2. **Nonlinear Dynamic Response Measurement**: Use music's frequency spectrum to measure nonlinear dynamic responses of |
| materials, providing a non-contact and non-destructive method. |
| 3. **Material Selection**: Identify materials with nonlinear dynamic responses to specific frequencies, aiding in the selection |
| of materials for specific applications. |
| 4. **Biomedical Applications**: Apply the principle of Music-Inspired Material Tuning to develop novel, non-invasive, and |
| non-destructive techniques for monitoring or treating diseases, such as cancer or neurological disorders. |
| **Challenges and Future Directions: ** |
| 1. **Material Selection**: Identifying materials with nonlinear dynamic responses to specific frequencies can be challenging, |
| requiring extensive testing and characterization. |
| 2. **Frequency Spectrum Manipulation**: Modifying the frequency spectrum of music to influence material responses can be |
| difficult, as it depends on the specific material's nonlinear dynamic responses and the desired mechanical properties. |
| 3. **Scalability**: Scaling the Music-Inspired Material Tuning method to larger structures or more complex materials can be |
| challenging, requiring further research and development. |
| 4. **Theoretical Modeling**: Developing theoretical models to explain and predict the nonlinear dynamic responses of materials |
| to music's frequency spectrum can be an ongoing research direction. |
| **Conclusion:** |
| The proposed method, "Music-Inspired Material Tuning," offers a novel approach to tune and control the mechanical properties of |
| materials by analyzing and manipulating the frequency spectrum of music. This method leverages the dynamic responses of |
| materials to different frequencies, mimicking the effects of music on materials. By understanding and manipulating the nonlinear |
| dynamic responses of materials to specific frequencies, this method can be applied to various materials and applications, |
| providing a non-contact and non-destructive method for tuning and controlling material properties. |
Text Box 2: Final answer after iteration 3 from recursive Graph-PReFLexOR
Final answer after iteration 3 from recursive Graph-PReFLexOR Based on the provided context, here's a comprehensive design for a tough, protein-based material inspired by snowflake and ant behavior: **Design Concept:** "Snowflake-Ant Inspired Protein Material" (SAIPM) **Design Principles : * 1. **Fractal Pattern Mimicry**: Incorporate fractal patterns and self-similarity to enhance toughness and adaptability, inspired by snowflakes **Hierarchical Structure**: Create hierarchical structures at multiple length scales to mimic ant belavior, promoting 2. self-organization and toughness. 3. **Protein Selection**: Choose proteins with high strength, toughness, and adaptability, such as collagen, silk, or silk-like proteins. *Molecular Assembly**: Design a molecular assembly process that allows for easy incorporation of fractal patterns and hierarchical structures, inspired by snowflakes and ants.
5. **Self-Assembly**: Encourage self-assembly of protein molecules to form complex structures, mimicking ant behavio **Design Structure : * 1. **Fractal Pattern Generation**:
- Use computational models to generate fractal patterns inspired by snowflakes. - Incorporate these patterns into protein sequences or structures 2. *#Hierarchical Structure Creation**:
- Design a hierarchical structure at multiple length scales, inspired by ant behavior. Use techniques like layer-by-layer assembly or 3D printing to create these structures. 3. **Protein Selection and Preparation **: - Choose proteins with high strength, toughness, and adaptability. - Prepare these proteins in a form suitable for incorporation into the SAIPM structure. 4. **Molecular Assembly**: - Design a molecular assembly process that allows for easy incorporation of fractal patterns and hierarchical structures. - Use techniques like covalent bonding, cross-linking, or ionic interactions to assemble protein molecules. 5. **Self-Assembly**: Encourage self-assembly of protein molecules to form complex structures - Use techniques like pH-induced unfolding, temperature-dependent assembly, or external stimuli to control the assembly process **Design Parameters : ** - **Fractal Pattern Parameters Order of fractal pattern (e.g., octaves) - Fractal pattern size (e.g., diameter, length) - Fractal pattern orientation (e.g., angle, direction) **Hierarchical Structure Parameters**:
- Number of hierarchical levels - Length scale of each level (e.g., diameter, length) - Orientation of each level (e.g., angle, direction) **Protein Parameters** - Protein type (e.g., collagen, silk) Protein concentration - Protein processing or modification (e.g., denaturation, cross-linking) **Molecular Assembly Parameters** - Assembly method (e.g., covalent bonding, ionic interactions) - Assembly conditions (e.g., pH, temperature, solvent) **Self-Assembly Parameters** - Assembly trigger (e.g., pH, temperature, external stimuli) - Assembly duration or rate ** Design Evaluation Criteria : * - **Toughness**: Measure the SHIPM's resistance to mechanical stress, such as tensile strength, compressive strength, or impact resistance.
- **Adaptability**: Evaluate the SAIPM's ability to adapt to different environments or applications, such as self-healing, self-repair, or self-adaptation. - **Strength**: Measure the SAIPM's strength in diferent directions or under various loading conditions.
- **Energy Dissipation**: Assess the SAIPM's ability to dissipate energy under mechanical stress, such as through viscoelasticity or crack deflection.
- **Biocompatibility**: Evaluate the SAIPM's compatibility with biological systems, including cell adhesion, cell growth, and biodegradation.
- **Synthesis Efficiency**: Assess the efficiency of the molecular assembly process and the SAIPM's production rate.

**Design Validation:**
- **Toughness Validation**: Test the SAIPM under various mechanical stress conditions (e.g., tensile, compressive, impact) to confirm its toughness.
- **Adaptability Validation**: Evaluate the SAIPM's adaptability by exposing it to different environments or applying it to various applications to confirm its ability to adapt.
- **Strength Validation**: Measure the SAIPM's strength in different directions or under various loading conditions to confirm its strength.
- **Energy Dissipation Validation**: Assess the SAIPM's ability to dissipate energy under mechanical stress by analyzing its viscoelastic properties or observing crack deflection.
- **Biocompatibility Validation**: Test the SAIPM's biocompatibility by observing cell adhesion, cell growth, and biodegradation.
- **Synthesis Efficiency Validation**: Evaluate the efficiency of the molecular assembly process and the SAIPM's production rate by comparing the synthesized material to theoretical predictions.

**Design Iteration:**
- **Feedback Loop**: Establish a feedback loop to collect data on the SAIPM's performance, including mechanical stress, adaptability, strength, energy dissipation, biocompatibility, and synthesis efficiency.
- **Design Revisions**: Use the collected data to revise the design, adjusting parameters such as fractal pattern order, hierarchical level number, protein type, assembly method, or assembly conditions to optimize the SAIPM's performance.
- **Reiteration**: Repeat the design iteration process until the SAIPM meets the desired performance criteria or until further improvements are not feasible.

**Design Requirements:**
- **Toughness**: The SAIPM should exhibit high toughness, as measured by tensile strength, compressive strength, or impact resistance.
- **Adaptability**: The SAIPM should demonstrate adaptability, such as self-healing, self-repair, or self-adaptation, to meet the specific requirements of various applications.
- **Strength**: The SAIPM should exhibit high strength in different directions or under various loading conditions.
- **Energy Dissipation**: The SAIPM should be able to dissipate energy under mechanical stress, such as through viscoelasticity or crack deflection.
- **Biocompatibility**: The SAIPM should be compatible with biological systems, including cell adhesion, cell growth, and biodegradation.
- **Synthesis Efficiency**: The SAIPM's molecular assembly process should be efficient and scalable for large-scale production.

**Design Constraints:**
- **Protein Selection**: The choice of protein is limited by its strength, toughness, and adaptability.
- **Molecular Assembly**: The assembly process is limited by the assembly method and conditions.
- **Self-Assembly**: The assembly process is limited by the trigger and duration or rate of assembly.
- **Fractal Pattern Generation**: The fractal pattern generation process is limited by the complexity of the patterns and the computational resources required.
- **Hierarchical Structure Creation**: The hierarchical structure creation process is limited by the number of hierarchical levels and the length scale of each level.

**Design Requirements and Constraints:**
- **Design Requirements:**
  - Toughness: High tensile strength, compressive strength, or impact resistance.
  - Adaptability: Self-healing, self-repair, or self-adaptation.
  - Strength: High in different directions or under various loading conditions.
  - Energy Dissipation: Ability to dissipate energy under mechanical stress.
  - Biocompatibility: Compatibility with biological systems.
  - Synthesis Efficiency: Efficient and scalable molecular assembly process.
- **Design Constraints:**
  - Protein Selection: Limited by protein strength, toughness, and adaptability.
  - Molecular Assembly: Limited by assembly method and conditions.
  - Self-Assembly: Limited by assembly trigger and duration or rate.
  - Fractal Pattern Generation: Limited by fractal pattern order and size.
  - Hierarchical Structure Creation: Limited by number of hierarchical levels and length scale.
  - Protein Preparation: Limited by protein processing or modification.

**Design Principles and Materials:**

Design Principles:
1. **Fractal Pattern Mimicry**: Incorporate fractal patterns and self-similarity to enhance toughness and adaptability.
2. **Hierarchical Structure**: Create hierarchical structures at multiple length scales to promote self-organization and toughness.
3. **Protein Selection**: Choose proteins with high strength, toughness, and adaptability.
4. **Molecular Assembly**: Design a molecular assembly process that allows for easy incorporation of fractal patterns and hierarchical structures.
5. **Self-Assembly**: Encourage self-assembly of protein molecules to form complex structures.

Materials:
- **Protein**: Collagen, silk, or silk-like proteins.
- **Molecular Assembly Method**: Covalent bonding, ionic interactions, or other techniques that allow for easy incorporation of fractal patterns and hierarchical structures.
- **Assembly Conditions**: pH, temperature, solvent, or other conditions that facilitate self-assembly.

**Design Requirements and Materials:**

Design Requirements:
- **Protein**: High strength, toughness, and adaptability (e.g., collagen, silk).
- **Molecular Assembly Method**: Efficient and scalable (e.g., covalent bonding, ionic interactions).
- **Assembly Conditions**: Triggered by pH, temperature, or external stimuli.
Text Box 3: Result developed by o1 based on the results developed by Graph-PReFLexOR, explaining the concept of 'Mytho-Luminal Liminality'.
### Mytho-Luminal Liminality
We propose a novel theoretical framework, Mytho-Luminal Liminality, uniting the mythological concept of thin places with the boundary-blurring practices of protein design and biomaterial engineering. In Celtic lore and related traditions, a thin place is defined as a domain x where seemingly incompatible worlds, the mundane and the transcendent, intersect. Formally, if we let $\mathcal{D}$ be the set of all conceptual or physical "boundary" domains, then:
$$(\exists x \in \mathcal{D}):\ \mathrm{ThinPlaces}(x)\wedge \mathrm{BioTech}(x)\wedge \mathrm{ChallengingUnderstandingBoundaries}(x)\ \Rightarrow\ \mathrm{MythoLuminalLiminality}(x).$$
Here, ThinPlaces(x) indicates that x manifests a mythic boundary principle (e.g., the "veil" between worlds), BioTech(x) signals that x leverages biological processes or protein engineering, and ChallengingUnderstandingBoundaries(x) specifies an active blurring of traditional categories (art vs. science, natural vs. synthetic). Whenever these conditions co-occur, we claim that x exemplifies Mytho-Luminal Liminality, a state in which mythic transcendence intersects with engineered materials to evoke new modes of perception. In practice, this framework urges us to create immersive, site-specific installations using real proteins, such as Elastin-Like Proteins or Collagenous Proteins, coupled with bioluminescent or ephemeral components, so that visitors experience the sensation of stepping into a boundary realm reminiscent of folkloric "thin places."
Indeed, the same conceptual impulse that once inspired ancient myths to describe liminal passageways (between mortal and otherworldly planes) can now inform tissue engineering and protein design, bridging the gap between organic and synthetic, living and manufactured. By intentionally integrating artwork referencing Ancient Mythologies or harnessing luminescent biomaterials in ways that challenge our understanding of boundaries, we highlight the spiritual awe inherent in the process of creating "living" installations. Below is a simple TikZ sketch illustrating how a mythic boundary perspective (thin places) merges with a biotechnological context to yield novel immersive experiences:
This diagram suggests that "thin places," though mythic, can motivate new creative efforts in synthetic biology, while "Challenging Understanding Boundaries" provides a conceptual conduit that connects folklore-inspired liminality to biomaterial research. By embedding Elastin-Like Proteins or Collagen Tripeptides in immersive artworks, one effectively fashions a domain where the physical and the transcendent meet, thus triggering the awe and disorientation characteristic of thin places. This theoretical approach-Mytho-Luminal Liminality-offers a hitherto unexplored avenue for transdisciplinary collaboration: artistry in living materials that is at once rooted in ancient mythic boundary concepts, yet forward-looking in its deployment of cutting-edge biotechnology.
Within the broader schema of Mytho-Luminal Liminality, our knowledge graph highlights that highly specific molecular elements, such as Collagen Tripeptide (Gly-X-Y) and Tensin, "relate to" one another en route to Protein Design, an entity itself classified as a Biological Molecule. From a philosophical vantage, this chain suggests that the structural elegance of protein segments (e.g., the repeating Gly-X-Y motif) can be approached as a modern instantiation of boundary-challenging principles. Indeed, just as a thin place blurs the division between spiritual and mundane, so too does Protein Design blur the line between "naturally occurring" and "engineered" molecular architectures. Further underscoring this parallel, our graph shows that Molecular Modeling (an IS-A Computational Technique) directly relates to both Collagen and Elastin, reinforcing the idea that an interplay of artistic imagination and technical rigor underlies the creation of new protein scaffolds. Philosophically, this implies that challenging our understanding of boundaries, which, in mythic contexts, manifests as a thin place, can also guide transformative science in tissue engineering and biomaterial innovation. By weaving mythic insights into protein design thinking, one treats the protein-folding arena as a "liminal space" where computational, biological, and even artistic considerations converge to yield novel forms that feel as uncanny and awe-inspiring as an ancient threshold between worlds.
# Supplementary Information
# In-situ graph reasoning and knowledge expansion using GraphPReFLexOR
Markus J. Buehler [email protected]
Text Box S1: Answer of the non-fine-tuned model. The answer does not contain any reasoning steps, graphs, or abstract patterns.
### Answer of the non-fine-tuned model
**Designing a Novel Protein-Based Material Inspired by Snowflake and Ant Behavior**

**Introduction**

Nature has inspired numerous innovative materials and technologies, from biomimetic surfaces to self-healing materials. This project aims to integrate the unique properties of snowflakes and ants to design a new, tough protein-based material. By studying the intricate structures and behaviors of snowflakes and ants, we can develop a material that combines exceptional strength, durability, and adaptability.

**Snowflake-Inspired Structure**

Snowflakes exhibit a hexagonal lattice structure, which provides exceptional strength and stability. This structure can be replicated in a protein-based material by using a combination of amino acids and peptides to create a repeating pattern of hexagonal units.

**Ant-Inspired Behavior**

Ants are known for their remarkable ability to adapt to changing environments and withstand extreme conditions. They achieve this through their unique behavior, which involves:

1. **Self-organization**: Ants organize themselves into complex social structures, allowing them to respond to changing conditions.
2. **Adaptive camouflage**: Ants can change their color and texture to blend in with their surroundings, making them nearly invisible.
3. **Self-healing**: Ants can repair damaged tissues and restore their colony's integrity.

**Designing the Protein-Based Material**

To create a material inspired by snowflakes and ants, we can combine the following components:

1. **Protein scaffold**: A protein-based scaffold will provide the structural framework for the material. This can be achieved using a combination of collagen, elastin, and keratin.
2. **Hexagonal lattice**: The protein scaffold will be arranged in a hexagonal lattice structure, inspired by snowflakes. This will provide exceptional strength and stability.
3. **Ant-inspired behavior**: The material will be designed to exhibit self-organization, adaptive camouflage, and self-healing properties. This can be achieved through the use of:
   - **Microscopic particles**: The material will contain microscopic particles that can change their color and texture to blend in with their surroundings.
   - **Self-healing peptides**: The material will contain peptides that can repair damaged tissues and restore its integrity.
4. **Biocompatibility**: The material will be designed to be biocompatible, ensuring that it can be safely integrated into living tissues.

**Prototype Development**

To develop a prototype of this material, we can use a combination of 3D printing and bioprinting techniques. The protein scaffold will be created using a 3D printer, and the specialized mold

**Testing and Evaluation**

The material will be tested for its mechanical properties, including tensile strength, compressive strength, and impact resistance. Additionally, the material's self-organization, adaptive camouflage, and self-healing properties will be evaluated through a series of experiments.

**Potential Applications**

The protein-based material inspired by snowflakes and ants has numerous potential applications, including:

1. **Biomedical devices**: The material could be used to create implantable devices, such as bone grafts or tissue engineering scaffolds.
2. **Textiles**: The material could be used to create durable, self-healing textiles for clothing and upholstery.
3. **Construction materials**: The material could be used to create self-healing concrete or other construction materials.

**Conclusion**

The design of a novel protein-based material inspired by snowflakes and ants represents a significant breakthrough in biomimetic materials science. By combining the unique properties of snowflakes and ants, we can create a material that combines exceptional strength, durability, and adaptability. This material has numerous potential applications in biomedical devices, textiles, and construction materials, and could revolutionize the way we design and engineer materials.
Figure S1: Visualization of the integrated knowledge graph created based on the series of prompts delineated in Figure 8 (the model was tasked: "Write a song by incorporating snowflakes and flower petals."). The data is the same as shown in Figure 9, but organized here as an integrated graph rather than by prompt.
Figure S2: Visualization of the graph analysis of the autonomously grown graph depicted in Figure 16. (Top Left) Degree distribution shows a heterogeneous structure with prominent hubs and many low-degree nodes. (Top Right) Visualization of the largest connected component reveals its modular structure. (Middle Left) Clustering coefficient distribution highlights sparse local connectivity, with some nodes in dense clusters. (Middle Right) Betweenness centrality distribution indicates a small subset of critical nodes facilitating connectivity. (Bottom Left) Shortest path length distribution suggests a small-world structure with most nodes separated by a few hops. (Bottom Right) Community size distribution, with central nodes labeled, reflects varying sizes and roles within the graph. The analysis collectively underscores the complex topology of the graph, including heterogeneity, modularity, and small-world characteristics.
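The graph statistics referenced in Figure S2 (degree distribution, clustering coefficients, betweenness centrality, shortest path lengths, and community sizes) can be reproduced with standard network-analysis tooling. The following is a minimal sketch assuming networkx; it is not the analysis code used for the figure, and the graph object `G` is a hypothetical placeholder for the autonomously grown knowledge graph.

```python
# A minimal sketch (an assumption, not the paper's analysis code) of how the
# statistics summarized in Figure S2 could be computed for a knowledge graph G.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def graph_statistics(G: nx.Graph) -> dict:
    # Largest connected component, used for the shortest-path statistics.
    largest_cc = G.subgraph(max(nx.connected_components(G), key=len))
    return {
        "degrees": [d for _, d in G.degree()],                      # degree distribution
        "clustering": nx.clustering(G),                             # local clustering coefficients
        "betweenness": nx.betweenness_centrality(G),                # betweenness centrality
        "path_lengths": dict(nx.shortest_path_length(largest_cc)),  # pairwise shortest paths
        "community_sizes": [len(c) for c in greedy_modularity_communities(G)],
    }
```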
### Example: Mapping two distinct but isomorphic graphs into a shared representation
For the sake of demonstration, we consider two distinct graphs that represent the equations:
$$G_{1}:F=m\times a,\quad G_{2}:V=I\times R,$$
where G1 and G2 are defined as detailed below:
### Graph Structures:
- Graph G1:
  - Nodes: F, m, a, =, ×
  - Edges: = -> F, = -> ×, × -> m, × -> a, m -> ×, a -> ×
- Graph G2:
  - Nodes: V, I, R, =, ×
  - Edges: = -> V, = -> ×, × -> I, × -> R, I -> ×, R -> ×
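As an illustration, the two graphs can be encoded directly from the node and edge lists above. The sketch below is an assumption for demonstration purposes (using networkx, with `"*"` standing in for the multiplication node ×); it is not part of the Graph-PReFLexOR implementation.

```python
# A minimal sketch that builds the two directed equation graphs listed above.
import networkx as nx

def build_equation_graph(output, operands):
    """Directed graph with '=' and '*' operator nodes, following the edge lists above."""
    g = nx.DiGraph()
    g.add_nodes_from([output, "=", "*", *operands])
    g.add_edges_from([("=", output), ("=", "*")])
    for v in operands:
        g.add_edge("*", v)
        g.add_edge(v, "*")
    return g

G1 = build_equation_graph("F", ["m", "a"])  # F = m × a
G2 = build_equation_graph("V", ["I", "R"])  # V = I × R

# Same connectivity, different labels: the two graphs are isomorphic.
print(nx.is_isomorphic(G1, G2))  # True
```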
Initial Embeddings: The nodes in G1 and G2 are assigned distinct two-dimensional embeddings to reflect their unique roles in the equations. The initial embeddings, chosen randomly here, are referred to as iteration 0 in Table S1. Of course, embeddings in an LLM are much higher-dimensional, but the 2D illustration allows us to exemplify the approach more easily. In the scope of an LLM, the initial embeddings would reflect how tokens are mapped into the embedding space via the embedding layer, from which point they are processed further.
Table S1: Distinct 2D embeddings (iteration 0) for each variable in G1 and G2. The variables F, m, a, V, I, and R have different embeddings, as they would in an LLM, since each token is assigned a unique embedding. The embeddings for = and × are identical in both cases, as they would be in an LLM, since the same tokens are mapped to the same vector.
| Node | Graph | Initial Embedding |
| --- | --- | --- |
| F | G1 | (0.7, 0.3) |
| m | G1 | (0.6, 0.9) |
| a | G1 | (0.1, 0.9) |
| = | G1 | (1.2, 0.5) |
| × | G1 | (0.8, 0.6) |
| V | G2 | (0.2, 0.9) |
| I | G2 | (0.05, 0.8) |
| R | G2 | (0.9, 0.1) |
| = | G2 | (1.2, 0.5) |
| × | G2 | (0.8, 0.6) |
Graph Isomorphism Network (GIN) Update Mechanism: For details on GIN models, we refer readers to other literature [56, 13]. For the demonstration here we assume we have a small MLP with nonlinear activations (ReLU) in our GIN update:
$$\mathbf{h}_{v}^{(k+1)}=\operatorname{MLP}\bigl((1+\epsilon)\cdot\mathbf{h}_{v}^{(k)}+\sum_{u\in\mathcal{N}(v)}\mathbf{h}_{u}^{(k)}\bigr),$$
where $\mathcal{N}(v)$ denotes the neighbors of $v$, and
$\mathrm{MLP}(\mathbf{x})=\sigma\big{(}W_{L}\,\sigma(W_{L-1}\ldots\sigma(W_{2}\,\sigma(W_{1}\mathbf{x}+\mathbf{b}_{1})+\mathbf{b}_{2})\cdots+\mathbf{b}_{L-1})+\mathbf{b}_{L}\big{)}$.
Weights/biases (W1, W2, ... b1, b2, ... ) are learnable parameters optimized during training of the model.
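For concreteness, the update rule above can be sketched as follows in PyTorch; this is an illustrative assumption, not the implementation used in the paper. The MLP weights are randomly initialized rather than the particular parameters identified for this demonstration, so the output will not reproduce Table S2 exactly, and treating N(v) as the in-neighbors of each node is an assumed convention for the directed edges listed earlier.

```python
# A minimal sketch of the GIN update h_v^(k+1) = MLP((1 + eps) * h_v^(k) + sum_{u in N(v)} h_u^(k)).
import torch
import torch.nn as nn

class GINLayer(nn.Module):
    def __init__(self, dim: int = 2, eps: float = 0.0):
        super().__init__()
        self.eps = eps
        # Small MLP with a ReLU nonlinearity, as assumed in the demonstration.
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h: (num_nodes, dim) node embeddings; adj[v, u] = 1 if u is a neighbor of v.
        neighbor_sum = adj @ h
        return self.mlp((1.0 + self.eps) * h + neighbor_sum)

# Nodes of G1 in the order [F, m, a, =, ×] with the Table S1 embeddings.
h0 = torch.tensor([[0.7, 0.3], [0.6, 0.9], [0.1, 0.9], [1.2, 0.5], [0.8, 0.6]])
adj = torch.tensor([
    [0., 0., 0., 1., 0.],  # F  <- =
    [0., 0., 0., 0., 1.],  # m  <- ×
    [0., 0., 0., 0., 1.],  # a  <- ×
    [0., 0., 0., 0., 0.],  # =  (no incoming edges)
    [0., 1., 1., 1., 0.],  # ×  <- m, a, =
])
h1 = GINLayer()(h0, adj)  # one GIN iteration
```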
Final Embeddings After One Iteration: For the sake of demonstration, we identified a particular set of weights and biases such that the embeddings of the distinct variables are mapped into the same representation by the GIN operation. Hence, after one iteration of summation and MLP, the updated embeddings $\mathbf{h}_v^{(1)}$ are shown in Table S2.
| Node | Graph | h(1) | Comments |
| --- | --- | --- | --- |
| = | G1 | (0.58, -0.81) | Matches =(1) in G2 |
| F | G1 | (-0.96, 0.29) | Matches V(1) in G2 |
| × | G1 | (0.39, -0.92) | Matches ×(1) in G2 |
| m | G1 | (-0.70, 0.72) | Matches I(1) in G2 |
| a | G1 | (0.68, 0.73) | Matches R(1) in G2 |
| = | G2 | (0.58, -0.81) | Matches =(1) in G1 |
| V | G2 | (-0.96, 0.29) | Matches F(1) in G1 |
| × | G2 | (0.39, -0.92) | Matches ×(1) in G1 |
| I | G2 | (-0.70, 0.72) | Matches m(1) in G1 |
| R | G2 | (0.68, 0.73) | Matches a(1) in G1 |
Table S2: Iteration 1 embeddings after the GIN update with a nonlinear MLP. Notice that the embeddings for G1 and G2 are now aligned, respecting the functional equivalence of their structures. See Figure S3 for a plot.
Discussion: This example demonstrates the power of GINs in mapping distinct concepts into a shared representation, forming one example of how abstraction can be achieved. Despite the graphs G1 and G2 starting with distinct initial embeddings and different node identities, the GIN successfully aligned their embeddings after one iteration. This alignment reflects the structural equivalence of the graphs:
$$\text{isomorphic graphs with equivalent structures} \;\Rightarrow\; \text{aligned embeddings}.$$
Thus, the equations F = m × a and V = I × R share the same functional relationships (output = parameter × input), leading to aligned embeddings despite differing physical meanings. It confirms that after processing by the GIN, the two isomorphic graphs have the same node embeddings and graph-level embeddings. Figure S3 depicts a visualization of the transformation, including the graphs and how their embeddings adapt after one layer of GIN.
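To illustrate the last point, a permutation-invariant sum readout (a common choice for GIN graph-level embeddings, assumed here) applied to the aligned node embeddings of Table S2 yields identical graph-level embeddings for G1 and G2:

```python
# A minimal sketch: sum pooling (an assumed readout) over the iteration-1 node
# embeddings from Table S2 gives identical graph-level embeddings for G1 and G2.
import torch

def graph_readout(node_embeddings: torch.Tensor) -> torch.Tensor:
    return node_embeddings.sum(dim=0)  # permutation-invariant sum over nodes

# Node order [=, F/V, ×, m/I, a/R]; G2's embeddings match G1's after the GIN update.
h_g1 = torch.tensor([[0.58, -0.81], [-0.96, 0.29], [0.39, -0.92], [-0.70, 0.72], [0.68, 0.73]])
h_g2 = h_g1.clone()

print(torch.allclose(graph_readout(h_g1), graph_readout(h_g2)))  # True
```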
The argument in relation to Graph-PReFLexOR is that rather than developing a training objective to model isomorphic mappings explicitly, we model these transformations by mapping a task into a graph representation, then into a symbolic abstraction, and then into the final answer, as shown in Figure 1. In this visualization, the bottom part of the image particularly shows how problems are expressed in shared tokenization form to yield effectively a similar result as visualized in Figure S3. This reflects a mapping into abstraction for generalization and back, achieving multiple goals simultaneously.
Figure S3: Visualization of the transformation of two isomorphic graphs, each representing an equation (F = m × a and V = I × R, top), into an identical representation (bottom). As one can see, after the transformation of node embeddings by a GIN, the nodes of both graphs collapse onto the same embeddings and are indistinguishable, yielding a generalized representation (akin to the concept described in Figure 1, bottom). The two graphs not only have the same node embeddings but also the same graph-level embeddings, revealing that they are isomorphic.