Perturb, Attend, Detect and Localize (PADL): Robust Proactive Image Defense

Filippo Bartolucci, Iacopo Masi, Giuseppe Lisanti

http://arxiv.org/abs/2409.17941v1
§ ABSTRACT
Image manipulation detection and localization have received considerable attention from the research community given the rapid proliferation of Generative Models (GMs).
Detection methods that follow a passive approach may overfit to specific GMs, which, given the growing diversity of generative models, limits their application in real-world scenarios.
Recently, approaches based on a proactive framework have shown the possibility of dealing with this limitation.
However, these methods suffer from two main limitations, which raise concerns about potential vulnerabilities: i) the manipulation detector is not robust to noise and hence can be easily fooled; ii) their reliance on fixed perturbations for image protection offers a predictable exploit for malicious attackers, enabling them to reverse-engineer the perturbation and evade detection.
To overcome these issues we propose PADL, a new solution able to generate image-specific perturbations using a symmetric scheme of encoding and decoding based on cross-attention, which drastically reduces the possibility of reverse engineering, even when evaluated with adaptive attacks <cit.>.
Additionally, PADL is able to pinpoint manipulated areas, facilitating the identification of specific regions that have undergone alterations, and has more generalization power than prior art on held-out generative models.
Indeed, although trained only on an attribute-manipulation GAN model <cit.>, our method generalizes to a range of unseen models with diverse architectural designs, such as StarGANv2, BlendGAN, DiffAE, StableDiffusion and StableDiffusionXL.
Additionally, we introduce a novel evaluation protocol, which offers a fair evaluation of localisation performance as a function of detection accuracy and better captures real-world scenarios.
§ INTRODUCTION
Advancements in Generative Models (GMs) for image synthesis have continually transformed the landscape of the field, showcasing remarkable capabilities in tasks ranging from unconditional image generation from random noise to nuanced manipulation of a natural image to edit. Nevertheless, this progress introduces significant security concerns because an ill-intentioned user could alter the semantics of a genuine image to attain a malicious objective. To address this issue, several counter tools have been developed focusing on binary detection of manipulations <cit.> limited to specific GMs.
Trained on both authentic and manipulated images, these methods are categorized as passive because detection countermeasures are performed after manipulation.
However, their performance and ability to generalize are limited because they need to be retrained for each new GM released, a time-consuming and demanding task given the large number of GMs released every day.
Recent solutions have addressed the limitations of passive methods by adopting a proactive schema <cit.> which implements countermeasures before any manipulation occurs.
This proactive technology addresses a pressing need of the community: the ability to discern what is generated from what is authentic.
Indeed, in the private sector, big tech companies such as Meta and Google have dedicated resources to the design of solutions that proactively detect AI-generated contents <cit.>.
Various proactive approaches have been proposed so far. Among these, image tagging <cit.> introduces a hidden message into the image in order to verify its provenance, while the solution proposed in <cit.> aims directly at disrupting the output of the generative models used to manipulate the image.
Recently, proactive detection techniques <cit.> were introduced by augmenting the input image with an additive perturbation[The additive perturbation is called “template” in <cit.>'s terminology, but we use perturbation in the rest of the paper for clarity.] as a form of protection.
When a protected image is altered, the embedded perturbation is also tampered with, preventing its verification and enabling the detection of manipulations.
However, it is worth highlighting that both <cit.> suffer from two main limitations that could potentially be exploited by an attacker. On the one hand, the manipulation detector is not robust to noise and can be easily fooled by simply adding Gaussian noise to an image. However, in this case, tuning the value of σ for this noise is not easy as a low value may not fool the detector while a high value may corrupt the image too much. In addition, an attacker usually does not have access to the manipulation detector.
On the other hand, both methods use a fixed set of perturbations and, by reverse engineering one of the predetermined perturbations, an attacker could manipulate images and authenticate them as real using the reversed perturbation.
To this end, we conducted two experiments, one adding Gaussian noise with increasing σ and the other reverse engineering the perturbation used in <cit.>.
Fig. <ref>(a)(b) shows that this family of solutions can be easily bypassed.
To address this issue, our research aims to enhance the proactive protection mechanism by transitioning from a finite set of perturbations to a customized perturbation per image, which ensures robustness as shown in Fig. <ref>(a)(c).
However, designing image-specific perturbations is a challenging task due to the lack of a clear ground truth to guide the model.
To overcome this limitation, we leverage the transformer architecture's cross-attention mechanism, and condition a learnable perturbation on the image, resulting in a unique protection, tailored to the specific characteristics of each input image.
The framework consists of an Encoding and Decoding module with a symmetric cross-attention mechanism. The encoder customizes a sequence of learnable tokens through cross-attention layers to create a personalized perturbation for each image, while the decoder recovers this perturbation to detect and localize manipulations. Additionally, a revamped loss function enforces diversity in the perturbations, contributing to the effectiveness of our approach. With this, protected images can be shared online, and their authenticity can always be verified through the decoder module.
In addition, the performance reported for state-of-the-art proactive methods <cit.> is biased toward manipulated pixels. Indeed, the protocol defined for detection uses both non-manipulated and manipulated images, while the localisation considers only manipulated images.
This may seem straightforward if detection works perfectly, but as shown in <ref>, detection often fails with unseen GMs, making it unreliable to decide when to compute localisation.
For this reason, we introduce a new evaluation protocol where localisation depends on the detection accuracy.
This new protocol provides a fair comparison by considering both non-manipulated and manipulated images when evaluating the localisation and better generalizes to real-world scenarios.
The contributions of our work can be summarized as follows:
♢
we empirically demonstrate the vulnerability of state-of-the-art proactive schemes to either noise or to our black-box attack, which estimates the perturbation from protected images and adds it to maliciously manipulated images so that the proactive scheme detects them as protected and real.
♢
we introduce PADL, a new proactive solution which is robust to our black-box attack and to adaptive attacks <cit.> specifically tailored for our method.
♢
PADL achieves remarkable generalization capabilities: although trained only on STGAN, it generalizes to StarGANv2, BlendGAN, DiffAE, StableDiffusion and StableDiffusion XL.
♢
We define a new evaluation protocol for image manipulation detection and localization that ensures a balanced and realistic comparison. This protocol uses both manipulated and non-manipulated images for localization, and conditions the evaluation on the detector's prediction, thereby reflecting real-world scenarios where most images are authentic.
§ RELATED WORK
Passive defense. Prior works proposed methods against image manipulation that follow a passive protocol, which means that countermeasures are taken after the manipulation has occurred.
In this category, earlier methods <cit.> identify artifacts left by generative models in the RGB representation of the image. This was achieved by training models on a dataset of real and manipulated images to discriminate between them by examining the visual content.
Nirkin <cit.> improved manipulation detection methods by exploiting face-context discrepancies.
The approach integrated a face identification network for precise semantic segmentation and a context recognition network that considered hair, ears and neck.
By utilizing signals from both networks to identify discrepancies, they enhanced traditional fake image detection.
Chai <cit.> introduced a patch-based CNN classifier to identify and visualize the regions of an image that have undergone manipulation. The classifier slides over the different image patches to determine whether each patch is real or not, thus verifying if and in which region manipulation has occurred.
Dang <cit.> proposed an alternative approach by incorporating an attention mechanism to process and enhance feature maps for the detection task.
The feature map is then used to highlight informative regions, improving binary classification and visualizing manipulated regions.
New solutions <cit.> shifted the focus from the image content to the noise present in some regions of the image. Zhou <cit.> utilises RGB images and noise features extracted with a steganalysis rich-filter model, in conjunction with a Faster R-CNN module, to detect forgeries.
Similarly, Yang <cit.> followed the same approach yet employed a trainable noise extractor based on Constrained CNN <cit.>.
This choice was motivated by the susceptibility of previous filters to adversarial attacks.
HiFiNet was proposed in <cit.> leveraging four branch encoders that learn a fine-grained hierarchical categorization of the manipulation and provide 2D localization for the manipulation.
While the works described above have produced interesting results, these models perform poorly when applied to new manipulations not seen during training: manipulation generated by different techniques can have different visual artifacts, which hampers the generalization of all learning-based passive methods.
Proactive defense. To overcome the limitations of passive methods, researchers have started to explore proactive approaches, in the sense that countermeasures are implemented before any manipulation occurs.
The solution from Ruiz <cit.> proposed to disrupt the generator output by applying an imperceptible perturbation to real images.
The perturbation is generated by a modified version of adversarial attacks such as FGSM, I-FGSM, and PGD and is able to generalize across different image conditioning classes.
However, this solution does not work in a black-box scenario as it requires knowledge of a specific GM.
Wang <cit.> introduced a solution closer to the concept of watermarking.
This approach embeds a hidden message within real images, ensuring its retrieval even after manipulations in order to authenticate the image's identity.
A U-Net model embeds a bit sequence into the images, leveraging redundancy to enhance resistance against manipulation.
Although not intended as a detection tool, this technique can be used to track the origin of changes within a social network by linking each user image to its unique identifier. The concept of watermarking has been extended in <cit.> by applying proactive watermarking to the train data and then training or fine-tuning a GM to maintain the watermark. This approach enables the extraction of the watermark from newly generated images yet assumes being the “owner” of the GM.
Asnani <cit.> proposed a proactive framework for generalized manipulation detection in which a perturbation is added to the input image.
If manipulations occur, the perturbation is tampered and the image can be detected as manipulated.
The perturbation is randomly selected from a finite set that has been learned at training time.
This work has been subsequently extended in <cit.> with the introduction of manipulation localization.
In this paper, we show that <cit.> are prone to attack, while our method generalizes across diverse unseen GMs and offers a per-image protection perturbation that minimizes the vulnerabilities of predictability caused by the reuse of the same set of perturbations.
§ METHOD
The proposed approach relies on a set of learnable tokens that, conditioned on the input image, produce an image-specific perturbation _e.
This perturbation is used for the detection and localization of manipulations, employing two primary components: an Encoding Module and a Decoding Module. The encoding part is composed of a perturbation encoder that transforms the learnable tokens into a perturbation conditioned on the input image.
The decoding part is composed of a protection decoder that extracts the perturbation _d and a Map Block ℳ in charge of performing manipulation detection and localizing the manipulations.
All the components of our architecture, i.e., the perturbation encoder, the protection decoder and ℳ, consist of N ViT-like transformer blocks.
The whole architecture is shown in <ref>.
§.§ Encoding image-specific perturbations
The encoding module is used to protect real images ∈ℝ^H× W×3 before manipulation occurs.
In passive approaches, detection involves discerning between an authentic image and its manipulated version (), generated by a generative model .
In a proactive framework <cit.>, given an authentic image , the method applies a transformation (·) to create a protected image (),
then the manipulation detection process occurs between protected image () and its manipulated counterpart (()). It is important to note that, since the detection process is based solely on the verification of the presence of the perturbation, an image without a perturbation cannot be classified as protected, regardless of its authenticity.
In prior proactive approaches <cit.>, protection involved randomly selecting a predefined perturbation from a finite set and embedding it into the image. The process for which the perturbation is embedded in the image is additive, i.e., () ≐ + _e, similarly to what is done for adversarial attacks <cit.> yet using another procedure.
On the contrary, our approach parametrizes the transformation (·) still with an additive perturbation _e yet using a set of P learnable tokens {t_i ∈ℝ^d | i=1,…, P}, where d is the dimension of the inner representation used by our model <cit.>.
The tokens are shared among all the perturbed images. Yet, unlike prior art, we also specialize the perturbation to be image-specific, which means the perturbation is also conditioned on the input data , aiming to prevent a black-box attack that may reverse engineer it. We will empirically prove this claim in the experimental section in <ref>, but preliminary results can be appreciated in <ref>.
Our transformation (·) is defined as follows:
() = + α·_e(;), with α > 0
where α is a fixed non-negative scalar parameter used to control the strength of the protecting perturbation and _e(;) returns values in [-1,+1] using a hyperbolic tangent function. The values are bounded so that the ℓ_2 norm of the perturbation added to the image is limited and upper bounded by α√(H× W). This avoids the need for extra losses to minimize the ℓ_2 norm of the perturbation, making the training simpler, with fewer hyper-parameters than <cit.>.
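For concreteness, the protection step can be sketched in a few lines of PyTorch. This is a minimal illustration of the bounded additive scheme above, not the released implementation; the module and variable names are ours.

import torch

def protect(x, tokens, perturbation_encoder, alpha=0.03):
    """Proactive protection: tau(x) = x + alpha * delta_e, with delta_e in [-1, +1].

    x:      batch of real images, shape (B, 3, H, W)
    tokens: learnable tokens shared across all images, shape (P, d)
    perturbation_encoder: module producing an image-conditioned perturbation of shape (B, 3, H, W)
    """
    delta_e = torch.tanh(perturbation_encoder(tokens, x))  # tanh bounds every value to [-1, +1]
    protected = x + alpha * delta_e                        # alpha caps the perturbation strength
    return protected, delta_e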
The perturbation encoder takes as input the learnable tokens 𝐓_0={} and applies a series of N parameterized transformations based on transformer <cit.> blocks with both self- and cross-attention. The self-attention mechanism only depends on the input tokens while the cross-attention is used to specialize the tokens, conditioning them on the input image .
In particular, we divide the image into P non-overlapping patches of dimension p× p, such that P= HW/p^2,
and embed them using a patch embedder into a sequence of tokens {x_i}, where each x_i ∈ℝ^d has the same dimensionality as the learnable tokens; 𝐗∈ℝ^P× d indicates the matrix of the patch embeddings.
At each of the N perturbation encoder blocks, the learnable tokens are conditioned on the input image employing the patch tokens as context in the cross attention as:
T_n = softmax[s (T^⋆_n-1W_qW_k𝐗^⊤)] 𝐗W_v + T^⋆_n-1, n=1… N,
where T^⋆_n-1∈ℝ^P× d is the matrix containing the tokens processed by the self-attention in the same block,
T_n is the matrix updated with the conditioning after <ref>,
W_q, W_k, W_v are the query, key and value weights matrices,
and s is a scaling factor, computed as in <cit.>.
T^⋆_n-1 is used as query into the cross-attention mechanism while the input image patches are used as context, i.e., they serve as keys and values in <ref>.
This mechanism determines the level of importance that each learnable token in the query should attribute to the corresponding image token , enabling the customization of the learned tokens to the visual characteristics of the image.
In other words, the final perturbation is constructed by taking a convex linear combination of patch embeddings, where the combination weights are learned through the similarity between learnable tokens and patch embeddings.
In the last block, when n=N, the perturbation is finally attained by constraining its values with a hyperbolic tangent function:
_e(;) = tanh{ϕ_e (T_N)},
where ϕ_e projects and reshapes the tokens in order to match the dimensions of the image.
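To illustrate the conditioning mechanism, the sketch below shows one simplified encoder block in PyTorch: the learnable tokens act as queries while the image patch embeddings serve as keys and values, as in the equation above. It uses a single-head cross-attention and omits normalization and MLP sub-layers; it is an illustration under these assumptions, not the exact architecture.

import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    """One simplified perturbation-encoder block: self-attention over the learnable
    tokens, then cross-attention conditioned on the image patch embeddings."""
    def __init__(self, d=512, heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.w_q = nn.Linear(d, d, bias=False)   # query projection for the tokens
        self.w_k = nn.Linear(d, d, bias=False)   # key projection for the patches
        self.w_v = nn.Linear(d, d, bias=False)   # value projection for the patches
        self.scale = d ** -0.5                   # scaling factor s

    def forward(self, tokens, patches):
        # tokens:  (B, P, d) learnable tokens T_{n-1};  patches: (B, P, d) image embeddings X
        t_star, _ = self.self_attn(tokens, tokens, tokens)                  # T*_{n-1}
        q, k, v = self.w_q(t_star), self.w_k(patches), self.w_v(patches)
        attn = torch.softmax(self.scale * q @ k.transpose(-2, -1), dim=-1)  # token-to-patch similarities
        return attn @ v + t_star                                            # T_n with residual connection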
§.§ Decoding image-specific perturbations for manipulation detection and localization
The decoding module, shown in <ref>-right, is employed to detect and localize any manipulations that may have occurred. This module is composed of two parts: (i) a perturbation decoder, and (ii) a Map Block ℳ to perform detection and estimate the manipulation map.
The decoding module can take either () or (()) as input. Additionally, since in real-world scenarios unprotected images may also be observed, we include as an alternative input. This allows training the module to also detect unprotected inputs, in contrast to <cit.>.
This input image is transformed into patch embeddings using a different patch embedder than the one used in the encoding module, and is fed to the perturbation decoder with the intent of recovering the protecting perturbation, if present.
Differently from the encoder and ℳ, the perturbation decoder employs only self-attention layers.
This time the patch embeddings are given as input, and the output of the decoder—i.e., ^⋆_N—is forced to recover the original perturbation _e using a reconstruction loss, i.e., ^⋆_N ≐_d ≈_e—see <ref>.
The recovered perturbation is subsequently exploited by the Map Block ℳ.
The Map Block follows an architecture similar to the encoding module yet, interestingly, the conditioning is inverted and the patch embeddings are provided as input.
These patch embeddings are augmented with a learnable class token <cit.> that we concatenate to the classic patch embeddings, obtaining a new patch embedding matrix. We seek an inductive bias where the class token stores information about whether the image has been manipulated.
Although both blocks receive the sequence of image patches as input, unlike in the encoding part, _d recovered from is used as context in the cross-attention mechanism of ℳ to condition the manipulation map estimation. This choice is symmetric in comparison to the encoding module, where image patches were used in the cross-attention context. However, the rationale behind this is that the estimated perturbation _d is intended to serve as a guide for the subsequent localization of manipulation performed on the image token sequence. Given that the output map should retain the original image's details, the signal is employed to highlight the location where the image has been manipulated. The cross-attention block of the decoding part is thus:
𝐗_,n = softmax[s(^⋆W_qW_k_d^⊤)] _dW_v + ^⋆ , n=1… N,
where ^⋆ is the matrix containing the patch embeddings processed by the self-attention in the same block.
It is worth highlighting that the weights matrices W_q, W_k, W_v, of , and ℳ are not shared.
In the last decoding block, when n=N, the final predicted manipulation mask ℳ() will be a convex linear combination of the recovered perturbation _d, and the way the combination is decided depends on how the recovered perturbation _d “attends” to the patch embedding:
ℳ()=𝐗_,n as {x_1,n … x_P,n_localization _,n_detection} when n=N,
where _,N is extracted and fed to a multi-layer perceptron, supervised with binary labels, as detailed in <ref>. The first P tokens, instead, are projected and reshaped in order to match the dimension of the ground-truth manipulation maps on which they are supervised.
Following prior art <cit.>, the ground-truth manipulation map is defined as Y≐1/(2^8-1) gray( |τ() - (τ())|), where gray(·) converts an RGB image into grayscale. Each pixel of the manipulation map takes a continuous value in [0,1] indicating how much that pixel has been manipulated.
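As an illustration, the ground-truth map can be computed as follows; the grayscale conversion weights are a standard choice and are our assumption, since the paper does not specify gray(·).

import torch

def manipulation_map(protected, manipulated):
    """Continuous ground-truth manipulation map Y in [0, 1].

    protected, manipulated: uint8 RGB tensors of shape (3, H, W)
    (the protected image tau(x) and its manipulated counterpart)."""
    diff = (protected.float() - manipulated.float()).abs()        # |tau(x) - G(tau(x))|
    gray = 0.299 * diff[0] + 0.587 * diff[1] + 0.114 * diff[2]    # RGB -> grayscale (ITU-R BT.601 weights, our assumption)
    return gray / (2 ** 8 - 1)                                    # rescale to [0, 1]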
§.§ Training
At train time, all the modules, , , and ℳ are jointly optimized on , () and (()).
For each forward pass, a real image is provided as input to which generates the image-specific perturbation _e to obtain the corresponding protected image τ()=+_e.
Following prior art <cit.>, in order to simulate possible manipulations by generative models, we employ a single GM to manipulate (), resulting in the manipulated protected image (()).
Both τ() and (()) are then fed to the decoding module, which extracts the perturbation, performs binary detection and estimates the manipulation map. The overall training process is detailed in <ref>.
Loss objectives
To force the decoded perturbation _d to be similar to the encoded one we apply a reconstruction loss, _rec, while to maximize the similarity between the ground-truth Y and the estimated ℳ() manipulations map we use the cosine distance, as in _map:
_rec = _e_d,    _map = Yℳ(),    ℒ_div = ∑_{i,j=1, i≠ j}^{B} max(_e[i]_e[j], 0)
In addition, to ensure variation within the batch for _e, we introduced a perturbation diversity loss, _div. This loss is crucial as it constrains the encoder to generate a unique signal for each image.
This loss computes the cosine similarity between _e for pairs of images within the batch, ensuring varying perturbations across different images. Without this loss, the encoder would create a single perturbation; plain cosine similarity was not enough, as the model learned only two distinct perturbations with a cosine similarity of -1. Consequently, within a batch, the mean cosine similarity tended to approach zero due to the compensatory effect between same and opposite perturbation comparisons. To address this, negative values were clamped to zero, effectively removing contributions from pairs with negative similarity and forcing the perturbations to be orthogonal. An ablation study on the importance of the ℒ_div loss component is provided in <ref>.
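The clamped diversity term admits a compact implementation; the following PyTorch sketch computes pairwise cosine similarities over a batch of perturbations and keeps only the non-negative ones, as described above (names and batching are our assumptions).

import torch
import torch.nn.functional as F

def diversity_loss(delta_e):
    """L_div: sum over i != j of max(cos(delta_e[i], delta_e[j]), 0).

    delta_e: (B, 3, H, W) image-specific perturbations within a batch."""
    flat = F.normalize(delta_e.flatten(1), dim=1)     # (B, D) unit-norm perturbations
    sim = flat @ flat.t()                             # (B, B) pairwise cosine similarities
    sim = sim - torch.diag(torch.diag(sim))           # remove the i == j entries
    return torch.clamp(sim, min=0).sum()              # negative similarities contribute nothing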
Finally, in order to train ℳ to perform manipulation detection, we simply apply binary cross-entropy to the output of the multi-layer perceptron that processes _,N, supervised by the binary labels indicating whether we are processing τ() (protected) versus an unprotected or manipulated input ( or (())).
In addition, we randomly sum a small Gaussian noise to the image provided as input to the detection loss during training.
By doing so, we explicitly force our model to distinguish our perturbation from noise applied to the images enabling a more robust protection.
The overall loss employed to optimize the model is given by the sum of all previous terms as
ℒ = ℒ_rec + ℒ_map + ℒ_div + ℒ_BCE
§.§ Image protection, manipulation detection and localization
The proposed approach is consistent with prior works that utilize a proactive method for defending against image manipulation. This method is applicable to any individual or organization, such as journalists and media outlets, that wish to safeguard the integrity of their images. For news agencies publishing sensitive content, such as reports on political events or social unrest, the ability to verify whether an image has been altered is critical in preventing the spread of misinformation. A protected image can be shared online, and its authenticity can always be verified by the decoder module, ensuring that any manipulation or tampering becomes detectable. This process is shown in <ref>.
Additionally, in legal or forensic investigations, where image evidence is crucial, this approach offers an extra layer of security. By embedding an invisible protection, law enforcement agencies and legal professionals can ensure that the images presented in court remain untampered from capture to presentation.
§ EXPERIMENTS
Datasets
Our models have been trained only on the CelebA <cit.> dataset.
The images of the dataset have been aligned, centered and cropped to a resolution of H=W=128, as in <cit.>.
During training these images are observed with or without manipulations.
The manipulated version is generated using only STGAN <cit.> which is set to alter the baldness and smile attributes.
For evaluating the generalization capability of our model to unseen GMs we consider StarGanV2 <cit.> and four more recent generative models, namely, BlendGAN <cit.> based on generative adversarial networks <cit.>, DiffAE <cit.> based on denoising diffusion implicit models <cit.>, StableDiffusion (SD) <cit.> and StableDiffusionXL (SDXL) <cit.>, both based on latent diffusion models <cit.>.
In addition, we report in the supplementary material an experiment using a diffusion-based model, i.e., StableDiffusion 1.5, to manipulate the image during training and evaluate the generalization performance also for this model. Using different GMs for training does not impact the generalization performance of PADL, which means the generalization is invariant to the two different GMs used.
As the test set, we employ the subsets of CelebA-HQ and Summer2Winter <cit.> provided in the benchmark defined by <cit.>.
To further extend the evaluation, we selected an additional test set of 200 real images from FFHQ <cit.>.
The supplementary material provides a list of the GMs used in the evaluation along with the tasks they were used for (e.g., image2image, style transfer, attribute manipulation).
Metrics and Evaluation
To evaluate our model's ability to accurately detect manipulations, we compute the accuracy considering both manipulated and non-manipulated images.
With regard to localization, we compute the Area under the ROC Curve (AUC) between the ground-truth and the estimated manipulation maps. Since the ground-truth map is continuous, it is necessary to threshold it to calculate the ROC curve. To ensure the absence of any bias in the selection of the threshold, the performance is shown considering different thresholding values, i.e., t=[0.1, 0.25, 0.5].
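For instance, the per-threshold localisation AUC could be computed as in the sketch below (scikit-learn based; the handling of degenerate single-class cases is our assumption).

import numpy as np
from sklearn.metrics import roc_auc_score

def localization_auc(gt_map, pred_map, thresholds=(0.1, 0.25, 0.5)):
    """Pixel-wise AUC between a continuous ground-truth map and a predicted map."""
    gt = np.asarray(gt_map).ravel()
    pred = np.asarray(pred_map).ravel()
    scores = {}
    for t in thresholds:
        y_true = (gt >= t).astype(int)          # binarize the continuous GT at threshold t
        if 0 < y_true.sum() < y_true.size:      # AUC is undefined with a single class
            scores[t] = roc_auc_score(y_true, pred)
        else:
            scores[t] = float("nan")
    return scores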
The performance reported by the state of the art <cit.> is biased toward manipulated pixels. More in detail, the protocol defined for detection <cit.> uses 400 images, 200 non-manipulated and 200 manipulated by the GM, while the localisation evaluation uses only the 200 manipulated images.
This may seem straightforward if detection works perfectly, but as shown in <ref>, detection often fails with unseen GMs, making it unreliable to decide when to compute localisation.
For this reason, we introduce a new evaluation protocol where localisation depends on the detection accuracy.
This new protocol provides a fair comparison by considering both non-manipulated and manipulated images when evaluating the localisation. This approach balances the two classes and better reflects real-world scenarios, where most images are likely authentic.
In the proposed evaluation protocol, the localization is conditioned on the detector's prediction, and the metrics are calculated for the four following scenarios:
* Manipulated image correctly detected as manipulated: Localization evaluation is computed between the ground truth (GT) map and the predicted map, as typically done.
* Manipulated image incorrectly detected as non-manipulated: Metrics are computed between the GT map and an all-zero map (indicating "non-manipulation"), reflecting the detection result.
* Non-manipulated image correctly detected as non-manipulated: Localization is evaluated between an all-zero GT map (indicating a real image) and a predicted all-zero map.
* Non-manipulated image incorrectly detected as manipulated: Metrics are computed between the predicted map and an all-zero GT map (indicating "non-manipulation").
With this new evaluation protocol, the localization is conditioned on the accuracy of the detection yet all the methods will be evaluated on the same number of pixels.
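A minimal sketch of the conditioned protocol follows; the all-zero placeholder maps and the pooling of pixels across images before computing the AUC are our reading of the four cases above, not the released evaluation code.

import numpy as np

def conditioned_maps(gt_map, pred_map, is_manipulated, detected_as_manipulated):
    """Return the (ground truth, prediction) pair scored for one image under the
    proposed protocol; an all-zero map encodes 'non-manipulated'."""
    zeros = np.zeros_like(gt_map)
    if is_manipulated and detected_as_manipulated:         # correct detection: use the predicted map
        return gt_map, pred_map
    if is_manipulated and not detected_as_manipulated:     # missed manipulation: prediction counted as all-zero
        return gt_map, zeros
    if not is_manipulated and detected_as_manipulated:     # false alarm: ground truth is all-zero
        return zeros, pred_map
    return zeros, zeros                                    # correct rejection: both all-zero

# The pixels of all images are then pooled and scored, e.g. with localization_auc above.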
Implementation details
The dimension of the patches processed by the transformer has been set to p=8. All attention blocks have the same head dimension and number of heads, set to 64 and 8 respectively; therefore the dimension of the inner representation used by the transformer is d=512.
The learnable tokens are initialized with random normal values.
For both the encoding and the decoding modules we employ learnable positional encodings, as in <cit.>.
For all the experiments the strength of the perturbation α has been set to 0.03.
We employed the AdamW <cit.> optimizer with an initial learning rate of 1 × 10^-4. All models were implemented in PyTorch <cit.> and trained on an RTX A6000 GPU with 48GB of memory. The total runtime for the training ranges from over 4 hours for the model with N=3 layers to about 10 hours for the model with N=12.
The average time required to protect an image, that is a forward pass through the encoder to generate the perturbation and add it to the image, is 6.93 ms.
Conversely, to recover the perturbation and detect if the image has been manipulated, the decoder takes on average 10.27 ms. Measurements are taken on an NVIDIA A6000, synchronizing all CUDA events.
§.§ PADL performance across diverse GMs
Proactive schemes were introduced to generalize the manipulation detection capability of a model to unseen GMs. To this end we evaluated the performance of PADL with different configurations of N = [3,6,12] with GMs and datasets unseen during training.
From <ref> it is evident that the performance of PADL when trained solely on CelebA is generally consistent across various configurations of N, particularly when applied to most unseen Generative Models (GMs). For StarGANv2, BlendGAN, SD 1.5, and SDXL, PADL shows high detection accuracy with N = [3, 6, 12], indicating that these GMs' manipulations are aggressive enough to be detected by all models. However, when evaluated on DiffAE, PADL performs less effectively. DiffAE poses the greatest challenge due to its subtle pixel-level manipulations, which result in the lowest generalization performance across all configurations of N. This is further corroborated by the results in <ref>, where we computed the sum of all the pixels of the soft non-binarized ground-truth masks across GMs: DiffAE yields the lowest sum by a large margin, proving that it does create subtle manipulations. Interestingly, increasing the value of N to 6 or 12 shows some improvement in detecting these subtle manipulations, likely due to the increased complexity of the perturbations generated by PADL as N grows. As also noted in <ref> in the supplementary material, an increase in N induces a more complex learned perturbation in <ref>. This gain with DiffAE can be easily explained by the fact that N controls the complexity of the perturbation: if we stick to a small N, the perturbation will be coarse and the subtle manipulation of DiffAE will not be strong enough to corrupt the PADL perturbation, thus the PADL decoder will find the manipulated images still “protected” (false negatives). If we increase the perturbation complexity (N=6), the PADL decoder is able to spot the corruption induced by the subtle DiffAE manipulation, resulting in a higher detection rate.
In light of this analysis, for all subsequent experiments, we considered PADL with N=6 since in this setting it can detect both subtle manipulations (DiffAE) and other more aggressive GMs for a better coverage of unseen GMs and improved generalization.
§.§ Performance across diverse GMs and comparison with state-of-the-art
We evaluated the performance of both passive and proactive solutions with GMs and datasets unseen during training.
It is possible to appreciate from <ref> that passive methods <cit.> achieve performance comparable to the state of the art only on GMs used at training time <cit.> but are unable to generalize across unseen generative models in both detection and localization.
In particular, images manipulated by more advanced architectures are recognized as real images.
Compared to both passive and proactive methods <cit.>, <ref> shows that PADL achieves more robust performance in both detection and localization, while other solutions fall short in localization when the detection performance decreases.
In addition, PADL is able to identify manipulation even when tested on data from a different domain, as can be appreciated from the performance observed when employing the Summer2Winter dataset.
Finally, PADL achieves remarkable detection performance, with near-perfect accuracy, even against the latest generative models like SD and SDXL, despite being trained only on STGAN, outperforming other solutions by a significant margin.
§.§ Black-box attack to proactive scheme: reverse engineering of the protection
To assess the safety related to using the same protection for all the images, we designed a simple attack, performed in a black-box scenario, i.e., without knowledge of the detection model, to extract the perturbation from a limited number of protected images.
This can be later exploited to deceive the detection model with new, unseen and unprotected images.
The attack leverages a dataset composed of K protected images τ(), taken from CelebA, and a set of different unprotected images taken from CelebA-HQ. These images have been selected from different datasets to maximize the fairness of this experiment.
The architecture is composed of a learnable perturbation and a CNN model 𝒫_ext, which serves as a protection extractor.
Given () = + _e, we seek to estimate the unknown perturbation _e by decomposing ().
During training, the learnable perturbation and the CNN model 𝒫_ext are jointly optimized using the following loss:
_attack = 𝒫_ext(())) + ||𝒫_ext()||_2 + ||||_2
The first term of the loss computes the cosine similarity between the learnable and the extracted perturbations, estimated by 𝒫_ext, thereby constraining the protection to resemble the perturbation structure.
The second term computes the ℓ_2 norm of the perturbation extracted by 𝒫_ext from unprotected images and is used to force the protection extractor to focus on the estimation of the perturbation instead of being guided by the image content.
Finally, the third term aims at minimizing the magnitude of the reversed perturbation via an ℓ_2 loss term so as to mitigate potential degradation in the quality of the image.
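A minimal sketch of the attack objective is given below; the exact form of the similarity term (here 1 minus the cosine similarity, so that minimizing the loss aligns the learnable perturbation with the extracted one) and all names are our reading of the description, not the original implementation.

import torch
import torch.nn.functional as F

def attack_loss(delta, extractor, protected, unprotected):
    """Black-box reverse-engineering objective (sketch of the three terms above).

    delta:       learnable perturbation, shape (3, H, W)
    extractor:   CNN P_ext estimating the perturbation embedded in an image
    protected:   K protected images tau(x), shape (K, 3, H, W)
    unprotected: unrelated unprotected images, shape (M, 3, H, W)"""
    extracted = extractor(protected).flatten(1)                                       # perturbations estimated from protected images
    align = 1 - F.cosine_similarity(delta.flatten()[None], extracted, dim=1).mean()   # pull delta toward the extracted perturbations
    suppress = extractor(unprotected).flatten(1).norm(dim=1).mean()                   # P_ext should output ~0 on unprotected images
    small = delta.norm()                                                              # keep the reversed perturbation imperceptible
    return align + suppress + small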
Once the perturbation has been optimized it can be applied to a new set of unprotected images.
For this experiment, we employ the test set defined from FFHQ <cit.> and used for the generalization experiment so as to ensure no overlap with the images seen during training.
The reversed perturbation is applied to these images which are then provided as input to the detection model in order to predict if they are protected or not.
The experiment was conducted using an increasing number of protected images (e.g., from a minimum of 4 up to 64). In addition, the results of this experiment were averaged over 10 trials, and for each trial, a different subset of protected images was considered.
From <ref>, it is possible to appreciate that the proposed attack successfully estimates the fixed protection proposed by <cit.>, resulting in 98% accuracy. The learned protection is capable of approximating the original perturbation _e with an average cosine similarity of 0.76 across all K.
Conversely, when the attack is applied to our solution, thanks to the image-specific protection, the accuracy drops drastically, meaning that it is not possible to estimate a perturbation capable of breaking our model.
The same protocol was employed to attack <cit.>. Results showed a constant accuracy close to 100% across all values of K.
These results can be attributed to the inherent lack of robustness of these models since they were not designed to be robust and accept random noise as a protection, a flaw that can be exploited in the attack as shown in <ref>(a).
To further stress our approach, we additionally designed a black-box adaptive attack <cit.>, specifically tailored to our approach.
Similarly to the previous attack, we employ the same CNN-based model as the protection extractor, however, rather than relying on a singular learnable perturbation, we employ a set of perturbations, proportional to the number of protected images, to better accommodate the inherent diversity of our protection mechanism. Additionally, a perturbation diversity loss, _div, is applied to the ensemble of perturbations to enforce variance within the set. As reported in <ref>, despite the adaptive attack, our model demonstrates its robustness, yielding comparable results to those observed for the previous attack.
§.§ Protection impact on image quality
The process of image protection may reduce the quality of the visual output.
To quantify this phenomenon, we measure the degradation between and () at the pixel level by computing the mean squared error (MSE), and at the perceptual level, employing the Learned Perceptual Image Patch Similarity <cit.> (LPIPS).
The results reported in <ref> (b) confirm that our protection mechanism has little impact on the overall image quality, as highlighted by the very low values for both MSE and LPIPS metrics.
This result is also supported by the qualitative examples reported in <ref> (a). Here, protected images are compared to their original input counterparts. The protection applied by <cit.> is more noticeable, as can be observed from <ref> (a) and also by the higher MSE and LPIPS values in <ref> (b).
Compared to <cit.>, PADL achieves better performance, while also minimizing the impact on image quality.
This is a consequence of the fact that, differently from <cit.>, our solution applies an upper bound to limit the perturbation by combining the use of hyperbolic tangent and α, as described in <ref>.
Ablation on the impact of the protection strength α on the image quality is also provided in the supplemental material.
§ CONCLUSION
This work introduces a novel solution for proactive image manipulation detection and localization.
Our solution employs a transformer-based encoder conditioned on an input image to generate a specific perturbation. Then a transformer-based decoder is used to extract the perturbation and leverage it to perform manipulation localization and detection.
Unlike previous methods based on fixed protection, our solution generates image-specific perturbations, improving resistance against reversal attacks, while also achieving remarkable detection and localization performance. It is also worth highlighting that the perturbation introduced by our approach has very little impact on the image quality.
Broader impact
The objective of this research is to prevent the misuse of generative image models, thereby mitigating the spread of misinformation.
By enabling a more effective detection of manipulated images, we hope to offer a way to bolster trust and integrity in digital content, which is crucial for fields such as journalism, forensics, and law enforcement.
Limitations and future works
The main limitation of our solution is related to the drop in performance for the localization when generative models based on a diverse architecture and paradigm (e.g., diffusion model) are employed.
In order to enhance performance in this regard, it would be interesting to explore the potential of novel architectural approaches for both decoder and encoder modules.
Although our method demonstrates superior performance compared to previous approaches, further investigation is required to assess its suitability for real-world scenarios. Online platforms may apply filters to uploaded images, potentially compromising the embedded protection.
Additionally, it would be worthwhile to assess whether the methodology employed for perturbation generation can be repurposed for other tasks, such as adversarial attacks. These represent promising directions for further research and advancement in the field.
§ ACKNOWLEDGEMENT
Funded by the European Union – Next Generation EU within the framework of the National Recovery and Resilience Plan NRRP – Mission 4 "Education and Research" – Component 2 - Investment 1.1 “National Research Program and Projects of Significant National Interest Fund (PRIN)” (Call D.D. MUR n. 104/2022) – PRIN2022 – Project reference: Adversarial Venture, the Mixed Blessing of Adversarial Attacks 20227YET9B_002 J53D23007030006.
§ SUPPLEMENTAL MATERIAL
§.§ Loss Ablation
Compared to previous art <cit.>, which utilized up to ten loss functions to achieve their objectives, PADL simplifies the approach by employing only four losses, as detailed in <ref>, each specifically designed to enforce essential properties, resulting in a more efficient yet effective model. The removal of any one of these four losses would prevent the model from functioning as intended. For instance, removing the ℒ_div loss would prevent the model from generating image-specific perturbations. Without ℒ_div, the model would minimize the remaining losses by learning a single perturbation for all images, which contradicts our design goals.
Moreover, using plain cosine similarity (i.e., ℒ_div without the clipping max(·,0)) failed to produce image-specific perturbations. The encoder ended up learning only two distinct perturbations with a cosine similarity of -1, essentially opposite directions that merely minimized the loss. This led to a situation where, within a batch, the mean cosine similarity approached zero due to the compensatory effect between similar and opposite perturbations, resulting in no meaningful learning. To counter this, negative values were clamped to zero, effectively disregarding pairs with negative similarity and forcing the perturbations to be orthogonal. Additional evidence of this behavior is provided in <ref>.
§.§ Training PADL with a Diffusion Model
To provide additional evidence of the capability of our solution, we trained an additional version of PADL using a diffusion model as the GM. Results in <ref> show that the model continues to perform robustly against unseen manipulations (SDXL, BlendGAN, and StarGANv2), demonstrating its strong generalization capability.
Testing PADL trained with diffusion against DiffAE and STGAN was not possible due to resolution incompatibility. PADL maintains strong generalization performance on unseen GMs, proving that its effectiveness lies in the method itself, not the specific training data used.
§.§ Robustness against degradations
Both passive and proactive methods may fall short when the images undergo some simple editing degradations.
To this end, we conducted an experiment to evaluate the performance of our approach when four types of degradations are applied, namely blur, Gaussian noise, JPEG compression, and low resolution. This experiment has been conducted following a leave-one-out protocol, that is, we trained four models, injecting three out of the four degradations during training, and we tested on the unseen degradation.
In more detail, during training we adopted a degradation scheduler policy that challenges the optimization of the perturbation. At each iteration, it randomly selects whether training will proceed with original images or with one of the three degradations. The probability of employing a degradation and its intensity follow a linear schedule, which is proportional to the current iteration count.
It is worth noting that these degradations are applied directly on protected images to resemble a real-world scenario.
Moreover, the images have been manipulated using STGAN <cit.>, in particular, by modifying the “Bald” attribute.
Following the work of <cit.> we employed the following settings for the degradations (a sketch applying them is given after the list):
* Compression: Images are saved as JPEG with a quality level set to 50%.
* Blur: A Gaussian blur is applied to the images using a 7 × 7 kernel.
* Noise: Gaussian noise with zero mean and unit variance is added to the images. To preserve the unity gain, values over 1 and below -1 are clamped.
* Low-Resolution: The image is resized to half of its original resolution and then upscaled back to the original resolution using bilinear upsampling.
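These settings can be reproduced with standard image operations; the sketch below uses torchvision helpers and assumes float images in [-1, 1] (the clamping range comes from the noise setting above; the uint8 mapping used for JPEG is our assumption).

import torch
import torch.nn.functional as F
from torchvision.io import decode_jpeg, encode_jpeg
from torchvision.transforms.functional import gaussian_blur

def degrade(img, kind):
    """Apply one of the four degradations to a protected image.

    img: float tensor of shape (3, H, W) with values in [-1, 1]."""
    if kind == "noise":
        return torch.clamp(img + torch.randn_like(img), -1.0, 1.0)    # zero-mean, unit-variance noise, clamped
    if kind == "blur":
        return gaussian_blur(img, kernel_size=7)                       # 7x7 Gaussian kernel
    if kind == "low_res":
        h, w = img.shape[-2:]
        small = F.interpolate(img[None], scale_factor=0.5, mode="bilinear", align_corners=False)
        return F.interpolate(small, size=(h, w), mode="bilinear", align_corners=False)[0]
    if kind == "jpeg":
        as_uint8 = ((img + 1) * 127.5).round().clamp(0, 255).to(torch.uint8)
        recompressed = decode_jpeg(encode_jpeg(as_uint8, quality=50))   # JPEG quality set to 50%
        return recompressed.float() / 127.5 - 1
    raise ValueError(f"unknown degradation: {kind}")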
As reported in <ref>, our solution is susceptible to two degradations, namely JPEG compression and Gaussian noise. This is due to the fact that both these degradations significantly compromise the quality of the protected image, leading to its detection as manipulated.
To enhance the overall robustness of the framework we also trained PADL considering all degradations during training.
All models reported in <ref> have been trained considering all degradations.
§.§ Perturbation strength
Variations in the α parameter directly correspond to proportional changes in the magnitude of the perturbation.
Employing a hyperbolic tangent on the output of allows us to bound the maximum value of the perturbation, consequently, α is the only parameter that controls the magnitude.
To show the impact of α on the image quality we conducted an experiment by training four models with different values of α.
Quantitative results are reported in <ref> while some qualitative samples are shown in <ref>.
Using a value for α higher than 0.03 results in visible artifacts that degrade the image quality.
For all our experiments we set α=0.03 since it represents the optimal balance between quality (i.e., the perturbation magnitude is sufficiently low to preserve image quality) and performance (i.e., the magnitude is sufficiently high to ensure detectability by the decoder).
§.§ Perturbation variation across model depths
A different number of transformer layers can influence the generated perturbation.
This phenomenon is shown in <ref>.
The perturbations across models with a different number of layers are visually similar, yet different across images.
It is possible to notice that a patch-based pattern emerges mainly because of the way the transformer architecture processes the images.
However, the pattern of the patches becomes more complex as the depth increases.
Although the N=12 model produces a markedly distinctive appearance for each patch, it is evident that even the shallower model (N=3) can generate perturbations which are different across images.
§.§ Reverse attack with multiple templates
The solution proposed by <cit.> (MaLP) released only the model with a single perturbation; the attack described in <ref> has been conducted using this model.
In order to further validate the proposed attack, we conducted an additional test, training their model with three perturbations, since this configuration was shown to achieve the highest performance in <cit.>.
As can be observed from <ref>, even with a set of three perturbations, MaLP <cit.> remains susceptible to reverse attacks.
§.§ Visualization of manipulation maps
<ref> illustrates a selection of real images, accompanied by their protected version, the image-specific perturbations and the manipulated versions, along with the ground truth and the estimated manipulation maps of PADL and MaLP <cit.>.
STGAN manipulations are local to specific attributes of the image and this clearly influences the look of the map estimated by our model.
§.§ Visualization of PADL predictions
In <ref> we present a selection of images generated using SDXL, the most recent and advanced generative model among those employed in our evaluation, which is able to generate images that look real to the human eye.
Nonetheless, PADL, which is trained only on older GAN-based generative models, is able to correctly identify extremely realistic manipulations with accuracies approaching 100%.
§.§ Additional Implementation details
The STGAN model has been detached from the computational graph, therefore its gradient is not exploited during the training process.
Consequently, the models are unable to rely on the STGAN architecture during training, which results in a solution capable of generalizing across both detection and localization.
The evaluation of the state-of-the-art models was conducted by utilising the original code and models released by the authors. The cosine similarity values we obtain for <cit.> do not correspond to those presented in their original article. This discrepancy is caused by a calculation error in the ground truth of the manipulation maps, which was present in the released code. To ensure fairness, all their values have been recalculated.
§.§ Generative Models, datasets and relative licenses
For each GM used at test time we employed the reference test set released by <cit.>.
Each test set corresponds to a subset of 200 real images taken from the original source dataset.
As new GMs were introduced, we complemented the test set images with new images from the FFHQ dataset <cit.>.
For CelebA <cit.> and CelebA HQ <cit.>, we use the test images released by <cit.>.
For a fair comparison, we will release our new test images based on FFHQ <cit.>.
In the context of image manipulation, Style Transfer models (BlendGAN and StarGANv2) were employed to generate images based on a fixed reference style image. In contrast, the attribute manipulation model DiffAE was configured to alter the facial attribute “bald”. The SD and SDXL models were utilized in an img2img configuration, conditioned with the prompt “a nice picture of a smiling person” for CelebA-HQ and FFHQ and "a nice picture of a winter landscape with snowy weather" for Summer2Winter.
<ref> provides a list of all generative models used in our experiments, along with their architecture, task and license.
CelebA <cit.> is intended for non-commercial research purposes only, to which we strictly adhere. Similarly, CelebA HQ <cit.> and FFHQ <cit.> are licensed under CC BY-NC-SA 4.0, indicating their availability for non-commercial purposes.
| Advancements in Generative Models (GMs) for image synthesis have continually transformed the landscape of the field, showcasing remarkable capabilities in tasks ranging from unconditional image generation from random noise to nuanced manipulation given a natural image to edit. Nevertheless, this progress introduces significant security concerns because a ill-intentioned user could alter the semantics of a genuine image to attain a malicious objective. To address this issue, several counter tools have been developed focusing on binary detection of manipulations <cit.> limited to specific GMs.
Trained on both authentic and manipulated images, these methods are categorized as passive because detection countermeasures are performed after manipulation.
However, their performance and ability to generalize are limited because they need to be retrained for each new GM released, a time-consuming and demanding task given the large number of GMs released every day.
Recent solutions have addressed the limitations of passive methods by adopting a proactive schema <cit.> which implements countermeasures before any manipulation occurs.
This new proactive technology intercepts a painful need of the community towards having the right to discern what is generated from what is authentic.
Indeed, in the private sector, big tech companies of the caliber of Meta and Google dedicated resources to the design of solutions that proactively detect AI-generated contents <cit.>.
Various proactive approaches have been proposed so far, among these solutions image tagging <cit.> introduces a hidden message into the image in order to verify the provenance of the image while the solution proposed in <cit.> aims directly at disrupting the output of the generative models that are used to manipulate the image.
Recently, proactive detection techniques <cit.> were introduced by augmenting the input image with an additive perturbation[The additive perturbation is called “template” using <cit.>'s terminology but we use perturbation in the rest of the paper for clarity.] as a form of protection.
When a protected image is altered, the embedded perturbation is also tampered with, preventing its verification and enabling the detection of manipulations.
However, it is worth highlighting that both <cit.> suffer from two main limitations that could potentially be exploited by an attacker. On the one hand, the manipulation detector is not robust to noise and can be easily fooled by simply adding Gaussian noise to an image. However, in this case, tuning the value of σ for this noise is not easy as a low value may not fool the detector while a high value may corrupt the image too much. In addition, an attacker usually does not have access to the manipulation detector.
On the other hand, both methods use a fixed set of perturbations and, by reverse engineering one of the predetermined perturbations, an attacker could manipulate images and authenticate them as real using the reversed perturbation.
To this end, we conducted two experiments, one adding Gaussian noise with increasing σ and the second to reverse engineer the perturbation used in <cit.>.
Fig. <ref>(a)(b) shows that this family of solutions can be easily bypassed.
To address this issue, our research aims to enhance the proactive protection mechanism by transitioning from a finite set of perturbations to a customized perturbation per image, which ensures robustness as shown in Fig. <ref>(a)(c).
However, designing image-specific perturbations is a challenging task due to the lack of a clear ground truth to guide the model.
To overcome this limitation, we leverage the transformer architecture's cross-attention mechanism, and condition a learnable perturbation on the image, resulting in a unique protection, tailored to the specific characteristics of each input image.
The framework consists of an Encoding and Decoding module with a symmetric cross-attention mechanism. The encoder customizes a sequence of learnable tokens through cross-attention layers to create a personalized perturbation for each image, while the decoder recovers this perturbation to detect and localize manipulations. Additionally, a revamped loss function enforces diversity in the perturbations, contributing to the effectiveness of our approach. With this, protected images can be shared online, and their authenticity can always be verified through the decoder module.
In addition, the performance of state-of-the-art for proactive methods reported <cit.> is biased toward manipulated pixels. Indeed, the protocol defined for detection uses both non-manipulated and manipulated images, while the localisation considers only manipulated images.
This may seem straightforward if detection works perfectly, but as shown in <ref>, detection often fails with unseen GMs, making it unreliable to decide when to compute localisation.
For this reason, we introduce a new evaluation protocol where localisation depends on the detection accuracy.
This new protocol provides a fair comparison by considering both non-manipulated and manipulated images when evaluating localization and better reflects real-world scenarios.
The contributions of our work can be summarized as follows:
♢
we empirically demonstrate the vulnerability of state-of-the-art proactive schemes to both noise and our black-box attack, which estimates the perturbation from a protected image and adds it to maliciously manipulated images so that the proactive scheme detects them as protected and real.
♢
we introduce PADL, a new proactive solution which is robust to our black-box attack and to adaptive attacks <cit.> specifically tailored for our method.
♢
PADL achieves remarkable generalization capabilities: although trained only on STGAN, it generalizes to StarGANv2, BlendGAN, DiffAE, StableDiffusion and StableDiffusion XL.
♢
We define a new evaluation protocol for image manipulation detection and localization that ensures a balanced and realistic comparison. This protocol uses both manipulated and non-manipulated images for localization, and conditions the evaluation on the detector's prediction, thereby reflecting real-world scenarios where most images are authentic. | Passive defense Prior works proposed methods against image manipulation that follow a passive protocol, which means that countermeasures are taken after the manipulation has occurred.
In this category, earlier methods <cit.> identify artifacts left by generative models in the RGB representation of the image. This was achieved by training models on a dataset of real and manipulated images to discriminate between them by examining the visual content.
Nirkin <cit.> improved manipulation detection methods by exploiting face-context discrepancies.
The approach integrated a face identification network for precise semantic segmentation and a context recognition network that considered hair, ears and neck.
By utilizing signals from both networks to identify discrepancies, they enhanced traditional fake image detection.
Chai <cit.> introduced a patch-based CNN classifier to identify and visualize the regions of an image that have undergone manipulation. The classifier slides through the different image patches to determine if it is real or not, thus verifying if and in which region manipulation has occurred.
Dang <cit.> proposed an alternative approach by incorporating an attention mechanism to process and enhance feature maps for the detection task.
The feature map is then used to highlight informative regions, improving binary classification and visualizing manipulated regions.
New solutions <cit.> shifted the focus from the image content to the noise present in some regions of the image. Zhou <cit.> utilises RGB images and noise features extracted using a steganalysis rich model filter, in conjunction with a Faster R-CNN module, to detect forgeries.
Similarly, Yang <cit.> followed the same approach yet employed a trainable noise extractor based on Constrained CNN <cit.>.
This choice was motivated by the susceptibility of previous filters to adversarial attacks.
HiFiNet was proposed in <cit.> leveraging four branch encoders that learn a fine-grained hierarchical categorization of the manipulation and provide 2D localization for the manipulation.
While the works described above have produced interesting results, these models perform poorly when applied to new manipulations not seen during training: manipulations generated by different techniques can have different visual artifacts, which hampers the generalization of all learning-based passive methods.
Proactive defense To overcome the limitations of passive methods, researchers have started to explore proactive approaches, in the sense that countermeasures are implemented before any manipulation occurs.
The solution from Ruiz <cit.> proposed to disrupt the generator output by applying an imperceptible perturbation to real images.
The perturbation is generated by a modified version of adversarial attacks such as FGSM, I-FGSM, and PGD and is able to generalize across different image conditioning classes.
However, this solution does not work in a black-box scenario as it requires knowledge of a specific GM.
Wang <cit.> introduced a solution closer to the concept of watermarking.
This approach embeds a hidden message within real images, ensuring its retrieval even after manipulations in order to authenticate the image's identity.
A U-Net model embeds a bit sequence into the images, leveraging redundancy to enhance resistance against manipulation.
Although not intended as a detection tool, this technique can be used to track the origin of changes within a social network by linking each user image to its unique identifier. The concept of watermarking has been extended in <cit.> by applying proactive watermarking to the train data and then training or fine-tuning a GM to maintain the watermark. This approach enables the extraction of the watermark from newly generated images yet assumes being the “owner” of the GM.
Asnani <cit.> proposed a proactive framework for generalized manipulation detection in which a perturbation is added to the input image.
If manipulations occur, the perturbation is tampered and the image can be detected as manipulated.
The perturbation is randomly selected from a finite set that has been learned at training time.
This work has been successively extended in <cit.> with the introduction of manipulation localization.
In this paper, we show that <cit.> are prone to attack, while our method generalizes across diverse unseen GMs and offers a per-image protection perturbation that minimizes the vulnerabilities of predictability caused by the reuse of the same set of perturbations. | The proposed approach relies on a set of learnable tokens that, conditioned on the input image , produces a image-specific perturbation _e.
This perturbation is used for the detection and localization of manipulations, employing two primary components: an Encoding Module and a Decoding Module. The encoding part is composed of a perturbation encoder ℰ that transforms the learnable tokens into a perturbation conditioned on the input image I.
The decoding part is composed of a perturbation decoder 𝒟 that recovers the perturbation δ_d, and a Map Block ℳ in charge of performing manipulation detection and localizing the manipulations.
All the components of our architecture, i.e., ℰ, 𝒟 and ℳ, consist of N ViT-like transformer blocks.
The whole architecture is shown in <ref>.
§.§ Encoding image-specific perturbations
The encoding module is used to protect real images I ∈ℝ^H× W×3 before manipulation occurs.
In passive approaches, detection involves discerning between an authentic image I and its manipulated version G(I), generated by a generative model G.
In a proactive framework <cit.>, given an authentic image I, the method applies a transformation τ(·) to create a protected image τ(I),
then the manipulation detection process occurs between the protected image τ(I) and its manipulated counterpart G(τ(I)). It is important to note that, since the detection process is based solely on the verification of the presence of the perturbation, an image without a perturbation cannot be classified as protected, regardless of its authenticity.
In prior proactive approaches <cit.>, protection involved randomly selecting a predefined perturbation from a finite set and embedding it into the image. The perturbation is embedded into the image additively, i.e., τ(I) ≐ I + δ_e, similarly to what is done for adversarial attacks <cit.> yet using a different procedure.
In contrast, our approach still parametrizes the transformation τ(·) with an additive perturbation δ_e, yet uses a set of P learnable tokens {t_i ∈ℝ^d | i=1,…, P}, where d is the dimension of the inner representation used by our model <cit.>.
The tokens are shared among all the perturbed images. Yet, unlike prior art, we also specialize the perturbation to be image-specific, which means the perturbation is also conditioned on the input image I, aiming to prevent a black-box attack from reverse engineering it. We will empirically prove this claim in the experimental section in <ref>, but preliminary results can be appreciated in <ref>.
Our transformation τ(·) is defined as follows:
τ(I) = I + α·δ_e(T_0; I), with α > 0,
where α is a fixed positive scalar parameter used to control the strength of the protecting perturbation and δ_e(T_0; I) returns values in [-1,…,+1] through a hyperbolic tangent function. The values are bounded so that the ℓ_2 norm of the perturbation added to the image is limited and upper bounded by α√(H× W). This avoids the need for extra losses to minimize the ℓ_2 norm of the perturbation, making the training simpler, with fewer hyper-parameters than <cit.>.
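To make the protection step concrete, the following is a minimal PyTorch-style sketch of how such a bounded additive perturbation could be applied; the function and argument names, and the final clamping to a valid pixel range, are illustrative assumptions rather than the exact implementation.

```python
import torch

def protect(image, tokens, encoder, alpha=0.03):
    """Add a bounded, image-specific perturbation to an image in [0, 1].

    `encoder(tokens, image)` is assumed to return a raw perturbation of the same
    shape as `image`; tanh keeps each value in [-1, 1], so the per-pixel change
    is at most `alpha` and the l2 norm of the perturbation stays bounded.
    """
    delta = torch.tanh(encoder(tokens, image))   # delta_e(T_0; I) in [-1, 1]
    protected = image + alpha * delta            # tau(I) = I + alpha * delta_e
    return protected.clamp(0.0, 1.0), delta      # clamping is an extra assumption
```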
The perturbation encoder ℰ takes as input the learnable tokens 𝐓_0={t_i}_i=1^P and applies a series of N parameterized transformations based on transformer <cit.> blocks with both self- and cross-attention. The self-attention mechanism only depends on the input tokens, while the cross-attention is used to specialize the tokens, conditioning them on the input image I.
In particular, we divide the image into P non-overlapping patches of dimension p× p, such that P= HW/p^2,
and embed them using a patch embedder into a sequence of tokens {x_i}_i=1^P, where each x_i ∈ℝ^d has the same dimensionality as the learnable tokens; 𝐗 denotes the matrix of the patch embeddings, thus 𝐗∈ℝ^P× d.
At each of the N perturbation encoder blocks, the learnable tokens are conditioned on the input image employing the patch tokens as context in the cross attention as:
T_n = softmax[s (T^⋆_n-1 W_q (𝐗 W_k)^⊤)] 𝐗 W_v + T^⋆_n-1, n=1… N,
where T^⋆_n-1∈ℝ^P d is the matrix containing the tokens processed by the self-attention in the same block,
T_n is the matrix updated with the conditioning after <ref>,
W_q, W_k, W_v are the query, key and value weights matrices,
and s is a scaling factor, computed as in <cit.>.
T^⋆_n-1 is used as query into the cross-attention mechanism while the input image patches are used as context, i.e., they serve as keys and values in <ref>.
This mechanism determines the level of importance that each learnable token in the query should attribute to the corresponding image token x_i, enabling the customization of the learned tokens to the visual characteristics of the image.
In other words, the final perturbation is constructed by taking a convex combination of (projected) patch embeddings, where the combination weights are learned through the similarity between learnable tokens and patch embeddings.
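As a rough illustration of this conditioning step, the single-head sketch below mimics the cross-attention update above in PyTorch; the class name, the single-head formulation and the tensor shapes are simplifying assumptions.

```python
import torch
import torch.nn as nn

class TokenImageCrossAttention(nn.Module):
    """Single-head cross-attention: learnable tokens (queries) attend to patch embeddings."""

    def __init__(self, dim):
        super().__init__()
        self.w_q = nn.Linear(dim, dim, bias=False)   # queries from the tokens
        self.w_k = nn.Linear(dim, dim, bias=False)   # keys from the image patches
        self.w_v = nn.Linear(dim, dim, bias=False)   # values from the image patches
        self.scale = dim ** -0.5                     # scaling factor s

    def forward(self, tokens, patches):
        # tokens:  (B, P, d) learnable tokens after the self-attention sub-block
        # patches: (B, P, d) patch embeddings of the input image
        attn = torch.softmax(
            self.scale * self.w_q(tokens) @ self.w_k(patches).transpose(-2, -1), dim=-1
        )
        return attn @ self.w_v(patches) + tokens     # residual connection, as in the update rule
```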
In the last block, when n=N, the perturbation is finally obtained by constraining its values with a hyperbolic tangent function:
δ_e(T_0; I) = tanh{ϕ_e (T_N)},
where ϕ_e projects and reshapes the tokens in order to match the dimensions of the image.
§.§ Decoding image-specific perturbations for manipulation detection and localization
The decoding module, shown in <ref>-right, is employed to detect and localize any manipulations that may have occurred. This module is composed of two parts: (i) a perturbation decoder 𝒟, and (ii) a Map Block ℳ to perform detection and estimate the manipulation map.
The decoding module can take either τ(I) or G(τ(I)) as input. Additionally, since in real-world scenarios unprotected images may also be observed, we include I as an alternative input. This allows training the module to also detect unprotected inputs, in contrast to <cit.>.
This input image is transformed into patch embeddings using a different patch embedder than the one used in the encoding module, and is fed to the perturbation decoder 𝒟 with the intent of recovering the protecting perturbation, if present.
Differently from ℰ and ℳ, the perturbation decoder 𝒟 employs only self-attention layers.
This time it is the patch embeddings that are given as input, and the output of 𝒟—i.e., its final token sequence, denoted δ_d—is forced to recover the original perturbation δ_e through a reconstruction loss, i.e., δ_d ≈ δ_e—see <ref>.
The recovered perturbation is subsequently exploited by the Map Block ℳ.
The Map Block follows an architecture similar to the encoding module yet, interestingly, the conditioning is inverted and the patch embeddings are provided as input.
These patch embeddings are augmented with a learnable class token <cit.> that we concatenate to the classic patch embedding as
{𝐗, x_cls}, where 𝐗' indicates the new patch embedding matrix. We seek an inductive bias where the class token stores information about whether the image has been manipulated.
Although both blocks receive the sequence of image patches as input, unlike in the encoding part, the perturbation δ_d recovered by 𝒟 is used as context in the cross-attention mechanism of ℳ to condition the manipulation map estimation. This choice is symmetric with respect to the encoding module, where image patches were used as the cross-attention context. The rationale is that the estimated perturbation δ_d is intended to serve as a guide for the subsequent localization of manipulations performed on the image token sequence. Given that the output map should retain the original image's details, the signal δ_d is employed to highlight the locations where the image has been manipulated. The cross-attention block of the decoding part is thus:
𝐗'_n = softmax[s(𝐗'^⋆_n-1 W_q (δ_d W_k)^⊤)] δ_d W_v + 𝐗'^⋆_n-1, n=1… N,
where 𝐗'^⋆_n-1 is the matrix containing the patch embeddings processed by the self-attention in the same block.
It is worth highlighting that the weight matrices W_q, W_k, W_v of ℰ, 𝒟 and ℳ are not shared.
In the last decoding block, when n=N, the final predicted manipulation mask ℳ(I) will be a convex combination of the recovered perturbation δ_d, and the way the combination is decided depends on how the recovered perturbation δ_d "attends" to the patch embeddings:
ℳ(I)=𝐗'_N, split into {x_1,N … x_P,N} (localization) and x_cls,N (detection),
where x_cls,N is extracted and fed to a multi-layer perceptron, supervised with binary labels, as detailed in <ref>. The first P tokens, instead, are projected and reshaped in order to match the dimension of the ground-truth manipulation maps on which they are supervised.
Following prior art <cit.>, the ground-truth manipulation map is defined as Y ≐ 1/(2^8-1) · gray( |τ(I) - G(τ(I))| ), where gray(·) converts an RGB image into grayscale. Each pixel of the manipulation map takes a continuous value in [0,…,1] indicating how much that pixel has been manipulated.
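For clarity, a possible NumPy computation of such a ground-truth map could look as follows; the standard luminance weights used for the grayscale conversion are an assumption, since the exact gray(·) operator is not specified here.

```python
import numpy as np

def manipulation_map(protected, manipulated):
    """Pixel-wise ground-truth map in [0, 1] from two uint8 RGB images of shape (H, W, 3)."""
    diff = np.abs(protected.astype(np.float32) - manipulated.astype(np.float32))
    gray = diff @ np.array([0.299, 0.587, 0.114], dtype=np.float32)  # assumed luminance weights
    return gray / (2 ** 8 - 1)                                       # normalize by 255
```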
§.§ Training
At training time, all the modules ℰ, 𝒟 and ℳ are jointly optimized on I, τ(I) and G(τ(I)).
For each forward pass, a real image I is provided as input to ℰ, which generates the image-specific perturbation δ_e used to obtain the corresponding protected image τ(I) = I + α·δ_e.
Following prior art <cit.>, in order to simulate possible manipulations by generative models, we employ a single GM G to manipulate τ(I), resulting in the manipulated protected image G(τ(I)).
Both τ(I) and G(τ(I)) are then fed to the decoding module, which extracts the perturbation, performs binary detection and estimates the manipulation map. The overall training process is detailed in <ref>.
Loss objectives
To force the decoded perturbation δ_d to be similar to the encoded one, we apply a reconstruction loss ℒ_rec, while to maximize the similarity between the ground-truth map Y and the estimated manipulation map ℳ(I) we use the cosine distance, as in ℒ_map:
ℒ_rec = d(δ_e, δ_d), ℒ_map = d_cos(Y, ℳ(I)), ℒ_div = ∑_i,j=1; i≠ j^B max( s_cos(δ_e[i], δ_e[j]), 0 ),
where d(·,·) measures the discrepancy between the encoded and decoded perturbations, d_cos(·,·) denotes the cosine distance, s_cos(·,·) the cosine similarity, and B is the batch size.
In addition, to ensure variation of δ_e within the batch, we introduce a perturbation diversity loss ℒ_div. This loss is crucial as it constrains ℰ to generate a unique signal for each image.
This loss computes the cosine similarity between the perturbations δ_e of pairs of images within the batch, ensuring varying perturbations across different images. Without this loss, ℰ would create a single perturbation: plain cosine similarity was not enough, as the model learned only two distinct perturbations with a cosine similarity of -1. Consequently, within a batch, the mean cosine similarity tended to approach zero due to the compensatory effect between same and opposite perturbation comparisons. To address this, negative values are clamped to zero, effectively removing contributions from pairs with negative similarity and forcing the perturbations to be orthogonal. An ablation study on the importance of the ℒ_div loss component is provided in <ref>.
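A compact sketch of how such a clamped pairwise-cosine diversity term could be computed over a batch is given below; tensor names and the exact reduction are illustrative.

```python
import torch
import torch.nn.functional as F

def diversity_loss(delta_e):
    """Clamped pairwise cosine similarity between the perturbations of a batch.

    delta_e: tensor of shape (B, ...) with one encoded perturbation per image.
    Negative similarities are clipped to zero, pushing perturbations towards orthogonality.
    """
    flat = delta_e.flatten(1)                                                # (B, D)
    sim = F.cosine_similarity(flat.unsqueeze(1), flat.unsqueeze(0), dim=-1)  # (B, B)
    sim = sim - torch.diag_embed(torch.diagonal(sim))                        # drop the i == j terms
    return sim.clamp(min=0).sum()
```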
Finally, in order to train ℳ to perform manipulation detection, we simply apply binary cross-entropy to the output of the multi-layer perceptron that processes x_cls,N, supervised by binary labels indicating whether we are processing τ(I) (protected) vs I or G(τ(I)) (unprotected or manipulated).
In addition, we randomly sum a small Gaussian noise to the image provided as input to the detection loss during training.
By doing so, we explicitly force our model to distinguish our perturbation from noise applied to the images enabling a more robust protection.
The overall loss employed to optimize the model is given by the sum of all previous terms as
ℒ = ℒ_rec + ℒ_map + ℒ_div + ℒ_BCE
§.§ Image protection, manipulation detection and localization
The proposed approach is consistent with prior works that utilize a proactive method for defending against image manipulation. This method is applicable to any individual or organization, such as journalists and media outlets, that wish to safeguard the integrity of their images. For news agencies publishing sensitive content, such as reports on political events or social unrest, the ability to verify whether an image has been altered is critical in preventing the spread of misinformation. A protected image can be shared online, and its authenticity can always be verified by the decoder module, ensuring that any manipulation or tampering becomes detectable. This process is shown in <ref>.
Additionally, in legal or forensic investigations, where image evidence is crucial, this approach offers an extra layer of security. By embedding an invisible protection, law enforcement agencies and legal professionals can ensure that the images presented in court remain untampered from capture to presentation. | null | null | This work introduces a novel solution for proactive image manipulation detection and localization.
Our solution employs a transformer-based encoder conditioned by an input image to generate a specific perturbation. Then a transformer-based decoder is used to extract the perturbation and leverage it to perform manipulation localization and detection.
Unlike previous methods based on fixed protection, our solution generates image-specific perturbations, improving resistance against reversal attacks, while also achieving remarkable detection and localization performance. It is also worth highlighting that the perturbation introduced by our approach has very little impact on the image quality.
Broader impact
The objective of this research is to prevent the misuse of generative image models therefore mitigating the spread of misinformation.
By enabling a more effective detection of manipulated images, we hope to offer a way to bolster trust and integrity in digital content, which is crucial for fields such as journalism, forensics, and law enforcement.
Limitations and future works
The main limitation of our solution is the drop in localization performance when generative models based on a different architecture and paradigm (e.g., diffusion models) are employed.
In order to enhance performance in this regard, it would be interesting to explore the potential of novel architectural approaches for both decoder and encoder modules.
Although our method demonstrates superior performance compared to previous approaches, further investigation is required to assess its suitability for real-world scenarios. Online platforms may apply filters to uploaded images, potentially compromising the embedded protection.
Additionally, it would be worthwhile to assess whether the methodology employed for perturbation generation can be repurposed for other tasks, such as adversarial attacks. These represent promising directions for further research and advancement in the field. |
http://arxiv.org/abs/2409.17298v1 | 20240925191654 | Sparsity, Regularization and Causality in Agricultural Yield: The Case of Paddy Rice in Peru | ["Rita Rocio Guzman-Lopez", "Luis Huamanchumo", "Kevin Fernandez", "Oscar Cutipa-Luque", "Yhon Tiahuallpa", "Helder Rojas"] | stat.ME | ["stat.ME", "cs.LG", "stat.AP", "stat.ML"] |
§ INTRODUCTION
Today, precision agriculture is undergoing rapid transformation due to the integration of advanced technologies such as pattern recognition, machine learning, and the use of remotely sensed data and imagery <cit.>. These innovations have drastically improved farmers' ability to forecast agricultural yields with unprecedented accuracy. By analyzing large volumes of data and detecting hidden patterns, these technologies enable the prediction of crop yields, identification of potential problems, and optimization of resource use, including water, fertilizers, and pesticides. However, in the regional context—particularly in Peru—the application of these technologies for agricultural production forecasting remains underexplored, leaving substantial potential yet to be tapped <cit.>.
Therefore, in this work, we are interested in studying how machine learning techniques, combined with climatological and geospatial data, remote sensing data, and imagery, can be used to improve the predictive capacity of agricultural yields. In particular, we aim to investigate the identification of causal relationships between remote sensing variables, such as NDVI, precipitation, and temperature, and agricultural yields of certain crops. Additionally, we are interested in using these causal relationships to build simple and parsimonious machine learning models that accurately forecast agricultural yields. To this end, we focus on rice crop yield data in Peru as a case study for our proposed techniques and methodologies.
It is important to mention that neither the choice of the agricultural product nor the specific geographical area limits the scope of our conclusions and results. However, it is also important to highlight that, despite the relevance of rice in Peruvian agriculture, there is a scarcity of local research specifically addressing rice yield forecasting using advanced techniques. Therefore, this study specifically aims to develop and investigate how the use of sparsity, regularization, and machine learning techniques, combined with remote sensing variables, can positively influence the accuracy of agricultural yield forecasts, contributing to this sector and to the development of innovative methodologies that allow for yield prediction, production optimization, and the sustainability of this crop.
In this context, several advanced methodologies employed in this work are highlighted. Among these are techniques for extracting remote sensing data and integrating them with the National Agricultural Survey (ENA), both of which significantly influence crop yield. Notably, the regression model with Elastic-Net regularization stands out, offering enhanced flexibility by combining the penalties associated with two standard rules with desirable properties. This approach achieves a balance between variable selection and parameter regulation, which was used to identify causal relationships between remote sensing variables and agricultural yield <cit.>. Since our goal is to identify parsimonious causal relationships between variables, we will employ sparsity-inducing techniques that align with qualitative field criteria. Additionally, Generalized Additive Models (GAM) will be applied to capture nonlinear relationships between predictor variables and agricultural yield. Finally, the XGBoost model will be implemented, a highly flexible machine learning technique for agricultural yield prediction. XGBoost is capable of handling large datasets and capturing complex interactions between variables, significantly improving prediction accuracy. Therefore, these last two models will be employed to obtain agricultural yield forecasts based on the previously identified causal relationships, and their results will be compared with those obtained from the Elastic-Net regularization regression model.
§.§ Outline
The article is organized as follows. In Section sec:2, we describe the study area and explain how the dataset was structured and obtained, including the extraction and processing of remote sensing data and its integration with the Peruvian National Agrarian Survey (ENA). We also provide a detailed explanation of the data preprocessing and modeling phase, where we applied techniques such as regression with Elastic-Net regularization, Generalized Additive Models (GAM), and the XGBoost model. In Section sec:3, we present and discuss the results obtained from these models, analyzing their performance and how causal relationships between remote sensing variables and agricultural yield were identified. In Section sec:4, we present the conclusions of the study, emphasizing the main findings and demonstrating how the use of sparsity, regularization, and machine learning techniques, combined with remote sensing variables, can enhance the prediction of agricultural yield. Finally, in Section sec:5, we outline the specific contributions made by the authors to this work.
§ MATERIALS AND METHODS
§.§ Area of study
As mentioned earlier, we use paddy rice production in Peru as a case study for our methodologies. Several regions of Peru were selected as study areas, focusing on those with the highest concentration of paddy rice production. The primary source of information for this analysis is the agricultural censuses conducted in Peru between 2015 and 2018, which provide detailed and up-to-date data on farming practices and characteristics[<https://www.datosabiertos.gob.pe/search/type/dataset?query=encuesta+nacional+agropecuaria sort_by=changed sort_order=DESC>].
To ensure proper organization and georeferencing of the collected data, a specific coding system was developed. This system allows for the identification of the location of each study area, facilitating both spatial and temporal analysis. The coding system includes the department code (CCDD), which identifies the administrative region; the province code (CCPP), which specifies the subdivision within the department; the district code (CCDI), which details the local subdivision; and the cluster code, which groups smaller areas or sampling units within a district. Additionally, latitude and longitude coordinates were incorporated, as demonstrated in Table tab1, which illustrates the implementation of this coding system in the project. This highlights its applicability in identifying and monitoring paddy rice production areas over time.
§.§ Remote sensing data
The satellite images used in this study were obtained through remote sensing and processed using open-access tools like Google Earth Engine (GEE)[<https://code.earthengine.google.com/>]. On this platform, radiometric and atmospheric corrections were applied, and cloud-induced variability was addressed to ensure high-resolution images<cit.>. These images are crucial for generating accurate information from relevant indices, as shown in Figure sens.
Key remote sensing variables that significantly influence crop development and yield were identified and selected. The first of these is the Normalized Difference Vegetation Index (NDVI), a crucial metric for assessing vegetation health <cit.>, which is calculated as follows:
NDVI = (ρ_NIR - ρ_RED) / (ρ_NIR + ρ_RED),
where ρ_NIR is the reflectance in the near-infrared band and ρ_RED the reflectance in the red band. The data for this variable was obtained from the MOD13Q1 product, which provides NDVI time series with a frequency of 16 days <cit.>. This product allows categorizing land surface properties and biological processes, as well as primary production and land cover changes. The sampled NDVI series is shown in Figure ser-ndvi.
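As an illustration, the index can be computed from two reflectance arrays in a couple of lines of NumPy; the small epsilon added to the denominator is an assumption to avoid division by zero and is not part of the original formula.

```python
import numpy as np

def ndvi(nir, red, eps=1e-8):
    """Normalized Difference Vegetation Index from two reflectance arrays of equal shape."""
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    return (nir - red) / (nir + red + eps)  # eps (assumed) avoids division by zero
```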
The second variable is Precipitation (PREC), which directly affects soil moisture and is therefore a critical factor for crop growth <cit.>. Although the specific formula for determining precipitation using CHIRPS Pentad is not explicitly provided, the process involves combining satellite data with information from weather stations <cit.>. The general procedure can be described as follows:
PREC = f(S, T, C),
where PREC represents the estimated precipitation, S corresponds to satellite data, T refers to ground station data, and C includes applied corrections and adjustments. This process generates precipitation time series in a gridded format, which is useful for trend analysis and seasonal drought monitoring. The sampled PREC time series is shown in Figure ser-prec.
Finally, the third remote sensing variable is Temperature (TEMP), which plays a crucial role in crop germination and development <cit.>. This variable is obtained from the MOD11A1 product, and although no specific formula is provided, the general form is as follows:
TEMP = a + b(T_31 + T_32) + c(T_31 - T_32) + d(T_31 - T_32)^2,
where TEMP represents the land surface temperature (LST), and T_31 and T_32 are the brightness temperatures in bands 31 and 32, respectively <cit.>. The coefficients a, b, c, d are empirically determined and vary with atmospheric and surface conditions. To estimate land surface temperature, algorithms that combine satellite data with weather station observations are employed. The sampled TEMP time series is shown in Figure ser-temp.
§.§ Data preprocessing
Once the remotely sensed data and relevant agricultural census data were extracted, it was crucial to ensure robust temporal consistency and uniform quality and frequency before proceeding with the analysis. To achieve this, a Spline interpolation process was implemented, allowing the establishment of a weekly frequency in the NDVI, PREC, and TEMP series <cit.>. This process not only enhanced the accuracy in capturing the temporal variability of the data, but also enabled the division of the data into twelve lags, referred to as lags, for the NDVI, Precipitation, and Temperature variables. As a result, after interpolation, the three remote sensing variables are provided as time series with a weekly frequency.
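A minimal sketch of this interpolation step, assuming a pandas Series with a sorted DatetimeIndex of unique timestamps and using SciPy's cubic spline, could look as follows; the function name and the weekly grid construction are illustrative.

```python
import pandas as pd
from scipy.interpolate import CubicSpline

def to_weekly(series: pd.Series) -> pd.Series:
    """Interpolate an irregular series (e.g., 16-day NDVI) to a weekly grid with a cubic spline."""
    t0 = series.index[0]
    t = (series.index - t0).days.to_numpy(dtype=float)          # observation times in days
    spline = CubicSpline(t, series.to_numpy(dtype=float))
    weekly_index = pd.date_range(series.index.min(), series.index.max(), freq="W")
    t_new = (weekly_index - t0).days.to_numpy(dtype=float)
    return pd.Series(spline(t_new), index=weekly_index)
```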
Additionally, considering the complex and highly nonlinear nature of the relationships between the remote sensing variables and agricultural yield, we chose to include both first- and second-order variations of the NDVI, Precipitation, and Temperature variables, along with their respective time lags. Due to the physical interpretation of these first- and second-order differences, we refer to these new variables as the velocities and accelerations of the remotely sensed variables. For instance, NDVI velocity—analogously defined for the Precipitation and Temperature variables—is described as the rate of change in NDVI values between consecutive periods:
VEL_NDVI_t := Δ NDVI_t =NDVI_t - NDVI_t-1,
where NDVI_t is the NDVI corresponding to week t. Additionally, NDVI acceleration—analogously defined for the Precipitation and Temperature variables—represents the change in NDVI velocity between consecutive periods:
ACCEL_NDVI_t := Δ^2 NDVI_t= VEL_NDVI_t-VEL_NDVI_t-1.
Once the velocity and acceleration variables were defined for the three series, we also incorporated the lags of up to 12 weeks for these new variables into the analysis. Specifically, the following sequences of variables were considered
{VEL_NDVI_t-d}_d=1^12, {ACCEL_NDVI_t-d}_d=1^12,
{VEL_PREC_t-d}_d=1^12, {ACCEL_PREC_t-d}_d=1^12,
{VEL_TEMP_t-d}_d=1^12, {ACCEL_TEMP_t-d}_d=1^12.
These newly derived variables allow us to capture higher-order dynamic relationships that would be difficult to detect using the original variables alone. An important finding in this study, as we will demonstrate, is that these new variables significantly improve the predictive capacity of the models. They enable us to incorporate dynamic effects into the relationships in a straightforward manner.
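The construction of these dynamic features amounts to first and second differences followed by shifting; a possible pandas sketch (with hypothetical column names) is shown below.

```python
import pandas as pd

def dynamic_features(weekly: pd.Series, name: str, max_lag: int = 12) -> pd.DataFrame:
    """First differences (velocities), second differences (accelerations) and their lags."""
    vel = weekly.diff()      # VEL_t   = X_t   - X_{t-1}
    accel = vel.diff()       # ACCEL_t = VEL_t - VEL_{t-1}
    feats = {}
    for d in range(1, max_lag + 1):
        feats[f"VEL_{name}_lag{d}"] = vel.shift(d)      # hypothetical column names
        feats[f"ACCEL_{name}_lag{d}"] = accel.shift(d)
    return pd.DataFrame(feats)
```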
§.§ Data Set
The development of the dataset for this study involved several key considerations. First, we ensured that the crop under study was homogeneous, focusing on agricultural areas where only one type of crop was grown, which allowed for more accurate data collection. Additionally, we selected a transitory crop, which undergoes distinct phenological stages such as sowing, growth, and harvest. Another crucial criterion was the ability to clearly observe and distinguish the crop using satellite imagery. For these reasons, along with its social importance, we chose to study paddy rice, a crop of significant relevance in Peru that met all the above criteria.
Once the crop areas were identified, we proceeded with the extraction of their variables and characteristics, drawing from two main data sources. The first source was the National Agrarian Survey (ENA), which includes variables characterizing the use of good agricultural practices on the farms associated with the sampled crop areas. The second data source consisted of remote sensing data, extracted using open-access tools such as Google Earth Engine (GEE). For this source, algorithms were developed to obtain spatial information on NDVI, PREC, and TEMP, based on latitude, longitude, and multispectral images of the sampled plots. These series were then interpolated to obtain weekly observations, with lags of up to twelve weeks before the harvest date.
It is important to note that limitations related to inaccuracies in the geographic information system for rice cultivation provided by the ENA required significant resources for data cleaning and correction. Combined with the limited resources available for the study, this resulted in the identification of only 348 paddy rice plots across different regions of Peru. However, it is worth emphasizing that with additional resources, we could significantly expand the sample size, thereby strengthening our results and conclusions.
Finally, after completing the variable engineering process—which involved creating velocity and acceleration variables from the remote sensing data—we integrated this information with control variables extracted from the National Agrarian Survey, as shown in Table Tabla:Variables. This integration resulted in a comprehensive dataset of 348 records, which served as the basis for the analysis conducted in the modeling phase described in detail below.
§.§ Modeling Phase
The final dataset considered for this study is composed of 𝒟={(x_i, y_i)}_i=1^N, where N=348 represents the number of sampled plots. Here, the response variable y_i ∈ℝ, labeled as Prod-Hect, denotes the agricultural yield of the crop, measured in tons per hectare, where the harvest occurred in week T_i. Additionally, the covariate vector x_i∈ℝ^81 consists of two groups of variables.
The first group, denoted by z_i∈ℝ^9, includes variables that characterize the application of good agricultural practices for crop i. This set of variables is defined as follows:
z_i = (P204_TIPO_i, P206_INI_i, P208_i, P211_1_i,
P211_2_i, P211_4_i, P212_i, P213_i, P213_i),
where the labels are described in Table Tabla:Variables. It is important to note that the variables comprising z_i were extracted from the ENA, corresponding to the year immediately following the harvest week T_i. The second group consists of the series VEL_NDVI_t^i, ACCEL_NDVI_t^i, VEL_PREC_t^i, ACCEL_PREC_t^i, VEL_TEMP_t^i, and ACCEL_TEMP_t^i, where the superscript indicates that these series pertain to crop i. This is expressed as follows:
w_i=( {VEL_NDVI_T_i-d^i}_d=1^12, {ACCEL_NDVI_T_i-d^i}_d=1^12,
{VEL_PREC_T_i-d^i}_d=1^12, {ACCEL_PREC_T_i-d^i}_d=1^12,
{VEL_TEMP_T_i-d^i}_d=1^12, {ACCEL_TEMP_T_i-d^i}_d=1^12) .
Therefore, the covariate vector consists of 81 variables and is represented as follows:
x_i=(z_i,w_i).
Finally, our dataset 𝒟∈ℝ^348× 82 includes both cross-sectional and longitudinal variables that, as will be shown later, effectively characterize the corresponding agricultural yields. This dataset encompasses remote sensing variables related to climatic and geospatial conditions up to 12 weeks prior to the harvest. As will be discussed later, these temporal lags will enable us to identify causal relationships between these variables and agricultural yield. Next, having established the database for this study, we will outline the three methodologies we will employ to obtain our results and conclusions.
§.§.§ Regression with Elastic-Net Regularization
Given the dataset 𝒟, our objective is to forecast agricultural yield y_i using the predictors x_i defined earlier. To achieve this, we assume a linear regression structure between the variables:
y_i=β_0 + x_i^⊤β + ξ_i,
where the intercept β_0 and the weights β=(β_1,…,β_81) are unknown parameters, and ξ_i represents the error term. Since our goal is to construct a simple and parsimonious model, and given that we have a large number of predictor variables that are closely related, we choose to induce moderate sparsity in the parameter vector. Therefore, to obtain the parameter estimates (β̂_0, β̂), we opt for Elastic-Net regularization <cit.>, which involves solving the convex optimization problem:
min_(β_0, β) ∈ℝ×ℝ^81 {1/2∑_i=1^N( y_i - β_0 - x_i^⊤β)^2 + λ[ 1/2(1 - α)‖β‖_2^2 + α‖β‖_1 ] },
where ‖β‖_p = ( ∑_i=1^81 |β_i|^p )^1/p represents the standard ℓ^p norm. The penalty hyperparameter λ≥ 0 controls the complexity of the resulting model, while the hyperparameter 0 ≤α≤ 1 governs the desired level of sparsity. For more details, see <cit.>. We chose to set α=0.02, which assigns more weight to the ℓ^2 norm compared to the ℓ^1 norm. This asymmetry in the weights results in a calibration of the induced sparsity in the parameter vector β that aligns with qualitative field criteria regarding the expected relationships.
To determine the optimal value of λ, we employed the standard cross-validation criterion, evaluating the mean squared error (MSE) of the cross-validation for different values of λ on a logarithmic scale, as shown in Figure Res: Lambda. This procedure yielded an optimal value of λ=2.58. Finally, once both hyperparameters were calibrated, we solved the problem (<ref>) using convex optimization algorithms extensively detailed in <cit.>. Consequently, the solution to (<ref>) provides us with a sparse estimate of the parameter vector for the model (<ref>).
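As a reference point, a comparable model can be fitted with scikit-learn's ElasticNetCV; note that scikit-learn scales the squared loss by 1/(2N), so its l1_ratio plays the role of α and its alpha grid the role of λ, but the selected value will not coincide numerically with the λ reported above. X_train and y_train are placeholder names.

```python
from sklearn.linear_model import ElasticNetCV

# l1_ratio plays the role of alpha in the text (0.02: mostly ridge, a little lasso),
# while the cross-validated alpha grid plays the role of the penalty lambda.
model = ElasticNetCV(l1_ratio=0.02, n_alphas=100, cv=5, max_iter=50_000)
model.fit(X_train, y_train)        # X_train: (N, 81) covariates, y_train: yield in t/ha (placeholders)
sparse_coefficients = model.coef_  # near-zero entries reveal the induced sparsity
```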
§.§.§ Gradient Tree Boosting
Let q:ℝ^81→𝒯 represent the structure of a tree that maps the characteristics of a crop x_i to the index of the corresponding leaf. The weight vector of its leaves is given by ω=(ω_1,…,ω_|𝒯|)∈ℝ^|𝒯|, where ω_k denotes the score of the k-th leaf. Here, 𝒯 is the set of leaves of the tree, and |𝒯| indicates the total number of leaves. To obtain the prediction of agricultural yield ŷ_i, we will use an additive ensemble of κ of these trees, denoted by ϕ, which can be expressed as follows:
ŷ_i = ϕ(x_i) = ∑_k=1^κ f_k(x_i), f_k ∈ℱ,
where ℱ = {f:ℝ^81→ℝ| f(x_i) = ω_q(x_i)} denotes the space of regression trees, also known as CART <cit.>. It is important to note that each f_k corresponds to an independent tree structure q with its associated leaf weights ω.
Based on the dataset 𝒟, the learning of the functions f_k used in the model (<ref>) is achieved by solving the regularized optimization problem:
min_f_k∈ℱ, ∀ k ∑_i=1^N (ŷ_i- y_i)^2 + ∑_k=1^κ{γ |𝒯| + 1/2λ‖ω‖^2_2},
where the hyperparameter γ penalizes complexity due to the depth of the trees, and λ regularizes the weights of the trees to prevent overfitting. The standard learning algorithm used to solve (<ref>) is based on gradient methods, which is why this model is commonly referred to as Gradient Tree Boosting (XGBoost) <cit.>. The hyperparameters in (<ref>) are calibrated using established methodologies. Specifically, we perform optimal selection through 3-fold cross-validation, as illustrated in Figure fig: CrossValidation. Through this procedure, we obtain optimal values of γ=0.1 and λ=0.6. Additionally, to regularize the set of tree leaves 𝒯, we optimally limit the maximum depth of the trees to 5.
§.§.§ Semi-parametric Additive Model
As a semi-parametric alternative, we choose a Generalized Additive Model (GAM) structure, which can be expressed in our case as follows:
ŷ_i=θ_0+z_i^⊤θ+∑_j=1^72 f_j(w_i^(j)),
where w_i^(j) is the j-th coordinate of the vector defined in (<ref>). In this model, θ∈ℝ^9 represents a parameter vector, θ_0 ∈ℝ is the intercept, and f_j are smooth functions to be estimated using the dataset 𝒟. To estimate the model in (<ref>), we adopt the widely used approach of representing the functions f_j with reduced-rank smoothing splines that result from solving variational problems. For more details, see <cit.>.
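One possible way to fit such a structure is with the pygam package, using linear terms for the nine control variables and spline terms for the 72 remote sensing features; this is a sketch under those assumptions (with X and y as placeholder arrays), not the authors' exact estimation procedure.

```python
from pygam import LinearGAM, l, s

# Linear terms for the nine control variables z_i, smooth spline terms f_j for the
# 72 velocity/acceleration features w_i (columns 9..80 of the design matrix X).
terms = l(0)
for j in range(1, 9):
    terms = terms + l(j)
for j in range(9, 81):
    terms = terms + s(j)
gam = LinearGAM(terms).fit(X, y)   # X: (348, 81) covariates, y: yield (placeholders)
```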
§ RESULTS AND DISCUSSION
As mentioned earlier, we have two groups of predictor variables. The first group, z_i, consists of variables that characterize the use of good agricultural practices in crops. These variables serve as control variables; that is, we are not interested in their direct effects but rather use them to control for the influences of other factors that may affect the relationship between the predictors and the response variable. The second group, w_i, includes the velocities and accelerations of the remote sensing variables—specifically, the velocities and accelerations of NDVI, PREC, TEMP, and their respective time lags, as defined in (<ref>), (<ref>), and (<ref>). Our primary interest in this study is to understand the causal relationships between the remote sensing variables and agricultural yield, particularly focusing on the parameters (or coefficients) associated with the variables in the vector w_i.
Regarding the velocity variables, the parameter vectors obtained from model (<ref>) through (<ref>) for NDVI and TEMP are highly sparse, in contrast to the parameter vector for PREC, which is dense; see Figure fig: Velocidad. The sparsity in the velocities of NDVI and Temperature suggests that the effects of variations in these variables on agricultural yield are delayed. For instance, first-order variations in NDVI affect agricultural yield only 8 or 9 weeks after they occur, as shown in Figure fig: Velocidad. A similar pattern is observed with Temperature variations, which have a lag of 10 to 12 weeks. It is important to note that these lag periods are not precise and may vary with the sample. Nonetheless, a significant qualitative conclusion is that variations in these two variables do not immediately impact agricultural yield. The lag associated with both climatic variables may be attributed to the crop germination process.
In contrast, variations in Precipitation have an immediate and lasting effect on agricultural yield. The scenario changes when considering the acceleration variables: second-order variations in NDVI impact agricultural yield between 6 to 9 weeks later, while similar variations in Precipitation and Temperature affect it approximately 3 weeks later; see Figure fig: aceleracion. The acceleration parameters associated with all three variables, obtained from model (<ref>) through (<ref>), are also sparse vectors.
Based on these analyses, we can assert that the velocities and accelerations of NDVI, Precipitation, and Temperature have a causal effect on agricultural yield. Since these causal relationships involve time lags, we can state that the relationship is in the Granger sense <cit.>. It is essential to highlight that these causal relationships are absent in the original remote sensing variables; constructing the velocity and acceleration variables is necessary to establish such causal connections.
To compare predictive capacity following the identification of causal relationships between the remote sensing variables and agricultural yield, we chose to use the XGBoost model described in (<ref>) and (<ref>). This approach allows us to capture the complex non-linear patterns between the predictor variables x_i and the response variable y_i. Given that XGBoost is a highly flexible non-parametric model, combined with the constraints posed by a small sample size, there is a risk of overfitting when relationships are spurious or synthetic. In this context, our construction of velocity and acceleration variables helps mitigate this risk, as these variables maintain causal relationships with our response variable.
Since the Elastic-Net regularized regression model is fully parametric and XGBoost is entirely non-parametric, we will also include a semi-parametric alternative in our comparative prediction analysis: the GAM model described in (<ref>). To compare the three models, the dataset was split into training and test sets in an 80% to 20% ratio, respectively. The performance of each model in both samples is presented in Table tab:mse-modelos.
The XGBoost model demonstrates a good fit, as evidenced by its low MSE value; however, the notable difference in MSE between the training and test samples suggests potential overfitting. In contrast, the Elastic-Net regularized regression model shows a smaller increase in MSE values, indicating greater stability and better generalization ability. Meanwhile, the GAM model exhibits high MSE values in both the training and test samples compared to the other two models.
These conclusions regarding the performance of the three models may be attributed to the limited amount of data available for this study, which restricts XGBoost's ability to perform effective cross-validation and adequately capture the interactions between the variables and agricultural yield. The most significant finding is that the construction of velocity and acceleration variables derived from remote sensing data, due to their causal nature, can substantially enhance the development of models for predicting agricultural yields.
§ CONCLUSIONS
This study demonstrates that remote sensing variables contain valuable predictive information about agricultural production. However, the relationships between these variables and production are neither linear nor straightforward; instead, they are complex and difficult to identify without appropriate dynamic transformations. For this reason, we propose employing sparsity techniques based on Elastic-Net regularized regression, alongside dynamic lag criteria. This approach enables effective management of the correlations between the velocity and acceleration variables derived from remote sensing data, while also enhancing the accuracy of causal relationship identification.
Furthermore, the results indicate a causal relationship, in the Granger sense, between the remote sensing variables and agricultural yield, highlighting their predictive capability—particularly when integrated with agricultural census data. Our main contribution lies in identifying the types of dynamic transformations that should be applied to remote sensing variables for effective use in agricultural prediction models. Therefore, utilizing these transformations along with machine learning techniques represents a promising strategy for developing simpler and more accurate predictive models, significantly improving forecasting capabilities in agriculture.
§ CONTRIBUTIONS AND FINDINGS
This work makes several significant contributions. First, it emphasizes the integration of remote sensing data with agricultural censuses, resulting in the creation of a robust and enhanced database. This combination provides accurate and up-to-date information on both the study area and the crop, facilitating more comprehensive analyses. Second, variable engineering was conducted based on the extracted agricultural and climatic data, enabling the capture of non-linear patterns that influence crop yield. Finally, the results and conclusions of this study lead to substantial improvements in crop forecasting, offering a deeper understanding of how factors such as climate, soil moisture, and vegetation health impact growth and yield.
[chen2016xgboost] Chen, T. & Guestrin, C. Xgboost: A scalable tree boosting system. Proceedings Of The 22nd Acm Sigkdd International Conference On Knowledge Discovery And Data Mining. pp. 785-794 (2016)
[james2013introduction] James, G. An introduction to statistical learning. (Springer, 2013)
[hastie2017generalized] Hastie, T. Generalized additive models. Statistical Models In S. pp. 249-307 (2017)
[tibshirani1996regression] Tibshirani, R. Regression shrinkage and selection via the lasso. Journal Of The Royal Statistical Society Series B: Statistical Methodology. 58, 267-288 (1996)
[zou2005regularization] Zou, H. & Hastie, T. Regularization and variable selection via the elastic net. Journal Of The Royal Statistical Society Series B: Statistical Methodology. 67, 301-320 (2005)
[duran2020prometeo] Duran-Lopez, L., Dominguez-Morales, J., Conde-Martin, A., Vicente-Diaz, S. & Linares-Barranco, A. PROMETEO: A CNN-based computer-aided diagnosis system for WSI prostate cancer detection. IEEE Access. 8 pp. 128613-128628 (2020)
[rencher2008linear] Rencher, A. & Schaalje, G. Linear models in statistics. (John Wiley & Sons, 2008)
[ambrosio2002correccion] Ambrosio, G., González, J. & Arevalo, V. Corrección radiométrica y geométrica de imágenes para la detección de cambios en una serie temporal. Málaga, España. (2002)
[saunders1988improved] Saunders, R. & Kriebel, K. An improved method for detecting clear sky and cloudy radiances from AVHRR data. International Journal Of Remote Sensing. 9, 123-150 (1988)
[yengoh2015use] Yengoh, G., Dent, D., Olsson, L., Tengberg, A. & Tucker III, C. Use of the Normalized Difference Vegetation Index (NDVI) to assess land degradation at multiple scales: current status, future trends, and practical considerations. (Springer, 2015)
[volante2015expansion] Volante, J., Mosciaro, J., Morales Poclava, M., Vale, L., Castrillo, S., Sawchik, J., Tiscornia, G., Maldonado, I., Vega, A., Trujillo, R. & Others Expansión agrícola en Argentina, Bolivia, Paraguay, Uruguay y Chile entre 2000-2010: Caracterización espacial mediante series temporales de índices de vegetación. RIA. Revista De Investigaciones Agropecuarias. 41, 179-191 (2015)
[soto2002instructivo] Soto Ortiz, R., Vega Marrero, G. & Tamajón Navarro, A. Instructivo técnico del cultivo de Cymbopogon citratus (DC) Stapf (caña santa). Revista Cubana De Plantas Medicinales. 7 (2002)
[funk2014quasi] Funk, C., Peterson, P., Landsfeld, M., Pedreros, D., Verdin, J., Rowland, J., Romero, B., Husak, G., Michaelsen, J. & Verdin, A. A quasi-global precipitation time series for drought monitoring. (US Geological Survey, 2014)
[sanchez2012efecto] Sanchez Garcia, J. & Loayaza Jaramillo, E. Efecto del tiempo de germinación sobre las caracteristicas físicas, reologicas y tecnologicas de la harina del arroz integral variedad iniap 15, cosecha verano. (2012)
[guha2019analytical] Guha, S., Govil, H. & Diwan, P. Analytical study of seasonal variability in land surface temperature with normalized difference vegetation index, normalized difference water index, normalized difference built-up index, and normalized multiband drought index. Journal Of Applied Remote Sensing. 13, 024518 (2019)
[martinez2009vegetacion] Martínez, M. Dinámica de la vegetación a partir del análisis de series temporales del NDVI utilizando la transformada wavelet. Teledetección Del Medio Ambiente. 113, 1823-1842 (2009)
[wongsai2017annual] Wongsai, N., Wongsai, S. & Huete, A. Annual seasonality extraction using the cubic spline function and decadal trend in temporal daytime MODIS LST data. Remote Sensing. 9, 1254 (2017)
[tirado2014evaluacion] Tirado Ospina, Y. & Barreto Ortiz, J. Evaluación de la competitividad del arroz colombiano frente al estadounidense: Un análisis de la seguridad alimentaria en el marco del TLC. (Ibagué: Universidad del Tolima, 2014)
[estrada2012modelacion] Estrada Sifontes, V. & Pacheco Moya, R. Modelación hidrológica con HEC-HMS en cuencas montañosas de la región oriental de Cuba. Ingeniería Hidráulica Y Ambiental. 33, 71-80 (2012)
[liakos2018machine] Liakos, K., Busato, P., Moshou, D., Pearson, S. & Bochtis, D. Machine learning in agriculture: A review. Sensors. 18, 2674 (2018)
[kussul2017deep] Kussul, N., Lavreniuk, M., Skakun, S. & Shelestov, A. Deep learning classification of land cover and crop types using remote sensing data. IEEE Geoscience And Remote Sensing Letters. 14, 778-782 (2017)
[briceno2019deforestacion] Briceño, N., Castillo, E., Quintana, J., Cruz, S., Lopez, R. & Others Deforestación en la Amazonia peruana: Indices de cambios de cobertura y uso del suelo basado en SIG. Boletín De La Asociación De Geógrafos Españoles. (2019)
[fernandezanalisis] Fernandez Molina, K. & Mendoza Bernardillo, J. Análisis de clasificadores de imágenes satelitales en deslizamiento de tierra con técnicas de machine learning aplicado al distrito de Ancash. (Universidad Peruana de Ciencias Aplicadas (UPC))
[hastie2015statistical] Hastie, T., Tibshirani, R. & Wainwright, M. Statistical learning with sparsity. Monographs On Statistics And Applied Probability. 143, 8 (2015)
[breiman2017classification] Breiman, L. Classification and regression trees. (Routledge, 2017)
[wood2017generalized] Wood, S. Generalized additive models: an introduction with R. (Chapman, 2017)
[granger1969investigating] Granger, C. Investigating causal relations by econometric models and cross-spectral methods. Econometrica: Journal Of The Econometric Society. pp. 424-438 (1969)
|
http://arxiv.org/abs/2409.17775v1 | 20240926121352 | UNICORN: A Deep Learning Model for Integrating Multi-Stain Data in Histopathology | [
"Valentin Koch",
"Sabine Bauer",
"Valerio Luppberger",
"Michael Joner",
"Heribert Schunkert",
"Julia A. Schnabel",
"Moritz von Scheidt",
"Carsten Marr"
] | cs.CV | [
"cs.CV"
] |
Force Fields for Molecular Dynamics Simulations of Charged Dust Particles with Finite Size in Complex Plasmas
S. K. Kodanova^1, 2
=============================================================================================================
Background: The integration of multi-stain histopathology images through deep learning poses a significant challenge in digital histopathology. Current multi-modal approaches struggle with data heterogeneity and missing data. This study aims to overcome these limitations by developing a novel transformer model for multi-stain integration that can handle missing data during training as well as inference.
Methods: We propose UNICORN (UNiversal modality Integration Network for CORonary classificatioN), a multi-modal transformer capable of processing multi-stain histopathology for atherosclerosis severity class prediction. The architecture comprises a two-stage, end-to-end trainable model with specialized modules utilizing transformer self-attention blocks. The initial stage employs domain-specific expert modules to extract features from each modality. In the subsequent stage, an aggregation expert module integrates these features by learning the interactions between the different data modalities.
Results: Evaluation was performed using a multi-class dataset of atherosclerotic lesions from the Munich Cardiovascular Studies Biobank (MISSION), using over 4,000 paired multi-stain whole slide images (WSIs) from 170 deceased individuals on 7 prespecified segments of the coronary tree, each stained according to four histopathological protocols. UNICORN achieved a classification accuracy of 0.67, outperforming other state-of-the-art models. The model effectively identifies relevant tissue phenotypes across stainings and implicitly models disease progression.
Conclusion: Our proposed multi-modal transformer model addresses key challenges in medical data analysis, including data heterogeneity and missing modalities. Explainability and the model's effectiveness in predicting atherosclerosis progression underscores its potential for broader applications in medical research.
§ INTRODUCTION
Atherosclerosis, a complex inflammatory disease of the arterial wall, is a leading cause of cardiovascular morbidity and mortality worldwide. Histopathological classification of atherosclerosis, as proposed by Stary et al. <cit.> and adapted by Virmani et al. <cit.> and Otsuka et al. <cit.>, plays a crucial role in assessing disease severity, progression and potential therapeutic interventions. This classification scheme assesses features such as the thickness of the fibrous cap, the presence of a necrotic core, the degree of inflammation and the extent of calcification within arterial plaques <cit.>.
Whole slide imaging (WSI) allows detailed visualization of tissue samples at micrometer resolution, providing critical insight into various medical conditions. While the most commonly used staining method, haematoxylin and eosin (H&E), provides a sufficient overview of the tissue and is used to diagnose many diseases, in some cases further staining protocols are required to fully understand underlying tissue characteristics<cit.>. For example, immunohistochemistry (IHC) staining is essential for cancer detection, providing critical prognostic, diagnostic, and therapeutic information that enables precise and personalized treatment strategies <cit.>. Specific staining methods, like von Kossa silver stain or Movat pentachrome stain, highlight particular tissue components, such as mineralisation or connective tissue composition and lipid distribution <cit.>. Manual integration and interpretation of multi-stain data is challenging, as multiple high-resolution images that are each potentially gigabytes in size need to be examined in detail.
In recent years, computational pathology, focusing on the analysis of digitized pathology slides, has seen remarkable advances, largely driven by modern machine learning techniques, especially deep learning. These advances cover several areas including disease classification, tissue segmentation, and mutation prediction <cit.>. Given the large size of WSIs, a common strategy is to decompose them into manageable-sized image patches. Subsequently, a pre-trained feature extractor condenses each patch into a low-dimensional feature vector, which is then processed by a Multiple Instance Learning (MIL) network <cit.>. Recent developments in transformer architectures, in particular Vision Transformers (ViT), have shown promising results as the backbone of state-of-the-art feature extractors <cit.>, and when applied as WSI classification networks<cit.>. Although there has been a lot of research on cancer prediction and using deep learning on WSIs, computational pathology is quite new in the field of atherosclerosis. Holmberg et al. used co-registered histopathology and OCT images to improve segmentation of calcification and lesions in OCT images using a UNet <cit.>. The open-source tool Vesseg has been developed to segment lesions using U-nets on H&E stained brachiocephalic arteries<cit.>.
In our work, we introduce UNICORN, a two-stage, end-to-end trainable transformer architecture capable of handling different modalities, in particular features obtained from WSIs with various staining protocols. To our knowledge, this is the first AI model tailored for multi-stain histopathological classification. We show how UNICORN implicitly captures disease progression in atherosclerosis, handles missing data effectively during both training and inference, demonstrating its robust applicability in real-world settings.
§ RESULTS
§.§ UNICORN architecture
UNICORN (UNiversal stain Integration network for CORonary classificatioN) is a network capable of integrating and processing heterogeneous data across different tissue stainings (figure <ref>). The model is end-to-end trainable and exhibits resilience to data incompleteness during both training and inference. The network comprises specialized modules, each having the same architecture. These modules include a two-layer self-attention transformer block with four attention heads, as illustrated in figure <ref>e. Initially, the input data is encoded: the entire slide image is tiled into 256x256 px sized patches. A pre-trained feature extractor is then used to generate embeddings for each patch (see Methods for details). Each staining is individually processed by specialized domain expert modules that learn unique domain-specific features (figure <ref>d). Each of the expert modules propagates a modality token (MT), analogous to the class token (CLS) used in vision transformers, to the aggregation expert module. The aggregation expert module learns to aggregate the information from the different stainings in a CLS token, which is ultimately used as input to the fully-connected (FC) layer which outputs a classification score (figure <ref>d). Since the aggregation expert module, like all expert modules, is a transformer that can handle a variable number of input tokens, it is capable of processing data with missing modalities both during training and inference. UNICORN is evaluated on a multi-stain, multi-class atherosclerosis dataset where it classifies five stages of coronary atherosclerosis.
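A minimal PyTorch sketch of this two-stage design is given below. It is our own simplified reconstruction from the description above and not the authors' released code: every available staining is encoded by a small two-layer, four-head transformer expert that prepends a learnable summary (modality) token, and the aggregation expert attends over whichever modality tokens are present together with its own CLS-like token, so a missing stain simply means one token fewer. The feature dimension, stain names and class count are illustrative placeholders.

import torch
import torch.nn as nn

class ExpertModule(nn.Module):
    # Two-layer, four-head self-attention block with a learnable summary token, used both
    # as a per-stain expert (modality token) and as the aggregation expert (CLS token).
    def __init__(self, dim=768, heads=4, layers=2):
        super().__init__()
        block = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(block, num_layers=layers)
        self.token = nn.Parameter(torch.zeros(1, 1, dim))

    def forward(self, x):                                  # x: (batch, n_tokens, dim)
        tok = self.token.expand(x.size(0), -1, -1)
        out = self.encoder(torch.cat([tok, x], dim=1))
        return out[:, 0]                                   # summary token after attention

class UNICORNSketch(nn.Module):
    def __init__(self, stains=("HE", "EvG", "vK", "Movat"), dim=768, n_classes=5):
        super().__init__()
        self.experts = nn.ModuleDict({s: ExpertModule(dim) for s in stains})
        self.aggregator = ExpertModule(dim)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, bags):                               # bags: dict stain -> (1, n_patches, dim)
        mts = [self.experts[s](feats) for s, feats in bags.items()]
        cls = self.aggregator(torch.stack(mts, dim=1))     # aggregate whatever modalities are present
        return self.head(cls)

# Toy forward pass with one staining missing (vK absent), using random patch features.
model = UNICORNSketch()
bags = {s: torch.randn(1, 200, 768) for s in ("HE", "EvG", "Movat")}
print(model(bags).shape)                                   # torch.Size([1, 5])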
§.§ MISSION: Multi-stain atherosclerosis classification dataset
The dataset consists of tissue sets from 170 deceased individuals. Each set contains 7 segments from different parts of the coronary tree (figure <ref>a): proximal and distal part of the right coronary artery, main stem, proximal and distal part of the left coronary artery, proximal and distal part of the left circumflex coronary artery. Each segment is stained using four different staining methods: Hematoxylin and Eosin (H&E), Elastica van Gieson (EvG), von Kossa (vK) and Movat pentachrome (Movat) stain. For lesion classification, a modified scheme according to the AHA and the classification method described by Virmani et al. <cit.> was employed: adaptive intima thickening (AIT), pathological intima thickening (PIT), early fibroatheroma (EFA), late fibroatheroma (LFA) and calcified fibroatheroma (CFA). These labels (see Methods and figure <ref> for detailed description) have been assigned by two biomedical experts after reviewing all four stainings. Disease severity increases from AIT through PIT, EFA, and LFA to CFA. Class-specific characteristics are highlighted in figure <ref>b: AIT samples show intima thickening (IT) but intact cell structure and smooth muscle cell layer in a proteoglycan- and collagen-rich matrix. PIT is described as preatherosclerotic lesions with focal fat-laden macrophages (red triangles), inflammatory cells (blue triangles) and fatty streaks. The gray circle shows matrix remodeling resulting in a varying degree of smooth muscle cells. In the EFA sample, the black circle highlights the necrotic core (NC), the blue arrows show inflammatory cells and the red arrows point to the lipid pool. The black dotted circle shows cell debris. In the LFA case, the gray circle shows the NC with cholesterol clefts and a fibrocalcific surrounding, and the red arrows point to neovascularization. The red arrows in CFA show calcification in the necrotic core in the vK stained slide.
§.§ Evaluation
UNICORN is evaluated in a 5-fold cross validation scheme, splitting the dataset into 60/20/20 train/validation/test set for each fold (see Methods for details). We show that the network outperforms simple approaches that do not take into account the multi-modality of the data (Table 1). Our model achieves an average F1-Score of 0.66 ± 0.04 (mean ± standard deviation) and an accuracy of 0.67 ± 0.05 compared to an F1-Score of 0.63 ± 0.05 and accuracy of 0.64 ± 0.05 for the second-best model.
Most misclassifications are observed between adjacent disease stages, which are inherently difficult to classify and distinguish even for highly trained experts (figure <ref>a). Our model demonstrates strong performance with single-staining inputs, but achieves optimal results when utilizing all available staining methods (figure <ref>b), showing its ability to aggregate information across stains. When using only one staining as input, the vK staining performs best in classifying CFA and LFA and second best on EFA. Using H&E staining works best for the classes AIT, PIT and EFA. Movat and EvG perform roughly on par across classes (figure <ref>b).
The attention of the CLS token to the stain specific modality token in the expert aggregation model (figure <ref>c) demonstrates that the model effectively focuses on the most relevant staining for its predictions. H&E staining is predominantly given the highest attention, except for LFA and CFA, where vK staining is essential, aligning well with findings from figure <ref>b. EvG is given slightly higher attention than Movat on AIT and PIT, again in concordance with <ref>b, where EvG staining performs slightly better on these classes.
In figure <ref>d, one staining is left out at inference to visualize the difference in accuracy compared to inference with all four stainings. This gives another measurement of how important a specific staining is to the model's performance. The results again align well with the other experiments: excluding the von Kossa silver stain reduces the classification performance in advanced phenotypes (LFA, CFA), while H&E is the most important staining for the remaining classes. Removing EvG or Movat has little effect and sometimes even improves performance; potential explanations are discussed below.
The results presented in figures <ref>b-c are not only consistent with one another, but also illustrate the relevance of the different stainings for manual characterization and the underlying biological processes. For instance, the vK staining, which imparts a black chromatic hue to calcifications, is of particular importance for the identification and differentiation of the calcified fibroatheroma (CFA) and lipid-rich late fibrous atheromas (LFA). As also illustrated in figures <ref>b and <ref>c, H&E staining remains a fundamental tool for evaluating cell densities and general tissue structure, which is the most crucial aspect for differentiating AIT and PIT. AIT and PIT represent different stages of atherogenesis, with variations in cell density, inflammatory infiltration, and extracellular matrix organization.
§.§ UNICORN highlights explainable features
Three different visualization methods are provided to improve explainability. The first is the typical attention map, obtained by attention rollout <cit.> over both stages of the transformer (figure <ref>). Figure <ref>b illustrates a tissue phenotype visualization, where tissue regions are highlighted based on the model's interpretation of the class most indicative of each area (see Methods); for practical reasons, only the three most progressed disease tissue types are considered here. Figure <ref>c highlights areas of attention computed by attention rollout independently per staining. Figure <ref>d combines the information of the first two methods into "class attention", which highlights the tissue phenotypes that the model identifies with high attention and associates with its predicted class. In summary, three meaningful visualization maps are given: a) "Where does the model attend?", b) "Where does the model find phenotypes of the respective classes?", and c) "Where does it attend to the predicted class?"; in our experiments the last of these highlights the areas of interest best and is used in the following figures. These visualizations improve the interpretability of the model's decisions significantly, allowing researchers and clinicians to map the model's predictions to specific tissue regions. This approach not only helps confirm that the model's outputs agree with recognized pathological features but also increases confidence in the model by transparently highlighting its decision-making process.
Moreover, figure <ref>a illustrates that UNICORN is capable of accurately modeling the natural progression of atherosclerosis, as evidenced by a UMAP of an additional internal dataset comprising 768 segments from 114 individuals. The capacity of UNICORN to capture this progression is particularly noteworthy as it implies that the model is not only performing classification but is also learning a representation that aligns with the underlying biological process of the disease. This suggests that the model's decision-making is grounded in clinically relevant features, which could enhance its utility in both diagnostic and research settings.
Furthermore, the progression model exhibits consistency when considering individual stainings, as shown in figure <ref>b. The model's robustness when limited to a single staining demonstrates its adaptability and resilience in scenarios where multimodal data might not be available. This means that the model can still provide meaningful insights even when only partial data is accessible, which is common in real-world clinical environments. The ability to generalize to a single staining suggests that the features learned by the model are rooted in the biological characteristics of the tissue, rather than being artifacts of the specific staining technique. As illustrated in figure <ref>c, the model’s capacity to differentiate between various stainings is supported by empirical evidence. This ability to distinguish the particular staining processing is a crucial step towards effectively handling missing data, as it inherently recognizes what information is absent.
Figure <ref> shows, as an example for the CFA class, class attention maps and corresponding high class attention patches according to figure 3d, revealing regions identified by UNICORN as critical for classification. Additional examples for AIT, PIT, EFA and LFA cases are shown in the appendix (supplementary figure <ref>). The high class attention patches in the CFA case demonstrate the most relevant regions for the phenotype learned by the model and closely match the expert opinion. The sections show a lipid-rich necrotic core consisting of extracellular lipid deposits and cellular debris across the four different stains. The necrotic core is covered by a fibrous cap composed of smooth muscle cells, collagen and other extracellular matrix components.
In the H&E-stained sections, the algorithm focuses on regions characterized by dense fibrotic areas and calcified necrotic cores (blue arrows), which are typical of advanced fibroatheromas. These areas, highlighted in purple, correlate with expert annotations marking regions of calcification and fibrosis. In the EvG stain, the algorithm detects regions rich in extracellular matrix components, particularly elastic and collagen fibers, shown as red wavy structures (blue arrows). These structures, particularly in the shoulder region, outer necrotic core and intima are critical in distinguishing CFA from earlier lesion stages, as CFA indicates less extracellular matrix due to remodeling in the vessel wall. Von Kossa (vK) staining accentuates calcifications, with the model placing particular attention to black-stained regions, indicating areas of significant calcium deposition within the intima. This focus assists in the differentiation of CFA, where calcification is a defining feature. Finally, the Movat pentachrome stain highlights the lipid-rich necrotic core in yellow-green, which the algorithm correctly identifies as a key pathological feature of CFA. By focusing on these areas, the algorithm demonstrates its ability to distinguish CFA from less advanced fibroatheromas. These visualization results, together with the results shown in figure <ref>, underscore the algorithm's ability to accurately identify relevant histologic features across multiple staining techniques.
§.§ Discussion
In our study, we present UNICORN, a transformer-based model designed to analyze multi-stain histopathological data. UNICORN represents a significant advancement in histopathology, particularly in integrating multi-modal data for improved diagnostic accuracy. As precision medicine and personalized healthcare advance, the model’s capability to handle missing data and synthesize information from various sources makes it a valuable tool in research and clinical practice. UNICORN might augment pathologist workflows by providing explainable preliminary assessments and highlighting critical areas of interest in tissue samples, thereby expediting the diagnostic process. Its ability to perform inference on only a subset of possible stains can reduce the need for additional staining, optimizing resource use and cost.
The UNICORN framework holds promise for application beyond coronary artery disease. Its architecture is adaptable to other disease phenotypes and could be augmented by integrating high-throughput functional or longitudinal data. This integration would offer deeper insights into disease mechanisms and support more precise phenotyping. Future research could also investigate the replacement of initial expert modules with alternative networks, such as Convolutional Neural Networks or Vision Transformers, to further enhance model performance. Additionally, the incorporation of clinical and high-throughput data into the model could provide a more comprehensive understanding of disease specific pathology, functional biology and improve classification accuracy.
Despite its promising capabilities, UNICORN has several limitations. The performance of the model depends on the quality and diversity of the staining protocols used. In cases where specific stains, such as Movat or Elastica van Gieson, are less informative when other stains are present, the model's effectiveness may be reduced. Additionally, the current feature extractor, optimized primarily for H&E staining, may not fully leverage information from other stains, potentially limiting the model's overall performance. Future work should explore fine-tuning strategies and the integration of alternative networks to address this issue. Furthermore, while UNICORN handles missing data well, its generalizability to other phenotypes and the integration of multi-omics data remain to be validated. Finally, the practical implementation in research and clinical settings will require significant investment in training and adaptation of existing workflows.
In conclusion, the UNICORN model represents a significant advancement in the integration and analysis of multi-stain histopathological data. Its ability to handle missing modalities and provide comprehensive insights across diverse staining techniques positions it as a valuable tool for enhancing diagnostic accuracy and efficiency. While the model shows promising results in atherosclerosis classification, further optimization and validation are needed to fully realize its potential across various disease phenotypes. The integration of UNICORN into research and clinical workflows has the potential to significantly improve diagnostic consistency and support explainable personalized medicine.
§.§ Methods
§.§.§ Histological data compilation, selection and staining
The Munich cardIovaScular StudIes biObaNk (MISSION) was launched in 2019 and comprises seven cardiovascular-relevant tissue samples from each individual, such as coronary and carotid artery tissue samples, as well as myocardium. It also includes blood and plasma, liver, skeletal muscle and various adipose tissues from more than 1000 deceased individuals, collected in formalin for FFPE (formalin-fixed, paraffin-embedded) sections and fresh frozen at -80°C. Tissue sections were deparaffinized in xylene substitute and rehydrated through a graded alcohol series. A total of 1045 tissue samples from 170 individuals were analyzed in this study, covering the following coronary arteries: each proximal and distal RCA (right coronary artery), the LAD (left anterior descending artery) and the LCX (left circumflex coronary artery) as well as the LM (left main) (figure <ref>a). Each tissue section was subjected to a series of histopathological analyses, including hematoxylin and eosin (H&E) staining, Elastica van Gieson (EvG) staining, von Kossa (vK) silver staining, and Movat Pentachrome (Movat) staining. These techniques were employed to assess the histopathological characteristics of atherosclerotic lesions.
The histological classification of the atherosclerotic lesions was performed in accordance with the American Heart Association (AHA) classification and the adapted classification system proposed by Virmani et al. <cit.>. The lesions were categorized into the following five stages (figure <ref>b). Adaptive intima thickening (AIT) is defined by diffuse intimal thickening composed of smooth muscle cells and extracellular matrix in the absence of significant lipid accumulation or inflammatory infiltration. Pathological intima thickening (PIT) is primarily defined by the accumulation of extracellular lipid and inflammatory cells within the intima, though a necrotic core is absent. For the vulnerable stages, early, late, and calcified fibroatheroma (EFA, LFA, and CFA) are distinguished. Early fibroatheroma (EFA) is characterized by a thin fibrous cap and a developing lipid core, whereas the late fibroatheroma (LFA) is more advanced, exhibiting a larger necrotic core and a thicker, though still potentially unstable, fibrous cap. Additionally, LFA displays greater inflammatory involvement. Calcified fibroatheroma (CFA) represents the late stage of fibroatheroma development, with microscopic and macroscopic calcification present within the necrotic core. The fibrous cap may be thickened, and the lesion exhibits reduced cellularity due to extensive calcification.
The institutional review board and ethics committee of the Technical University of Munich, Germany, approved the protocol of the MISSION Biobank (2018-325-S-KK - 22.08.2018). The study was performed in accordance with the provisions of the Declaration of Helsinki and the International Conference on Harmonisation guidelines for good clinical practice. Human data from the MISSION Biobank can be requested by qualified researchers at the German Heart Center Munich.
§.§.§ Image patching and feature extraction
A publicly available pipeline from Wagner et al. <cit.> was used to extract patches and the corresponding features from the WSIs. As the feature extractor, CTransPath <cit.> was used for each staining. Blurred regions and uncoloured background are excluded using a Canny edge detection algorithm <cit.> with a threshold of at least one detected edge per patch. A 5x resolution was used for atherosclerosis prediction, as it was empirically found to work best.
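The background filtering step can be approximated as in the sketch below, which tiles an image and keeps only tiles whose Canny edge map contains at least one edge pixel. It assumes NumPy and OpenCV and is meant only to illustrate the idea of rejecting blurred or empty tiles; the Canny thresholds and patch handling are placeholders rather than the exact settings of the Wagner et al. pipeline.

import numpy as np
import cv2

def tissue_patches(level_image, patch_size=256, min_edge_pixels=1):
    # Yield (row, col, patch) for tiles whose Canny edge map is not empty,
    # i.e. discard uncoloured background and heavily blurred regions.
    h, w = level_image.shape[:2]
    for r in range(0, h - patch_size + 1, patch_size):
        for c in range(0, w - patch_size + 1, patch_size):
            patch = level_image[r:r + patch_size, c:c + patch_size]
            gray = cv2.cvtColor(patch, cv2.COLOR_RGB2GRAY)
            edges = cv2.Canny(gray, 40, 100)               # illustrative thresholds
            if np.count_nonzero(edges) >= min_edge_pixels:
                yield r, c, patch

# Toy usage on a random array standing in for a slide read at low magnification.
dummy = (np.random.rand(1024, 1024, 3) * 255).astype(np.uint8)
print("patches kept:", sum(1 for _ in tissue_patches(dummy)))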
§.§.§ Model training
UNICORN is trained for 30 epochs using an AdamW <cit.> optimizer. The learning rate and the weight decay are set to 2.0e-5. Gradients are accumulated across 16 batches with batch size 1 before an update step is made. To increase robustness and force the model to handle missing data well, it is trained with strong domain dropout, where domains are randomly masked and not used as input. During training, one of the input domains is randomly chosen for each input and is never masked; every other staining or additional domain is masked with a probability of 0.7. As the loss function, standard cross-entropy is used.
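The staining dropout rule can be written compactly as in the following sketch, a hypothetical re-implementation of the masking described above: one randomly chosen modality is always kept, and each remaining modality is dropped independently with probability 0.7.

import random

def domain_dropout(bags, p_drop=0.7):
    # bags: dict stain -> feature tensor. One randomly chosen stain is always kept;
    # every other stain is removed independently with probability p_drop.
    protected = random.choice(list(bags))
    return {s: f for s, f in bags.items() if s == protected or random.random() > p_drop}

# Example call; over many draws roughly 30% of the non-protected stains survive.
example = {"HE": "feat_HE", "EvG": "feat_EvG", "vK": "feat_vK", "Movat": "feat_Movat"}
print(domain_dropout(example))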
§.§.§ Model attention visualization
To obtain high-resolution attention maps, slides are patched with 95% overlap between patches, after which features are extracted. This results in 400 distinct feature bags, each representing the full slide. Each of these bags is independently forwarded through the network, and the attention generated by attention rollout <cit.> on the expert modules is saved for every patch of every bag. Additionally, patches are forwarded individually, letting the model predict a class from one patch only, which results in the per-patch class scores. Using these class scores, the multiclass visualization (figure <ref>b) is obtained by multiplying the color corresponding to a certain class by the class probability of a patch. Multiplying the attention scores with the multiclass score of the predicted class then gives the class attention score. Finally, the scores across overlapping patches are averaged for the detailed maps, and all scores are normalized between 0 and 1.
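Numerically, the class attention map can be assembled as in the sketch below: per-patch rollout attention is multiplied by the per-patch probability of the predicted class, averaged over the overlapping bags, and rescaled to [0, 1]. The array shapes and the simple averaging over bags are our own simplification of the procedure described above.

import numpy as np

def class_attention(attention, class_probs, predicted_class):
    # attention: (n_bags, n_patches) rollout scores; class_probs: (n_bags, n_patches, n_classes)
    # single-patch class probabilities. Returns one score per patch position, normalised to [0, 1].
    score = attention * class_probs[:, :, predicted_class]   # class attention per bag and patch
    score = score.mean(axis=0)                                # average over overlapping bags
    return (score - score.min()) / (score.max() - score.min() + 1e-8)

# Toy example with 4 bags of 10 patches and 5 classes.
rng = np.random.default_rng(0)
att = rng.random((4, 10))
probs = rng.dirichlet(np.ones(5), size=(4, 10))
print(class_attention(att, probs, predicted_class=2))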
§.§ Acknowledgements
V.K. was supported by the Helmholtz Association under the joint research school “Munich School for Data Science - MUDS”. This work was also supported by the BMBF-funded de.NBI Cloud within the German Network for Bioinformatics Infrastructure (de.NBI) (031A532B, 031A533A, 031A533B, 031A534A, 031A535A, 031A537A, 031A537B, 031A537C, 031A537D, 031A538A). C.M. has received funding from the European Research Council under the European Union’s Horizon 2020 research and innovation program (grant agreement number 866411 and 101113551) and is supported by the Hightech Agenda Bayern. M.v.S. is supported by an excellence grant of the German Center for Cardiovascular Research (DZHK-81X3600506), the German Heart Foundation (Deutsche Herzstiftung e.V.), a Leducq PlaqOmics Junior Investigator Grant and a Junior Research Group Cardiovascular Diseases Grant of the CORONA Foundation (S199/10085/2021). The acquisition of the automated ZEISS AxioScan 7 slide scanning system was supported by the German Research Foundation (DFG-95/1713-1) and the German Center for Cardiovascular Research (DZHK-81Z0600501). H.S. is co-applicant of the British Heart Foundation (BHF)/German Centre of Cardiovascular Research (DZHK)-collaboration (DZHK-BHF: 81X2600522) and the Leducq Foundation for Cardiovascular Research (PlaqOmics: 18CVD02). Additional support has been received from the German Research Foundation (DFG) as part of the Sonderforschungsbereich SFB 1123 (B02) and the Sonderforschungsbereich SFB TRR 267 (B05). Further, we kindly acknowledge the support of the Bavarian State Ministry of Health and Care who funded this work with DigiMed Bayern (grant No: DMB-1805-0001) within its Masterplan “Bayern Digital II” and of the German Federal Ministry of Economics and Energy in its scheme of ModulMax (grant No: ZF4590201BA8). This study was supported by Deutsches CHIP Register e.V. (www.chip-register.de) and funded by the German Center for Cardiovascular Research (DZHK), Berlin, Germany, (DZHK81X2200145 and DZHK-81X2600520).
§.§ Contributions
VK led the development of the code, model design, and conducted the evaluations. SB was responsible for data collection and annotation. VK and SB were the primary authors of the manuscript and created the figures. VL and SB started the project. MJ provided guidance to SB on classification tasks, and JS and CM offered advisory support to VK. MS and CM oversaw the project’s initiation. All authors contributed to the critical revision and enhancement of the manuscript.
splncs04
§ SUPPLEMENTAL MATERIAL
| Atherosclerosis, a complex inflammatory disease of the arterial wall, is a leading cause of cardiovascular morbidity and mortality worldwide. Histopathological classification of atherosclerosis, as proposed by Stary et al. <cit.> and adapted by Virmani et al. <cit.> and Otsuka et al. <cit.>, plays a crucial role in assessing disease severity, progression and potential therapeutic interventions. This classification scheme assesses features such as the thickness of the fibrous cap, the presence of a necrotic core, the degree of inflammation and the extent of calcification within arterial plaques <cit.>.
Whole slide imaging (WSI) allows detailed visualization of tissue samples at micrometer resolution, providing critical insight into various medical conditions. While the most commonly used staining method, haematoxylin and eosin (H&E), provides a sufficient overview of the tissue and is used to diagnose many diseases, in some cases further staining protocols are required to fully understand underlying tissue characteristics<cit.>. For example, immunohistochemistry (IHC) staining is essential for cancer detection, providing critical prognostic, diagnostic, and therapeutic information that enables precise and personalized treatment strategies <cit.>. Specific staining methods, like von Kossa silver stain or Movat pentachrome stain, highlight particular tissue components, such as mineralisation or connective tissue composition and lipid distribution <cit.>. Manual integration and interpretation of multi-stain data is challenging, as multiple high-resolution images that are each potentially gigabytes in size need to be examined in detail.
In recent years, computational pathology, focusing on the analysis of digitized pathology slides, has seen remarkable advances, largely driven by modern machine learning techniques, especially deep learning. These advances cover several areas including disease classification, tissue segmentation, and mutation prediction <cit.>. Given the large size of WSIs, a common strategy is to decompose them into manageable-sized image patches. Subsequently, a pre-trained feature extractor condenses each patch into a low-dimensional feature vector, which is then processed by a Multiple Instance Learning (MIL) network <cit.>. Recent developments in transformer architectures, in particular Vision Transformers (ViT), have shown promising results as the backbone of state-of-the-art feature extractors <cit.>, and when applied as WSI classification networks<cit.>. Although there has been a lot of research on cancer prediction and using deep learning on WSIs, computational pathology is quite new in the field of atherosclerosis. Holmberg et al. used co-registered histopathology and OCT images to improve segmentation of calcification and lesions in OCT images using a UNet <cit.>. The open-source tool Vesseg has been developed to segment lesions using U-nets on H&E stained brachiocephalic arteries<cit.>.
In our work, we introduce UNICORN, a two-stage, end-to-end trainable transformer architecture capable of handling different modalities, in particular features obtained from WSIs with various staining protocols. To our knowledge, this is the first AI model tailored for multi-stain histopathological classification. We show how UNICORN implicitly captures disease progression in atherosclerosis, handles missing data effectively during both training and inference, demonstrating its robust applicability in real-world settings. | null | null | §.§ UNICORN architecture
UNICORN (UNiversal stain Integration network for CORonary classificatioN) is a network capable of integrating and processing heterogeneous data across different tissue stainings (figure <ref>). The model is end-to-end trainable and exhibits resilience to data incompleteness during both training and inference. The network comprises specialized modules, each having the same architecture. These modules include a two-layer self-attention transformer block with four attention heads, as illustrated in figure <ref>e. Initially, the input data is encoded: the entire slide image is tiled into 256x256 px sized patches. A pre-trained feature extractor is then used to generate embeddings for each patch (see Methods for details). Each staining is individually processed by specialized domain expert modules that learn unique domain-specific features (figure <ref>d). Each of the expert modules propagates a modality token (MT), analogous to the class token (CLS) used in vision transformers, to the aggregation expert module. The aggregation expert module learns to aggregate the information from the different stainings in a CLS token, which is ultimately used as input to the fully-connected (FC) layer which outputs a classification score (figure <ref>d). Since the aggregation expert module, like all expert modules, is a transformer that can handle a variable number of input tokens, it is capable of processing data with missing modalities both during training and inference. UNICORN is evaluated on a multi-stain, multi-class atherosclerosis dataset where it classifies five stages of coronary atherosclerosis.
§.§ MISSION: Multi-stain atherosclerosis classification dataset
The dataset consists of tissue sets from 170 deceased individuals. Each set contains 7 segments from different parts of the coronary tree (figure <ref>a): proximal and distal part of the right coronary artery, main stem, proximal and distal part of the left coronary artery, proximal and distal part of the left circumflex coronary artery. Each segment is stained using four different staining methods: Hematoxylin and Eosin (H&E), Elastica van Gieson (EvG), von Kossa (vK) and Movat pentachrome (Movat) stain. For lesion classification, a modified scheme according to the AHA and the classification method described by Virmani et al. <cit.> was employed: adaptive intima thickening (AIT), pathological intima thickening (PIT), early fibroatheroma (EFA), late fibroatheroma (LFA) and calcified fibroatheroma (CFA). These labels (see Methods and figure <ref> for detailed description) have been assigned by two biomedical experts after reviewing all four stainings. Disease severity increases from AIT over PIT, EFA, and LFA to CFA. Class specific characteristics are highlighted in figure <ref>b: AIT samples show intima thickening (IT) but intact cell structure and smooth muscle cell layer in a proteoglycan and collagen rich matrix. PIT is described as preatherosclerotic lesions with focal fat-laden macrophages (red triangles), inflammatory cells (blue triangles) and fatty streaks. The gray circle shows matrix remodeling resulting in a varying degree of smooth muscle cells. In the EFA sample, the black circle highlights the necrotic core (NC), the blue arrows show inflammatory cells and the red arrows point to the lipid pool. The black dotted circle shows cell debris. In the LFA case, the gray circle shows the NC with cholesterol crafts and fibrocalcific surrounding, the red arrows point to neovascularization. The red arrows in CFA show calcification in the necrotic core in the vK stained slide.
§.§ Evaluation
UNICORN is evaluated in a 5-fold cross validation scheme, splitting the dataset into 60/20/20 train/validation/test set for each fold (see Methods for details). We show that the network outperforms simple approaches that do not take into account the multi-modality of the data (Table 1). Our model achieves an average F1-Score of 0.66 ± 0.04 (mean ± standard deviation) and an accuracy of 0.67 ± 0.05 compared to an F1-Score of 0.63 ± 0.05 and accuracy of 0.64 ± 0.05 for the second-best model.
Most misclassifications are observed between adjacent disease stages, which are inherently difficult to classify and distinguish even for highly trained experts (figure <ref>a). Our model demonstrates strong performance with single-staining inputs, but achieves optimal results when utilizing all available staining methods (figure <ref>b), demonstrating its capabilities of aggregating information. When using only one staining as input, the vK staining performs best in classifying CFA and LFA and second best on EFA. Using H&E staining works best for the classes AIT, PIT and EFA. Movat and EvG perform roughly on par across classes (figure <ref>b).
The attention of the CLS token to the stain specific modality token in the expert aggregation model (figure <ref>c) demonstrates that the model effectively focuses on the most relevant staining for its predictions. H&E staining is predominantly given the highest attention, except for LFA and CFA, where vK staining is essential, aligning well with findings from figure <ref>b. EvG is given slightly higher attention than Movat on AIT and PIT, again in concordance with <ref>b, where EvG staining performs slightly better on these classes.
In figure <ref>d, one staining is left out at inference to visualize the difference in accuracy compared to inference with all four stainings. This gives another measurement of how important a specific staining is to the model's performance. The results again align well with the other experiments: Excluding von Kossa silver stain reduces the classification performance in advanced phenotypes (LFA, CFA), while H&E is the most important staining for the remaining classes. Removing EvG or Movat has little effect and sometimes even improves performance, potential explanations are discussed below.
The results presented in figures <ref>b-c are not only consistent with one another, but also illustrate the relevance of the different stainings for manual characterization and the underlying biological processes. For instance, the vK staining, which imparts a black chromatic hue to calcifications, is of particular importance for the identification and differentiation of the calcified fibroatheroma (CFA) and lipid-rich late fibrous atheromas (LFA). As also illustrated in figures <ref>b and <ref>c, H&E staining remains a fundamental tool for evaluating cell densities and general tissue structure, which is the most crucial aspect for differentiating AIT and PIT. AIT and PIT represent different stages of atherogenesis, with variations in cell density, inflammatory infiltration, and extracellular matrix organization.
§.§ UNICORN highlights explainable features
Three different visualization methods are provided to improve explainability: the typical attention mechanism, resulting from an attention rollout <cit.> over both stages of the transformer (figure <ref>). Figure <ref>b illustrates a tissue phenotype visualization, where tissue regions are highlighted based on the model's interpretation of the class most indicative of each area (see Methods). Due to practical reasons, in this case only the three most progressed disease tissue types are considered. Figure <ref>c highlights areas of attention computed by attention rollout independently per staining. Figure <ref>d combines the information of the first two methods into "class attention," which highlights the tissue phenotypes that the model identifies with high attention and associates with its predicted class. In conclusion, three meaningful visualization maps are given: a) “Where does the model attend to?” b) “Where does the model find phenotypes of the respective classes”, and c) “Where does it attend to the predicted class” which in our experiments highlights the areas of interest best and which are used in the following figures. These visualizations improve the interpretability of the model's decisions significantly, allowing researchers and clinicians to map the model’s predictions to specific tissue regions. This method not only helps confirm the accuracy of the model’s outputs with recognized pathological features but also increases confidence in the model by transparently highlighting its decision-making process.
Moreover, figure <ref>a illustrates that UNICORN is capable of accurately modeling the natural progression of atherosclerosis, as evidenced by a UMAP of an additional internal dataset comprising 768 segments from 114 individuals. The capacity of UNICORN to capture this progression is particularly noteworthy as it implies that the model is not only performing classification but is also learning a representation that aligns with the underlying biological process of the disease. This suggests that the model's decision-making is grounded in clinically relevant features, which could enhance its utility in both diagnostic and research settings.
Furthermore, the progression model exhibits consistency when considering individual stainings, as shown in figure <ref>b. The model's robustness when limited to a single staining demonstrates its adaptability and resilience in scenarios where multimodal data might not be available. This means that the model can still provide meaningful insights even when only partial data is accessible, which is common in real-world clinical environments. The ability to generalize to a single staining suggests that the features learned by the model are rooted in the biological characteristics of the tissue, rather than being artifacts of the specific staining technique. As illustrated in figure <ref>c, the model’s capacity to differentiate between various stainings is supported by empirical evidence. This ability to distinguish the particular staining processing is a crucial step towards effectively handling missing data, as it inherently recognizes what information is absent.
Figure <ref> shows, as an example for the CFA class, class attention maps and corresponding high class attention patches according to figure 3d, revealing regions identified by UNICORN as critical for classification. Additional examples for AIT, PIT, EFA and LFA cases are shown in the appendix (supplementary figure <ref>). The high class attention patches in the CFA case demonstrate the most relevant regions for the phenotype learned by the model and closely match the expert opinion. The sections show a lipid-rich necrotic core consisting of extracellular lipid deposits and cellular debris across the four different stains. The necrotic core is covered by a fibrous cap composed of smooth muscle cells, collagen and other extracellular matrix components.
In the H&E-stained sections, the algorithm focuses on regions characterized by dense fibrotic areas and calcified necrotic cores (blue arrows), which are typical of advanced fibroatheromas. These areas, highlighted in purple, correlate with expert annotations marking regions of calcification and fibrosis. In the EvG stain, the algorithm detects regions rich in extracellular matrix components, particularly elastic and collagen fibers, shown as red wavy structures (blue arrows). These structures, particularly in the shoulder region, outer necrotic core and intima are critical in distinguishing CFA from earlier lesion stages, as CFA indicates less extracellular matrix due to remodeling in the vessel wall. Von Kossa (vK) staining accentuates calcifications, with the model placing particular attention to black-stained regions, indicating areas of significant calcium deposition within the intima. This focus assists in the differentiation of CFA, where calcification is a defining feature. Finally, the Movat pentachrome stain highlights the lipid-rich necrotic core in yellow-green, which the algorithm correctly identifies as a key pathological feature of CFA. By focusing on these areas, the algorithm demonstrates its ability to distinguish CFA from less advanced fibroatheromas. These visualization results, together with the results shown in figure <ref>, underscore the algorithm's ability to accurately identify relevant histologic features across multiple staining techniques.
§.§ Discussion
In our study, we present UNICORN, a transformer-based model designed to analyze multi-stain histopathological data. UNICORN represents a significant advancement in histopathology, particularly in integrating multi-modal data for improved diagnostic accuracy. As precision medicine and personalized healthcare advance, the model’s capability to handle missing data and synthesize information from various sources makes it a valuable tool in research and clinical practice. UNICORN might augment pathologist workflows by providing explainable preliminary assessments and highlighting critical areas of interest in tissue samples, thereby expediting the diagnostic process. Its ability to perform inference on only a subset of possible stains can reduce the need for additional staining, optimizing resource use and cost.
The UNICORN framework holds promise for application beyond coronary artery disease. Its architecture is adaptable to other disease phenotypes and could be augmented by integrating high-throughput functional or longitudinal data. This integration would offer deeper insights into disease mechanisms and support more precise phenotyping. Future research could also investigate the replacement of initial expert modules with alternative networks, such as Convolutional Neural Networks or Vision Transformers, to further enhance model performance. Additionally, the incorporation of clinical and high-throughput data into the model could provide a more comprehensive understanding of disease specific pathology, functional biology and improve classification accuracy.
Despite its promising capabilities UNICORN incorporates several limitations. The performance of the model is dependent on the quality and diversity of used staining protocols. In cases where specific stains, such as Movat or Elastica van Gieson, are less informative when other stains are present, the model's effectiveness may be reduced. Additionally, the current feature extractor, optimized primarily for H&E staining, may not fully leverage information from other stains, potentially limiting the model’s overall performance. Future work should explore fine-tuning strategies and the integration of alternative networks to address this issue. Furthermore, while UNICORN handles missing data well, its generalizability to other phenotypes and the integration of multi-omics data remains to be validated. Finally, the practical implementation in research and clinical settings will require significant investment in training and adaptation of existing workflows. In conclusion, the UNICORN model represents a significant advancement in the integration and analysis of multi-stain histopathological data. Its ability to handle missing modalities and provide comprehensive insights across diverse staining techniques positions it as a valuable tool for enhancing diagnostic accuracy and efficiency. While the model shows promising results in atherosclerosis classification, further optimization and validation are needed to fully realize its potential across various disease phenotypes. The integration of UNICORN into research and clinical workflows has the potential to significantly improve diagnostic consistency and support explainable personalized medicine.
§.§ Methods
§.§.§ Histological data compilation, selection and staining
The Munich cardIovaScular StudIes biObaNk (MISSION) was launched in 2019 and comprises seven cardiovascular relevant tissue samples from each individual, such as coronary and carotid artery tissue samples, as well as myocardium. It also includes blood and plasma, liver, skeletal muscle and various adipose tissues from more than 1000 deceased individuals, collected in formalin for FFPE sections (formalin fixed paraffin embedded) and fresh frozen at -80°C. tissue sections were deparaffinized in xylene substitute and rehydrated through graded alcohol series. A total of 1045 tissue samples from 170 individuals were analyzed in this study, covering the following coronary arteries: each proximal and distal RCA (right coronary artery), the LAD (left anterior descending artery) and the LCX (left circumflex coronary artery) as well as the LM (left main) (figure <ref>a). Each tissue section was subjected to a series of histopathological analyses, including hematoxylin and eosin (H&E) staining, Elastica van Gieson (EvG) staining, von Kossa (vK) silver staining, and Movat Pentachrome (Movat) staining. These techniques were employed to assess the histopathological characteristics of atherosclerotic lesions.
The histological classification of the atherosclerotic lesions was performed in accordance with the American Heart Association (AHA) classification and the adapted classification system proposed by Virmani et al. <cit.>. The lesions were categorized into the following five stages (figure <ref>b). Advanced intima thickening (AIT) is defined by diffuse intimal thickening composed of smooth muscle cells and extracellular matrix in the absence of significant lipid accumulation or inflammatory infiltration. The pathological intima thickening (PIT) is primarily defined by the accumulation of extracellular lipid and inflammatory cells within the intima, though a necrotic core is absent. For the vulnerable stages, early, late, and calcified fibroatheroma (EFA, LFA, and CFA) are distinguished. Early fibroatheroma (EFA) is distinguished by a thin fibrous cap and a developing lipid core whereas the late fibroatheroma (LFA) is more advanced, exhibiting a larger necrotic core and a thicker, though still potentially unstable, fibrous cap. Additionally, LFA displays greater inflammatory involvement. Calcified fibroatheroma (CFA) represents the late stage of fibroatheroma development, with microscopically and macroscopically calcification present within the necrotic core. The fibrous cap may be thickened, and the lesion exhibits reduced cellularity due to extensive calcification.
The institutional review board and ethics committee of the Technical University of Munich, Germany, approved the protocol of the MISSION Biobank (2018-325-S-KK - 22.08.2018). The study was performed in accordance with the provisions of the Declaration of Helsinki and the International Conference on Harmonisation guidelines for good clinical practice. Human data from the MISSION Biobank can be requested by qualified researchers at the German Heart Center Munich.
§.§.§ Image patching and feature extraction
A publicly available pipeline from Wagner et al. <cit.> was used to extract patches and the corresponding features from the WSIs. As the feature extractor, CTransPath <cit.> was used for each staining. Blurred regions and uncoloured background are excluded using a Canny edge detection algorithm <cit.> with a threshold of one edge to be detected per patch. A resolution of 5x for atherosclerosis prediction was used, as it empirically has shown to work best.
§.§.§ Model training
UNICORN is trained for 30 epochs using an AdamW <cit.> optimizer. The learning rate and the weight decay are set to 2.0e-5. Gradients are accumulated across 16 batches with batch size 1 before an update step is made. To increase robustness and force the model to be able to handle missing data well, it is trained with high domain dropout, where domains are randomly masked and not used as input. During training, for each input, one of the input domains is randomly chosen that will not be masked; all other stainings or additional domains have a chance of 0.7 to be masked. As the loss function, standard cross-entropy is used.
§.§.§ Model attention visualization
To get high resolution attention maps, slides are patched with 95% overlap between patches, after which features are extracted. This results in 400 distinct feature bags, each representing the full slide. Each of these bags is independently forwarded through the network. The attention that is generated by attention rollout <cit.> on the expert modules for each of the patches for each bag are all saved. Additionally, for each bag, only a single patch is forwarded and let the model predict a class based on one patch only, which results in the class scores. Using these class scores, one can visualize the Multiclass visualization (figure <ref>b) by multiplying the color corresponding to a certain class by the class probability for a patch. Multiplying the attention scores with the multiclass score of the predicted class then gives the class attention score. In the end, the scores across overlapping patches are averaged for the detailed maps. All scores are normalized between 0 and 1 in the end.
§.§ Acknowledgements
V.K. was supported by the Helmholtz Association under the joint research school “Munich School for Data Science - MUDS”. This work was also supported by the BMBF-funded de.NBI Cloud within the German Network for Bioinformatics Infrastructure (de.NBI) (031A532B, 031A533A, 031A533B, 031A534A, 031A535A, 031A537A, 031A537B, 031A537C, 031A537D, 031A538A). C.M. has received funding from the European Research Council under the European Union’s Horizon 2020 research and innovation program (grant agreement number 866411 and 101113551) and is supported by the Hightech Agenda Bayern. M.v.S. is supported by an excellence grant of the German Center for Cardiovascular Research (DZHK-81X3600506), the German Heart Foundation (Deutsche Herzstiftung e.V.), a Leducq PlaqOmics Junior Investigator Grant and a Junior Research Group Cardiovascular Diseases Grant of the CORONA Foundation (S199/10085/2021). The acquisition of the automated ZEISS AxioScan 7 slide scanning system was supported by the German Research Foundation (DFG-95/1713-1) and the German Center for Cardiovascular Research (DZHK-81Z0600501). H.S. is co-applicant of the British Heart Foundation (BHF)/German Centre of Cardiovascular Research (DZHK)-collaboration (DZHK-BHF: 81X2600522) and the Leducq Foundation for Cardiovascular Research (PlaqOmics: 18CVD02). Additional support has been received from the German Research Foundation (DFG) as part of the Sonderforschungsbereich SFB 1123 (B02) and the Sonderforschungsbereich SFB TRR 267 (B05). Further, we kindly acknowledge the support of the Bavarian State Ministry of Health and Care who funded this work with DigiMed Bayern (grant No: DMB-1805-0001) within its Masterplan “Bayern Digital II” and of the German Federal Ministry of Economics and Energy in its scheme of ModulMax (grant No: ZF4590201BA8). This study was supported by Deutsches CHIP Register e.V. (www.chip-register.de) and funded by the German Center for Cardiovascular Research (DZHK), Berlin, Germany, (DZHK81X2200145 and DZHK-81X2600520).
§.§ Contributions
VK led the development of the code, model design, and conducted the evaluations. SB was responsible for data collection and annotation. VK and SB were the primary authors of the manuscript and created the figures. VL and SB started the project. MJ provided guidance to SB on classification tasks, and JS and CM offered advisory support to VK. MS and CM oversaw the project’s initiation. All authors contributed to the critical revision and enhancement of the manuscript.
http://arxiv.org/abs/2409.17350v1 | 20240925205452 | Electronic structure, spin-orbit interaction and electron-phonon coupling of triangular adatom lattices on semiconductor substrates | [
"Lucca Marchetti",
"Matthew Bunney",
"Domenico Di Sante",
"Stephan Rachel"
] | cond-mat.str-el | [
"cond-mat.str-el",
"cond-mat.mtrl-sci"
] |
School of Physics, University of Melbourne, Parkville, VIC 3010, Australia
School of Physics, University of Melbourne, Parkville, VIC 3010, Australia
Institute for Theoretical Solid State Physics, RWTH Aachen University, 52062 Aachen, Germany
Department of Physics and Astronomy, University of Bologna, 40127 Bologna, Italy
School of Physics, University of Melbourne, Parkville, VIC 3010, Australia
§ ABSTRACT
A one-third monolayer of the heavy metals Sn and Pb deposited on semiconductor substrates can lead to a √(3)×√(3) surface reconstruction, constituting an exciting triangular lattice material platform.
A long history of experiments identified charge-ordered and magnetic ground states. These discoveries were accompanied by a decades-long debate of whether electron correlations or other effects involving phonons are the driving force of the symmetry-broken states. The most recent discovery of superconductivity in boron-doped Sn/Si(111) with a T_c between 5K and 9K led to a renewed excitement. Here we revisit the electronic and phononic properties of Sn and Pb adatom triangular lattices on Si(111)
and SiC(0001). For all materials we compute relativistic bandstructures using DFT+U where U is only applied to the substrate atoms in order to adjust the band gap to match the experimental value; as a consequence, some of the resulting tight-binding parameters of the metallic surface band differ substantially compared to previous studies. Remarkably, for Pb/SiC(0001) we predict Rashba spin-orbit coupling as large as 45% of the nearest-neighbor hopping energy. In addition, we compute the phonon spectra and electron-phonon coupling constants for all materials, and for Pb/Si(111) even relativistically although the inclusion of spin-orbit coupling has surprisingly little effect on the electron-phonon coupling constant. We conclude that the resulting couplings are too weak to account for electron-phonon mediated superconductivity in any of these materials.
Electronic structure, spin-orbit interaction and electron–phonon coupling
of triangular adatom lattices on semiconductor substrates
Stephan Rachel
September 28, 2024
§ INTRODUCTION
The material class of a 1/3 monolayer of group-IV adsorbates (Pb, Sn) on semiconductor substrates has a long history. The 1/3 monolayer coverage leads to a √(3)×√(3)R30^∘ reconstruction where the atoms sit on the T_4 sites in a triangular lattice. Thus three out of their four valence bonds are saturated with the substrate, while the fourth orbital remains “dangling” and leads to a half-filled surface band, susceptible to electron-electron interactions. The
symmetry-broken low-temperature ground states have fascinated researchers for decades. It started with Pb/Ge(111) and the observation of a low-temperature surface charge density wave <cit.>. The same group of researchers found a similar phenomenology for the sister compound Sn/Ge(111) <cit.>. However, the transition from the room temperature √(3)×√(3) phase to the low temperature 3× 3 phase associated with charge-ordering could not be explained using first-principle methods, in contrast to Pb/Ge(111) <cit.>. The authors conjectured <cit.> that electron correlations might be responsible for the charge ordering in Sn/Ge(111). Also the role of defects was discussed in this context <cit.>.
The importance of electron-electron interactions was discussed from the beginning <cit.>, but probably the first paper to propose and explicate a Mott–Hubbard scenario was Ref. profeta-07prl086401. By performing local density and Hubbard-U approximation, they proposed both Sn/Ge(111) and Sn/Si(111) as Mott insulating materials.
Ref. cortes-13prb125113 also reported the charge-ordered 3×3 phase in Sn/Ge(111), and characterized it as an intermediate phase between the metallic 3×3 and Mott insulator phases.
The importance of spin-orbit coupling (SOC) was emphasized in Ref. tresca-21prb045126 for Pb/Ge(111) and Pb/Si(111).
Sn/Si(111) was originally believed to feature surface-metallicity, but micro-four-point-probe conductivity measurements revealed insulating behavior from room temperature all the way down to low temperatures <cit.>. Doping by partially replacing Sn atoms with In or Na deposition, leads rather suddenly to a metallic conductivity suggesting a small energy gap of the undoped Sn/Si(111) system. Angle-resolved photoemission spectroscopy performed on Sn/Si(111) revealed that the measured spectral function is incompatible with both 3× 3 and √(3)×√(3) reconstruction, but compatible with a 2√(3)× 2√(3) phase associated with a row-wise antiferromagnetic order <cit.>. The accompanying many-body calculations estimated a quite sizeable Hubbard-U=0.66eV.
The insulating ground state of Sn/Si(111) was also suggested as a consequence of a Slater-type insulator via band magnetism <cit.>. Ref. hansmann-13prl166401 performed a fully first-principle approach, predicting the size of local and non-local Coulomb interactions and derived phase diagrams for various Y/Si(111) compounds (Y=Si, C, Sn, Pb). For Sn/Si(111) they found local Hubbard-U=1eV and nearest-neighbor V_1=0.5eV compared to a bandwidth W∼ 0.5eV, further establishing Sn/Si(111) as a Mott insulator.
The importance of SOC for Y/Si(111) was emphasized in Ref. badrtdinov-16prb224418, where SOC combined with strong electron correlations was even proposed to stabilize magnetic skyrmions due to Dzyaloshinskii-Moriya interactions.
The observation of a reversible 3× 3 to √(3)×√(3) phase transition in Pb/Si(111) by means of variable-temperature scanning tunneling microscopy (STM) and DFT calculations was reported in Ref. <cit.>; the phase transition was found at T_c=86K.
STM and quasi-particle interference (QPI) experiments combined with DFT+U simulations for Pb/Si(111) reported a chiral texture in the charge-ordered phase <cit.>. Phonons along with the strong SOC were proposed as a driving mechanism for the observed charge order. In contrast, a combined STM and QPI study with many-body variational cluster approach simulations emphasized the relevance of non-local Coulomb interactions for the charge-order in Pb/Si(111) <cit.>.
Successful hole-doping of Sn/Si(111) <cit.> and indications for superconductivity <cit.> were reported, supporting the scenario of a doped Mott insulator. Shortly after, the discovery of superconductivity with T_c=4.7K in boron-doped Sn/Si(111) marks a breakthrough <cit.>.
Follow-up theory work suggested that the superconducting ground state might be unconventional and constitute chiral topological superconductivity <cit.>, its nature depending on filling, local and non-local Hubbard interactions. Most recently, experiments on B-doped Sn/Si(111) reported the realization of chiral superconductivity <cit.> with critical temperatures instead close to T_c ∼ 10K.
Adatom lattices on SiC(0001) are the least studied materials amongst our candidates. Previous DFT calculations of Sn/SiC(0001) closely reproduce STM measurements <cit.>. Photoemission data show a deeply gapped state with a gap size of ∼ 2eV; based on dynamical mean-field theory it is argued that it reflects a pronounced Mott-insulating ground state, possibly with antiferromagnetic ordering.
The aim of this paper is to revisit the band structures of the materials Sn/Si(111), Pb/Si(111) and Sn/SiC(0001) in their √(3)×√(3)-reconstructed surface phase with a focus on the strength of spin-orbit coupling (SOC) in Sec. <ref>; we complement our investigation with Pb/SiC(0001) whose √(3)×√(3)-phase has not been reported so far. We do not investigate many-body ground states involving magnetic or charge order using DFT, but rather study and analyze the non-interacting metallic bandstructures.
We further analyze phonon spectra of these materials in Sec. <ref> in order to evaluate the electron-phonon coupling (EPC) strength λ in Sec. <ref>. A relativistic study of EPC for Pb/Si(111) is presented in Sec. <ref>.
The discussion of our results is presented in Sec. <ref>, before we conclude with an outlook in Sec. <ref>.
§ RESULTS I: ELECTRONIC STRUCTURE
The electronic band structure of the four materials with the exception of Pb/SiC(0001) has been reported previously <cit.>.
Here we revisit and compare the four materials. We are interested in their metallic, non-interacting surface bandstructures. By deriving precise tight-binding models of these bands, we provide valuable input for any future many-body analysis of magnetic, charge-ordered, superconducting or otherwise ordered ground states.
While the DFT bandstructures of the four materials are rather similar, it turns out that they realize different regimes of SOC ranging from 10% to 45% of the nearest-neighbor hopping amplitude t_1. This might have major implications for the (strongly) correlated many-body ground states of these materials, as pointed out in a recent study for the plain-vanilla triangular-lattice Rashba-Hubbard model <cit.>.
We use the ab initio package VASP <cit.> within the Perdew-Burke-Ernzerhof (PBE) generalised gradient approximation (GGA) <cit.> to compute the relativistic electronic bandstructures of the four materials. In particular, we employ DFT+U using the approach of Dudarev <cit.> where the Coulomb potential U is applied only to the Si or SiC substrate atoms, but not on the Sn and Pb adatoms. The main effect of U is to vary the gap size of the semiconductor substrate <cit.>; only through changes of the gap size is the metallic surface band influenced. In these calculations we have used a plane-wave cutoff energy of 600 eV and energy convergence criteria of 10^-8eV. For X/SiC(111) 7×7×1 Γ-centered grids were used to sample the Brillouin zone, whereas for X/Si(111) 10×10×1 Γ-centered grids were used for better tight-binding fits close to the Γ-point. The DFT+U bandstructures along with the extracted tight-binding bands (discussed in detail below) are shown in Fig. <ref>, zoomed in on the energy range of the metallic surface band. We find the bandwidths
W/t_1 = 9.71 (W=0.51eV) for Sn/Si(111),
W/t_1 = 7.69 (W=0.53eV) for Pb/Si(111),
W/t_1 = 9.29 (W=0.40eV) for Sn/SiC(0001) and
W/t_1 = 9.97 (W=0.40eV) for Pb/SiC(0001).
In Appendix <ref> we show in Fig. <ref> the same bandstructure plots with larger energy range involving substrate bands.
The four considered materials constitute a case where the metallic surface band is energetically completely separated from the substrate bands, and the description as a single-band tight-binding model is fully justified. The corresponding tight-binding parameters are listed in Tab. <ref> and in Tab. <ref>.
While the isolated triangular lattice of surface adatoms carries a D_6h point group symmetry, the presence of the Si(111) or SiC(0001) substrate breaks inversion symmetry, and reduces the rotational symmetry from C_6 to C_3. This reduces the point group symmetry to C_3v. Group theoretic techniques were used to decompose the Hamiltonian terms, implementational details of which we delegate to Appendix <ref>. Further symmetry analysis of the model has consequences discussed further below.
Interestingly, we find for the nearest-neighbor Rashba amplitude α_1/t_1 the value 0.10 for Sn/Si(111), 0.17 for Sn/SiC(0001), 0.29 for Pb/Si(111) and 0.44 for Pb/SiC(0001). The SOC splitting in Sn/SiC(0001) was previously reported to be 13.7% <cit.>. We attribute the increase in SOC strength in the X/SiC(0001) materials compared to their X/Si(111) counterparts due to the lack of inversion symmetry in the bulk SiC compared to bulk Si. The tight-binding Hamiltonian for our systems
can be written as
H = H_0 + H_R + H_VZ
with
H_0 = ∑_i,j ∑_σ t_ij c^†_iσ c_jσ
H_R = ∑_i,j ∑_σ,σ' iλ_ij c^†_iσ c_jσ' [Υ_ij^∗ (σ×r_ij)·ẑ Υ_ij]_σσ'
H_VZ = ∑_i,j ∑_σ,σ' iν_ij J_ij σ^z_σσ' c^†_iσ c_jσ'
where Υ_ij = e^{i Γ_ij ϕ_ij σ^z/2}.
The in-plane SOC term H_R contains a conventional Rashba splitting in addition to a radial Rashba effect,
H_R = H_R, conv. + H_R, rad
which can be more naturally expressed as
H_R, conv. = ∑_i,j ∑_σ,σ' [iα_ij (σ×r_ij)·ẑ]_σσ' c^†_iσ c_jσ'
and
H_R, rad = ∑_i,j ∑_σ,σ' [iβ_ij Γ_ij (σ·r_ij)]_σσ' c^†_iσ c_jσ'
where α_ij = λ_ijcos(ϕ_ij), β_ij = λ_ijsin(ϕ_ij), and the form factor Γ_ij = -sgn(sin(6θ_ij)). Here θ_ij is given by the position of each lattice site. The radial Rashba effect has been investigated in graphene-based heterostructures <cit.>, though not in triangular lattice systems. The strength of the Rashba splitting is given by λ_ij and the mixing between conventional and radial Rashba terms is controlled by ϕ_ij.
We only find finite ϕ mixings for further-nearest neighbour shells containing 12 sites, where the radial strengths β_ij shown in Tab. <ref> are comparable to the conventional α_ij.
We also find what appears to be a valley Zeeman term, given by H_VZ. This term is finite for nearest neighbours that do not lie on any of the mirror planes of the lattice. The sign of the term is governed by the form factor
ν_ij = -sgn(sin(3θ_ij)).
The strengths J_ij given in Tab. <ref> show that this effect is quite weak, with J_2/t_1 less than 1% in all materials.
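As a minimal numerical illustration of the model defined above, the following sketch builds the corresponding 2×2 Bloch Hamiltonian on the triangular lattice, keeping only the nearest-neighbor hopping t_1 and the conventional Rashba coupling α_1; the numerical values are illustrative placeholders, and the longer-range, radial Rashba and valley Zeeman terms of the tabulated fits are omitted.

```python
import numpy as np

# Nearest-neighbor hopping + conventional Rashba on a triangular lattice (illustrative values).
t1, a1 = 1.0, 0.29            # e.g. Pb/Si(111): alpha_1/t_1 ~ 0.29
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# six nearest-neighbor vectors of the triangular lattice (lattice constant = 1)
angles = np.arange(6) * np.pi / 3
deltas = np.stack([np.cos(angles), np.sin(angles)], axis=1)

def h_bloch(kx, ky):
    """2x2 Bloch Hamiltonian H(k) summed over the six nearest neighbors."""
    h = np.zeros((2, 2), dtype=complex)
    for d in deltas:
        # hopping matrix for bond d: -t1*I + i*alpha_1*(sigma x d).z
        hop = -t1 * I2 + 1j * a1 * (sx * d[1] - sy * d[0])
        h += np.exp(1j * (kx * d[0] + ky * d[1])) * hop
    return h

# sample the Gamma-K-M-Gamma path and diagonalize
G = np.array([0.0, 0.0])
K = np.array([4 * np.pi / 3, 0.0])
M = np.array([np.pi, np.pi / np.sqrt(3)])
path = np.concatenate([np.linspace(G, K, 60), np.linspace(K, M, 30), np.linspace(M, G, 60)])
bands = np.array([np.linalg.eigvalsh(h_bloch(kx, ky)) for kx, ky in path])
print(bands.shape)   # (150, 2): two Rashba-split branches along the path
```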
Some of our derived tight-binding parameters deviate significantly from previously reported ones <cit.>. Most of the change is caused by the inclusion of U in the DFT calculations. By increasing the gap size in order to match the experimentally observed values, the propagation of the Wannier functions is suppressed. The larger band gap acts as a wall, resulting in highly localised Wannier functions and a decrease in hybridisation with the bulk. Consequently, our Wannier functions (shown in Fig. <ref> a-d) are much more compact as compared with previous studies <cit.>. This leads to a reduction in the longer-ranged hoppings, t_2/t_1 from -0.389 <cit.> to -0.272 for Sn/Si(111), corresponding to a 30% decrease. While second and further neighbor hoppings are still non-negligible, these materials are closer to the nearest-neighbor triangular lattice than previously thought. More importantly, Rashba SOC can be quite sizeable for some of these materials. We note, however, that our DFT calculations without inclusion of any U also led to some discrepancies with the literature. We assume that this is due to the higher resolution used in this work, as well as differing exchange-correlation functionals in the DFT calculations. In Appendix <ref> we compare in Tab. <ref> our derived parameters with previous works.
As previously mentioned, the presence of the substrate reduces the point group symmetry of the surface triangular lattice from D_6h to C_3v, by breaking inversion symmetry and reducing the C_6 rotation symmetry to a C_3. This can be seen in Fig. <ref>, which shows that the substrate also preserves the mirror planes which do not/do go through the bonds. We can further understand the terms in the Hamiltonian (<ref>) in three subsets by a hierarchy of symmetry breaking. The kinetic Hamiltonian H_0 preserves the full D_6h point symmetry group. The Rashba SOC terms H_R only break inversion symmetry and therefore have a C_6v point group symmetry. Note that the radial Rashba effect is only C_6v symmetric on 12 site coordination shells. The valley Zeeman-type terms H_VZ additionally break the rotational symmetry and have a C_3v symmetry. Anisotropic valley Zeeman terms symmetric under C_3v are only possible on nearest-neighbor shells where the mirror planes do not intersect with bonds, and hence are only finite on these shells. We also note that additional Rashba coupling terms would be allowed under the reduction to a C_3v point group symmetry, but these were not observed.
This symmetry hierarchy serves as one explanation of the relative strengths of the fitted parameters <cit.>. Initially employed to understand bandstructures of bulk semiconductors under external magnetic field or uniaxial strain <cit.>, the symmetry hierarchy is the principle that terms in the Hamiltonian with lower symmetry typically have smaller prefactors than those with higher symmetry.
While this would generally mean that the Hamiltonian can be well approximated by the higher symmetry terms alone, the lower symmetry terms may still lead to important physical effects. In this case, the Rashba and the anisotropic valley Zeeman type terms are small, but follow Moriya's rules for antisymmetric anisotropic spin exchange <cit.>. The Dzyaloshinskii–Moriya interaction <cit.> is an important factor for explaining weak anisotropic ferromagnetism in bulk materials, and is shown to drive a first order phase transition to a weakly ferromagnetic phase above a (second-order transition to) an antiferromagnetic phase. This type of interaction remains unexplored in the realm of surface states, and could have potential consequences for known spin-density wave phases in these materials <cit.>.
§ RESULTS II: PHONONIC STRUCTURE
Ultimately we are interested in the EPC strength; in order to compute it we need to derive the phonon spectra first.
For Sn/Si(111) that was previously done to explore the consequences for the correlated many-body phases <cit.>.
Here we consistently ignore many-body phases such as magnetic or charge order, which would drastically affect the phonon bandstructures <cit.>.
We use the software package phonopy <cit.> along with VASP to compute the phonon spectra, as shown in Fig. <ref> a-d. We emphasize that the phonon computations are performed without spin-orbit coupling; the spin-orbit case is considered further below.
We have colored the phonon eigenvalues according to the adatom weight in the eigenmodes, so that the contribution of surface adatoms is emphasized.
Apart from the differences of frequencies ω at the K and M points, all four phonon spectra look essentially the same up to some global energy scaling. In particular, one can clearly see that the three Sn acoustic phonon bands disperse inside a gap into the Si(111) phonon dispersion. One should note the differing energy scales for X/Si(111) and X/SiC(0001) dispersions, showing higher energy SiC bulk modes.
Computing these spectra turned out to be cumbersome. For X/Si(111) the phonon spectra can be straightforwardly computed based on the DFT bandstructure using a 2× 2 supercell. For X/SiC(0001) we found, however, imaginary phonon modes. Relaxing the unit cell as well as switching from a 2× 2 to a 3× 3 supercell did not resolve the issue. We noticed that DFT underestimated the experimental gap of SiC(0001) by about 25%. The previously mentioned DFT+U treatment, where U is applied to the atoms of the semiconductor substrate, allowed us to adjust the band gap. By increasing the band gap and approaching the experimental value, the imaginary phonon modes shrink and disappear to give real modes when the experimental gap has been met (Fig. <ref> c and d). As DFT underestimates the band gap of Si(111) by the same amount (25%), we repeated the phonon calculations for X/Si(111) with DFT+U input (shown in Fig. <ref> a and b).
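A schematic phonopy driver for the workflow just described is sketched below. The slab geometry and the forces on the displaced supercells are placeholders that must be replaced by the relaxed structure and the corresponding VASP (DFT+U) results, and the exact API names may differ between phonopy versions.

```python
import numpy as np
from phonopy import Phonopy
from phonopy.structure.atoms import PhonopyAtoms

# Placeholder structure (replace with the relaxed adatom/substrate slab).
symbols = ["Si", "Si", "Sn"]
lattice = np.eye(3) * 5.0                   # placeholder lattice vectors (Angstrom)
positions = np.random.rand(3, 3)            # placeholder fractional coordinates

unitcell = PhonopyAtoms(symbols=symbols, cell=lattice, scaled_positions=positions)
phonon = Phonopy(unitcell, supercell_matrix=[[2, 0, 0], [0, 2, 0], [0, 0, 1]])

phonon.generate_displacements(distance=0.01)        # finite displacements in Angstrom
displaced = phonon.supercells_with_displacements    # run VASP on each of these supercells

# Placeholder zero forces with the expected shape (n_displacements, n_atoms, 3);
# in practice these are read from the DFT+U VASP runs on the displaced supercells.
forces = np.zeros((len(displaced), 12, 3))
phonon.forces = forces
phonon.produce_force_constants()

# Gamma - K - M - Gamma path in reduced coordinates of the surface Brillouin zone
path = [[[0, 0, 0], [1/3, 1/3, 0], [1/2, 0, 0], [0, 0, 0]]]
phonon.run_band_structure(path)
freqs = phonon.get_band_structure_dict()["frequencies"]   # THz, per path segment
```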
The phonon bands of the adatoms (Sn and Pb) shown in Fig. <ref> a-d have relatively flat dispersions, especially in the case of the X/SiC(0001) materials. There has been recent interest in flat phonon bands with regard to ferroelectrics <cit.>. As the flat bands found here are energetically separated from the bulk bands, these materials could act as good platforms for further exploring flat phonon bands experimentally.
§ RESULTS III: ELECTRON-PHONON COUPLING
A fully first-principles study is computationally too expensive for the considered materials. Instead we perform an effective approach based on deformation potentials 𝒟 = Δ E_k/Δ Q due to frozen phonon modes with mode amplitude Q. Loosely speaking, a phonon that strongly couples to states near the Fermi energy E_F causes a large shift Δ E_k in the electronic bandstructure near E_F.
Within the deformation potential approximation, the EPC of a specific phonon mode is calculated via <cit.> λ = 2 N(E_F) [ħ/(2 M ω^2)] |𝒟|^2
where N(E_F) is the density-of-states per spin and unit cell, M is the mass of the adatom, and 𝒟 is the aforementioned deformation potential. By inspecting the equation for λ one also notices that the lowest frequency modes contribute most to the EPC due to the 1/ω^2 suppression. Thus we only focus on the three lowest branches at the K or M high symmetry points. The smallest frequencies are ω = 5.52meV, see Tab. <ref>. In the same table we also list the EPC strength λ along with the values of the corresponding deformation potential as well as the DOS.
For a given mode we first create a supercell large enough to realise the full size of the phonon. At the M and K modes this corresponds to 2×2 and 3×3 supercells containing 4 (160) and 9 (360) adatoms (atoms), respectively.
The supercell atoms are then perturbed according to the chosen phonon mode eigenvector and the associated band structure is calculated. Due to the reduction in Brillouin zone size as we increase the size of our supercell, we must unfold the band structure onto the unit cell Brillouin zone to obtain an effective band structure. For each mode we perturb atomic positions by a maximum of Δ Q = 0.1 Å and Δ Q = 0.2Å. It is important to choose only small displacement amplitudes to stay within the linear response regime. From the effective band structures we then calculate the deformation potentials by comparing the gap opening 2Δ E_k with Δ Q.
In Fig. <ref> e-i we show unfolded effective band structures with Δ Q = 0.2Å for different phonon modes and materials. For band structures coupled to modes with both phonon wavevector q=M and q=K we find distinct gap openings along the Γ K and M Γ paths. For each material and coupled mode we calculate deformation potentials for the different gap openings, the largest of these being used to calculate λ.
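The linear-response extraction of 𝒟 from the two frozen-phonon amplitudes can be summarized by the following sketch; the gap values are placeholders and do not correspond to the tabulated results.

```python
import numpy as np

# Frozen-phonon amplitudes (Angstrom) and the corresponding gap openings 2*dE_k (eV)
# read off the unfolded effective band structures; the numbers below are placeholders.
dQ = np.array([0.10, 0.20])
gap = np.array([0.024, 0.048])          # 2*dE_k at each amplitude (illustrative)

dE = gap / 2.0                          # shift of the surface band, dE_k
# least-squares slope through the origin: D = sum(dE*dQ) / sum(dQ^2)
D = np.sum(dE * dQ) / np.sum(dQ**2)     # deformation potential in eV/Angstrom
print(f"D = {D:.3f} eV/A")              # enters lambda via the expression given in the text
```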
Due to the large number of atoms required in the EPC calculations it was unfeasible for us to perform all these calculations including relativistic effects.
§ RELATIVISTIC COMPUTATION OF EPC
Phonon calculations are much more costly than electronic bandstructure computations; performing them relativistically, with spin-orbit coupling, is even more expensive. For consistency, we nevertheless consider the phonon band structure relativistically, at least for the material Pb/Si(111). Surprisingly, the phonon spectrum hardly changes; there are only minor quantitative differences. In particular, the deformation potential is slightly larger; however, ω is also somewhat larger. In combination, λ is minimally increased as compared to the non-relativistic case. We find λ=0.085 at M and λ=0.059 at K, which, compared to the values in Tab. <ref>, is only a mild change. Comparison of the phonon spectrum of Pb/Si(111) with and without SOC is shown in Fig. <ref>. Further discussion about the effect on λ is given in Appendix <ref>, including numerical values in Tab. <ref>. While the increase in λ is insignificant, it is still somewhat counter-intuitive as one would have expected the EPC to shrink in the presence of SOC.
§ DISCUSSION
For all materials we find the strongest coupling to phonon modes at the M high symmetry point, due to the gap opening along the MΓ path. We find that the electronic structure couples notably only to phonon modes with a large out-of-plane adatom displacement. Modes with purely in-plane displacement have vanishing effects on the band structure. One such mode, the M high symmetry point of the second branch in Sn/Si(111), produces almost no noticeable shift in the band structure and gives a deformation potential approximately 500 times smaller than the results listed in Tab. <ref>. Effects of DFT+U and in-plane motion are discussed in Appendix C of the SM, along with the relevant calculation parameters in Tab. <ref>.
Tab. <ref> clearly rules out strong EPC as known from MgB_2 <cit.>. The X/SiC(0001) materials have such a small λ that any phonon-mediated superconductivity is out of the picture. Only Sn/Si(111) with its phonon mode at the M point has a somewhat larger EPC strength λ=0.135. To further analyze if such a value of λ could possibly be responsible for the observed superconducting transition at T_c=4.7K, we try to estimate T_c via the Allen-Dynes equation <cit.> T_c = (⟨ω⟩/1.20) exp(-1.04(1+λ)/[λ-μ^∗(1+0.62 λ)]). Assuming ω≈ 100K and a Coulomb pseudopotential in the typical range μ^∗∼ 0.05 - 0.2 <cit.>, we find for Sn/Si(111) T_c ∼ 10^-15K ≈ 0.
It is worth emphasizing that the deformation potential method used here to estimate λ replaces the momentum-dependence of EPC with its value at the respective high symmetry point, which is likely to overestimate EPC and thus overestimate T_c. Moreover, if we follow the ideas of Ref. koch-99prl620 to quantify μ^∗ we would end up with an even larger value than μ^∗∼ 0.05 - 0.2, causing a further reduction of T_c. Despite the approximate character of estimating T_c employed in this paper, we conclude that it does not appear to be plausible that EPC could be responsible for the observed superconductivity in Sn/Si(111) <cit.>.
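The sensitivity of this estimate to the Coulomb pseudopotential can be checked directly by evaluating the quoted Allen-Dynes expression over the range μ^∗∼ 0.05 - 0.2, setting T_c to zero once the denominator becomes non-positive; the sketch below uses λ=0.135 and ⟨ω⟩≈ 100K and confirms that the estimate stays effectively at zero.

```python
import numpy as np

lam = 0.135          # EPC strength of the Sn/Si(111) M-point mode
w_log = 100.0        # <omega> in Kelvin, as assumed in the text

def allen_dynes_tc(lam, mu_star, w_log):
    denom = lam - mu_star * (1.0 + 0.62 * lam)
    if denom <= 0.0:                    # formula only meaningful for a positive denominator
        return 0.0
    return (w_log / 1.20) * np.exp(-1.04 * (1.0 + lam) / denom)

for mu in (0.05, 0.10, 0.15, 0.20):
    print(f"mu* = {mu:.2f}:  Tc = {allen_dynes_tc(lam, mu, w_log):.2e} K")
```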
§ OUTLOOK
We have analyzed the electronic and phononic properties of Sn/Si(111), Pb/Si(111), Sn/SiC(0001) and Pb/SiC(0001) with a √(3)×√(3) reconstructed surface phase. Using a deformation potential method, we find that the EPC strength λ is too small in all materials to drive conventional superconductivity. We thus conclude that the recently experimentally observed superconductivity in Sn/Si(111) is likely to be driven by electron–electron interactions. This is in line with the recent experimental claim that the superconductivity is of chiral d-wave type. The tight-binding parameters derived in this paper will be the starting point for future quantum many-body computations of the filling-dependent interacting phase diagrams using methods such as random phase approximation, density matrix renormalization group or functional renormalization group.
We acknowledge discussions with Steve Johnston and Hanno Weitering.
S.R. acknowledges support from the Australian Research Council through Grant No. DP200101118 and DP240100168.
This research was undertaken using resources from the National Computational Infrastructure (NCI Australia), an NCRIS enabled capability supported by the Australian Government.
§ ELECTRONIC STRUCTURE
§.§ Group theory decomposition of Hamiltonian
In the main text of the manuscript, the symmetry hierarchy is discussed, whereby we can decompose terms of the Hamiltonian by successive symmetry reduction. In this section, we discuss practically how this is done, and go into further detail describing the possible symmetry allowed Hamiltonian terms.
A regular triangular Bravais lattice follows a D_6h point group symmetry, which is generated by a C_6 rotation, inversion symmetry and a mirror plane σ_1. The presence of the substrate breaks inversion symmetry, as well as reduces the C_6 to a C_3 rotation symmetry. This can be seen in Fig. <ref>, where the alternating heights of the substrates reduce this rotational symmetry, also removing half of the mirror planes.
All possible terms included in the Hamiltonian must be some linear combination of basis functions of the A_1 irrep of reduced symmetry point group C_3v. It is illuminating, however, to first construct the possible basis functions of irreps C_6v, which includes the spin-orbit coupling and inversion symmetry breaking. Then, we consider which C_6v irreps map to A_1 in C_3v upon the rotation symmetry reduction C_6 → C_3.
The Hamiltonian is written in terms of the bilinears c^†_iσ c_jσ', which represent an electron hopping from site j with spin σ' to site i with spin σ. Translation invariance means that we need only consider the relative vector between the sites, i.e., we need only consider the symmetry group actions on r_ij, where site i is located at Bravais lattice point R_i, and j at R_j = R_i + r_ij. Point group symmetry operations permute the bilinears amongst spin indices as well as those with the same hopping distance |r_ij|. We then only need to consider the nearest-neighbor shells with the same hopping distances one at a time, i.e., the relevant Hilbert space is 4N, for 2 × 2 spin indices, and a nearest-neighbor shell with coordination number N.
The basis functions can be found by constructing the representation R(g) of the symmetry group acting on this Hilbert space, forming projection operators
P^ν = ∑_g d_ν/|G|( χ^ν (g) )^* R(g)
for each irrep ν, and diagonalizing to find the basis functions as the eigenvectors with magnitude 1. d_ν is the dimension of the irrep ν, |G| is the order of the group (which is 12 for C_6v), and χ^ν (g) is the character of the group element g in the irrep ν.
The matrix representations of the symmetry elements
can be constructed from a matrix outer product of the spatial transformation R(g), which can be viewed as the transformation of the bond vector r_ij, and the two spin transformations S(g):
R(g) ⊗ S^* (g) ⊗ S(g)
where c_is→∑_s' S_ss' c_is' is a spin transformation acting on a second quantized fermionic annihilation operator (c^†_is→∑_s' S^*_ss' c^†_is' for the creation operator). S(g) takes the usual form of the SU(2) rotation operator S(g) ≡ U(ϑ, n̂) = exp [-i ϑ (σ·n̂)/2].
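A generic numerical sketch of this projector construction is given below; the caller must supply the spatial representation matrices on the chosen shell, the corresponding SU(2) spin rotations (the helper su2 builds them), and the characters of the desired irrep.

```python
import numpy as np

def su2(theta, axis):
    """SU(2) rotation exp(-i theta (sigma . n)/2)."""
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    n = np.asarray(axis, dtype=float)
    n /= np.linalg.norm(n)
    sn = n[0] * sx + n[1] * sy + n[2] * sz
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * sn

def irrep_basis(spatial_reps, spin_reps, characters, dim_irrep):
    """Basis functions of the chosen irrep: eigenvalue-1 eigenvectors of
    P^nu = (d_nu/|G|) * sum_g chi^nu(g)^* [R_spatial(g) (x) S*(g) (x) S(g)]."""
    order = len(spatial_reps)
    size = spatial_reps[0].shape[0] * 4          # shell size x (2 spin)^2
    P = np.zeros((size, size), dtype=complex)
    for Rg, Sg, chi in zip(spatial_reps, spin_reps, characters):
        Rfull = np.kron(Rg, np.kron(Sg.conj(), Sg))
        P += (dim_irrep / order) * np.conj(chi) * Rfull
    # for the real character tables of C_6v / C_3v the projector is Hermitian
    vals, vecs = np.linalg.eigh((P + P.conj().T) / 2)
    return vecs[:, np.isclose(vals, 1.0, atol=1e-8)]
```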
We will now discuss the basis functions for the different nearest-neighbor shells. Fig. <ref> shows the different nearest-neighbor shells, up to eighth nearest neighbors. We have included the set of mirror planes which are kept after the symmetry reduction C_6 → C_3. From this figure, we can see that there are three distinct cases to consider. The first is the nearest-neighbors, which is a coordination shell of 6 where the bonds lie in mirror planes. The same analysis also applies to the third, fifth and eighth-nearest neighbor shells.
For the nearest-neighbors, the C_6v symmetric Hamiltonian terms are basis functions of the A_1 irrep of C_6v:
H_t = -t ∑_⟨ ij ⟩, σ c^†_iσ c_jσ,
H_J = - J ∑_⟨ ij ⟩, σσ' σ_σσ'^z c^†_iσ c_jσ',
H_R, conv. = - i α ∑_⟨ ij ⟩, σσ' [ (σ×r_ij)·ẑ ]_σσ' c^†_iσ c_jσ'
which we identify as the kinetic hopping, symmetric Zeeman splitting and the conventional Rashba terms respectively. Once we reduce the symmetry to C_3v, an additional term given by the B_1 irrep of C_6v is allowed:
H_R, VR = i β ∑_⟨ ij ⟩, σσ' ν_ij (σ·r_ij)_σσ' c^†_iσ c_jσ'
where ν_ij = - sgn ( sin ( 3 θ_ij)) as defined in the main text, with θ_ij as the principal angle made by the bonding vector r_ij. This term would correspond to a valley radial (VR) Rashba term.
The second case is the second nearest-neighbor shell, which also applies to the sixth nearest-neighbors. This is a shell with coordination number of six, however the bonds do not lie on the C_3v mirror planes, but rather halfway between the planes. The C_6v symmetric Hamiltonian terms are therefore the same, but the C_3v symmetric term is given by the other B irrep of C_6v, B_2:
H_Vt = - t ∑_⟨⟨ ij ⟩⟩, σ ν_ij c^†_iσ c_jσ
H_VZ = - J ∑_⟨⟨ ij ⟩⟩, σσ' ν_ij σ^z_σσ' c^†_iσ c_jσ'
H_R, conv. = - i α ∑_⟨⟨ ij ⟩⟩, σσ' ν_ij [ (σ×r_ij)·ẑ ]_σσ' c^†_iσ c_jσ'
which are valley-alternating versions of the kinetic hopping, the Zeeman term and the conventional Rashba term.
The third case is the fourth nearest-neighbors, which also applies to the seventh and ninth nearest-neighbors. This is a shell with coordination number 12, where the mirror planes do not intersect any bonds. On top of the terms found for the 6 coordination number shells (Eq. <ref>-<ref>), we find that the following radial Rashba term is C_6v symmetric:
H_R, rad = i β ∑_⟨ i,j ⟩_4, σσ' Γ_ij (σ·r_ij)_σσ' c^†_iσ c_jσ'
where Γ_ij = -sgn(sin(6θ_ij)). When the symmetry is reduced to C_3v, the valley-alternating versions of the C_6v symmetric terms are allowed. The forms of these were already given in Eq. <ref>-<ref>, and can be extended to the radial Rashba term Eq. <ref> by an analogous inclusion of the factor ν_ij.
In all cases, all allowed C_6v kinetic hopping and Rashba terms are observed, and only the symmetry reduced C_3v valley Zeeman Hamiltonian term is observed, as discussed in the main text. This is then why the radial Rashba term is only observed on nearest-neighbor shells with coordination number 12, and why the valley Zeeman splitting is not observed on some shells, including nearest neighbors.
§.§ Full band structure
In Fig. <ref> we show the same band structure as in Fig. <ref> in the main text but over a wider energy range. DFT+U has been used to increase the semiconductor band gaps to match experiment, which is reflected in the figure. One can see that the surface bands disperse through the semiconductor band gap, and we note that there is no energy overlap between the bulk bands and the surface bands in any of the materials.
§.§ Comparison of tight-binding parameters with previous works
In this paper, we have extracted tight-binding parameters as computed with DFT but also with DFT+U where U was only applied to the substrate atoms. The resulting tight-binding parameter differ, and partly they do differ significantly. In Tab. <ref> we compare our tight-binding parameters with previous results from the literature.
§ ELECTRON-PHONON COUPLING
In Tab. <ref> we compare EPC calculations performed with and without DFT+U for the X/Si(111) materials, in-plane phonon modes, and the effect of SOC in phonon calculations. We find that the inclusion of DFT+U causes a reduction in the EPC strength λ. Notably, the deformation potential 𝒟 is mostly unaffected, with the largest change Δ𝒟 = 0.010eV/Å in Pb/Si(111) coupled to the band 1 K mode. The source of this reduction can be mostly attributed to the general increase in phonon mode frequency seen in both materials for a given mode, as λ∼ 1/ω^2.
For both Sn/Si(111) and Sn/SiC(0001) we test phonon modes with eigenvector corresponding to in-plane motion. For these calculations we find no discernible change in the effective band structure as compared to the unperturbed unit cell and thus minimal coupling. It is unsurprising that out-of-plane motion will have the largest effect on the band structures when considering hybridisation of Wannier functions with the bulk. As discussed in the main text, the reduction in hybridisation corresponds to significant changes in tight-binding hopping parameters. Changes in the adatom height above the substrate should have the largest effect on hybridisation, while in-plane motion should leave this largely unchanged.
We also compare the effect of including SOC when calculating the phonon dispersion for Pb/Si(111). The phonon modes corresponding to out-of-plane motion have minor changes in frequency. We see a slight increase in EPC strength λ when coupled to the band 1 M mode, which is due to the increase in deformation potential 𝒟. As changes in λ are minor in Pb/Si(111) which has a large Rashba splitting α_1/t_1 ∼ 30%, we suspect that including SOC in the phonon calculations will have minimal effects on the other materials.
http://arxiv.org/abs/2409.17843v1 | 20240926135040 | Auction-based Adaptive Resource Allocation Optimization in Dense IoT Networks | [
"Nirmal D. Wickramasinghe",
"John Dooley",
"Dirk Pesch",
"Indrakshi Dey"
] | cs.GT | [
"cs.GT"
] |
Auction-based Adaptive Resource Allocation Optimization in Dense IoT Networks
Nirmal D. Wickramasinghe, Student Member, IEEE,
John Dooley, Member, IEEE,
Dirk Pesch, Senior Member, IEEE,
and Indrakshi Dey, Senior Member, IEEE
N. D. Wickramasinghe is with the Department of Electronic Engineering, Maynooth University, Ireland. (Email: [email protected])
J. Dooley is with the Department Department of Electronic Engineering, Maynooth University, Ireland. (Email: [email protected])
D. Pesch is with the School of Computer Science and Information Technology, University College Cork, Ireland. (Email: [email protected])
I. Dey is with the Walton Institute for Information and Communications Systems Science, Ireland. (Email: [email protected])
September 28, 2024
§ ABSTRACT
The rapid pervasivity of the Internet of Things (IoT) calls for an autonomous and efficient resource management framework to seamlessly register and discover facilities and services. Cloud-Fog-Automation (CFA) standards provide a robust foundation for multi-tiered wireless architectures, enhancing cyber-physical system performance with advanced abstractions. This work addresses resource allocation optimization in IoT networks, particularly power management and time-frequency spreading techniques, ensuring deterministic connectivity, networked computing, and intelligent control systems. Auction game theory is pivotal in managing resource allocation in densely populated, high-demand IoT networks. By employing sealed-bid auctions based on Bayesian game theory, the uncertainties in individual hypotheses and channel states among IoT entities are effectively mitigated. A novel dispersion metric optimization further enhances the coordination of layer-specific IoT uplinks, enabling ultra-reliable, low-latency (URLLC) communication. Numerical results demonstrate the superior performance of this resilient architecture, achieving fair resource allocation with minimal power consumption and robust performance in unsecured scenarios.
Cloud-Fog Automation, IoT, Auction Game Theory, Resource Allocation, Space-Time-Frequency Spreading
§ INTRODUCTION
Cyber-physical systems (CPS) exhibit a high degree of combination and coordination between physical and computational (cyber) elements <cit.>. Hence, IoT network-based performance analysis for co-designing cyber-physical systems might be a crucial landmark in next-generation communication systems. In particular, the universal scope of the Internet of Everything (IoX) opens a wide variety of key performance indicators (KPIs) for cell-free massive multiple-input-multiple-output (MIMO) protocols instead of limiting them to particular cellular clusters. In fact, IoX entities are intelligently interconnected with people, processes, data, and things where everything comes online <cit.> and creates a multi-tier cloud-fog architecture.
Cloud-Fog-Automation (CFA): The cloud layer in CFA systems primarily consists of coordinators making critical decisions, while the fog layers handle a growing number of IoT devices, particularly in industrial settings. Despite recent shifts from layered models (e.g., Pyramid automation) to centralized cloud automation approaches <cit.>, a research gap remains in hybrid CFA architectures. CFA optimizes resource allocation in IoT networks through temporary, localized clusters, aiming for deterministic connectivity, intelligence, and computing. However, IoT device constraints (size, power, communication) challenge these goals, especially in uplink communication. Unlike cloud automation, CFA's fog layers operate independently, ensuring secure boundaries while promoting intelligent, heterogeneous architectures <cit.>. A survey on the IoT-Fog-Cloud ecosystem <cit.> highlights key CFA challenges, such as IoT big data, real-time processing, and heterogeneity.
Authors in <cit.> introduced a time-critical balancing control mechanism for wireless CFA systems, improving reliability in industrial wireless networks. To reduce makespans and costs in fog-cloud environments, <cit.> addressed a multi-objective task-scheduling optimization problem with local stationary stages across cloud-fog layers. <cit.> proposed Platform-as-a-Service (PaaS) functionalities for hybrid cloud/fog environments, facilitating application development, deployment, and management. Resource allocation strategies for enhancing task scheduling and QoS in IoX spaces were examined using optimization techniques in <cit.> and <cit.>. Additionally, resource management applications addressed simultaneous resource sharing and URLLC requirements in IoV architectures <cit.>, <cit.>. Secure communication platforms for mitigating risks and vulnerabilities in cloud-fog-edge layers were emphasized in <cit.> and <cit.>.
Game (Bayesian) Theory for CFA:
While existing literature proposes novel approaches for CFA platforms, it lacks discussion on transformative technologies that could harmonize diverse layers for compatibility and synergy. Each CFA layer has specific competitive demands, but individual self-interest may harm overall CPS performance. Consequently, CFA's core task involves addressing sub-optimization problems driven by user interactions within competitive CPS environments. Modern CFA is framed as a cooperative game model, posing challenges in discovering unique states, particularly when optimizing multi-agent systems under complex network specifications <cit.>.
In contrast, individual node awareness or visibility of its surroundings may be constrained by common data distributions rather than a perfect information game model. Bayesian strategies, however, help govern sub-optimization problems by considering expected utility within a cooperative game model. Therefore, applying Bayesian game theory to CFA protocols is desirable. Existing work presents an incomplete game model for efficient power consumption with incomplete channel state information (CSI) in IoT uplinks <cit.>. Yet, the vast number of random network entities and complex action vectors create computational overhead for current solvers <cit.>. Hence, instead of centralized computation, it is more effective to employ mechanism design based on stochastic structures, avoiding game-theoretic limitations <cit.>. Revelation principles are also key for resource allocation in CFA, offering low computational complexity and distributed processing capabilities.
Auction Theory for CFA: Auction theory, as explored through Bayesian game theory for imperfect multi-agent optimization, serves as a foundation for mechanism design in allocation protocols. Auctions play a key role in price formation models, particularly in procurement, patent licensing, and public finance <cit.>, <cit.>. Advanced auction techniques are used in automated mechanism design, such as advertisement auctions in e-commerce platforms <cit.>, and in real-time bidding (RTB) for e-advertising <cit.>. Auction-based frameworks are also suitable for CFA, as seen in auto-bidding mechanisms <cit.>. Additionally, auction-driven methods address efficient resource allocation in cooperative wireless communications <cit.>, vehicular cloud-assisted networks <cit.>, and blockchain-driven fog environments <cit.>.
Authors in <cit.> provides a comprehensive survey on auction-based mechanisms in cloud/edge computing, focusing on efficient resource management and pricing challenges. <cit.> highlights auctions as effective tools for incentivizing task offloading and resource allocation in fog-cloud systems, prompting reconsideration of auction-based strategies for CFA. Auctions enable dynamic resource allocation by leveraging user synergy across cloud, fog, and edge layers, enhancing CFA performance. Each auction participant contributes to decision-making, distributing computational complexity via node collaboration. To optimize resource allocation with incomplete information in dense IoT networks, sealed-bid auction mechanisms are proposed, addressing IoT device limitations and promoting adaptive, sustainable models for CFA co-design.
Contribution: The primary contributions of our work are;
* We focus on a traffic-heavy cloud-fog network layer to propose CFA protocols that optimize resource allocation in dense IoT networks, addressing challenges like limited communication infrastructure.
* We leverage auction theory-based scalable mechanism design to develop policies that enhance user interactions among IoT gateways and nodes, improving the overall interoperability of the co-design.
* To manage the varying priorities and incomplete information among IoT devices, we employ sealed-bid auctions for resource management, addressing challenges such as inconsistent channel state information and unawareness of neighboring nodes' statuses.
* Recognizing the need for heterogeneous resource segments due to the independence of vendor-specific hardware and software, we introduce space-time-frequency spreading (STFS) techniques to reduce gateway interference and ensure compliance with URLLC requirements.
* Through a comprehensive mathematical exploration of auction frameworks, we develop lightweight CFA mechanisms that efficiently allocate resources with minimal processing power, ensuring optimal resource vectors for IoT nodes.
* Our proposed auction mechanism allows for the simultaneous distribution of computational load across IoT clusters. Additionally, we motivate performance improvements through graph-based signal processing to enhance the self-predictability of IoT systems.
* By properly allocating resources using auction theory, we activate STFS in IoT uplinks, minimizing signal collisions and compensating for external hardware and channel impairments.
* Our simulation results demonstrate the strength of the proposed mechanism, showing minimal energy consumption and resilience in the presence of risky bidding nodes. Furthermore, the adaptive protocol we derive effectively manages cloud-fog interactions, applying flexible constraints based on specific application needs within the CFA framework.
Organization: In Section <ref>, we present the system model for CFA strategies and formulate the resource allocation optimization problem using spreading techniques. Section <ref> outlines the mathematical foundation of auction-based mechanisms for CFA policies, extending the sealed-bid approach to handle problem incompleteness. Section <ref> delves into conventional auction types, highlighting their unique properties, and explores CFA capabilities through dispersion metric optimizations. Simulation results are provided in Section <ref>, followed by a discussion aligned with CFA practices, concluding in Section <ref>.
Notation: In this paper, bold upper case, lower case, and Calligraphy letters represent matrices, vectors, and sets or spaces respectively. 𝔼, , ×, †, ‖ . ‖ and |.| or n(.) denote statistical expectation, the cross product, multiplication operator, hermitian, the L_2-norm of a vector, and the cardinality of a set respectively. In addition, |_a refer to with respect to a, ∖ k or -k for without k. → for maps to, and ⊥ refer to perpendicular notations. Furthermore, ℝ, ℂ, ∪, I_K, 0_K N denote the universal set of real numbers, universal set of complex numbers, the union of sets, K K identity matrix, K N null matrix respectively.
§ SYSTEM MODEL AND PROBLEM STATEMENT
In this study, we exploit the density and heterogeneity properties of IoT networks to enhance performance under limited and diverse categories of resource utilization, based on the wide variety of demand functions and surrounding conditions.
§.§ Network Model
Consider a massive number K of IoT devices, each with random traits, transmitting independent target signals to their respective IoT gateway for uplink activities, as shown in <ref>. These IoT PINs (Places in the Network) are deployed in a static architecture with self-awareness to sense various physical patterns nearby. While the devices use an orthogonal frequency division multiplexing (OFDM) framework to avoid self-band collisions during uplink transmission, this is insufficient for significantly increasing network capacity. Factors such as surrounding parameters and transceiver conditions contribute to channel impairments, potentially causing self-band interference. Taking into account the challenges from the spectrum leakage for the uplink signal transmitted by node k ∈{1, …, K} targeting the intended resource slot n ∈{1, …, N}, the time-domain received signal at the gateway can be formulated as,
y^[n](t) = g_k(t) x_k^[n](t) + ∑_j=1, j≠ k^K g_j(t) x_j^[n](t) + w^[n](t)
where x_k(t) ∈ℂ^K× 1 and g^n_k∈ℂ^N× K are the desired transmitted signal from node k and the corresponding channel vector matched to the n^th resource block, both in the complex domain. The channel vector is modeled via a complex bivariate Gaussian distribution to capture small-scale path fading, i.e., Rayleigh scattering over the wireless medium. Hence, g_k(t) x_k^[n](t) is the desired signal portion allocated to resource segment n, while g_j(t) x_j^[n](t) is the interfering fragment that acts as an obstacle to utilizing the n^th resource space. The last term in (<ref>) is the usual zero-mean complex additive white Gaussian noise in the n^th sub-band of the gateway at time t, w^[n](t) ∼𝒩_ℂ(0, σ_w^2), where σ_w^2 is the average noise power.
The higher density and heterogeneity of the network make it challenging not only to find an arbitrary resource block but also to match each device with the most appropriate slot at the gateway. Consequently, we deal with the problem of mapping K devices onto fewer N resources in a particular space-time-frequency band (K≥ N), aiming at performance enhancement within dense IoT networks. Let 0 ≤ c_k^[n]≤ 1 denote the indication ratio of IoT device k toward the n^th resource block, which depends on the hypothesis and CSI distributions of the IoT pool. Depending on the resolution of the resource-utilization scheme, the indicating factor behaves in two different ways:
* c_k^[n]∈{0,1}: The indicator strictly follows lower and upper boundaries in the inequality to generate a binary decision strategy vector including “assigned" or “not".
* c_k^[n]∈(0,1): In this case, the task of resource allocation is fine-tuned based on an auxiliary approach or in a non-orthogonal framework. Consider a node k^' in a subspace 𝒦^'⊆𝒦 with cardinalities n(𝒦) ≥ n(𝒦^'); then c_k^[n] = c_(∪_k^'∈𝒦^' k^')^[n].
The key concept is that each resource segment n is allocated to at most one device k, i.e., a dedicated sub-block allocation. Likewise, it is fair to assume that each IoT node k cannot be assigned multiple resources simultaneously, which yields the constraints below,
∑_k=1^K c_k^[n] ≤ 1; ∀ n ∈{1, …, N}
∑_n=1^Nc_k^[n] ≤ 1; ∀ k ∈{1, …, K}
Let s_k(t) denote the information symbol of node k at time t. The data symbol vector s = [s_1, …, s_k, …, s_K], generated on top of the DC transmission current vector I^DC = [I_1^DC, …, I_k^DC, …, I_K^DC], follows a circularly symmetric complex Gaussian distribution with variance P_k, i.e., s_k∼𝒞𝒩_ℂ(0, P_k∝|I_k^DC|^2) for node k. In addition, the k^th node data symbol s_k∈{±1} complies with a binary phase shift keying (BPSK) modulation scheme, and the covariance matrix of s is Σ_𝐬=𝐈_𝐊, where 𝐈_𝐊 is the identity matrix of size K, reflecting independent and identically distributed (i.i.d.) transmissions. Therefore, the transmit signal x_k^[n](t) from node k to the gateway using resource block n at time t can be expressed as,
x_k^[n](t) = c_k^[n] s_k(t); ∀{k ← k^'}∈𝒦, ∀ n ∈{1, …, N}.
§.§ Space-Time-Frequency-Spreading (STFS)
The envisioned architecture in <ref> maps the data stream s_k of node k through a pre-compensation weight, called the dispersion vector a_k∈A, before transmission over space, time, frequency, or a mixed strategy. The comprehensive dispersion matrix is A ∈ℂ^K× N_T× N_F with N_T=n(𝒩_T), N_F=n(𝒩_F) and {𝒩_T, 𝒩_F}⊆𝒩, where the set of IoT nodes 𝒦 is assigned to the desired time and frequency slots in the respective sets 𝒩_T, 𝒩_F within the universal set of resources 𝒩. Although IoT devices follow an orthogonal framework to avoid inter-symbol interference and intra-band self-correlations, sudden variations in the coherent multiple access channel (MAC) may cause multi-carrier interference, i.e., inter-band cross-correlations, at the IoT gateway. The rationale behind the dispersion matrix A is therefore to regulate the gain and phase of the transmit signals so that minimal or null interference results at the gateway, after the instructive details are received via the indicating vector c_𝒦^𝒩 for optimal resource matching. In other words, the optimal dispersion matrix for a given space-time-frequency framework combines gain enhancement with distributed phase synchronization, following the logic of the water-filling approach. Additionally, the STFS technique <cit.> introduces prior channel-compensation metrics in the Argand plane to deal with optimal power utilization and time-frequency matching before each IoT node fires its signal. Accordingly, after selecting the proper modulation scheme for each bit stream, the STFS mapping task is performed as follows [The composition of STFS can be generated using two parallel data streams, where the first flow is modulated via M-ary phase shift keying (M-PSK) or M-ary quadrature amplitude modulation (M-QAM) and the second one with M-ary frequency shift keying (M-FSK) aided by chirp spread spectrum (CSS). Both modulated streams are then combined before being mapped onto the STFS blocks.],
* Space-time spreading (STS): the set of vectors spread on a 2-dimensional plane for a given frequency slot f_n, represented by A^ST_f_n∈ℂ^K× N_T, providing time diversity only. The STS arrangement suits scenarios with an optimal STS indication parameter c_k^f_n∈c_𝒦^𝒩, which is essentially biased toward bandwidth-hungry resource allocation for IoT node k. Devices may not transmit in the same time slot t_n even over the same bandwidth, owing to the time orthogonality of the STS transmit frame, which minimizes interference at the gateway.
* Space-frequency spreading (SFS): A^SF_t_n∈ℂ^K× N_F represents the set of vectors distributed on a 2-dimensional plane for a given time slot t_n, providing frequency diversity only. The SFS arrangement suits the case of an optimal SFS indication parameter c_k^t_n∈c_𝒦^𝒩, which is essentially biased toward time-sensitive resource allocation for IoT node k. The SFS framework enforces inter-band orthogonality on the transmit signal of each IoT device (no two devices transmit over the same sub-band f_n), which avoids interference at the gateway even within the same time slot t_n.
* Space-time-frequency spreading (STFS): the set of 3-dimensional individual blocks represents A ∈ℂ^K× N_T× N_F, the mapping framework for any arbitrary combination of STS and SFS. The optimal STFS allocation parameter c_k∈c_𝒦^𝒩 offers more flexibility than the aforementioned sub-spreading techniques, assigning resources in the presence of either time or frequency sensitivity.
Subsequently, the transmit signal in (<ref>) for IoT device k can be modified utilizing the STFS mapping function as,
x_k^[n](t) |_a = a_k(t). x_k^[n](t); ∀ k ∈𝒦, ∀ n ∈𝒩
where p_k = a_k^†a_k, a_k∈ℂ^l× 1, and l is the length of the symbol stream. Without loss of generality, the transmit signal of device k is required to be powered up with p_k to satisfy the received signal strength indication (RSSI) at the gateway.
Consequently, the average power constraint P, can be established as,
∑_n=1^N𝔼(‖ x_k^[n](t) ‖_a^2) ≤ P; {c_k^[n]⊥ p_k^[n]}
c_k^[n]∈ [0,1], ∀ k ∈𝒦, ∀ n ∈𝒩
Subsequently, (<ref>) and (<ref>) yield the upper bound on the power variance of node k, P_k≤ P, using the gain provided by the optimal STFS block a_k. Although the indicating factor c_k^[n] might be correlated with the selection of the optimal STFS block a_k^[n] depending on the IoT environment, the firing power p_k^[n] of each node k aiming for the n^th resource is independent of c_k^[n].
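To make the interplay between the allocation indicator c_k^[n], the dispersion weight a_k, and the received superposition at the gateway concrete, the following minimal Python sketch simulates one symbol period of the uplink model above. The scalar per-node channels, the phase-only dispersion choice, the leakage factor, and the one-slot-per-node toy assignment are illustrative assumptions, not part of the formal model.

```python
import numpy as np

rng = np.random.default_rng(0)

K, N = 6, 4          # IoT nodes and available STFS resource slots (K >= N)
sigma_w = 0.1        # noise standard deviation per slot
leak = 0.05          # illustrative spectrum-leakage factor between slots

# Rayleigh-faded scalar channel per node (small-scale fading assumption)
g = (rng.normal(size=K) + 1j * rng.normal(size=K)) / np.sqrt(2)

# BPSK data symbols s_k in {+1, -1}
s = rng.choice([-1.0, 1.0], size=K)

# Binary indicator matrix C[k, n]: at most one slot per node, one node per slot
C = np.zeros((K, N))
for n, k in enumerate(rng.permutation(K)[:N]):   # toy assignment of N winners
    C[k, n] = 1.0

# Dispersion weights a_k: phase pre-compensation with a per-node power cap P
P = 1.0
a = np.sqrt(P) * np.conj(g) / np.abs(g)          # p_k = |a_k|^2 = P

# Transmit signal per slot, x_k^[n] = c_k^[n] * a_k * s_k, plus a small
# leakage contribution into slots the node was not assigned to
X = C * (a * s)[:, None] + leak * (1 - C) * (a * s)[:, None]

# Received superposition per slot at the gateway with AWGN
w = sigma_w * (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
y = (g[:, None] * X).sum(axis=0) + w

# Per-slot SINR of the assigned node (cf. the throughput expression below)
for n in range(N):
    k = int(np.argmax(C[:, n]))
    desired = np.abs(a[k] * g[k]) ** 2
    interference = sum(np.abs(g[j] * X[j, n]) ** 2 for j in range(K) if j != k)
    print(f"slot {n}: node {k}, SINR = "
          f"{10 * np.log10(desired / (interference + sigma_w ** 2)):.1f} dB")
```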
§.§ Problem Formulation
Introducing diverse types of CFA protocols to a massive number of IoT devices, together with interoperability issues, may cause the anticipated execution patterns to deviate from the usual deterministic behavior. Moreover, resource inadequacy (K>N) directly fuels competitive behavior among IoT devices, so the gateway inevitably experiences collisions of the received uplink signals. We further consider that the delay spread of the transmit signal may exceed the coherence time of the channel at high frequencies, resulting in small-scale multi-path fading. Therefore, the desired signal assigned to a particular resource block n from IoT device k may be interfered with by the signals of the remaining IoT devices assigned to the resource sub-space r_n^STFS in the vicinity of n. Hence, we can devise the lower bound of the standardized achievable rate, referred to as the channel throughput, over a reciprocal and flat-faded channel as
γ_k^[n] = Blog_2 (1+ ‖ a_kg_kx_k^[n]‖^2/‖∑_j=1, j k^Ka_j g_j x_j^[n]‖^2+σ_w^2)
where B is the normalized bandwidth, set according to the IoT attributes and surrounding conditions. Furthermore, the interference-power term benefits from the phase relations among the summed interfering signals; a proper resource arrangement via c_k^[n] together with the execution of a_k can therefore enhance the channel throughput of the signal received at the gateway from IoT device k. Ultimately, the formal building blocks of the adaptive resource allocation optimization problem are as follows.
min_{𝐂,𝐀,𝐏}  ∑_{k∈𝒦} p_k(𝐂,𝐀)
s.t.  γ_k(𝐂,𝐀,𝐏) ≥ Γ^{th}_k;  c_k^{[n]} ∈ 𝐂
      p_k^{min} ≤ p_k ≤ p_k^{max};  p_k ∈ 𝐏
      (<ref>), (<ref>), (<ref>), (<ref>), (<ref>),  ∀ k∈𝒦, ∀ n∈𝒩
The objective function (<ref>) represents the average transmit power of the uplinks in the IoT pool, with regulation flexibility over the STFS indicator 𝐂 and the dispersion metric 𝐀. The power minimization is constrained by an acceptable lower bound Γ^th_k on the channel throughput of node k, as in (<ref>), for better QoS. For a specific node k, (<ref>) defines the feasible power region within the average power space P, while the remaining constraints (<ref>) support the execution mechanism and complete the optimization problem. Next, we motivate the choice of the optimization tool in light of the prominent features of the problem.
§ AUCTION-BASED RESOURCE ALLOCATION
The optimization problem formulated in (<ref>) is convex for a specific CSI realization and a given hypothesis status that is common knowledge for all IoT nodes k and the centralized gateway. Although the optimal power strategy vector or STFS block arrangement for the transmit signals could then be obtained via linear programming, such an algorithm cannot adaptively capture user interactions in critical stages with incomplete information. In other words, the best resource allocation should account for the instantaneous characteristics of neighboring nodes that affect performance, and in that case the problem may no longer preserve convexity. Additionally, the imperfect information shared between each IoT device and the gateway leads to high computational complexity and an exhaustive search for resource assignment, owing to the multi-dimensional randomness involved. It is therefore essential to propose a resilient decision-making approach for CFA that exploits the available or extractable features of every network entity. Game theory offers a versatile mechanism for handling sub-optimization problems distributed across distinct types and entropies while converging to a global stationary state. Auction theory, a special form of game theory, is well suited to cooperative optimization, and the sealed-bid auction mirrors Bayesian strategies to address the incompleteness of the problem.
§.§ Why Auction
We are aware of the limitations of resource utilization within dense IoT networks (K ≥ N), and unexpected competition may arise for certain STFS blocks during critical stages, depending on the current hypothesized states of the nodes. Auction theory is well suited to finding stable solutions in competitive and federated optimization problems while capturing the unique features of the players. The terminology of auctions helps relate adaptive mechanisms to the desired resource allocation problem. In a typical auction there are two major parties, buyers and sellers, which correspond to the IoT devices and the IoT gateway, respectively. Buyers in an auction are motivated to save money, or equivalently to enhance their own surplus (<ref>), by placing lower bids; similarly, IoT devices aim to conserve energy by minimizing power consumption while demanding fair resource allocation for their uplinks. Based on the signal model in (<ref>), the surplus-maximization sub-problem (<ref>) for the pool of IoT nodes reads as follows.
max_{𝐂,𝐀,𝐏}  ∑_{k=1}^{K} S_k(𝐂,𝐀,𝐏)
s.t.  S_k(𝐂,𝐀,𝐏) = P^{tot}_k - p_k
      (<ref>)
To conserve battery life, i.e., the total power reserve P^tot_k, IoT nodes are encouraged to minimize their transmit power levels p_k, or equivalently to maximize their surplus (<ref>). The phase angle of the transmitted signal helps select the most suitable time and frequency slot for the uplink, potentially trading off maximal channel gain against node priority. In other words, the characteristics of the transmitted signal push the uplink toward a specific STFS slot that not only offers a strong channel gain but also maintains the node's highest priority margin. In contrast, the IoT gateway seeks to maximize the individual channel throughput obtained on the n^th STFS resource block from the desired signal x_k^[n] of node k, without violating the required RSSI or the given threshold bounds as in (<ref>). The corresponding revenue-maximization sub-problem is stated in (<ref>),
max_{𝐂,𝐀,𝐏}  ∑_{n=1}^{N} R_n(𝐂,𝐀,𝐏)
s.t.  R_n(𝐂,𝐀,𝐏) = γ_{n← k}
      (<ref>), (<ref>), (<ref>)
The IoT gateway aims to maximize the channel throughput vector, or in auction terms the total revenue (<ref>), associated with the offered STFS resource blocks n that are allocated to the winning IoT nodes k as in (<ref>). While the gateway offers STFS slots that can satisfy the given channel-throughput threshold to the IoT pool, each node selects the STFS resources that best support conveying its transmit signal through a strong channel path, and vice versa. Hence, the overall picture is a well-structured game under limited mutual insight among IoT entities regarding channel state information and node priorities. Nevertheless, optimal and fair resource allocation strategies can be achieved that benefit both parties, by selecting minimal power levels that satisfy the given throughput threshold. Auction theory provides powerful optimal policies for stabilizing the damped-oscillator behavior between node surplus and gateway revenue, known in game theory as the (Nash) equilibrium state.
§.§ Why Sealed-Bid Auction
The auction theory literature distinguishes two major settings based on how much bidders know about valuations: common values and private values. It is fair to assume that the IoT devices in the edge layer compute, as common values, the minimum resource requirements needed to satisfy the given challenging constraint bounds. However, the nodes' knowledge of these common values fades from a deterministic margin into an uncertain space because of the high-dimensional random behavior of neighboring devices. In other words, each IoT player only partially knows its own valuation vector, which is built upon resource wealth, future strategies, predictions, etc. These factors fluctuate from node to node, reflecting the variety of applications and the heterogeneity of the fog layer; hence private components exist within a given common distribution. Therefore, we explore distinct types of sealed-bid auction mechanisms, generalized from game theory with incomplete information. Although each IoT device is aware of its individual CSI thanks to the proposed dispersion-vector estimation mechanism, this remains private to its opponent nodes and to the IoT gateway. Likewise, the instantaneous self-hypothesis state, reflecting different priority levels, is unique to each node but unknown to its neighbors. Nonetheless, it is fair to suppose that these pairs of factors follow random distributions that can be characterized analytically or empirically. With these basic facts in place, the sealed-bid auction game model 𝒢_𝒜 is defined as follows.
𝒢_𝒜≜⟨𝒦, 𝒩, (𝒱_k, ℱ_k)_k ∈𝒦, ℬ, 𝒬, 𝒰∈{(𝒮_k)_k ∈𝒦∪ (ℛ_n)_n ∈𝒩}⟩
where,
* 𝒦 ={1,…,k, …, K} is the set of IoT devices or buyers.
* 𝒩 ={1,…,n, …, N} is the set of STFS resource slots or selling objects in the IoT gateway.
* 𝒱_k⊆ℝ: v_k denotes the possible private valuations of node k∈𝒦. Let 𝒱^𝒦≜𝒱_1×…×𝒱_k×…×𝒱_K denote the set of private value vectors.
* The common distributions for private valuations 𝒱_k for each node k can be described in terms of the cumulative distribution function of ℱ_k;
ℱ_𝒱(v) = ∫_{\underline{v}}^{v} f_𝒱(t) dt, with \underline{v}≤ v and {\underline{v}, v}∈ℝ^+_0.
The joint probability distribution of the multi-dimensional mutual prior beliefs is modeled as f_GH(g,h), with marginal distributions f_G(g) = ∫_-∞^∞ f_GH(g,h) dh and f_H(h) = ∫_-∞^∞ f_GH(g,h) dg for the CSI and the individual hypothesis vector, respectively.
* The action set of the IoT nodes is ℬ = {b_1,…,b_k, …, b_K}, and p: [0, ∞)^𝒦→Δ(𝒦) is the function mapping each vector of bids b ∈ [0, ∞)^𝒦 to a distribution identifying the winning IoT devices for the auctioned resource object. [Note that Δ(𝒦) ≜{ x ∈ [0,1]^𝒦: ∑_k ∈𝒦x_k=1 } is the set of possible probability distributions over the set of nodes 𝒦.]
* 𝒬: 𝒦× [0, ∞)^𝒦→ℝ^𝒦 is the cost function specifying the payment of each node for each bidding vector b ∈ [0, ∞)^𝒦, given the winning device k_*∈𝒦. The gateway then earns q_n for the assigned resource slot n at price q_n, with n ∈𝒩.
* The payoff or utility function of the auction model is denoted by 𝒰: ℬ→ℝ and is associated with the bidding distribution p, where u_k_*(b) is the utility of the winning node k_* with winning probability p_k_*(b={ b_1, …, b_K}) when b is the action profile. Every leading IoT node k_* incurs the total cost Q_k(k_*; b_1, …, b_K), which maps proportionally to the revenue of the n^th resource at the gateway, (k_*→ n). Hence, we can formulate the generalized utility space 𝒰_𝒮,ℛ of surplus 𝒮_𝒦 and revenue ℛ_𝒩 for the IoT pool and the gateway, respectively, as follows,
𝒮_𝒦 = ∑_k ∈𝒦(p_k(b)v_k - ∑_k_*→ n ∈𝒩 p_k_*(b)Q_k(k_*→ n;b))
ℛ_𝒩 = ∑_k_*→ n ∈𝒩 p_k_*(b)Q_k(k_*→ n;b)
Thus, the mathematical formulation of the sealed-bid auction fits Harsanyi's game model <cit.> for problems with incomplete information. Consequently, the equilibrium state of the game can be obtained using the conditional-probability approach known as Bayesian strategies. In sealed-bid auction-based resource allocation, the pure bidding strategy β_k of node k is a measurable function [A real-valued function g: X →[0, ∞), X ⊆ℝ, is measurable if, for each y ∈[0, ∞), the set g^-1([0, y ])= {x ∈ X: g(x) ≤ y } is measurable.].
Let β = (β_1, …, β_K), with β_k: [0, ∞) → [0, ∞), be the pure strategy vector; then β_-k(b_-k) ≜{β_j(b_j)}_∀ j ∈𝒦∖ k denotes the bidding vector of the nodes other than k, with respect to their valuation types and actions. The expected utility or surplus u_k(β; v_k) of IoT device k, under the strategy vector β and its private valuation v_k, is given in equation (<ref>). Here, 𝒱_-k≜∏_j ≠ k𝒱_j is the private valuation space of all the opponent nodes except node k, and the joint cumulative distribution function of the multidimensional prior beliefs is F_-k≜∏_j ≠ kF_j. Since the expected surplus u_k(β;v_k) depends on node k only through its own strategy β_k, i.e., the bid of node k under private value v_k, the expected utility can simply be written as u_k(b_k, β_-k;v_k) when node k bids b_k according to its personal valuation v_k and the opponent nodes follow the action vector β_-k. The Bayesian Nash equilibrium (BNE) then denotes the bidding state β^* at which no buyer (IoT node) k can profitably deviate from its equilibrium bidding strategy β_k^*(v_k) to another bid in order to grab the desired resource:
u_k(β^*;v_k) ≥ u_k(b_k,β^*_-k;v_k);  ∀ b_k∈ [0,∞), ∀ v_k∈𝒱_k, ∀ k ∈𝒦.
Therefore, the BNE of the bidding strategy profile maps to the optimal resource utilization of the IoT nodes given their private valuations for a known distribution. Nevertheless, the private valuation space of each IoT device admits countless possibilities depending on the parameters of the problem at hand, which motivates a cooperative solution protocol instead of isolated, self-sufficient optimization approaches.
§ AUCTION TYPES
In this section, we discuss different types of auctions, in particular sealed-bid auction models, motivated by the incompleteness of the stated resource allocation problem, and we explore how various auction types perform under the given constraints in dense IoT networks. First, the wealth of each IoT device must be assessed in terms of its instantaneous standing in a given auction for the best STFS slot. We focus on two major parameters to model the valuation space 𝒱_k of each node k ∈𝒦: the self-priority or individual hypothesis state v_h^k and the channel state information v_g^k. The heterogeneity of a massive IoT network means that different types of resources may be required from the gateway at multiple, critical stages, and it is reasonable to assume that the hypothesis of each node follows a uniform distribution, 𝒱^H∼𝕌(a,b) with v_h∈ [a,b] and equiprobable priority levels, so that the corresponding probability density function is f_H(h)=1/(b-a). Moreover, the optimal transmit power is directly related to the link budget, and user mobility or dynamic surrounding parameters cause variations in channel gain. Hence, the remaining portion of the valuation, v_g^k, built on the uplink channel gain of each node k, is modeled with a Rayleigh distribution, 𝒱^G∼ Rayl(σ^2) with v_g≥ 0, to capture small-scale path fading, and the corresponding probability density function is f_G(g) = (v_g/σ^2)e^-(v_g^2/2σ^2). The overall valuation space 𝒱 is then 𝒱=α𝒱^H + β𝒱^G; {α,β}∈ℝ^+, the sum of two linearly independent sub-spaces [If 𝒱^H and 𝒱^G are linearly independent, then for any scalars α and β: α𝒱^H + β𝒱^G=0 ⇒α=0 and β=0.]. The numerical weights α and β can be chosen flexibly to bias the valuation toward the hypothesis v_h or the channel strength v_g, contingent upon the chosen IoT application.
Then the cumulative density function of the entire valuation metric can be derived using the joint probability density function, f_𝒱^G, 𝒱^H(v_g,v_h) = 1/(b-a).v_g/σ^2e^-(v_g^2/2σ^2) of two-dimensional sub-valuation distributions as[The joint probability density function for two independent random distributions X and Y can be written as, f_X, Y(x,y) = f_X(x).f_Y(y).],
F_𝒱(v) = ∬_{α v_h + β v_g ≤ v} f_𝒱^G, 𝒱^H(v_g,v_h) dv_g dv_h;
v_h∈ [a,b], v_g≥ 0, and 𝒱=α𝒱^H + β𝒱^G,
F_𝒱(v) = 1 + βσ/(b-a)α√(π/2)[ erf (v-α b/√(2)βσ)-erf (v-α a/√(2)βσ) ]
The proof is given in Appendix <ref>. According to the objective function in (<ref>), the optimization variable is the bidding strategy vector ℬ, which is related to the payment cost 𝒬 as in (<ref>), for optimal resource demand from each IoT node k ∈𝒦 toward its preferred STFS slot n ∈𝒩. We now consider different sealed-bid auction models and evaluate their bidding functions and performance.
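As a quick sanity check of the mixed valuation model, the following Python sketch compares the empirical CDF of 𝒱=α𝒱^H + β𝒱^G with the closed-form expression above, evaluated in the upper range of valuations; the parameter values (a=0, b=1, σ=1, α=β=1) and sample size are illustrative.

```python
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(1)

a, b = 0.0, 1.0          # uniform hypothesis range
sigma = 1.0              # Rayleigh scale of the channel-gain valuation
alpha, beta = 1.0, 1.0   # weights biasing hypothesis vs. channel strength

# Sample the two independent valuation components and mix them
v_h = rng.uniform(a, b, size=200_000)
v_g = rng.rayleigh(scale=sigma, size=200_000)
v = alpha * v_h + beta * v_g

def F_analytic(x):
    """Closed-form CDF of alpha*U(a,b) + beta*Rayleigh(sigma) from the text."""
    c = beta * sigma / ((b - a) * alpha) * np.sqrt(np.pi / 2)
    return 1 + c * (erf((x - alpha * b) / (np.sqrt(2) * beta * sigma))
                    - erf((x - alpha * a) / (np.sqrt(2) * beta * sigma)))

for x in (1.0, 1.5, 2.5, 4.0):
    emp = np.mean(v <= x)
    print(f"v={x:.1f}:  empirical F={emp:.4f}   analytic F={F_analytic(x):.4f}")
```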
§.§ Traditional auctions
Here, we consider classical single-object ascending and sealed-bid auctions to propose optimal resource assignments in a dense IoT network.
Therefore, to ensure fairness in resource allocation within dense IoT networks, each node can receive at most one STFS slot in a single auction (<ref>). Consequently, it is fair to view the overall multi-objective auction as a combination of individual single-object traditional auctions. In other words, all single-objective auctions within the multi-objective auction are independent during the bidding process.
In ascending bid auctions, the winner for the desired allocated resource is typically the node that makes the highest bid.
Additionally, if several nodes bid the same highest price for a particular resource, a fair lottery is conducted to determine which node might be assigned the desired STFS segment. [The likelihood of observing multiple players with identical highest bidding strategy vectors and the same probabilities for their self-valuations in a continuous probability distribution is negligible.]
§.§.§ Second-Price Sealed-Bid Auction (SPSB)
This is the simplest auction model (also known as the Vickrey auction) for determining the optimal request strategy, i.e., the BNE, of each bidding node, as stated in lemma <ref> (proof: pp. 91-94, Theorem 4.15 in <cit.>).
In a second-price sealed-bid auction, the strategy b_k=v_k weakly dominates all other strategies, b ∈ ℬ, v ∈ 𝒱, k ∈ 𝒦
This emphasizes that lemma <ref> is not merely stating that truthful bidding is a BNE; it makes a much stronger statement: bidding truthfully is the dominant strategy that maximizes each IoT device's surplus 𝒮, regardless of the opponents' bids. The winning node pays the amount of the second-highest bid as the price to the gateway. In other words, the IoT device k ∈𝒦 with the highest valuation v_k is assigned the given STFS resource segment n∈𝒩, c_[k]^n=1, taking into account its individual hypothesis v_h^k and CSI v_g^k. The gateway then accepts, for the assigned resource n, a channel throughput γ^n_k of node k that exactly matches the channel-throughput threshold Γ^th,n_k^' of the IoT device k^' with the second-highest bid b_k^' (which mirrors the characteristics of node k^' itself), i.e., q_k^[n]=b_k^'^[n]. Therefore, IoT device k can decrease the transmit power of its signal so as to satisfy only the second-best SINR rather than its own highest SINR for STFS slot n, (Γ^th,n_k^'≤γ^n_k). Additionally, sharing resources in accordance with the flow of priorities further strengthens the guarantee of fair resource allocation.
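A minimal sketch of one single-slot SPSB round under truthful bidding follows: the highest-valuation node wins the slot, pays the second-highest bid, and keeps the difference as its surplus. The valuation draw reuses the mixed uniform-plus-Rayleigh model with illustrative parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

def spsb_round(valuations):
    """Second-price sealed-bid auction for one STFS slot.
    Under truthful bidding b_k = v_k (dominant strategy), returns the
    winner index, the price paid (second-highest bid) and the winner surplus."""
    bids = np.asarray(valuations, dtype=float)     # truthful: b_k = v_k
    order = np.argsort(bids)[::-1]
    winner, runner_up = order[0], order[1]
    price = bids[runner_up]
    surplus = bids[winner] - price                 # v_winner - second price
    return winner, price, surplus

# Example: K nodes bidding their mixed valuations v = alpha*v_h + beta*v_g
K = 8
v = rng.uniform(0, 1, K) + rng.rayleigh(1.0, K)
winner, price, surplus = spsb_round(v)
print(f"winner node {winner}: v={v[winner]:.3f}, pays {price:.3f}, "
      f"surplus {surplus:.3f}")
```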
§.§.§ First Price Sealed-Bid Auction (FPSB)
In an ascending sealed-bid first-price auction, IoT nodes submit sealed bids {b_1, …,b_K}, and the node submitting the highest bid is awarded the STFS resource slot and pays exactly its bid to the gateway. Here the payment q_k^[n]=b_k from winner node k to the gateway for resource n represents the amount of harvested energy required for transmission to meet the given channel-throughput threshold Γ^n,th_k for STFS slot n. It also reflects the fairness factor concerning the suitability of the assigned resource slot relative to the self-hypothesis, as in the SPSB auction. However, under these rules nodes are clearly disinclined to bid their true valuations, since doing so yields zero profit or surplus, as explained in (<ref>). A lower or zero surplus means little or no power saving (S_k(v_k^g) → 0), and this, combined with reduced consideration of neighboring devices' hypotheses (S_k(v_k^h) → 0), may harm overall network efficiency. Therefore, the BNE bidding strategy in the FPSB auction must lie below the actual valuation v_k of each node k to allow a positive profit. To find the BNE of the FPSB auction, assume that each node uses a bidding function 𝐛 that is strictly increasing, continuous, and differentiable in the valuation signals. If every node k ≠ k^'∈𝒦 uses the identical bidding strategy b_k=𝐛(v_k) with the aforementioned properties, and the valuations v_k follow a symmetric distribution, then the expected utility 𝔼[U_k(b_k, b_-k, v_k)] of node k for the n^th STFS slot, as a function of the bid b_k, is given by (<ref>) [The bidding vector of the IoT pool except the k^th node is b_-k = {b_1,…, b_k-1, b_k+1,…, b_K}, and the losing nodes in the auction receive nothing, so their utilities are null, S_(k^lose)=0.],
𝔼[U_k(b_k, b_-k, v_k)] = S_(k^win)·ℙ(k^win) + S_(k^lose)·ℙ(k^lose)
= (v_k-b_k)·ℙ(b_j=𝐛(v_j) ≤ b_k, ∀ j ≠ k)
= (v_k-b_k)· F_𝒱^K-1(𝐛^-1(b_k))
where ℙ[v_j≤𝐛^-1(b_k)]=F_𝒱^K-1[𝐛^-1(b_k)] when considering all j ∈{1 ,…, k-1, k+1,…, K} bidders except bidder k ∈𝒦 for the n^th STFS slot. [Here, the node valuation vectors are independent, and the prior cumulative distribution function F_𝒱 of the bidding strategy follows ℙ(∩_∀ j ∈{𝒦∖ k} b_j≤ b_k) = ∏_j ∈𝒦∖ kℙ(b_j≤ b_k) for the winning probability of node k with winning bid b_k.] Then, the Bayesian Nash equilibrium for L-dimensional valuation types among IoT nodes is found along lemma <ref> (proof: pp. 355-356, Theorem 9.53 in <cit.>) and (<ref>), by maximizing the expected utility 𝔼[U_k(b_k, b_-k, v_k)] in (<ref>) with respect to b_k for the n^th resource segment.
In a game with incomplete information in which the number of types of each player is finite, every Bayesian equilibrium is also a Nash equilibrium, and conversely, every Nash equilibrium is also a Bayesian equilibrium.
max_{b_k} (v_k-b_k)· F_𝒱^K-1(𝐛^-1(b_k))
After solving the first-order condition maximization problem, the general format for BNE of FPSB auction is derived as in (<ref>). The overall valuation metric v_k of node k can be found using the mapping function 𝐟_v as v_k = 𝐟_v(v^1, …, v^l,…, v^L) for the L-dimensional valuation types. Here v^l is the lower bound of the valuation metric of each IoT node corresponding to the l^th dimension and the detailed proof of (<ref>) can be found in Appendix <ref>, (<ref>), (<ref>). Ultimately, using the prerequisites of (<ref>), (<ref>) and appending (<ref>), the BNE of a FPSB auction for given two-dimensional valuation types, along with channel state information 𝒱^G and individual hypothesis 𝒱^H of each node k is feasible to obtain as in (<ref>). <ref> shows the optimal bids (b_k) or payment strategies (b_k→ q^[n]) with respect to individual valuation vectors (v_k∈𝒱) from each node k towards desired STFS segment n for I_auc=2000 SPSB and FPSB auction mechanisms. Although the optimal BNE points are scattered with higher variance in the SPSB auction, FPSB follows the derived analytical formula (<ref>). In addition, it's noticeable that the average of the distribution of 2^nd order statistic ℬ̅_(2)=N^-1∑_n=1^N(b_(2),n| v), (as explained in definition <ref>) of the BNE scatters in the SPSB auction aligns with the optimal strategies of FPSB auction.
Definition. The p^th∈{ 1, …, P}-order statistic, denoted X_(p), is the p^th largest realization among N draws (N>P) of a random variable X. [Here, we consider the largest order statistic, i.e., X_(p) = max{ X_1, …, X_p} for given random variables X_1, …, X_p; all p^th-order statistics X_(p) are random variables, e.g., X_(1) is the maximum of P draws, X_(2) the second highest, and X_(P) the minimum.]
Consequently, an analogous relation exists between the optimal bidding vectors in the average space of the 2^nd order statistic in SPSB and BNE of FPSB auctions.
§.§ Revenue Equivalence Theorem (RET)
The idea in Corollary <ref> is surprising at first sight because, apparently, the gateway would benefit more from resource allocation at the price of the highest bid submitted, rather than the second-highest bid. However, IoT devices in an FPSB auction will submit lower bids compared to an SPSB auction, as the winning node pays the exact bidding amount in an FPSB auction but a lesser price than the bid in an SPSB auction. These two opposing facts balance each other, thereby supporting the intuitive idea of corollary <ref>.
Corollary. In the discussion regarding <ref>, in equilibrium, the expected payment from the IoT space 𝒦 is the same whether the auction mechanism used is sealed-bid first price or second price.
Therefore, in a symmetric equilibrium, the expected payment q_k^[n] that a winning IoT node k with independent private value v ∈𝒱 makes for the n^th resource is F_Y(v) ×𝔼[Y|Y ≤ v], where Y = max{𝒱∖ v_k}, in both the FPSB and SPSB auctions, which can be explained as follows:
* At equilibrium of SPSB, the node k wins the auction with probability F_Y(v), and the expected amount that node k pays is 𝔼[Y|Y ≤ v]. Therefore, the expected payment is claimed as F_Y(v) ×𝔼[Y|Y ≤ v].
* At equilibrium of FPSB, the node k submits a bid of 𝔼[Y|Y ≤ v] then, wins the auction with probability F_Y(v) and pays the accurate bid amount. Hence, the expected payment is also similar to F_Y(v) ×𝔼[Y|Y ≤ v].
Consequently, the earning details or the expected revenue at the gateway in both SPSB and FPSB can be formulated as in (<ref>). Here, the expected payment F_Y(v) ×𝔼[Y|Y ≤ v], of a node with private value v follows the overall expected payment made by each node with i.i.d. distributed valuations along f_𝒱(v) then make the summation to calculate the gateway's expected revenue from total K nodes.
𝔼[R] = K ∫_𝒱 F_Y(v)𝔼[Y|Y ≤ v] f_𝒱(v)dv
Accordingly, this intriguing behavior can be formally articulated in lemma <ref>, known as the revenue equivalence theorem, (proof: P478-480, corollary 12,24 in <cit.>), for traditional FPSB and SPSB auctions.
(RET)
Let the bidding function 𝐛, be a symmetric and monotonically increasing equilibrium in a sealed-bid symmetric auction with independent private values satisfying the following properties: (a) the winner node of the auction is the buyer node with the highest private value, and (b) the expected payment made by a buyer with private value 0 is 0. Then the IoT gateway receives the same expected revenue for each auction mechanism.
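To illustrate the lemma numerically, the following sketch estimates the expected gateway revenue under both mechanisms for the textbook special case of i.i.d. private values uniform on [0,1], for which the symmetric FPSB equilibrium bid is (K-1)v/K; this special case and the parameter choices are illustrative and differ from the two-dimensional valuation model used elsewhere in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

K, trials = 6, 200_000
# Illustrative special case: i.i.d. private values uniform on [0, 1]
V = rng.uniform(0.0, 1.0, size=(trials, K))
V_sorted = np.sort(V, axis=1)

rev_spsb = V_sorted[:, -2].mean()                    # winner pays 2nd-highest value
rev_fpsb = ((K - 1) / K * V_sorted[:, -1]).mean()    # winner pays own equilibrium bid

print(f"E[revenue] SPSB = {rev_spsb:.4f}")
print(f"E[revenue] FPSB = {rev_fpsb:.4f}")
print(f"closed form (K-1)/(K+1) = {(K - 1) / (K + 1):.4f}")
```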
Nonetheless, while the BNE of SPSBs is trivial to find as explained in lemma <ref>, determining the BNE of FPSB might not be computationally efficient due to the complex formula derived in (<ref>) for the specified problem state. Furthermore, it becomes even more challenging with multi-dimensional random variables (>2) resulting in formulas with increased complexity and a higher degree of freedom about randomness based on the generalized BNE in FPSB (<ref>). To overcome the stated problem of efficiently finding the BNE of an FPSB auction, the incredibly powerful property of RET can be utilized. In other words, it's effective to find the BNE of bid prices in an FPSB auction numerically as functions of the distribution of private values instead of relying on complex analytical derivations. The insight of the numerical technique is straightforward, as RET facilitates the mapping medium along the expected space from trivial SPSB solutions to BNE of FPSBs.
In the simulation algorithm <ref>, each node conditions on its instantaneous valuation being the largest among all opponents, i.e., on winning the auction, and then computes the resulting expected payment as its optimal bid at that stage. This is the expected value of the truncated valuation distribution in which all opponents' valuations are below the valuation of the node in question. RET allows us to recognize this mean of the truncated distribution, namely the second-order statistic of the BNE in the SPSB auction, as the numerical optimum of the FPSB auction. Essentially, <ref> illustrates the close match between the analytical and numerical solution curves, with only a slight deviation (mean absolute error (MAE) ≈ -10 dB, indicated as the 4^th case f_v(𝒱^H, 𝒱^G)_a=0,b=1,σ^2=1 in <ref>), thereby providing graphical evidence of RET. However, in FPSB and SPSB auctions the allocation is performed for a specific STFS resource in a non-cooperative manner, rather than allocating multiple resources to multiple IoT devices simultaneously. As a result, there may be opportunities for selfish resource-grabbing by nodes with stronger valuations based on CSI and individual hypotheses. Although nodes follow the BNE bidding strategy in FPSB and SPSB with the initial prior beliefs, there is limited capacity to capture user interactions among nodes, and hence limited room to further enhance surplus maximization. Therefore, an efficient simultaneous mechanism is needed that facilitates optimal resource allocation under the diverse interactions among transmit signals.
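A minimal sketch of this RET-based numerical procedure is given below: the FPSB equilibrium bid at a valuation v is estimated as the Monte Carlo mean of the truncated distribution of the highest opponent valuation, drawn from the mixed uniform-plus-Rayleigh model; the parameter values and sample sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

K = 6                        # bidders competing for one STFS slot
a, b, sigma = 0.0, 1.0, 1.0
alpha, beta = 1.0, 1.0

def sample_valuations(size):
    """Mixed valuation v = alpha*U(a,b) + beta*Rayleigh(sigma)."""
    return alpha * rng.uniform(a, b, size) + beta * rng.rayleigh(sigma, size)

def fpsb_bid(v, n_draws=200_000):
    """Numerical FPSB equilibrium bid via RET:
    b*(v) = E[Y | Y <= v], with Y the highest of the K-1 opponent valuations,
    i.e. the mean of the truncated distribution of the 2nd-order statistic."""
    Y = sample_valuations((n_draws, K - 1)).max(axis=1)
    below = Y[Y <= v]
    return below.mean() if below.size else 0.0

for v in (1.5, 2.0, 3.0, 4.0):
    print(f"v = {v:.1f}  ->  numerical BNE bid b*(v) = {fpsb_bid(v):.3f}")
```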
§.§ Vickrey-Clarke-Groves (VCG) Auction
The VCG auction is also a sealed-bid auction: it builds on the SPSB (Vickrey) auction through the Groves mechanism with Clarke taxation, an elegant procedure that captures the interactive involvement among IoT uplinks and then performs the allocation simultaneously. Moreover, the proofs of the BNE strategies in the FPSB and SPSB auctions rely on node independence (non-cooperation) and symmetric valuation distributions. Notably, FPSB and SPSB auctions operate by reducing the surplus of the nodes while the IoT gateway collects higher revenue, a behavior called revenue maximization as explained in definition <ref> <cit.>. This phenomenon may bias the IoT gateway toward selfish gratification, enhancing its own channel throughput for the received signals, possibly beyond the required threshold bounds. Although data-rate maximization benefits the IoT gateway in terms of quality of service, it comes at a cost to the IoT nodes, which consume more transmit power in the uplink than necessary.
A quasilinear mechanism is revenue maximizing when, among the set of functions c_𝒦^𝒩 and 𝒬 that satisfy the other constraints, the mechanism selects the c_𝒦^𝒩 and 𝒬 that maximize 𝔼_c_𝒦^𝒩[ ∑_∀ k𝒬_k(b(v)) ], where b(v) denotes the node's equilibrium strategy profile.
Conversely, the IoT gateway might be inclined to lower the channel-throughput threshold and accept a smaller RSSI for the uplinks. Although IoT nodes can then reduce their transmit power vectors extensively under the revenue-minimization allocation mechanism described in definition <ref>, the IoT gateway must pay the counterpart, increasing its receiver antenna gain and expending more effort to sense the weaker RSSIs <cit.>.
A quasilinear mechanism is revenue minimizing when among the set of functions c_𝒦^𝒩 and 𝒬 that satisfy the other constraints, the mechanism selects the c_𝒦^𝒩 and 𝒬 that minimize max_c_𝒦^𝒩[ ∑_∀ k𝒬_k(b(v)) ], where b(v) denotes the node's equilibrium strategy profile.
Hence, the auctioneer gateway should be concerned with selecting a fair design that benefits all network entities. As a result, it is desirable to propose policies for optimal resource allocation that ensure the delicate max-min fairness condition, which declares the fairest utility to be the one that makes the least-happy IoT entity as happy as possible, as in definition <ref> <cit.>.
A quasilinear mechanism is max-min fair when among the set of functions c_𝒦^𝒩 and 𝒬 that satisfy the other constraints, the mechanism selects the c_𝒦^𝒩 and 𝒬 that maximize 𝔼_c_𝒦^𝒩[ min_k ∈𝒦 v_k( c_𝒦^𝒩 (b(v)) ) - 𝒬_k(b(v))] where b(v) denotes the nodes's equilibrium strategy profile.
The Vickrey-Clarke-Groves mechanism (<ref>) efficiently handles max-min fair resource allocation, with the intuition that whenever an inefficient choice is selected, there exists a set of side payments among the IoT bidders such that every bidder would prefer the efficient choice, including the side payments, to the inefficient one. In other words, if a specific node k is inefficiently assigned an arbitrary STFS segment n via the indicator [c_k^n]^', the VCG mechanism allows the network entities to exploit a collateral strategy space, beneficial to all IoT nodes, to compensate for the inefficient resource allocation [c_k^n]^'.
c_𝒦^𝒩(v̂) = arg max_b ∑_k v̂_k(b)
Q_k(v̂) = d_k(v̂_-k) - ∑_{j ∈𝒦∖ k} v̂_j ( c_𝒦^𝒩 (v̂) )
d_k(v̂_-k) = ∑_{j ∈𝒦∖ k} v̂_j ( c_𝒦^𝒩 (v̂_-k) )
Truth telling is a dominant strategy under any Groves mechanism.
The Groves mechanism (<ref>), (<ref>) optimizes the choice c_k^n of each IoT node under the assumption that all nodes disclose their true utility functions, as stated in lemma <ref> (proof: p. 289, Theorem 10.4.2 in <cit.>), similarly to the SPSB case in lemma <ref>. Under the Groves mechanism, the surplus earned by each node does not depend only on its own allocation, because of the imposed payments explained in (<ref>). Since IoT node k is paid the surplus of all other nodes j ∈𝒦∖ k under the chosen allocation, each IoT entity k is as motivated to maximize the surplus of its neighbors as its own. In addition, the Green-Laffont lemma <ref> (proof: pp. 290-292, Theorem 10.4.3 in <cit.>) states that, among dominant-strategy truthful mechanisms in the quasilinear setting, only Groves mechanisms can implement efficient resource allocation in dominant bidding strategies for agents with arbitrary quasilinear surplus utilities.
(Green-Laffont)
An efficient social choice function 𝒞: ℝ^c_K→ c ℝ^K can be implemented in dominant strategies for agents with unrestricted quasilinear utilities only if Q_k(v) = d_k(v_-k) - ∑_j ∈𝒦∖ k v_j(c_𝒦^𝒩(v) ).
Furthermore, Clarke's taxation d_k(v̂_-k) in (<ref>) describes the portion of node k's payment that does not depend on k's own declaration c_k^n for STFS slot n; it is collected as the sum of the opponents' declared valuations for the mechanism's selection. Notably, the first summation in the payment rule (<ref>) of the VCG auction is charged via Clarke's taxation, while the second sum is the total valuation of all IoT devices except node k for the mechanism's choice. As a result, each IoT device bears a payment, called the social cost, that aggregates the impact of its own participation on the opponent nodes in the resource allocation process. Consequently, if a particular node k does not change the resource allocation mechanism's choice, c_𝒦^𝒩(v) = c_𝒦^𝒩(v_-k), then the two summations in the VCG payment rule (<ref>) cancel out, the social cost of agent k's participation is null, and its payment is zero. In contrast, if node k is to be made to pay a nonzero amount, it should be pivotal, to the extent that the STFS segment allocation c_𝒦^𝒩(v) deviates from the allocation indicator without node k, i.e., c_𝒦^𝒩(v_-k). Hence, the VCG auction-based allocation is referred to as the pivot mechanism: only pivotal IoT devices are compelled to pay. In this setting, certain nodes might improve the efficient resource allocation of their opponents by engaging in allocation iterations; such nodes are charged a negative social cost, i.e., they are essentially paid by the mechanism, and vice versa. For instance, if the action of a specific node k contributes to the fairest resource allocation while maximizing the overall surplus of the IoT entities, the pricing of node k is reduced to a smaller positive or even negative amount reflecting the importance of its role for the opponents. [This property of the VCG mechanism resembles the interactions in mSAA shown in <ref> (a), where subsets of nodes vote for diverse STFS blocks (similar to the spectrum auctions of the Federal Communications Commission (FCC), US <cit.>); overall surplus maximization thus occurs while individual utilities are enhanced across IoT entities.] Algorithm <ref> shows how the VCG mechanism is applied to obtain the social cost and utilities of each IoT entity, treating the assignment procedure as a knapsack-like combinatorial assignment problem solved with the common Hungarian algorithm <cit.>.
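The following Python sketch mirrors this procedure for a small instance: the efficient assignment is computed with SciPy's Hungarian-algorithm solver, and the Clarke payment of each winner is the welfare the others would obtain without it minus the welfare the others obtain under the chosen allocation. Scalar per-node, per-slot declared valuations are an illustrative assumption.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(5)

K, N = 6, 4                                           # nodes and STFS slots (K >= N)
V = rng.uniform(0, 1, (K, N)) + rng.rayleigh(1.0, (K, 1))   # declared valuations v_k[n]

def efficient_allocation(values):
    """Value-maximizing one-to-one assignment (Hungarian algorithm)."""
    rows, cols = linear_sum_assignment(values, maximize=True)
    return dict(zip(rows, cols)), values[rows, cols].sum()

alloc, total = efficient_allocation(V)

payments = {}
for k in alloc:                                       # VCG payment for winning nodes
    # welfare of the others if node k were absent
    V_wo_k = np.delete(V, k, axis=0)
    _, welfare_without_k = efficient_allocation(V_wo_k)
    # welfare of the others under the chosen allocation (node k present)
    welfare_others = total - V[k, alloc[k]]
    payments[k] = welfare_without_k - welfare_others  # Clarke tax (social cost)

for k, n in sorted(alloc.items()):
    print(f"node {k} -> slot {n}: value {V[k, n]:.3f}, VCG payment {payments[k]:.3f}")
```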
Although the VCG mechanism provides efficient and simultaneous STFS resource allocation strategies, IoT nodes need to convey their entire valuation status to the gateway. Truth-telling, as described in lemma <ref>, may cause excess resource utilization and surplus deviation for nodes due to the necessity of bidding actual valuations themselves. Since the VCG auction-based assignment is a gateway-centralized approach, it creates more controllability for the IoT gateway while limiting the nodes' ability to adjust threshold bounds in critical stages. The IoT gateway, acting as the mechanism designer, can propose constraint margins in a preferred layer, ranging from revenue maximization to minimization (max-min fairness). However, this creates a monopolistic system governed solely by the IoT gateway, leaving nodes with fewer controlling rights. This phenomenon might not be applicable to novel resource allocation mechanisms, especially in IoT networks that operate independent cloud, fog, and cloud-fog bridging networks. Since most edge (IoT) devices are vendor-specific and constraint-independent, it is crucial to propose distributed resource allocation policies across all network entities rather than introducing special biases in the higher layers of the network. Additionally, it is essential to identify the specific performance of each agent role within the network to distribute computational complexity according to the performance capacity of the IoT entities. Furthermore, the entire VCG system is dependent on the standard metrics in the auction game model (<ref>) and is fragile in non-linear bidding interactions, which are referred to as non-neutral node behaviors. Hence, it is desirable to propose policies for optimal resource allocation that ensure fairness for both IoT nodes and gateways, with the potential to regulate the resource allocation fairness factor, balancing the bias between risk-motivated nodes and the gateway.
§ PROPOSED APPROACH
Like the VCG auction, which derives from the SPSB auction model, the Simultaneous Ascending Auction (SAA) <cit.> is a natural generalization of the English or FPSB auction, incorporating cooperative behavior between bidders and the auctioneer in the allocation of multiple goods. Here, we discuss the estimation of the valuation 𝒱 for each STFS resource segment through the optimal dispersion metric A and conduct a potential risk analysis of the modified-SAA mechanism.
§.§ Modified Simultaneous Ascending Auction (mSAA)
According to figure <ref>, the IoT gateway first serves the initial reservation price vector r_k^N× 1 for the available STFS resources to each IoT node k ∈𝒦. The initial valuation space (𝒱̂^H, 𝒱̂^G) is then estimated from the individual hypothesis details H_i=0, …, M-1 and the optimized dispersion matrix A_k, respectively. The nodes then provide binary responses for each offered STFS segment, indicating their willingness to accept the quoted prices in view of individual surplus maximization, as described in algorithm <ref>. The gateway can now assign the available resource segments 𝒩 to the winner node space 𝒦^*, (|𝒦^*|=|𝒩|), targeting revenue maximization ℛ_𝒩^max; the rest of the node space forms the loser set 𝒦^', (|𝒦^*|+|𝒦^'|=K).
However, in each iteration i there is a special loser subset 𝒦̅^'_i, (|𝒦̅^'| ≤ |𝒦^'|)_i, that remains motivated for future bidding, driven by the updated price vector q_i^+ in the iterations ahead, i^+. Algorithm <ref>, line <ref>, describes the following: in a given iteration i, an active loser node from the previous iteration, k̅^'_i-1∈𝒦̅^'_i-1, that has the potential to grab the STFS resource segment n ∈𝒩, jumps into the winner pool k^*_i∈𝒦^*_i. Meanwhile, the winner node k^*_i-1∈𝒦^*_i-1 of iteration (i-1) that no longer has a bidding motivation for the same STFS segment n ∈𝒩 in iteration i is pushed into the loser pool k^'_i∈𝒦^'_i, but may still be able to pull another STFS slot, or the same slot under a different user-allocation combination, in future iterations i^+. IoT devices are therefore swapped between the winner and loser pools toward the global equilibrium stage along a simultaneously converging pattern. The metric of ability for further bidding is the non-negativity of the individual instantaneous surplus 𝒮_k̅^'_i within the active loser set 𝒮_𝒦̅^'_i; IoT devices that have violated the given marginal conditions in the i^th iteration are discarded permanently while the execution continues (algorithm <ref>: lines <ref> and <ref>). If there are no active losers with the potential for further bidding, or if the iterations reach the upper bound I^th of the processing period, the auction ends. Finally, the dispersion vectors are adjusted to the assigned STFS resource details along an orthogonal frame structure, which minimizes uplink interference and ensures RSSI satisfaction, enhancing the channel throughput of the received signals at the gateway.
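A compact ascending-price sketch of this loop is shown below, assuming scalar valuations per node-slot pair, a fixed price increment ε, and random node ordering within each round; these choices are illustrative and stand in for the surplus and feedback rules of algorithm <ref>.

```python
import numpy as np

rng = np.random.default_rng(6)

K, N = 8, 4                     # IoT nodes and STFS slots
eps = 0.05                      # price increment per losing round
max_iter = 200

V = rng.uniform(0, 1, (K, N)) + rng.rayleigh(1.0, (K, 1))   # valuations v_k[n]
prices = np.zeros(N)                                        # reservation prices r^[n]
assignment = {n: None for n in range(N)}                    # slot -> tentative winner

for it in range(max_iter):
    new_bid = False
    for k in rng.permutation(K):
        if k in assignment.values():
            continue                                        # already a tentative winner
        surplus = V[k] - prices                             # per-slot surplus if winning
        n_best = int(np.argmax(surplus))
        if surplus[n_best] <= 0:
            continue                                        # node drops out this round
        prices[n_best] += eps                               # active loser outbids holder
        assignment[n_best] = k                              # previous holder returns to losers
        new_bid = True
    if not new_bid:                                         # no active losers remain
        break

for n, k in assignment.items():
    print(f"slot {n}: node {k}, closing price {prices[n]:.2f}")
```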
<ref>(a) illustrates the internal behavior of the various IoT bidders inside the mSAA mechanism, each aiming for a suitable STFS resource slot over a random iteration block until the globally optimal allocation is reached. The diagram shows the instantaneous hopping patterns of the IoT nodes while they demand the available resources. The 1^st node grabs the 3^rd STFS slot within 3 iteration cycles; then the 2^nd node also starts voting for the same 3^rd slot as an alternative. Meanwhile, node 2 checks its suitability for the 2^nd STFS slot, creating an auction game with the neighboring node 5. Similarly, the 3^rd and 4^th devices vote for the 1^st STFS segment with competitive bidding strategies, and node 3 ultimately emerges as the winner. Hence, several node clusters, as shown in the state graph of <ref>(b), aim for selected STFS resource spaces and follow oscillatory convergence behaviors through the binary voting feedback, with the weighted probability p_k^*_i,k̅^'_i, toward the IoT gateway. This is a pivotal phenomenon: it captures user interactions among potential bidding nodes and extracts unique features from similar correlations. When a significant portion of the IoT pool bids highly competitively for a single or a limited number of STFS segments, distributed sub-auctioning disrupts monopoly or oligopoly strategies among nodes and between nodes and the gateway. Additionally, this cooperative and decentralized architecture enables nodes to identify compatible STFS objects through their own inference tasks, using simple decision-making with fewer degrees of hypothesis in heterogeneous IoT networks. The mSAA framework therefore yields more productive and equitable surplus gains for almost all bidding nodes, saving transmit power while handling priorities efficiently. Moreover, the periodic optimization journey enables nodes to make critical decisions adaptively, fluctuating among dynamic M-hypotheses while continuously monitoring or sensing. It could be highly advantageous to predict future margins rooted in recent memory blueprints, thereby reducing the high dimensionality of the randomness and minimizing the number of execution iterations.
§.§ Dispersion Matrix Optimization
After the auction process for optimal resource allocation concludes, the IoT nodes receive the details of the allocated STFS resource blocks c_𝒦^𝒩, which are distributed in an orthogonal frame across space, time, and frequency to enhance the channel throughput (<ref>). Although the auction mechanism establishes an independent architecture for each transmission from the nodes to the gateway, slight deviations from the pre-standard reference characteristics, in the form of time delays and frequency offsets, may occur due to channel and hardware impairments. Therefore, each IoT device k ∈𝒦 is encouraged to regulate the gain and phase of its uplink to satisfy the given RSSI upper bounds while minimizing interference with the neighboring firings at the gateway. To this end, the dispersion-metric optimization problem for element a_k∈𝐀 of IoT node k ∈𝒦, in the presence of the interfering opponent nodes j ∈𝒦∖ k, is defined as follows.
min_{a_k∈𝐀}  ∑_{k=1}^{K}‖ z_k - z_k^*‖^2
s.t.  z_k = a_k g_k x_k + w_k,  z_k^* = c_k^[n] s_k,  (<ref>)
      -π≤∠ a_k≤π
      (<ref>), (<ref>), (<ref>), (<ref>),  ∀ k∈𝒦
In (<ref>), each IoT node tries to align its own uplink vector z_k with the allocated resource vector z_k^*, as described in (<ref>), and the objective function minimizes the sum of the Euclidean deviations over all IoT devices. Here, the standard allocated STFS elements are independently and identically (i.i.d.) distributed over an orthogonal structure, and the auctioneer gateway provides the guidance for the allocated resource vector z_k^* while serving the indicator space c_𝒦^𝒩. Time or frequency deflections of the k^th uplink, caused by the complex reciprocal channel g_k and hardware noise w_k, are adjusted back toward the standard arrangement, with fewer collisions, by regulating the phase of the dispersion element (<ref>). Likewise, the attenuation of the k^th uplink is counteracted through the gain of the dispersion element, whose upper bound is given by (<ref>), (<ref>) in (<ref>).
The objective surface in the polar domain, <ref>, illustrates the non-convexity of the defined optimization problem (<ref>), especially in the region of small gains (r_k≪) of the transmit signal arising from weak channel gains. However, expanding the objective function (<ref>) as ∑_k ∈𝒦(r_k^2 + r_k^*^2 - 2r_kr_k^*cos(θ_k-θ_k^*)) via Euler's formula z_k=r_ke^jθ_k, one observes that the phase dimension enters sinusoidally and yields a periodic variation. Therefore, the problem is convexified by splitting it into two independent sub-optimizations: first over the phase component for an instantaneous fixed gain (r_k > 0), and then over the gain given the optimized phase angle θ_k^*. Ultimately, each IoT node k can start the optimization from any arbitrary initial point, determined by the instantaneous channel and hardware deficiencies, and blindly converge back toward the global optimum corresponding to the auctioned STFS resource block c_k^[n]. Typical IoT devices with cheap computational capabilities can follow simple gradient-descent algorithms for this purpose. The effort of traveling from the initialization to the global optimum equals the minimum excess detail required to satisfy the gateway's constraints for each uplink k, and the result is the optimized dispersion element a_k^*.
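A minimal sketch of this two-stage (phase-first, then gain) gradient descent for a single node and a scalar channel is given below; the learning rate, iteration counts, unit reference element, and power bound are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

g = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)   # node channel (Rayleigh fading)
x = 1.0                                                # BPSK symbol on the won slot
z_star = 1.0                                           # target STFS reference element
p_max = 4.0                                            # power bound |a_k|^2 <= p_max

def objective(r, theta):
    a = r * np.exp(1j * theta)
    return np.abs(a * g * x - z_star) ** 2

# Stage 1: gradient descent on the phase for a fixed initial gain
r, theta, lr = 1.0, rng.uniform(-np.pi, np.pi), 0.1
for _ in range(200):
    phi = theta + np.angle(g * x) - np.angle(z_star)
    grad_theta = 2 * r * np.abs(g * x) * np.abs(z_star) * np.sin(phi)
    theta -= lr * grad_theta
theta = np.angle(np.exp(1j * theta))                   # wrap back to (-pi, pi]

# Stage 2: gradient descent on the gain for the optimized phase
for _ in range(200):
    phi = theta + np.angle(g * x) - np.angle(z_star)
    grad_r = 2 * r * np.abs(g * x) ** 2 - 2 * np.abs(g * x) * np.abs(z_star) * np.cos(phi)
    r = np.clip(r - lr * grad_r, 0.0, np.sqrt(p_max))

a_opt = r * np.exp(1j * theta)
print(f"optimized a_k = {a_opt.real:+.3f}{a_opt.imag:+.3f}j, "
      f"residual = {objective(r, theta):.2e}")
```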
<ref> shows the amplitude and phase details of the individual uncompensated uplinks y_k, and of their summation at the gateway, after firing from the IoT nodes and propagating through the noisy medium. The summation of the received signals reflects the overall interference strength and the time or frequency shifts of the transmitted batch at the gateway. This average also indicates how difficult it is to satisfy the receiver sensitivity and the threshold data rate for each uplink acting as the desired signal in its auction-allocated STFS segment. After the dispersion-metric optimization a_k^*, however, the pre-equalized firings are spread back over the standard i.i.d. STFS framework instructed by the auction. As a result, the redundant complex interference component is minimized in both gain and phase, while the transmit-signal characteristics are kept in the vicinity of the given channel-throughput threshold margin for reliable communication.
§ NUMERICAL RESULTS AND DISCUSSION
In this section, we evaluate the performance of the optimal STFS resource assignment performed by the gateway for the IoT nodes, as a function of the scalability of the IoT pool and the available resource capacity, using the presented auction mechanisms. We also report auction results that include bidders motivated to take risks by underbidding, which sharpens the dynamic interactions in the network.
<ref> (a) depicts the degree of competitiveness among IoT nodes as the density of the network increases, which results in a decrease of the average productive surplus gains.
There is a notable surplus gap between the conventional FPSB and SPSB auctions and the VCG and mSAA mechanisms, especially for IoT capacities K^' < K/2, i.e., in the lower region of the node-scalability range. This reflects the selfish behavior of IoT bidders in classical auctions, which execute independently and are weak at capturing user interactions in the bidding process. In contrast, the intelligent Bayesian bids achieve greater surplus gains in the cooperative VCG and mSAA mechanisms. Therefore, VCG and mSAA are robust enough to handle a significant number of IoT devices with a lower outage probability in dense networks than the classical approaches.
Additionally, the average surplus curves of the FPSB and SPSB auctions coincide, reflecting the RET property explained in Lemma <ref>. Hence, each IoT node can expect equal average benefits when executing the FPSB and SPSB mechanisms. Similarly, the alternative VCG and mSAA approaches, which originate from their parent FPSB and SPSB auctions respectively, also coincide by virtue of the RET. Thus, all IoT entities have an equal chance of preserving power in the uplink and maintaining a better fairness factor through the VCG and mSAA mechanisms, with superior results compared to the traditional approaches.
The evaluation framework is different, however, for auctions incorporating a special bidder space 𝒦_ζ that encourages risk taking in the bidding process, as outlined in <ref> (b). Here, several IoT devices underbid, holding valuations 𝒱_ζ←{ v ∈𝒱| v - max(v).ζ%} lower than their actual ones when voting for STFS segments. The aim is to reduce the self-payments for the deserved STFS objects and thereby enhance the surplus achievements. By adopting a bidding strategy slightly below the actual valuation, the IoT nodes are able to slow down the rate of surplus reduction, as shown in <ref> (b). The classical SPSB and FPSB auctions manage, respectively, to grow the surplus vector and to reduce the rate of surplus decrease, even though the competition for resource allocation among neighbors surges as the number of nodes in the network increases. Similarly, winning IoT bidders receive a larger surplus for the assigned resources in the mSAA auction, with a weaker diminishing trend that approaches the SPSB performance in the most competitive region. Although the nodes cut costs and alleviate the surplus deviation for larger numbers of devices, the VCG mechanism performs poorly, with a rapid decline of the surplus vector in the risk mode that is worse than in the risk-free scenario. The VCG architecture is therefore not robust to parameter conditions that deviate from the standard constraints and is easily susceptible to collisions. Underbidding contradicts the rules of the VCG auction, in particular the fundamental truth-telling Lemma <ref> of its parent SPSB mechanism. Moreover, risky bidding makes it no longer frugal to compute the social cost (<ref>) efficiently while capturing the correlated features among users, and the pivotal conditions of the Clarke taxation (<ref>) are triggered, often penalizing the entire IoT pool.
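The qualitative effect of underbidding on the bidder surplus can be reproduced with a toy Monte Carlo experiment. The sketch below assumes i.i.d. Uniform(0,1) valuations, the textbook FPSB equilibrium bid (K-1)/K·v, truthful SPSB bidding, and a crude rendering of the ζ% shading rule; it is a stand-in for the paper's evaluation framework rather than a reproduction of it. At ζ=0% the two averages roughly coincide, echoing the RET remark above.

import numpy as np

rng = np.random.default_rng(0)

def avg_winner_surplus(K, auction="spsb", zeta=0.0, rounds=20000):
    # toy single-item sealed-bid auction with i.i.d. Uniform(0,1) valuations
    v = rng.uniform(0.0, 1.0, size=(rounds, K))
    if auction == "fpsb":
        b = (K - 1) / K * v          # risk-neutral BNE bid for uniform values
    else:
        b = v.copy()                 # SPSB: truthful bidding is dominant
    b -= zeta / 100.0 * v.max(axis=1, keepdims=True)   # crude zeta% underbidding
    win = b.argmax(axis=1)
    rows = np.arange(rounds)
    pay = b[rows, win] if auction == "fpsb" else np.sort(b, axis=1)[:, -2]
    return float(np.mean(v[rows, win] - pay))

for K in (4, 8, 16, 32):
    print(K, avg_winner_surplus(K, "fpsb"), avg_winner_surplus(K, "spsb"),
          avg_winner_surplus(K, "spsb", zeta=4.0))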
<ref> (a) exhibits the revenue variation at the auctioneer gateway for N=10 STFS slots, together with the bidder surplus vector obtained for different numbers of IoT devices in the network. The two utility parameters, revenue and surplus, corresponding to the IoT gateway and the nodes respectively, are inversely proportional. In other words, when the competition among nodes is higher, the devices must consume more power to transmit their signals, with degraded fairness attributes, and the signals received at the IoT gateway carry enough power to satisfy the given thresholds through fewer collisions among neighboring firings, and vice versa. The blueprint of optimal resource allocation with the aid of STFS objects is therefore arranged as the equilibrium state of the mechanism, which is impartial between the node members and the IoT gateway. The FPSB and SPSB auctions yield similarly higher surplus gains for the IoT nodes by slightly decreasing the revenue at the gateway for different numbers of IoT entities. This again reflects the RET among classical auctions for single STFS slots; however, under these two mechanisms the nodes cannot achieve surplus values above half of the normalized surplus maximum.
The alternative VCG and mSAA auctions are able to shift the revenue-versus-surplus distribution diagonally and thereby improve the resource allocation performance, enhancing node surpluses and gateway revenue simultaneously. Furthermore, the evaluation points are scattered over a broader directional region, offering the potential for larger surplus values for the IoT nodes. This is a considerable advantage for IoT devices with a doubtful channel budget and low priority to occupy a position in the resource spectrum at critical stages, supported by additional effort from the connected gateway. Hence, nodes with weaker characteristics still have a chance to obtain an STFS segment from the gateway at lower revenue, by regulating the equilibrium states downward along the curve. This also implies adaptive resource assignment based on the diverse and independent processing capacities of the devices in the fog layer, assisted by larger contributions from the upper cloud layer at crucial stages. Nevertheless, IoT entities may attempt to take risks by underbidding in the auction game model in order to guarantee a desired STFS resource block despite weaker firing strength and lower rankings. In the risk stage of <ref> (b), the performance metrics are shifted toward the right, advancing gateway revenues and node surplus values relative to the risk-free stage. Although the FPSB auction improves here compared with the assured stage, the nodes are unable to increase their own surplus vector and attain only a fixed amount of surplus over the entire revenue space. On the other hand, the SPSB mechanism shows a remarkable improvement, with revenue and surplus boosted simultaneously from a dense region that previously tightened the playing flexibility. In contrast, the VCG auction produces inferior results for the IoT devices, with a wider scatter toward the lower surplus space and a higher revenue vector, due to the additional fees for the social cost in the underbidding (risky) scenario. Ultimately, the proposed mSAA protocol offers far superior outcomes, starting from the high-performance region of the SPSB framework and spreading over the surplus-maximization area, which allows the nodes to adapt to critical dynamics in the massive IoT network.
In particular, the strategic interactions for resource allocation, which must respect each device's limited energy budget and self-priority margin, depend on the number of STFS blocks available at the gateway. <ref> shows the normalized average power consumption of each winning node's firings as the number of available STFS spectrum blocks at the gateway varies. The competitive spirit among node players decreases as the capacity of the resource space increases. While the non-cooperative, independent-objective FPSB and SPSB auctions (ζ=0%) claim a fixed, higher power budget for transmission, the VCG and mSAA mechanisms cooperate remarkably well to reduce the power expense, especially in the less competitive region (K/2 < N ≤ K). In contrast, IoT players may be encouraged to grab a spot in the spectrum of a richly provisioned IoT gateway by blindly underbidding their actual valuations (ζ=4%). This non-linear behavior breaks the RET property between the classical FPSB and SPSB auctions, and between their VCG and mSAA extensions as well. However, the resource allocation indicator c_𝒦^𝒩, which is regulated along the tradeoff space between the IoT pool and the gateway, is biased toward the nodes' beneficial region in each auction game mechanism, lowering the working energy for transmission. In fact, the nodes are able to conserve a notable amount of power expenditure in the soft resource market of the gateway by favoring winner nodes with strengthened channel gains. Of course, risky bidding pushes the system beyond the sustainable frame of the conventional water-filling criterion relating channel gain strength and node-link power consumption; this is most relevant in highly dense stages of the network or when little resource capacity is available (N < K). Conversely, the power utilization gradually approaches the effective region of the risk-free mode toward the end of the curves for all auction types (N ≲ K).
<ref> depicts the relation between the normalized average achievable resource allocation fairness factor of the IoT devices and the capacity of available STFS resource blocks. In other words, IoT nodes with higher priority states in a given application should be able to access the STFS resource spectrum more frequently, and a simultaneously strong uplink channel gain is an additional advantage. As in the previous discussion, the classical FPSB and SPSB auctions are unable to improve the fairness of resource allocation with respect to the individual priorities of the IoT players in the standard bidding mode. The VCG and mSAA mechanisms, on the other hand, show a prominent boost of the fairness factor for spectrum utilization under diverse node hypotheses as more heterogeneous resource slots become available. When, however, the IoT devices lowball their reported information to the gateway (ζ=4%) in order to save transmit power and jump to a preferred position in the spectrum, the average fairness of each auction mechanism appears elevated, mainly in the highly competitive region with fewer spectrum facilities. There is then a remarkable equity gain among IoT entities for the traditional FPSB and SPSB auctions, which however diminishes toward the standard risk-free results in the concluding portion of the curves. Even though the VCG mechanism is slightly better than mSAA at minimizing transmit power consumption under risky bidding offers to the gateway in <ref>, this illustration shows that its fairness performance deteriorates drastically. The root cause of the massive performance degradation in the VCG approach is the pricing penalties associated with the social cost of low-reputation nodes, which can interfere with the convergence pattern of the global equilibrium. Ultimately, the proposed mSAA protocol is robust enough to accelerate fair resource allocation by delegating diverse STFS blocks more often to nodes in high-priority hypotheses. Hence, the dense fog entities can anticipate the development of the CFA mechanism for resource orchestration via the proposed mSAA auction game, even while surviving asymmetric CSI and diverse node hypotheses.
§ CONCLUSION
The proposed auction-based mechanism (mSAA) facilitates a federated STFS resource allocation optimization task with incomplete information about the channel state and the individual hypotheses of the IoT devices. The scalable approach is applicable to diverse IoT tiers in CFA, capturing specific entity interactions in order to distribute the computational complexity in an environment with competitive demands. Additionally, the optimal dispersion vectors pre-compensate the environmental uncertainty of the IoT firings to reduce interference at the gateway. The mSAA architecture also has the potential to sharpen the prediction capabilities of the bidding players by following graph state diagrams of their internal behavior, further enhancing the overall performance. Furthermore, the optimal bidding strategy vector for the mSAA mechanism can be found numerically through the VCG price rules and the RET in standard risk-free game models.
§.§ Derivation of Equation (<ref>)
We can use the substitution t=v_g^2/2σ^2 to simplify the cumulative distribution function of the 2-dimensional valuation metric F_𝒱(v) in (<ref>) as,
F_𝒱(v) = 1/(b-a).∫_a^b( 1-e^-(v-α.v_h/β)^2/2σ^2) dv_h
Apply partial integration,
= 1/(b-a).∫_a^bdv_h - 1/(b-a).∫_a^be^-(v-α.v_h/β)^2/2σ^2 dv_h
F_𝒱(v) = 1 + βσ/(b-a)α√(π/2)[ erf (v-α b/√(2)βσ)-erf (v-α a/√(2)βσ) ]
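The closed form above can be checked numerically; in the snippet below the parameter values a, b, α, β, σ are arbitrary placeholders, and scipy supplies both the quadrature and the error function.

import numpy as np
from scipy.special import erf
from scipy.integrate import quad

a, b, alpha, beta, sigma = 0.5, 2.0, 1.2, 0.8, 0.4   # illustrative values

def F_numeric(v):
    # direct quadrature of the definition of F_V(v)
    g = lambda vh: 1.0 - np.exp(-((v - alpha * vh) / beta) ** 2 / (2 * sigma ** 2))
    return quad(g, a, b)[0] / (b - a)

def F_closed(v):
    # erf-based closed form derived above
    c = beta * sigma / ((b - a) * alpha) * np.sqrt(np.pi / 2)
    return 1.0 + c * (erf((v - alpha * b) / (np.sqrt(2) * beta * sigma))
                      - erf((v - alpha * a) / (np.sqrt(2) * beta * sigma)))

for v in (0.5, 1.0, 1.5, 2.5):
    print(v, F_numeric(v), F_closed(v))   # the two columns should agree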
§.§ Proof of BNE in FPSB Auction
Let's take the first-order derivatives of equation (<ref>) to solve the maximization problem (<ref>).
d(v_k-b_k). F_𝒱^K-1[𝐛^-1(b_k)]/db_k = (v_k-b_k).(K-1)
·F_𝒱^K-2[𝐛^-1(b_k)] · f_𝒱[𝐛^-1(b_k)]. d[𝐛^-1(b_k)]/db_k
-F_𝒱^K-1[𝐛^-1(b_k)]
let x=𝐛^-1(b_k) then, 𝐛(x) = b_k⇒(d𝐛(x)/dx) (dx/db_k) = 1
d[𝐛^-1(b_k)]/db_k = 1/𝐛^'[𝐛^-1(b_k)]; 𝐛^'(·): derivative function w.r.t b_k
For stationary points, d(v_k-b_k). F_𝒱^K-1[𝐛^-1(b_k)]/db_k = 0
And, bidder k^th parameters are, b_k=𝐛(v), v_k=v then,
F_𝒱^K-1[𝐛^-1(𝐛(v))]=[v-𝐛(v)].(K-1).F_𝒱^K-2[𝐛^-1(𝐛(v))]
· f_𝒱[𝐛^-1(𝐛(v))]. 1/𝐛^'[𝐛^-1(𝐛(v))]
F_𝒱(v).𝐛^'(v)+𝐛(v).(K-1).f_𝒱(v) = v(K-1)f_𝒱(v)
Now we can solve the first-order differential equation to find the optimal bid 𝐛(v) for any bidder node in the network. First, (<ref>) is multiplied on both sides by F_𝒱^K-2(v).
F_𝒱^K-1(v)𝐛^'(v)+𝐛(v).(K-1).f_𝒱(v).F_𝒱^K-2(v)
= v(K-1).f_𝒱(v).F_𝒱^K-2(v)
d[F_𝒱^K-1(t)𝐛(t)]/dt = t(K-1).f_𝒱(t).F_𝒱^K-2(t); v→ t
F_𝒱^K-1(v)𝐛(v) = ∫_v̲^v t(K-1)f_𝒱(t).F_𝒱^K-2(t) dt
Apply partial integration, ∫_v̲^v u_1 du_2= u_1u_2|_v̲^v- ∫_v̲^v u_2 du_1
u_1 = t and du_2 = (K-1)f_𝒱(t).F_𝒱^K-2(t) dt
then, du_1 = dt and u_2 = F_𝒱^K-1(t)
F_𝒱^K-1(v)𝐛(v) = t.F_𝒱^K-1(t)|_v̲^v - ∫_v̲^vF_𝒱^K-1(t)dt
The CDF value for the lower bound v̲ is 0, i.e. F_𝒱(v̲)=0
𝐛(v) = v-∫_v̲^vF_𝒱^K-1(t)dt/F_𝒱^K-1(v); v > v̲
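The final expression can be evaluated directly by numerical quadrature. The helper below implements 𝐛(v) for an arbitrary valuation CDF F and lower support bound, and is sanity-checked against the textbook uniform-valuation answer 𝐛(v) = v - (v - v̲)/K; the uniform distribution is chosen only for illustration, whereas the paper's F_𝒱 is the erf-based CDF of the previous subsection.

import numpy as np
from scipy.integrate import quad

def optimal_bid(v, F, v_low, K):
    # b(v) = v - ( int_{v_low}^{v} F(t)^(K-1) dt ) / F(v)^(K-1)
    num = quad(lambda t: F(t) ** (K - 1), v_low, v)[0]
    return v - num / F(v) ** (K - 1)

v_low, v_high, K = 0.0, 1.0, 5
F_unif = lambda t: (t - v_low) / (v_high - v_low)
for v in (0.2, 0.5, 0.9):
    print(v, optimal_bid(v, F_unif, v_low, K), v - (v - v_low) / K)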
| Cyber-physical systems (CPS) present a high degree of combination and coordination between physical and computational (cyber) elements <cit.>. Hence, IoT network-based performance analysis for co-designing cyber-physical systems might be a crucial landmark in next-generation communication systems. In particular, the universal metric of the Internet of Everything (IoX) opens a wide variety of key performance indicators (KPI) for cell-free massive multiple-input-multiple-output (MIMO) protocols instead of limiting them to particular cellular clusters. In fact, IoX entities are intelligently interconnected with people, processes, data, and things where everything comes online <cit.> and creates a multi-tier cloud-fog architecture.
Cloud-Fog-Automation (CFA): The cloud layer in CFA systems primarily consists of coordinators making critical decisions, while the fog layers handle a growing number of IoT devices, particularly in industrial settings. Despite recent shifts from layered models (e.g., Pyramid automation) to centralized cloud automation approaches <cit.>, a research gap remains in hybrid CFA architectures. CFA optimizes resource allocation in IoT networks through temporary, localized clusters, aiming for deterministic connectivity, intelligence, and computing. However, IoT device constraints (size, power, communication) challenge these goals, especially in uplink communication. Unlike cloud automation, CFA's fog layers operate independently, ensuring secure boundaries while promoting intelligent, heterogeneous architectures <cit.>. A survey on the IoT-Fog-Cloud ecosystem <cit.> highlights key CFA challenges, such as IoT big data, real-time processing, and heterogeneity.
Authors in <cit.> introduced a time-critical balancing control mechanism for wireless CFA systems, improving reliability in industrial wireless networks. To reduce makespans and costs in fog-cloud environments, <cit.> addressed a multi-objective task-scheduling optimization problem with local stationary stages across cloud-fog layers. <cit.> proposed Platform-as-a-Service (PaaS) functionalities for hybrid cloud/fog environments, facilitating application development, deployment, and management. Resource allocation strategies for enhancing task scheduling and QoS in IoX spaces were examined using optimization techniques in <cit.> and <cit.>. Additionally, resource management applications addressed simultaneous resource sharing and URLLC requirements in IoV architectures <cit.>, <cit.>. Secure communication platforms for mitigating risks and vulnerabilities in cloud-fog-edge layers were emphasized in <cit.> and <cit.>.
Game (Bayesian) Theory for CFA:
While existing literature proposes novel approaches for CFA platforms, it lacks discussion on transformative technologies that could harmonize diverse layers for compatibility and synergy. Each CFA layer has specific competitive demands, but individual self-interest may harm overall CPS performance. Consequently, CFA's core task involves addressing sub-optimization problems driven by user interactions within competitive CPS environments. Modern CFA is framed as a cooperative game model, posing challenges in discovering unique states, particularly when optimizing multi-agent systems under complex network specifications <cit.>.
In contrast, individual node awareness or visibility of its surroundings may be constrained by common data distributions rather than a perfect information game model. Bayesian strategies, however, help govern sub-optimization problems by considering expected utility within a cooperative game model. Therefore, applying Bayesian game theory to CFA protocols is desirable. Existing work presents an incomplete game model for efficient power consumption with incomplete channel state information (CSI) in IoT uplinks <cit.>. Yet, the vast number of random network entities and complex action vectors create computational overhead for current solvers <cit.>. Hence, instead of centralized computation, it is more effective to employ mechanism design based on stochastic structures, avoiding game-theoretic limitations <cit.>. Revelation principles are also key for resource allocation in CFA, offering low computational complexity and distributed processing capabilities.
Auction Theory for CFA: Auction theory, as explored through Bayesian game theory for imperfect multi-agent optimization, serves as a foundation for mechanism design in allocation protocols. Auctions play a key role in price formation models, particularly in procurement, patent licensing, and public finance <cit.>, <cit.>. Advanced auction techniques are used in automated mechanism design, such as advertisement auctions in e-commerce platforms <cit.>, and in real-time bidding (RTB) for e-advertising <cit.>. Auction-based frameworks are also suitable for CFA, as seen in auto-bidding mechanisms <cit.>. Additionally, auction-driven methods address efficient resource allocation in cooperative wireless communications <cit.>, vehicular cloud-assisted networks <cit.>, and blockchain-driven fog environments <cit.>.
Authors in <cit.> provides a comprehensive survey on auction-based mechanisms in cloud/edge computing, focusing on efficient resource management and pricing challenges. <cit.> highlights auctions as effective tools for incentivizing task offloading and resource allocation in fog-cloud systems, prompting reconsideration of auction-based strategies for CFA. Auctions enable dynamic resource allocation by leveraging user synergy across cloud, fog, and edge layers, enhancing CFA performance. Each auction participant contributes to decision-making, distributing computational complexity via node collaboration. To optimize resource allocation with incomplete information in dense IoT networks, sealed-bid auction mechanisms are proposed, addressing IoT device limitations and promoting adaptive, sustainable models for CFA co-design.
Contribution: The primary contributions of our work are;
* We focus on a traffic-heavy cloud-fog network layer to propose CFA protocols that optimize resource allocation in dense IoT networks, addressing challenges like limited communication infrastructure.
* We leverage auction theory-based scalable mechanism design to develop policies that enhance user interactions among IoT gateways and nodes, improving the overall interoperability of the co-design.
* To manage the varying priorities and incomplete information among IoT devices, we employ sealed-bid auctions for resource management, addressing challenges such as inconsistent channel state information and unawareness of neighboring nodes' statuses.
* Recognizing the need for heterogeneous resource segments due to the independence of vendor-specific hardware and software, we introduce space-time-frequency spreading (STFS) techniques to reduce gateway interference and ensure compliance with URLLC requirements.
* Through a comprehensive mathematical exploration of auction frameworks, we develop lightweight CFA mechanisms that efficiently allocate resources with minimal processing power, ensuring optimal resource vectors for IoT nodes.
* Our proposed auction mechanism allows for the simultaneous distribution of computational load across IoT clusters. Additionally, we motivate performance improvements through graph-based signal processing to enhance the self-predictability of IoT systems.
* By properly allocating resources using auction theory, we activate STFS in IoT uplinks, minimizing signal collisions and compensating for external hardware and channel impairments.
* Our simulation results demonstrate the strength of the proposed mechanism, showing minimal energy consumption and resilience in the presence of risky bidding nodes. Furthermore, the adaptive protocol we derive effectively manages cloud-fog interactions, applying flexible constraints based on specific application needs within the CFA framework.
Organization: In Section <ref>, we present the system model for CFA strategies and formulate the resource allocation optimization problem using spreading techniques. Section <ref> outlines the mathematical foundation of auction-based mechanisms for CFA policies, extending the sealed-bid approach to handle problem incompleteness. Section <ref> delves into conventional auction types, highlighting their unique properties, and explores CFA capabilities through dispersion metric optimizations. Simulation results are provided in Section <ref>, followed by a discussion aligned with CFA practices, concluding in Section <ref>.
Notation: In this paper, bold upper case, lower case, and Calligraphy letters represent matrices, vectors, and sets or spaces respectively. 𝔼, , ×, †, ‖ . ‖ and |.| or n(.) denote statistical expectation, the cross product, multiplication operator, hermitian, the L_2-norm of a vector, and the cardinality of a set respectively. In addition, |_a refer to with respect to a, ∖ k or -k for without k. → for maps to, and ⊥ refer to perpendicular notations. Furthermore, ℝ, ℂ, ∪, I_K, 0_K N denote the universal set of real numbers, universal set of complex numbers, the union of sets, K K identity matrix, K N null matrix respectively. | null | null | null | null | Auction-based proposed mechanism (mSAA) facilitates a federated STFS resource allocation optimization task with incomplete information regarding channel state information and individual hypotheses among IoT devices. And, the scalable proposed approach is applicable for diverse IoT tiers in CFA while capturing specific entity interactions to distribute the computational complexity in a surrounding with competitive demands. Additionally, optimal dispersion vectors involve prior compensation of environmental uncertainty for IoT firings to reduce interference at the gateway. The mSAA architecture has more potential to sharpen the prediction capabilities of bidding players following graph state diagrams of internal self-behaviors to enhance the overall performance itself. Furthermore, the optimal bidding strategy vector might be found numerically for the mSAA mechanism through VCG price rules along RET in standard risk-free game models.
§.§ Derivation of Equation (<ref>)
We can use the substitution t=v_g^2/2σ^2 to simplify the cumulative distribution function of the 2-dimensional valuation metric F_𝒱(v) in (<ref>) as,
F_𝒱(v) = 1/(b-a).∫_a^b( 1-e^-(v-α.v_h/β)^2/2σ^2) dv_h
Apply partial integration,
= 1/(b-a).∫_a^bdv_h - 1/(b-a).∫_a^be^-(v-α.v_h/β)^2/2σ^2 dv_h
F_𝒱(v) = 1 + βσ/(b-a)α√(π/2)[ erf (v-α b/√(2)βσ)-erf (v-α a/√(2)βσ) ]
§.§ Proof of BNE in FPSB Auction
Let's take the first-order derivatives of equation (<ref>) to solve the maximization problem (<ref>).
d(v_k-b_k). F_𝒱^K-1[(𝐛^-1(b_k)]/db_k = (v_k-b_k).(K-1)
·F_𝒱^K-2[𝐛^-1(b_k)] · f_𝒱[𝐛^-1(b_k)]. d[𝐛^-1(b_k)]/db_k
-F_𝒱^K-1[𝐛^-1(b_k)]
let x=𝐛^-1(b_k) then, 𝐛(x) = b_k⇒(d𝐛(x)/db_k) (dx/db_k) = 1
d[𝐛^-1(b_k)]/db_k = 1/𝐛^'[𝐛^-1(b_k)]; 𝐛^'(·): derivative function w.r.t b_k
For stationary points, d(v_k-b_k). F_𝒱^K-1[𝐛^-1(b_k)]/db_k = 0
And, bidder k^th parameters are, b_k=𝐛(v), v_k=v then,
F_𝒱^K-1[𝐛^-1(𝐛(v))]=[v-𝐛(v)].(K-1).F_𝒱^K-2[𝐛^-1(𝐛(v))]
· f_𝒱[𝐛^-1(𝐛(v))]. 1/𝐛^'[𝐛^-1(𝐛(v)))]
F_𝒱(v).𝐛^'(v)+𝐛(v).(K-1).f_𝒱(v) = v(K-1)f_𝒱(v)
Now we can solve the first-order differential equation to find the optimal bid 𝐛(v) for any bidder node in the network. First, (<ref>) is multiplied both side by F_𝒱^K-2(v).
F_𝒱^K-1(v)𝐛^'(v)+𝐛(v).(K-1).f_𝒱(v).F_𝒱^K-2(v)
= v(K-1).f_𝒱(v).F_𝒱^K-2(v)
d[F_𝒱^K-1(t)𝐛(t)]/dt = t(K-1).f_𝒱(t).F_𝒱^K-2(t); v→ t
F_𝒱^K-1(v)𝐛(v) = ∫_v̲^v t(K-1)f_𝒱(t).F_𝒱^K-2(t) dt
Apply partial integration, ∫_v̲^v u_1 du_2= u_1u_2|_v̲^v- ∫_v̲^v u_2 du_1
u_1 = t and du_2 = (K-1)f_𝒱(t).F_𝒱^K-2(t) dt
then, du_1 = dt and u_2 = F_𝒱^K-1(t)
F_𝒱^K-1(v)𝐛(v) = t.F_𝒱^K-1(t)|_v̲^v - ∫_v̲^vF_𝒱^K-1(t)dt
The CDF value for the lower bound v̲ is 0, i.e. F_𝒱(v̲)=0
𝐛(v) = v-∫_v̲^vF_𝒱^K-1(t)dt/F_𝒱^K-1(v); v > v̲
IEEEtran |
http://arxiv.org/abs/2409.17841v1 | 20240926134536 | Machine Learning-based vs Deep Learning-based Anomaly Detection in Multivariate Time Series for Spacecraft Attitude Sensors | [
"R. Gallon",
"F. Schiemenz",
"A. Krstova",
"A. Menicucci",
"E. Gill"
] | cs.LG | [
"cs.LG",
"cs.AI"
] |
§ ABSTRACT
In the framework of Failure Detection, Isolation and Recovery (FDIR) on spacecraft, new AI-based approaches are emerging in the state of the art to overcome the limitations commonly imposed by traditional threshold checking.
The present research aims at characterizing two different approaches to the problem of stuck values detection in multivariate time series coming from spacecraft attitude sensors. The analysis reveals the performance differences in the two approaches, while commenting on their interpretability and generalization to different scenarios.
§ INTRODUCTION
Failure Detection, Isolation, and Recovery (FDIR) onboard satellites is responsible for monitoring onboard parameters and functionalities, supervising the progression of the operations of the satellite under nominal conditions.
Newest trends in onboard FDIR <cit.>, <cit.>, <cit.> leverage Artificial Intelligence (AI) to face the issues traditionally affecting the methods based on the ECSS-E-ST-70-41C (Packet Utilization Standard, PUS, <cit.>), currently representing the state of the art for flying missions.
These issues mainly derive from the PUS-based FDIR relying on threshold- and expected values-monitoring, which make it inherently unable to detect certain kinds of data and/or anomalies, i.e. multivariate signals or univariate signals evolving anomalously in the threshold range.
AI-based solutions are expected to act on these limitations, aiming at enhancing detection notice and failures symptoms recognition.
§.§ Project Framework
The project Astrone KI, developed at Airbus Defence and Space GmbH in Friedrichshafen, Germany, is a collaborative effort with the Universities of Stuttgart and Dresden, and ASTOS Solutions GmbH. It consists of a concept study for a drone-like vehicle for the exploration of Small Solar System Bodies, designed to perform autonomous relocation in an asteroid environment, aided by AI-augmented FDIR and vision-based navigation.
The Astrone KI system is meant to operate the AI-based FDIR functionalities alongside the PUS-based FDIR in order to prove the effectiveness of an innovative technology on a critical task while relieving the criticality of its decision-making. Indeed, an effective AI integration in the PUS-based FDIR can make use of the AI-enhanced detection, while relying on the traditional FDIR as fallback in those cases where the AI could fail or is not required.
In Astrone KI, the AI algorithms are meant to run on a dedicated AI module (i.e. a dedicated hardware), which not only performs the inference, but also the data preprocessing and postprocessing, including telemetry (TM) and telecommand (TC) handling.
The onboard sensors of the Astrone KI spacecraft, which will be subject to AI-based FDIR, include accelerometers and Inertial Measurement Units (IMU), respectively measuring accelerations and angular rates in the shape of multivariate time series.
The failure modes for the mentioned sensors are derived by Reliability, Availability, Maintainability and Safety (RAMS) analysis, with special consideration to those failures which may benefit from the AI introduction. In the case of the multivariate time series coming from the accelerometer and the IMU of Astrone KI, the analysis concentrated on stuck value faults. These faults constitute a suitable use-case for the AI-based FDIR functionalities, as in their presence the signal evolves anomalously within the ranges of thresholds monitored by the PUS-based FDIR. Classical methods can commonly detect stuck values only as long as the consequences propagate on higher levels (e.g. subsystem- or system-level), ultimately requiring a more drastic recovery. The AI introduction aims at detecting the stuck value directly at its occurrence in a particular signal, while being insured against missed detection by the presence of the classic PUS-based FDIR.
§.§ Literature Review
The literature research for the present work was oriented towards AI-based anomaly detection for time series from both inside and outside the space domain, due to the generality of AI approaches in the field. In order to employ these algorithms onboard satellites, an important constraint is always represented by the limited computational resources, which restrict the range of applicable solutions.
Commonly employed anomaly detection approaches make use of Machine Learning (ML) or Deep Learning (DL) to process onboard telemetry <cit.>. ML approaches are based on classifying the data into nominal and faulty, employing deterministic algorithms capable of learning from data. Typical examples of this category include Support Vector Machines (SVM, <cit.>) and Decision Trees <cit.>, <cit.>, which are classification algorithms employed in direct faults classification. More specifically, XGBoost is a Decision Tree algorithm based on Gradient Boosting <cit.>, proven to beat state-of-the-art performances on Kaggle public datasets <cit.>, which has found large employment in the field of anomaly detection <cit.>, <cit.>.
DL solutions represent a class of black-box algorithms that learn from the data to accomplish a specific task, differing from their ML counterparts in the lack of explainability of the input-output link, as well as in the interpretability of the learnt features. DL is inherently able to grasp complex patterns in big datasets with limited or absent feature engineering, making it suitable for solving complex tasks where ML has traditionally proven inefficient (e.g. computer vision), or for introducing solutions where there were none before (e.g. large language models). Recent scientific literature has seen a rise of DL for the mentioned reasons.
DL solutions can be used for direct fault classification of the input signal and for time series forecasting or reconstruction, in this case needing a further step for the actual classification. DL solutions (i.e. Neural Networks) are consistently employed in the field of anomaly detection, both for direct classification purposes <cit.>, and for reconstruction <cit.>. An example of a Convolutional Neural Network (CNN) for direct anomaly classification in the industrial field is given in <cit.>. The same task applied to simulated spacecraft onboard telemetry can be found in <cit.>, which employs a Long Short-term Memory (LSTM) cell with a preprocessing stage making use of an SVM. Finally, <cit.> is a pioneering work in the field of time series forecasting and classification in the space domain, proposing an approach of anomaly detection based on the comparison between the signal time series coming from the real unit and its forecast counterpart.
§.§ Contribution of the Present Work
In the present work, two approaches to the problem of stuck values detection in time series are proposed and applied to the use-case of the Astrone KI system.
On one side, the ML approach XGBoost is employed, focusing on the interpretability of the algorithm, designing it to mimic human-based stuck values recognition through a specific selection of hyperparameters. On the other side, a CNN is proposed to accomplish the same detection task, proving its outstanding performances at the cost of losing any interpretability and generalization capability.
§ RESULTS
§.§ Experimental Setup
As already discussed in <ref>, the AI-based FDIR strategy is focused on detecting stuck values (<ref> and <ref>). These faults realize in various combinations of three defining parameters: the signal value, stuck at the last value, or stuck at random value; the axis of occurrence, whether it affects a single random axis or all three axes of the sensor simultaneously; and the presence or absence of measurement noise on top of the fault.
By means of the Attitude and Orbit Control System (AOCS) Offline Simulation Environment (AOSE, <cit.>), the unit models of the Astrone KI sensors are simulated to generate the dataset for the AI training and testing. Each sensor is simulated over a set of spacecraft trajectories obtained from randomized initial conditions, and stuck values are randomly injected into the data. The criterion for the fault injection stage is to replicate realistic fault occurrences across the individual trajectories, while maintaining a balance between nominal and faulty data.
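A minimal numpy sketch of such a fault injection step is shown below; the stand-in trajectory, the fault position, its duration and the noise level are placeholders, since the actual data come from the AOSE unit models and the project's injection criteria.

import numpy as np

rng = np.random.default_rng(42)

def inject_stuck(signal, start, length, stuck_at="last", axes="single", noise_std=0.0):
    # signal: (T, 3) sensor time series; returns the faulty copy and per-sample labels
    out = signal.copy()
    labels = np.zeros(len(signal), dtype=int)
    ax = [int(rng.integers(0, 3))] if axes == "single" else [0, 1, 2]
    for a in ax:
        level = out[start - 1, a] if stuck_at == "last" else rng.uniform(-1.0, 1.0)
        out[start:start + length, a] = level
    if noise_std > 0:
        out[start:start + length, ax] += rng.normal(0.0, noise_std, (length, len(ax)))
    labels[start:start + length] = 1
    return out, labels

clean = np.cumsum(rng.normal(0, 1e-3, size=(2000, 3)), axis=0)   # stand-in trajectory
faulty, y = inject_stuck(clean, start=800, length=300, stuck_at="random",
                         axes="single", noise_std=1e-4)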
The key intuition of the present analysis is that stuck value failures are essentially slices of a signal where the derivative sharply drops to zero, possibly after a sharp peak (if stuck at random). The peak is directly related to the sudden change in the signal's value between two successive time samples, as its derivative is computed numerically. The approached zero value is instead influenced by measurement noise (if present), which induces zero-mean oscillations in the signal's derivative.
The conditions for identifying a stuck value are summarized in <ref>.
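Although the exact conditions and thresholds are given in <ref> (not reproduced here), their structure can be sketched as hand-crafted features over a trailing window: a near-zero derivative sustained over the window, optionally accompanied by a sharp derivative peak. The window length and the thresholds below are illustrative assumptions only.

import numpy as np

def stuck_features(x, win=20, flat_eps=1e-6, peak_thr=0.05):
    # per-sample features for one axis: derivative, sustained flatness, peak-plus-flat
    dx = np.diff(x, prepend=x[0])
    flat = np.array([np.max(np.abs(dx[max(0, i - win):i + 1])) < flat_eps
                     for i in range(len(x))])
    nearby_peak = np.convolve(np.abs(dx) > peak_thr, np.ones(win), mode="same") > 0
    return dx, flat, flat & nearby_peak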
The approaches considered for the stuck values detection are based on ML and DL respectively.
The first solution presented employs XGBoost. The algorithm has been designed to obtain a structure of the computed tree reflecting the rules expressed in <ref>, pursuing full interpretability of the classification process. Also, XGBoost has been employed to classify faults in the IMU signal only, as the reduced noise content of this sensor is expected to facilitate the application of the conditions in <ref>.
The second approach presented is a Multi-channel Convolutional Neural Network <cit.>, which is capable of detecting stuck values in the accelerometer and IMU signals coupled. The Network takes as input a sliding window over the sensors signals, with a fixed length, and classifies each final sample of the window based on the preceding signal history as included in the window itself.
The preference for using a CNN over other common approaches (i.e. RNNs) is based on two main reasons. First, although RNNs are typically employed for time series due to their temporal analysis capabilities, CNNs can effectively leverage their spatial approach in the specific stuck value detection task presented. The chosen input features allow the network to focus on the signal's shape within individual windows to detect typical stuck value artifacts (i.e. a flat signal).
Second, CNNs are significantly more parallelizable than RNNs and generally require fewer parameters to accomplish the same tasks. This results in lighter models and faster inference. Light computational resources and speed are particularly important, given the limited computational resources available in state-of-the-art space hardware, whether it be an OBC or dedicated hardware (e.g. FPGA) as required in Astrone KI.
The output of both XGBoost and the Neural Network consists of binary indexes marking the presence of a failure in each sensor separately, regardless of the occurring fault case or the affected axis. This choice reflects the consideration that the recovery action eventually triggered in the onboard FDIR would be the same for all the considered fault cases.
§.§ Training
The hyperparameters employed in XGBoost training are: binary crossentropy as loss function, specifically for the binary classification task; the number of trees set to 1; and the max depth per tree set to 6. The number of trees refers to the number of classifiers trained at each step of the Gradient Boosting algorithm, while the max depth indicates the maximum number of splits per tree. The motivation for this specific hyperparameter setup can once again be found in the rules outlined in <ref> and further explained in <ref>.
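With the xgboost package this configuration corresponds to a single depth-6 tree trained with the logistic objective (i.e. binary cross-entropy); the feature matrix in the sketch is a random placeholder for the engineered per-sample features, and the last line dumps the learnt splits, i.e. the numerical thresholds discussed later.

import numpy as np
from xgboost import XGBClassifier

# placeholder features (e.g. derivative value, trailing-window flatness, range check)
X = np.random.rand(5000, 6)
y = (np.random.rand(5000) > 0.5).astype(int)

clf = XGBClassifier(
    n_estimators=1,                  # a single tree, as in the paper
    max_depth=6,                     # at most six levels of splits
    objective="binary:logistic",     # binary cross-entropy loss
)
clf.fit(X, y)
print(clf.get_booster().get_dump()[0])   # inspect the learnt splits / thresholds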
The CNN is made of two convolutional + max pooling stages, followed by a concatenation layer and a last convolution + max pooling layer. The binary crossentropy is employed for the binary classification task, while the other hyperparameters, including the learning rate and the filters' size and number, are subject to optimization to maximize the classification scores. The number of samples per input window is set to 180 (subject to optimization).
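A Keras sketch of such a multi-channel architecture is given below: one branch per sensor with two convolution + max-pooling stages, a concatenation layer, a final convolution + max-pooling stage, and one sigmoid output per sensor. The window length of 180 samples follows the text, while the filter counts and kernel sizes are illustrative stand-ins for the values found by the hyperparameter optimization.

from tensorflow.keras import layers, Model

WIN = 180   # samples per sliding window

def branch(inp):
    # two convolution + max pooling stages for one sensor channel
    x = layers.Conv1D(16, 5, activation="relu", padding="same")(inp)
    x = layers.MaxPooling1D(2)(x)
    x = layers.Conv1D(32, 5, activation="relu", padding="same")(x)
    x = layers.MaxPooling1D(2)(x)
    return x

acc_in = layers.Input(shape=(WIN, 3), name="accelerometer")
imu_in = layers.Input(shape=(WIN, 3), name="imu")
merged = layers.Concatenate()([branch(acc_in), branch(imu_in)])
x = layers.Conv1D(32, 3, activation="relu", padding="same")(merged)
x = layers.MaxPooling1D(2)(x)
x = layers.Flatten()(x)
out = layers.Dense(2, activation="sigmoid", name="fault_flags")(x)  # one flag per sensor

model = Model([acc_in, imu_in], out)
model.compile(optimizer="adam", loss="binary_crossentropy")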
<ref> presents a comparison of the performances obtained with the two presented algorithms.
§ DISCUSSION
The two anomaly detection approaches proposed in Section <ref> present advantages and drawbacks, which are highlighted in the following analysis.
The most favorable feature on the XGBoost side is the clear understanding of the rules followed to classify the input. This feature, especially when guided by robust feature engineering, constitutes the main advantage of ML algorithms over their DL counterparts. However, since the choice of features is always made by the algorithm developer, it may sometimes result in suboptimal performance with respect to the unknown problem optimum. Neural Networks suffer much less from this issue, because good performance can be achieved even with limited or absent feature engineering. Nevertheless, concerning the specific onboard FDIR application of the present work, an interpretable algorithm is deemed more dependable than a Neural Network precisely because of this interpretability, and it is therefore considered more likely to be employed in flying missions.
The hyperparameters employed to train XGBoost serve the purpose of making the algorithm reason as exposed in <ref>. In other words, the algorithm is designed to learn splits and rules classifying data based on the derivative value, the presence of peaks, and the signal value going significantly out of range. An analysis of the tree structure after training (not reported here for brevity) proves the realization of this reasoning.
In conclusion, the task left to the algorithm is only to quantify the numerical thresholds required by the conditions in <ref>.
As a further proof that the algorithm is reasoning as intended, also suggesting its generalization capabilities, an experiment of transfer learning was carried out by inferencing the XGBoost model on the accelerometer signal (<ref>). The experiment shows that the algorithm achieves comparable performances to the IMU case in the detection of stuck values with zero derivative (no noise, corresponding to a dead interface).
A deeper analysis of the fault predictions shows that all the false negatives (undetected faults) correspond to stuck values with noise, highlighting the biggest inherent limitation of the XGBoost approach. The measurement noise is indeed responsible for pushing the derivative value beyond the thresholds learnt by the Decision Tree in its splits, producing a systematic error that cannot be removed by acting on the algorithm alone. Note that this effect implies a stronger deterioration of the recall in the accelerometer case with respect to the IMU, given the noisier nature of the signal.
The CNN approach generally outperforms XGBoost in terms of performance metrics (<ref> and <ref>), while losing all interpretability. Additionally, it has the capability to simultaneously analyze both signals, potentially gathering valuable information for detection from both sources. Furthermore, unlike XGBoost, the CNN exhibits no systematic limitations in its classification capability, theoretically granting the same classification ability across all considered failure cases. Nevertheless, results show that the CNN struggles with classifying part of the stuck values with noise, which primarily contribute to the deterioration of recall (<ref> and <ref>). This confirms the complexity of classifying that specific fault case, as well as suggesting that the CNN is improving the XGBoost performances while reasoning similarly.
The cost of this CNN performance improvement is obviously the interpretability, as the Neural Network does not offer clear insight into its way of reasoning as the decision tree does, preventing, among other things, any consideration of its limitations. Traditionally, Neural Networks can achieve outstanding performances, but they need to be carefully maintained to ensure they keep performing correctly. Even a slight shift of the input data distribution with respect to the training data while operating the network may make a fine-tuning and/or retraining necessary <cit.>.
As a final remark, when it comes to choosing between the CNN and XGBoost in a real scenario, it is important to take into account case-dependent requirements, e.g. favoring interpretability over performance, or maximizing case-specific metrics not mentioned in this study. In this regard, the development of such metrics, considering the real-time nature of the application or the impact of the AI at system level, can be an interesting direction for future work.
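For instance, case-dependent requirements could be captured by splitting recall per fault case and by measuring the detection delay after fault onset; the two helpers below sketch such metrics and are not part of the evaluation reported above.

import numpy as np
from sklearn.metrics import recall_score

def per_case_recall(y_true, y_pred, case_ids):
    # recall split by fault case (stuck-at-last / stuck-at-random, with / without noise)
    return {c: recall_score(y_true[case_ids == c], y_pred[case_ids == c], zero_division=0)
            for c in np.unique(case_ids[y_true == 1])}

def detection_delay(y_true, y_pred):
    # samples between fault onset and the first positive prediction (single fault segment)
    onset = int(np.argmax(y_true == 1))
    hits = np.where(y_pred[onset:] == 1)[0]
    return int(hits[0]) if hits.size else None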
§ ACKNOWLEDGEMENTS
The results presented in this paper have been achieved by the project Astrone - Increasing the Mobility of Small Body Probes, which has received funding from the German Federal Ministry for Economic Affairs and Energy (BMWi) under funding number “50 RA 2130A”. The consortium consists of Airbus Defence and Space GmbH, Astos Solutions GmbH, Institute of Automation (Technische Universität Dresden) and Institute of Flight Mechanics and Controls (Universität Stuttgart). Responsibility of the publication contents is with the publishing author.
tocsectionReferences
| Failure Detection, Isolation, and Recovery (FDIR) onboard satellites is responsible for monitoring onboard parameters and functionalities, supervising the progression of the operations of the satellite under nominal conditions.
Newest trends in onboard FDIR <cit.>, <cit.>, <cit.> leverage Artificial Intelligence (AI) to face the issues traditionally affecting the methods based on the ECSS-E-ST-70-41C (Packet Utilization Standard, PUS, <cit.>), currently representing the state of the art for flying missions.
These issues mainly derive from the PUS-based FDIR relying on threshold- and expected values-monitoring, which make it inherently unable to detect certain kinds of data and/or anomalies, i.e. multivariate signals or univariate signals evolving anomalously in the threshold range.
AI-based solutions are expected to act on these limitations, aiming at enhancing detection notice and failures symptoms recognition.
§.§ Project Framework
The project Astrone KI, Developed at Airbus Defence and Space GmbH in Friedrichshafen, Germany, is a collaborative effort with the Universities of Stuttgart and Dresden, and ASTOS Solutions GmbH. It consists of a concept study for a drone-like vehicle for the exploration of Small Solar System Bodies, designed to perform autonomous relocation in an asteroid environment, aided by AI-augmented FDIR and vision-based navigation.
The Astrone KI system is meant to operate the AI-based FDIR functionalities alongside the PUS-based FDIR in order to prove the effectiveness of an innovative technology on a critical task while relieving the criticality of its decision-making. Indeed, an effective AI integration in the PUS-based FDIR can make use of the AI-enhanced detection, while relying on the traditional FDIR as fallback in those cases where the AI could fail or is not required.
In Astrone KI, the AI algorithms are meant to run on a dedicated AI module (i.e. a dedicated hardware), which not only performs the inference, but also the data preprocessing and postprocessing, including telemetry (TM) and telecommand (TC) handling.
The onboard sensors of the Astrone KI spacecraft, which will be subject to AI-based FDIR, include accelerometers and Inertial Measurement Units (IMU), respectively measuring accelerations and angular rates in the shape of multivariate time series.
The failure modes for the mentioned sensors are derived by Reliability, Availability, Maintainability and Safety (RAMS) analysis, with special consideration to those failures which may benefit from the AI introduction. In the case of the multivariate time series coming from the accelerometer and the IMU of Astrone KI, the analysis concentrated on stuck value faults. These faults constitute a suitable use-case for the AI-based FDIR functionalities, as in their presence the signal evolves anomalously within the ranges of thresholds monitored by the PUS-based FDIR. Classical methods can commonly detect stuck values only as long as the consequences propagate on higher levels (e.g. subsystem- or system-level), ultimately requiring a more drastic recovery. The AI introduction aims at detecting the stuck value directly at its occurrence in a particular signal, while being insured against missed detection by the presence of the classic PUS-based FDIR.
§.§ Literature Review
The literature research for the present work oriented towards AI-based anomaly detection for time series from both inside and outside the space domain, due to the generality of AI approaches in the field. In order to employ these algorithms onboard satellites, an important constraint is always represented by the limited computational resources, that pose limitations to the range of applicable solutions.
Commonly employed anomaly detection approaches make use of Machine Learning (ML) or Deep Learning (DL) to process onboard telemetry <cit.>. ML approaches are based on classifying the data into nominal and faulty, employing deterministic algorithms capable of learning from data. Typical examples of this category include Support Vector Machines (SVM, <cit.>) and Decision Trees <cit.>, <cit.>, which are classification algorithms employed in direct faults classification. More specifically, XGBoost is a Decision Tree algorithm based on Gradient Boosting <cit.>, proven to beat state-of-the-art performances on Kaggle public datasets <cit.>, which has found large employment in the field of anomaly detection <cit.>, <cit.>.
DL solutions represent a class of black-box algorithms learning from the data to accomplish a specific task, detaching from the ML counterpart due to the lack of explainability of the input-output link, as well as the interpretability of the learnt features. DL is inherently able to grasp complex patterns in big datasets with limited or absent feature engineering, making it suitable to solve complex tasks where traditionally ML has proven its inefficiency (e.g. computer vision), or introducing solutions where there were none before (e.g. large language models). Recent scientific literature has seen a raising of DL due to the mentioned reasons.
DL solutions can be used for direct fault classification of the input signal and for time series forecasting or reconstruction, in this case needing a further step for the actual classification. DL solutions (i.e. Neural Networks) are consistently employed in the field of anomaly detection, both for direct classification purposes <cit.>, and for reconstruction <cit.>. An example of a Convolutional Neural Network (CNN) for direct anomaly classification in the industrial field is given in <cit.>. The same task applied to simulated spacecraft onboard telemetry can be found in <cit.>, which employs a Long Short-term Memory (LSTM) cell with a preprocessing stage making use of an SVM. Finally, <cit.> is a pioneering work in the field of time series forecasting and classification in the space domain, proposing an approach of anomaly detection based on the comparison between the signal time series coming from the real unit and its forecast counterpart.
§.§ Contribution of the Present Work
In the present work, two approaches to the problem of stuck values detection in time series are proposed and applied to the use-case of the Astrone KI system.
On one side, the ML approach XGBoost is employed, focusing on the interpretability of the algorithm, designing it to mimic human-based stuck values recognition through a specific selection of hyperparameters. On the other side, a CNN is proposed to accomplish the same detection task, proving its outstanding performances at the cost of losing any interpretability and generalization capability. | null | null | §.§ Experimental Setup
As already discussed in <ref>, the AI-based FDIR strategy is focused on detecting stuck values (<ref> and <ref>). These faults realize in various combinations of three defining parameters: the signal value, stuck at the last value, or stuck at random value; the axis of occurrence, whether it affects a single random axis or all three axes of the sensor simultaneously; and the presence or absence of measurement noise on top of the fault.
By means of the Attitude and Orbit Control System (AOCS) Offline Simulation Environment (AOSE, <cit.>), the unit models of the Astrone KI sensors are simulated to generate the dataset for the AI training and testing. Each sensor is simulated over a set of spacecraft trajectories obtained from randomized initial conditions, and stuck values are randomly injected into the data. The criterion for the fault injection stage is to replicate realistic fault occurrences across the individual trajectories, while maintaining a balance between nominal and faulty data.
The key intuition of the present analysis is that stuck value failures are essentially slices of a signal where the derivative sharply drops to zero, possibly after a sharp peak (if stuck at random). The peak is directly related to the sudden change in the signal's value between two successive time samples, as its derivative is computed numerically. The approached zero value is instead influenced by measurement noise (if present), which induces zero-mean oscillations in the signal's derivative.
The conditions for identifying a stuck value are summarized in <ref>.
The approaches considered for the stuck values detection are based on ML and DL respectively.
The first solution presented employs XGBoost. The algorithm has been designed to obtain a structure of the computed tree reflecting the rules expressed in <ref>, pursuing full interpretability of the classification process. Also, XGBoost has been employed to classify faults in the IMU signal only, as the reduced noise content of this sensor is expected to facilitate the application of the conditions in <ref>.
The second approach presented is a Multi-channel Convolutional Neural Network <cit.>, which is capable of detecting stuck values in the accelerometer and IMU signals coupled. The Network takes as input a sliding window over the sensors signals, with a fixed length, and classifies each final sample of the window based on the preceding signal history as included in the window itself.
The preference for using a CNN over other common approaches (i.e. RNN), is based on two main reasons. First, although RNNs are typically employed for time series due to their temporal analysis capabilities, CNNs can effectively leverage their spatial approach in the specific stuck value detection task presented. The chosen input features allow to focus on the signal's shape within individual windows to detect typical stuck value artifacts (i.e. flat signal).
Second, CNNs are significantly more parallelizable than RNNs and generally require fewer parameters to accomplish the same tasks. This results in lighter models and faster inference. Light computational resources and speed are particularly important, given the limited computational resources available in state-of-the-art space hardware, whether it be an OBC or dedicated hardware (e.g. FPGA) as required in Astrone KI.
The output of both XGBoost and the Neural Network consists of binary indexes marking the presence of a failure in each sensor separately, regardless of the occurring fault case or the affected axis. This choice reflects the consideration that the recovery action eventually triggered in the onboard FDIR would be the same for all the considered fault cases.
§.§ Training
The hyperparameters employed in XGBoost training are: binary crossentropy as loss function, specifically for the binary classification task; the number of trees set to 1; and the max depth per tree set to 6. The number of trees refers to the number of classifiers trained at each step of the Gradient Boosting algorithm, while the max depth indicates the maximum number of splits per tree. The motivation for this specific hyperparameter setup can once again be found in the rules outlined in <ref> and further explained in <ref>.
The CNN is made by two convolutional + max pooling stages, followed by a concatenation layer and a last convolution + max pooling layer. The binary crossentropy is employed for the binary classification task, while the other hyperparameters, including the learning rate and the filters' size and number, are subject to optimization to maximize classification scores. The number of samples per input window is set to 180 (subject to optimization).
<ref> presents a comparison of the performances obtained with the two presented algorithms. | The two anomaly detection approaches proposed in Section <ref> present advantages and drawbacks, which are highlighted in the following analysis.
The most favorable feature on the XGBoost side is the clear understanding of the rules followed to classify the input. This feature, especially when led by robust feature engineering, constitutes the main advantage of ML algorithms over their DL counterparts. However, since the features choice is always made by the algorithm developer, it may sometimes result in suboptimal performances with respect to the unknown problem optimum. Neural Networks suffer much less this issue, because good performances can be achieved also with limited or absent feature engineering. Nevertheless, concerning the specific onboard FDIR application of the present work, an interpretable algorithm is deemed more dependable with respect to a Neural Network for the mentioned interpretability feature, therefore it is considered more likely to be employed in flying missions.
The hyperparameters employed to train XGBoost serve the purpose of making the algorithm reason as exposed in <ref>. In other words, the algorithm is designed to learn splits and rules classifying data based on the derivative value, the presence of peaks, and the signal value going significantly out of range. An analysis of the tree structure after training (not reported here for brevity) proves the realization of this reasoning.
In conclusion, the task left to the algorithm is only to quantify the numerical thresholds required by the conditions in <ref>.
As a further proof that the algorithm is reasoning as intended, also suggesting its generalization capabilities, an experiment of transfer learning was carried out by inferencing the XGBoost model on the accelerometer signal (<ref>). The experiment shows that the algorithm achieves comparable performances to the IMU case in the detection of stuck values with zero derivative (no noise, corresponding to a dead interface).
A deeper analysis of the fault predictions shows that all the false negatives (undetected faults) correspond to stuck values with noise, highlighting the biggest inherent limitation of the XGBoost approach. The measurement noise is indeed responsible for pushing the derivative value beyond the thresholds learnt by the Decision Tree in its splits, producing a systematic error that cannot be removed by acting on the algorithm alone. Note that this effect implies a stronger deterioration of the recall in the accelerometer case with respect to the IMU, given the noisier nature of the signal.
The CNN approach generally outperforms XGBoost in terms of performance metrics (<ref> and <ref>), while losing all interpretability. Additionally, it has the capability to simultaneously analyze both signals, potentially gathering valuable information for detection from both sources. Furthermore, unlike XGBoost, the CNN exhibits no systematic limitations in its classification capability, theoretically granting the same classification ability across all considered failure cases. Nevertheless, results show that the CNN struggles with classifying part of the stuck values with noise, which primarily contribute to the deterioration of recall (<ref> and <ref>). This confirms the complexity of classifying that specific fault case, as well as suggesting that the CNN is improving the XGBoost performances while reasoning similarly.
The cost of this CNN performance improvement is obviously interpretability: the Neural Network does not offer clear insight into its reasoning the way the decision tree does, which among other things prevents any assessment of its limitations. Traditionally, Neural Networks can achieve outstanding performance, but they need to be carefully maintained to ensure they keep performing correctly. Even a slight shift of the input data distribution with respect to the training data during operation may make fine-tuning and/or retraining necessary <cit.>.
As a final remark, when it comes to choosing between the CNN and XGBoost in a real scenario, it is important to take into account case-dependent requirements, e.g. favoring interpretability over performance, or maximizing case-specific metrics not considered in this study. In this regard, the development of such metrics accounting for the real-time nature of the application, or for the impact of the AI at system level, is an interesting direction for future work.
http://arxiv.org/abs/2409.17832v1 | 20240926133311 | Mutation-acyclic quivers are totally proper | [
"Scott Neville"
] | math.RT | [
"math.RT",
"math.CO",
"13F60 (Primary), 05E16, 15B36 (Secondary)"
] |
Department of Mathematics, University of Michigan,
Ann Arbor, MI 48109, USA
[email protected]
Partially supported by NSF grants DMS-1840234, DMS-2054231, DMS-2348501.
Primary 13F60; Secondary 05E16, 15B36.
§ ABSTRACT
Totally proper quivers, introduced by S. Fomin and the author <cit.>, have many useful properties including powerful mutation invariants.
We show that every mutation-acyclic quiver (i.e., a quiver that is mutation equivalent to an acyclic one) is totally proper.
This yields new necessary conditions for a quiver to be mutation-acyclic.
Mutation-acyclic quivers are totally proper
Scott Neville
September 28, 2024
========================================================================================
§ INTRODUCTION
Quivers are finite directed graphs without oriented cycles of length 1 or 2.
Mutations are operations that transform a quiver, based on a choice of a vertex.
These notions are foundational in the study of cluster algebras <cit.>.
A mutation invariant is a characteristic of a quiver that is preserved under mutations.
Mutation invariants are helpful for deciding whether two quivers are mutation equivalent or not,
i.e., whether there is a sequence of mutations that transforms one quiver into the other.
See <cit.> for examples of known mutation invariants.
A cyclically ordered quiver (COQ) is a pair (Q, σ) where Q is a quiver and σ a cyclic ordering of its vertices.
Cyclically ordered quivers were introduced in <cit.> to develop new powerful mutation invariants.
A mutation in a COQ (Q, σ) transforms Q by the usual mutation rule,
while simultaneously changing the cyclic ordering σ in a prescribed way.
We note that mutations of COQs are only allowed at the vertices that satisfy a certain properness condition.
It is this condition that ultimately enables the introduction of new mutation invariants.
In this paper, we prove that for one important class of quivers, the properness requirement can be lifted,
so that mutations at all vertices are allowed and all invariants developed in <cit.> become true mutation invariants.
Specifically, we show that in any mutation-acyclic quiver Q
(i.e., a quiver that is mutation equivalent to an acyclic quiver),
every vertex v is proper, for a particular canonical cyclic ordering σ on Q.
After a mutation at v, we obtain a new COQ (Q',σ'), with canonical cyclic ordering σ',
that again has the same property: all the vertices are proper, so we can mutate at any one of them,
and the process continues.
This theorem yields new mutation invariants of mutation-acyclic quivers,
as well as new tools for proving that various quivers are not mutation-acyclic.
One such quiver is shown in Figure <ref>.
We give a short proof that this quiver is not mutation-acyclic in Example <ref>.
As mentioned above, mutation at a vertex v in a COQ (Q,σ)
is only allowed if v satisfies a combinatorial condition of “properness.”
Informally, v is proper if every 2-arrow oriented path through v
travels “clockwise,” i.e., in the direction of the cyclic ordering σ.
A COQ is proper if every vertex v in it is proper
(possibly after applying a sequence of transpositions called wiggles).
Thus, we can mutate at any vertex in a proper COQ (Q,σ)
and get a new COQ (Q',σ')—but (Q',σ') will not necessarily be proper.
If it is the case that applying any sequence of mutations to (Q,σ) results in a proper COQ,
then we say that (Q,σ) is totally proper.
We have shown in <cit.> that
a quiver Q can be upgraded to a totally proper COQ (Q,σ)
in at most one way (up to wiggles).
Moreover, the (essentially unique) candidate cyclic ordering σ=σ_Q can be constructed efficiently.
We say that a quiver Q is totally proper if the COQ (Q, σ_Q) is totally proper.
As shown in <cit.>,
totally proper quivers have a powerful mutation invariant, which we recall next.
This invariant is constructed as follows.
Input: a totally proper quiver Q.
Step 1. Construct the canonical cyclic ordering σ=σ_Q.
Step 2. “Tear” σ into a linear ordering <.
Step 3. Construct the skew-symmetric exchange matrix B=B_Q, with rows and columns ordered acording to <.
Step 4. Construct the unipotent companion U, the unique
unipotent upper-triangular matrix such that B=U^T - U.
Output: the integral congruence class of U, i.e.,
the set {G U G^T | G ∈ GL_n(ℤ)}.
We have shown in <cit.> that wiggles, cyclic shifts of the linear ordering,
and (proper) mutations of COQs preserve the integral congruence class of the unipotent companion.
While the resulting invariant of proper mutations is very powerful,
it is not easy to use in practice, since
the problem of deciding whether two upper-triangular matrices are congruent over GL_n(ℤ) seems to be rather difficult.
In <cit.>, we bypassed this difficulty as follows.
It is well known (and easy to see) that the integral congruence class of a matrix U ∈ GL_n(ℤ)
uniquely determines the conjugacy class of its cosquare U^-T U.
It follows that whenever two COQs are related by proper mutations,
the cosquares of their respective unipotent companions must be conjugate in GL_n(ℤ).
This conjugacy condition can be verified efficiently,
though the algorithm is quite nontrivial <cit.>.
The cosquare U^-T U can be used to construct other invariants of proper mutations,
such as the monic characteristic polynomial of U^-T U,
which we call the Alexander polynomial of the COQ Q.
(While much simpler to compute, the Alexander polynomial is less powerful
than the conjugacy class of U^-T U:
integer matrices may have the same characteristic polynomial while
not being conjugate in GL_n(ℤ).)
The main result of this paper is the following theorem that settles <cit.>.
Any mutation-acyclic quiver is totally proper.
It suffices to show that any acyclic quiver is totally proper.
Furthermore, as noted in <cit.>, any acyclic quiver has a proper cyclic ordering
obtained by “closing up” any linear ordering compatible with the orientations of the quiver's arrows.
(All such cyclic orderings are related by wiggles.)
Our proof of Theorem <ref> shows that these cyclic orderings are totally proper.
In the course of proving Theorem <ref>,
we establish a few results that may be of independent interest.
In particular, Theorem <ref> gives a simple combinatorial constraint on the orientations of arrows in mutation-acyclic quivers.
Both Theorem <ref> and the ensuing Lemma <ref> assert that certain subquivers of a mutation-acyclic quiver must be acyclic.
Should someone wish to compute with COQs and unipotent companions, Lemmas <ref> and <ref> generalize and remove conditions from <cit.>.
This can speed up and simplify the computations.
An alternative proof of Theorem <ref> was independently discovered by Hugh Thomas, using categorification <cit.>.
§.§ Overview of the paper
Section <ref> establishes notation and reviews some material that does not involve COQs.
Much of this section is devoted to restating the results of A. Seven <cit.>, who constructed a distinguished symmetric matrix A associated with an arbitrary mutation-acyclic quiver.
Section <ref> is a condensed summary of the necessary results and notation from <cit.>.
It contains new examples, and Proposition <ref> gives an alternative description of the Markov invariant.
In Section <ref> we establish a handful of technical lemmas.
Lemmas <ref> and <ref> make it easier to compute the unipotent companion of a COQ after a proper mutation.
Theorem <ref> generalizes <cit.> using the results of A. Seven <cit.>.
Section <ref> contains the proof of Theorem <ref>.
We begin by defining U=(A-B)/2, where A comes from Seven's construction and B is the usual exchange matrix.
The fact that the integral congruence class of U is a mutation invariant follows quickly from Proposition <ref>.
The bulk of the effort is then spent showing that U is indeed a unipotent companion.
In Section <ref>, we give some applications and corollaries of Theorem <ref>.
This includes a description of invariants for some small quivers, inequalities satisfied by the coefficients of the Alexander polynomial for mutation-acyclic quivers (Corollary <ref>), and several examples of quivers that are shown to be not mutation-acyclic.
§.§ Acknowledgements
I would like to thank my advisor, Sergey Fomin, for his guidance, comments, and suggestions;
Grayson Moore, for reading and commenting on an early draft;
and Danielle Ensign for her software assistance.
I am also grateful to Roger Casals, Tucker Ervin, and Hugh Thomas for stimulating discussions.
I have used Magma, among other software, for various computations.
This research was supported in part through computational resources and services provided by Advanced Research Computing at the University of Michigan, Ann Arbor.
§ QUIVERS, MUTATION, AND QUASI-CARTAN COMPANIONS
We begin by reviewing notation, definitions and results from the literature which do not involve cyclically ordered quivers.
This includes a summary of much from <cit.>.
A quiver is a finite directed graph with parallel edges allowed, but no directed 1 or 2-cycles.
Directed edges in a quiver are called arrows.
Each vertex is marked as mutable or frozen.
Unless otherwise indicated, all vertices are mutable.
We use the notation u → v to assert that there is at least one arrow from u to v.
We use the notation u x→ v to denote that there are x arrows from vertex u to vertex v.
Let (v) = {u | u → v} and (v) = {u | u ← v} respectively denote the inset and outset of v in a quiver.
By default, our quivers have labeled vertices.
Thus, we distinguish between isomorphic quivers on the same set of vertices that differ from each other by a permutation of this set.
To mutate a quiver Q at a mutable vertex v, do the following:
* for each oriented path u → v → w, draw a new arrow u → w (thus, if there are x arrows from u → v and y arrows v → w, we add xy arrows u → w);
* reverse all arrows adjacent to v;
* remove oriented 2-cycles (one cycle at a time) until we again have a quiver.
We denote the resulting quiver by μ_v(Q).
Mutation is an involution, μ_v ( μ_v(Q)) = Q.
Mutation does not change which vertices are mutable or frozen.
Two quivers are mutation equivalent if they are related by a sequence of mutations at mutable vertices.
The mutation class [Q] is the set of quivers mutation equivalent to Q.
A quiver is acyclic if it does not contain any directed cycles.
A quiver is mutation-acyclic if it is mutation equivalent to an acyclic quiver.
Only one of the two quivers in the Figure <ref> is mutation-acyclic.
Can you guess which is which?
For a quiver Q, the B-matrix B_Q = (b_uv) (also known as the exchange matrix)
is the skew-symmetric adjacency matrix of Q.
By convention we have b_uv > 0 when there are b_uv arrows u → v,
and b_uv < 0 when there are -b_uv arrows u ← v.
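In terms of exchange matrices, the mutation rule can be carried out mechanically; the following Python sketch (0-based vertex indices, with the function name chosen ad hoc) applies the standard matrix form of the rule above to a skew-symmetric B-matrix.

import numpy as np

def mutate(B, k):
    """Mutate the skew-symmetric exchange matrix B at vertex index k:
    entries incident to k change sign; every other entry b_ij picks up
    sgn(b_ik) * max(b_ik * b_kj, 0), matching the three-step rule above."""
    B = np.asarray(B, dtype=int)
    Bp = B.copy()
    n = B.shape[0]
    for i in range(n):
        for j in range(n):
            if i == k or j == k:
                Bp[i, j] = -B[i, j]
            else:
                Bp[i, j] = B[i, j] + np.sign(B[i, k]) * max(B[i, k] * B[k, j], 0)
    return Bp

# For instance, mutating the 4-vertex example below at v_2 (index 1)
# reproduces the matrix B_t displayed there.
B0 = np.array([[0, 1, 1, 3], [-1, 0, 0, 2], [-1, 0, 0, 1], [-3, -2, -1, 0]])
mutate(B0, 1)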
A (full) subquiver is an induced subgraph of a quiver.
We call the remaining vertices of a subquiver its support.
The mutable part
of a quiver Q is the subquiver supported by its mutable vertices.
The principal extension (also known as framing)
of a quiver Q is a quiver Q formed by adding a new frozen vertex v' and a single arrow v' → v for each vertex v.
Thus Q is the mutable part of Q and Q has twice as many vertices as Q.
We will use the following notation for mutation classes and associated matrices and data.
(Cf. also Definitions <ref> and <ref>.)
Let 𝕋_n be an n-regular tree whose edges are labeled by the integers 1, …, n, so that each edge label appears next to each vertex.
We write t i— t' to indicate that an edge labeled i joins the vertices t and t' in 𝕋_n.
The tree 𝕋_n always comes with a distinguished vertex t_0.
Fix a quiver Q_0 with n mutable linearly ordered vertices v_1 < ⋯ < v_n (and no frozen vertices).
We index the mutation class [Q_0] by simultaneously assigning a quiver Q̃_t ∈ [Q_0]
to each vertex t in 𝕋_n, so that Q̃_t_0=Q_0
and Q̃_t = μ_v_i(Q̃_t') whenever t i— t'.
We call Q_0 the initial quiver.
For each vertex t ∈𝕋_n, let Q_t be the mutable part of Q̃_t.
Let B_t = (b_v_i v_j; t) be the n × n exchange matrix of Q_t.
Similarly, let C_t = (c_v_i' v_j;t) be n × n bipartite adjacency matrix between frozen vertices and mutable vertices.
Thus
b_v_iv_j;t = #{arrows v_i → v_j in Q_t} - #{arrows v_i ← v_j in Q_t},
c_v_i'v_j;t = #{arrows v_i'→ v_j in Q̃_t} - #{arrows v_i' ← v_j in Q̃_t}.
The matrix C_t is called the C-matrix, and its columns c_v_i;t are called c-vectors <cit.>.
Note that B_t is skew-symmetric, while C_t need not have any symmetries.
Whenever we use a quiver or matrix indexed by t we will specify an initial quiver, or relevant conditions on that choice.
We will often take an acyclic initial quiver Q_0.
Let Q_0 be the acyclic quiver with vertices v_1,v_2,v_3,v_4, and arrows v_1 → v_2 2→ v_4, v_1→ v_3 → v_4, and v_1 3→ v_4 (see Figure <ref> for Q_0).
With initial quiver Q_0 (and linear order v_1 < v_2 < v_3 <v_4), we have
B_t_0 = [ 0 1 1 3; -1 0 0 2; -1 0 0 1; -3 - 2 - 1 0 ], C_t_0 = [ 1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1 ].
Indeed, by construction C_t_0 is the identity matrix regardless of the initial quiver.
If t 2— t_0 then
B_t = [ 0 -1 1 5; 1 0 0 -2; -1 0 0 1; -5 2 - 1 0 ], C_t = [ 1 0 0 0; 0 -1 0 2; 0 0 1 0; 0 0 0 1 ].
The map t ↦ (C_t, B_t) is called a (principal, tropical) Y-seed pattern in <cit.>,
as it is essentially equivalent to an assignment of tropical coefficient variables (usually denoted y)
and B-matrix to each vertex t∈𝕋_n, cf. <cit.>.
A (mutable) vertex v in a quiver Q_t (or Q̃_t) is green (resp., red) if every entry of c_v;t is nonnegative (resp., nonpositive).
Every vertex in Q_t_0 is green.
Mutation at vertex v toggles the color of v.
(Other vertices may change color as well!)
Every vertex in Q_t is either green or red (never both), regardless of the choice of initial quiver.
We next re-interpret Theorem <ref> using the following definition.
For t in 𝕋_n, let _t denote the quiver obtained from Q̃_t as follows:
* remove all frozen vertices and the arrows incident to them;
* add a single new frozen vertex v;
* for each green mutable vertex v_j,
add ∑_i c_v_i' v_j; t arrows v→ v_j;
* for each red mutable vertex v_j, add -∑_i c_v_i' v_j; t arrows v← v_j.
Note that, by Theorem <ref>, all arrows connecting frozen vertices to a given mutable vertex
are oriented in the same direction.
It also follows from Theorem <ref> that the “bundling” operation Q̃_t ↦_t
commutes with mutation:
If t i— t', or equivalently μ_v_i(Q̃_t)=Q̃_t', then
μ_v_i(_t) = _t'.
More generally, C-matrices can be used to compute the number of arrows between frozen and mutable vertices
for “triangular extensions” other than the principal framing,
such as those that involve adding a frozen vertex v' and
any number of arrows v' → v, cf. <cit.>.
For a quiver Q we define the underlying unoriented simple graph K_Q.
The graph K_Q has the same vertex set as Q, and an edge u - v whenever there are any arrows u → v or v → u in Q.
There are no parallel edges.
By a cycle in K_Q, we mean a sequence of vertices in K_Q where each consecutive pair is joined by an edge 𝒪 = (w_0 - w_1 - ⋯ - w_ℓ=w_0), considered up to cyclic shifts.
In particular, cycles in K_Q have a direction of traversal.
We will distinguish between two cycles with the same vertices traversed in the opposite order.
Cycles thus correspond to an element of the first homology group H_1(K_Q, ℤ).
A cycle 𝒪 = (w_0 - w_1 - ⋯ - w_ℓ=w_0) in K_Q is chordless if there are no edges between w_i and w_j for i ≠ j ± 1.
If the arrows between w_i and w_i+1 (in Q) are directed with the indexing of the cycle, w_i → w_i+1, we say the cycle is forward-oriented.
If instead the arrows are directed against the indexing, w_i ← w_i+1, then we say the cycle is backward-oriented.
A cycle is oriented if it is either forward-oriented or backward-oriented.
What we call a “chordless cycle in K_Q” is called a “cycle” in <cit.>.
Thus, if a chordless cycle in K_Q is oriented, then it is called an “oriented cycle.”
Similarly, cycles which are not oriented are called "nonoriented."
In <cit.>, cycles do not have an order of traversal, and so the forward/backward distinction does not arise.
The order of traversal of cycles plays an important role in <cit.> (cf. Definition <ref>).
Fix a quiver Q with B-matrix B_Q = (b_vw).
A quasi-Cartan companion of Q is a symmetric matrix A = (a_vw) such that a_vv=2 and a_vw = ± b_vw otherwise.
A quasi-Cartan companion of a given matrix B_Q is determined by a choice of n 2 signs.
The way these signs are chosen along chordless cycles will play a special role:
A quasi-Cartan companion A=(a_vw) of a quiver Q is admissible if for every chordless cycle 𝒪 = (w_0 - ⋯ - w_ℓ=w_0) in K_Q the number #{ i | a_w_i w_i+1 > 0 and i < ℓ} is odd when 𝒪 is oriented, and is even otherwise.
Fix an acyclic initial quiver Q_0, with exchange matrix B_Q_0 = (b_vw).
We define the initial quasi-Cartan companion A_t_0 = (a_vw) by a_vv=2 and a_vw = - |b_vw| for v≠ w.
For each t ∈𝕋_n we define the matrix A_t = C_t^T A_t_0 C_t.
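Both matrices are easy to compute in practice; here is a small sketch (plain integer matrices, function names chosen ad hoc), which with the data of the 3-vertex example later in this section reproduces the A_t displayed there.

import numpy as np

def initial_companion(B0):
    """A_{t_0}: 2 on the diagonal and -|b_vw| off the diagonal (Definition <ref>)."""
    B0 = np.asarray(B0)
    return 2 * np.eye(B0.shape[0], dtype=int) - np.abs(B0)

def quasi_cartan(B0, C):
    """A_t = C_t^T A_{t_0} C_t."""
    C = np.asarray(C)
    return C.T @ initial_companion(B0) @ C

# Initial quiver v_1 -> v_2, v_1 =2=> v_3, v_3 -> v_2, and the C_t of the later example:
B0 = np.array([[0, 1, 2], [-1, 0, -1], [-2, 1, 0]])
C  = np.array([[8, -3, 0], [3, -1, 0], [3, -1, -1]])
quasi_cartan(B0, C)    # [[2, -3, 13], [-3, 2, -5], [13, -5, 2]]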
The following theorems of A. Seven will play an important role in our proof of Theorem <ref>.
Suppose the initial quiver Q_0 is acyclic.
Then for every t ∈𝕋_n, the matrix A_t = C_t^T A_t_0 C_t (see Definition <ref>) is an admissible quasi-Cartan companion of Q_t.
We will also need the following result.
Let Q_0 be acyclic, and fix t ∈𝕋_n.
Let A_t= (a_uv;t).
* For an arrow u → v in Q_t, we have a_uv;t > 0 if and only if u is red and v is green in Q_t.
* Every oriented path of mutable vertices w_1 →⋯→ w_m in Q_t has at most one positive entry in A_t (thus a_w_i w_i+1;t > 0 for at most one i < m).
An illustration of Theorem <ref> can be found in Example <ref>.
To see how <cit.> implies the first
statement of Theorem <ref>, assume that b_ji < 0
and check the four combinations of the signs of c_i and c_j.
For quivers mutation equivalent to acyclic quivers with sufficiently many arrows,
a number of properties of the matrices B_t and C_t matrices
have been recently established by T. Ervin, see <cit.>.
These results may potentially be used to obtain enhancements of Theorem <ref>.
We define a pair of square matrices M_Q(v, ± 1), associated to mutating a quiver Q at a given vertex v.
Let B_Q=(b_pq) be the exchange matrix of Q and choose ε = ± 1.
Let J be the diagonal matrix with J_vv = -1 and J_qq = 1 for q ≠ v.
Let E = (e_pq) be the matrix whose vth column has entries e_qv = max(0, -ε b_qv), with all other entries equal to 0.
Then M_Q(v, ε) = J + E.
The matrix M_Q can be used to mutate the B and C-matrices (and sometimes the quasi-Cartan companion) associated with Q.
Recall that red and green vertices are determined by the signs of the C-matrix (Definition <ref>).
Fix an initial quiver Q_0.
Choose a quiver Q̃_t ∈ [Q_0], and let Q = Q_t (the mutable part of Q̃_t).
Select a vertex v_i of Q, and let t i— t'.
Then:
* for either value of ε=±1,
B_t' = M_Q(v_i, ε) B_t M_Q(v_i, ε)^T;
* the C-matrices satisfy
C_t' = C_t M_Q(v_i, 1)^T if v_i is green,
C_t M_Q(v_i, -1)^T if v_i is red;
* if the initial quiver Q_0 is acyclic then
A_t' = M_Q(v_i, 1) A_t M_Q(v_i, 1)^T if v_i is green,
M_Q(v_i, -1) A_t M_Q(v_i, -1)^T if v_i is red.
The latter two formulas, involving C_t and A_t, follow from their definitions and (<ref>).
We demonstrate Proposition <ref>.
Take our initial quiver Q_0 to be the acyclic quiver with vertices and arrows v_1 2→ v_3 → v_2 and v_1→ v_2. Let t ∈𝕋_n satisfy
t_0 3— t_1 1— t_2 2— t.
Consider the matrices B_t, and C_t, and their associated A_t shown below.
B_t = [ 0 3 -13; -3 0 5; 13 -5 0 ], C_t = [ 8 -3 0; 3 -1 0; 3 -1 -1 ],
A_t = [ 2 -3 13; -3 2 -5; 13 -5 2 ] = C_t^T [ 2 -1 -2; -1 2 -1; -2 -1 2 ] C_t.
By inspecting the signs of (the columns of) C_t, we see that v_1 is green while v_2 and v_3 are red in Q_t.
Let t' 1—t (so Q_t' = μ_v_1(Q_t)).
We compute the mutation at v_1 and find
B_t' =
[ 0 -3 13; 3 0 -34; -13 34 0 ].
The B-matrix B_t' for μ_v_1(Q_t) can be computed from B_t by performing either of two different congruences (corresponding to M_Q_t(v_1, 1), and M_Q_t(v_1, -1) respectively):
[ -1 0 0; 3 1 0; 0 0 1 ] B_t [ -1 0 0; 3 1 0; 0 0 1 ]^T
= [ -1 0 0; 0 1 0; 13 0 1 ] B_t [ -1 0 0; 0 1 0; 13 0 1 ]^T=B_t'.
However the same operations applied to A_t do not both result in quasi-Cartan companions.
In this case, because v_1 is green, congruence with M_Q_t(v_1, 1) gives the quasi-Cartan companion A_t':
[ -1 0 0; 3 1 0; 0 0 1 ][ 2 -3 13; -3 2 -5; 13 -5 2 ][ -1 0 0; 3 1 0; 0 0 1 ]^T =
[ 2 -3 -13; -3 2 34; -13 34 2 ].
If instead we multiply with the matrix M_Q_t(v_1, -1):
[ -1 0 0; 0 1 0; 13 0 1 ][ 2 -3 13; -3 2 -5; 13 -5 2 ][ -1 0 0; 0 1 0; 13 0 1 ]^T=
[ 2 3 -39; 3 2 -44; -39 -44 678 ].
Note in particular that one of the diagonal entries of the last matrix is not 2.
If we instead consider t”3—t,
then the associated B and C matrices are:
B_t” =
[ 0 -62 13; 62 0 -5; -13 5 0 ], C_t” =
[ 8 -3 0; 3 -1 0; 3 -6 1 ].
In this case, only congruence with M_Q_t(v_3, -1) gives a quasi-Cartan companion:
[ 1 0 0; 0 1 5; 0 0 -1 ] A_t
[ 1 0 0; 0 1 5; 0 0 -1 ]^T = [ 2 62 -13; 62 2 -5; -13 -5 2 ].
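The computations in this example can be reproduced with a few lines of code; the sketch below (0-based indices, so v_1 corresponds to index 0) builds M_Q(v, ε) from Definition <ref> and applies it to the matrices displayed above.

import numpy as np

def M_matrix(B, v, eps):
    """M_Q(v, eps) = J + E from Definition <ref> (0-based vertex index v)."""
    n = B.shape[0]
    M = np.eye(n, dtype=int)
    M[v, v] = -1
    for q in range(n):
        if q != v:
            M[q, v] = max(0, -eps * B[q, v])
    return M

B_t = np.array([[0, 3, -13], [-3, 0, 5], [13, -5, 0]])
A_t = np.array([[2, -3, 13], [-3, 2, -5], [13, -5, 2]])

for eps in (1, -1):
    M = M_matrix(B_t, 0, eps)      # vertex v_1
    print(M @ B_t @ M.T)           # both signs give the same B_{t'}

M = M_matrix(B_t, 0, 1)            # v_1 is green, so only eps = +1 works for A_t
print(M @ A_t @ M.T)               # the quasi-Cartan companion A_{t'} shown above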
Taking A_t, B_t and Q_t as in Example <ref>, we see the only positive off-diagonal entry of A_t is a_v_1v_3;t = 13.
As predicted by Theorem <ref> we see v_3 → v_1, v_3 is red, and v_1 is green in Q_t.
§ CYCLICALLY ORDERED QUIVERS
We recall many definitions, notations, and results from <cit.>.
We include some new examples, but do not repeat proofs.
A cyclic ordering of a set V is a linear ordering considered up to cyclic shifts—that is, taking the minimal element v and forming a new linear ordering where v > u for all u ≠ v (the order is otherwise unchanged).
By repeated cyclic shifts, any element can be made maximal or minimal.
The cyclic ordering associated to a linear ordering v_1 < ⋯ < v_n will be denoted (v_1, …, v_n).
A cyclically ordered quiver (Q, σ) (abbreviated COQ) is a quiver Q equipped with a cyclic ordering of its (mutable) vertices σ.
Likewise, a linearly ordered quiver is a quiver equipped with a linear ordering of its (mutable) vertices.
We may omit the ordering when it is clear from context or not needed, and simply denote a COQ by Q.
There are n linearly ordered quivers associated to each COQ, given by “tearing” the cyclic order between different consecutive vertices in the cyclic ordering.
We say an oriented path u → v → w makes a right turn at v if u < v < w in some linear order associated to σ.
The unipotent companion of a linearly ordered quiver Q is the unipotent upper triangular matrix U such that U^T - U = B_Q, where the rows and columns of B_Q are written in the linear ordering.
The matrix U can be constructed by negating B_Q, then setting the lower triangular part to 0 and the diagonal entries to 1.
For a COQ Q, the unipotent companion of any associated linearly ordered quiver is a unipotent companion of Q.
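For concreteness, the construction can be scripted as follows (a sketch; B is a matrix of signed arrow counts and `order` lists row indices of B from smallest to largest vertex, so the result is written in the permuted order). With the exchange matrix of the quiver in the example below it reproduces the displayed companions.

import numpy as np

def unipotent_companion(B, order):
    """Unipotent companion of (Q, <): unipotent upper triangular in the order
    given by `order`, satisfying U^T - U = B (written in the same order)."""
    B = np.asarray(B)
    n = len(order)
    U = np.eye(n, dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            U[i, j] = -B[order[i], order[j]]
    return U

# B-matrix of the COQ in the example below (read off from its first displayed companion).
B = np.array([[0, 1, 1, 3], [-1, 0, 0, 2], [-1, 0, 0, 1], [-3, -2, -1, 0]])
unipotent_companion(B, [0, 1, 2, 3])   # order v_1 < v_2 < v_3 < v_4
unipotent_companion(B, [3, 0, 1, 2])   # order v_4 < v_1 < v_2 < v_3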
Consider the leftmost COQ Q depicted in Figure <ref>.
This COQ has 4 associated linearly ordered quivers. We list their respective unipotent companions and linear orderings:
[ 1 -1 -1 -3; 0 1 0 -2; 0 0 1 -1; 0 0 0 1 ] [ 1 0 -2 1; 0 1 -1 1; 0 0 1 3; 0 0 0 1 ] [ 1 -1 1 0; 0 1 3 2; 0 0 1 -1; 0 0 0 1 ] [ 1 3 2 1; 0 1 -1 -1; 0 0 1 0; 0 0 0 1 ]
v_1< v_2< v_3<v_4 v_2< v_3<v_4 < v_1 v_3<v_4 < v_1< v_2 v_4 < v_1< v_2< v_3
All unipotent companions of a COQ are integrally congruent.
Recall that the integral congruence class of a unipotent companion U is the set of matrices { G U G^T | G ∈ GL_n(ℤ)}.
A wiggle is a transformation of a COQ which fixes the quiver Q, and transposes two adjacent vertices in the cyclic ordering that are not adjacent in Q.
We say that two COQs are wiggle equivalent if they have the same quiver, and their cyclic orderings are related by a sequence of wiggles.
The wiggle equivalence class of a COQ is the set of all wiggle equivalent COQs, see Figure <ref>.
If U is a unipotent companion of some COQ in a wiggle equivalence class, we say that U is a unipotent companion of the class.
All unipotent companions of wiggle equivalent COQs are integrally congruent.
Fix a COQ (Q, σ).
Let 𝒪 = (w_0 - w_1 - ⋯ - w_k=w_0) be a cycle in K_Q (Definition <ref>).
Fix a linear order < associated to σ.
The winding number _σ(𝒪) is the (signed) number of times we “wrap around” the linear order:
_σ(𝒪) = #{ w_i | w_i → w_i+1, w_i > w_i+1} -#{ w_i | w_i ← w_i+1, w_i < w_i+1}.
We omit the (straightforward) proofs that _σ(𝒪) does not depend on the choice of linear order associated to σ, and that this definition agrees with <cit.>.
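The displayed formula can be transcribed directly (a sketch; `pos` gives the position of each vertex in a linear order associated to σ, and `arrows[(u, v)]` is the number of arrows u → v, both names chosen ad hoc).

def winding_number(cycle, pos, arrows):
    """Winding number of `cycle` (a vertex list with cycle[0] == cycle[-1]),
    transcribing the formula above: count forward arrows that wrap around the
    linear order, minus backward arrows that do not."""
    w = 0
    for u, v in zip(cycle, cycle[1:]):
        if arrows.get((u, v), 0) > 0 and pos[u] > pos[v]:
            w += 1
        elif arrows.get((v, u), 0) > 0 and pos[u] < pos[v]:
            w -= 1
    return w

# The cycle (v_1 - v_2 - v_4 - v_1) in the quiver of Figure <ref>, with the
# cyclic ordering (v_1, v_2, v_3, v_4): this reproduces the top-row value 0
# reported in the example below.
arrows = {(1, 2): 1, (2, 4): 2, (1, 3): 1, (3, 4): 1, (1, 4): 3}
winding_number([1, 2, 4, 1], {1: 0, 2: 1, 3: 2, 4: 3}, arrows)   # 0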
Let Q be the underlying quiver of the COQs shown in Figure <ref>, and
consider the (chordless) cycle 𝒪 = (v_1 - v_2 - v_4 - v_1) in K_Q.
The winding number of this undirected cycle is 0 in the three COQs in the top row and 1 in the three COQs in the bottom row.
The winding number corresponds to the winding number of certain maps from the cycle (as a cell complex) to the circle.
Specifically, map the vertices onto the circle according to the cyclic ordering (not the order of traversal), and then map each edge to an arc between the endpoints, starting from the tail of an associated arrow and traveling clockwise to the head. See Figure <ref>.
Two COQs (Q, σ) and (Q, σ') are wiggle equivalent if and only if _σ(𝒪) = _σ'(𝒪) for every undirected cycle 𝒪 in K_Q.
We now turn to mutations of COQs.
A vertex v in a COQ (Q, σ) is proper if every oriented path u → v → w makes a right turn at v (Definition <ref>).
The wiggle equivalence class is proper if for every vertex v there is a COQ in the wiggle equivalence class in which v is proper.
In the wiggle equivalence classes shown in Figure <ref>, vertices v_1, v_4 are proper vertices in each class, as they have no oriented paths through them.
Vertex v_2 is proper only in the three COQs in the top row, while vertex v_3 is proper only in the three COQs in the leftmost column(s).
Thus the wiggle equivalence class in the top left, using the cyclic orderings (v_1, v_2, v_3, v_4) and (v_1, v_3, v_2, v_4), is the only class which is proper.
To mutate a COQ Q at a proper vertex v, mutate the quiver as usual and reposition v in the cyclic ordering so that v is still proper (that is, move v clockwise in the cyclic ordering past all the elements of (v) in the mutated quiver, without passing any vertices in (v);
all choices give wiggle equivalent COQs).
We denote the resulting wiggle equivalence class by μ_v(Q).
The proper mutation class of a COQ is the set of all COQs which can be obtained from the original by a sequence of proper mutations and wiggles.
Proper mutation is well defined up to wiggle equivalence—that is, if two COQs are wiggle equivalent and a vertex v is proper in each, then performing the proper mutation μ_v to each results in the same wiggle equivalence class of COQs.
The unipotent companions of COQs which are related by a proper mutation are integrally congruent.
A (wiggle equivalence class of) COQ(s) is totally proper if every COQ in its proper mutation class is proper.
We say a cyclic ordering σ is totally proper for a quiver Q if (Q,σ) is totally proper.
If Q has a totally proper cyclic ordering, we say Q is totally proper.
Certain unipotent companions are transformed by congruence with the same matrices (see Definition <ref>) that transform the B-matrix.
One case was established in <cit.>, and follows.
We generalize this result in the next section.
Suppose that v_j is a proper vertex in the COQ Q.
Let U be a unipotent companion of Q with linear order < such that v_i < v_j for v_i ∈(v_j) and v_i < v_k for v_k ∈(v_j).
Then the matrix M_Q(v_j, -1) U M_Q(v_j, -1)^T is a unipotent companion of μ_v_j(Q).
We conclude this section by recalling two easily expressed invariants of COQs, along with some examples.
Definition <ref> and Proposition <ref> do not play any role in the proof of Theorem <ref>, but do appear prominently in applications and corollaries.
Given a COQ Q with unipotent companion U, the Alexander polynomial of Q is given by Δ_Q(t) = det(tU - U^T).
The Markov invariant M_Q is defined by M_Q = n + (coefficient of t^n-1 in Δ_Q(t)).
As the exchange matrix B and unipotent companion U of a linearly ordered quiver Q are related by B = U^T - U, it follows that
det(B) = det(U^T - U) = (-1)^n Δ_Q(1).
Let Q be an n-vertex acyclic quiver.
Fix a linear order < of the vertices of Q so that v_i < v_j whenever v_i → v_j.
Let U be the unipotent companion of the linearly ordered quiver (Q, <).
For a cycle 𝒪 = (w_0 - ⋯ - w_ℓ = w_0) in K_Q, let wt(𝒪) = ∏_i |b_w_i w_i+1| if there is exactly one location i with w_i← w_i+1 (thus w_j → w_j+1 for all j≠ i), and wt(𝒪) = 0 otherwise.
Then the Markov invariant
M_Q = ∑_v_i < v_j b_v_i v_j^2 + ∑_𝒪 wt(𝒪).
The t^n-1 coefficient of det(tU - U^T) is the negative of the trace of U^T U^-1.
Let N = I-U; note that N is nonnegative and strictly upper triangular.
Thus
U^-1 = I + N + N^2 + ⋯ + N^n-1,
and so tr(U^T U^-1) = ∑_k=0^n-1 tr(U^T N^k).
We compute tr(U^T I) = n, tr(U^T N) = -∑_v_i < v_j b_v_i v_j^2 and tr(U^T N^k) = -∑_𝒪 wt(𝒪), where the sum is over the cycles 𝒪 of length k.
Let Q be the leftmost COQ shown in Figure <ref>.
Then
Δ_Q(t)= t^4 + 21 t^3 - 43 t^2 + 21 t + 1,
and M_Q = 1^2 + 1^2 + 1^2 + 2^2 + 3^2 + 1 · 2 · 3 + 1 · 1 · 3 = 4 + 21 = 25.
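These numbers are easy to check symbolically; the sketch below uses the first unipotent companion listed earlier for this COQ (order v_1 < v_2 < v_3 < v_4).

import sympy as sp

t = sp.symbols('t')
U = sp.Matrix([[1, -1, -1, -3], [0, 1, 0, -2], [0, 0, 1, -1], [0, 0, 0, 1]])
Delta = sp.expand((t * U - U.T).det())
print(Delta)                                          # t**4 + 21*t**3 - 43*t**2 + 21*t + 1
print(U.shape[0] + Delta.coeff(t, U.shape[0] - 1))    # Markov invariant: 25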
Amanda Schwartz has recently developed a combinatorial description for all of the coefficients of the Alexander polynomial Δ_Q(t) in the case that Q is a tree quiver (that is, K_Q is a tree) <cit.>.
Further examples appear in Section <ref>.
§ TECHNICAL LEMMAS
In this section we establish lemmas and propositions which we will need in the proof of Theorem <ref>.
We begin with generalizations of Lemma <ref>.
Suppose that v_j is a proper vertex in the COQ Q.
Let U=(u_v_i v_k) be a unipotent companion of Q with linear order < such that v_i < v_j for v_i ∈(v_j).
Then the matrix M_Q(v_j, -1) U M_Q(v_j, -1)^T is a unipotent companion of μ_v_j(Q).
We follow the proof of <cit.> closely.
Throughout (v_j) and (v_j) respectively refer to the inset and outset of v_j in Q.
For the mutated quiver Q' = μ_v_j(Q) let <' be a linear order such that
* v_j <' v_i for all v_i ∈(v_j),
* if both v_k ∈(v_j) and v_k < v_j then v_k <' v_j, and
* otherwise <' agrees with < (so we have only changed the relative position of v_j).
Such an order exists because v_j is proper in Q.
Let U'=(u'_v_i v_k) be the unipotent companion of Q' with linear ordering <'.
Let B_Q = (b_v_iv_k), set n_v_iv_j = -max(0, b_v_iv_j) and also
δ_v_i = -1 if v_i=v_j;
1 else.
We wish to show that:
u'_v_iv_k = δ_v_iδ_v_k u_v_i v_k - n_v_i v_j u_v_j v_kδ_v_k - δ_v_i u_v_i v_j n_v_k v_j + n_v_i v_j n_v_k v_j.
Let u”_v_i v_k denote the right hand side of (<ref>).
By construction, we have
u'_v_i v_k =
1 if v_i=v_k;
b_v_j v_k if v_i=v_j <' v_k;
b_v_i v_j if v_i <' v_j=v_k;
-b_v_i v_k - n_v_i v_j n_v_k v_j + max(0, -b_v_i v_j) max(0, -b_v_j v_k) if v_j ≠ v_i <' v_k ≠ v_j;
0 if v_i >' v_k.
Every case where both v_j ≤' v_i, and v_j ≤' v_k appears in <cit.>, and the arguments therein apply.
We consider the cases where v_j appears after v_i or v_k in <'.
Note by construction of <' that if w <' v_j then u_v_jw = 0 = n_wv_j and b_wv_j = -u_wv_j.
* If v_i = v_k <' v_j then u'_v_i v_i = 1 - 0 - 0 + 0 = u”_v_i v_i.
* If v_i <' v_j=v_k then,
u'_v_i v_j = b_v_i v_j = -u_v_i v_j - 0 - 0 + 0 = u”_v_i v_j.
* We split v_j ≠ v_i <' v_k ≠ v_j into 2 cases, depending on the relative position of v_j.
* If v_i <' v_j <' v_k then
n_v_k v_j = -max(0, b_v_k v_j) and u_v_i v_j= max(0, -b_v_i v_j).
So
u'_v_i v_k = -b_v_i v_k + max(0, -b_v_i v_j) max(0, -b_v_j v_k) = u_v_i v_k - 0 - u_v_i v_j n_v_k v_j + 0 = u”_v_i v_k.
* If v_i <' v_k <' v_j,
then
u'_v_i v_k = -b_v_i v_k = u_v_i v_k - 0 - 0 + 0 = u”_v_i v_k.
* If v_k <' v_i and v_k <' v_j then u'_v_i v_k = 0 - 0 - 0 + 0 = u”_v_i v_k.
Let π be the permutation matrix which transforms < into <'.
Equation (<ref>) can be rewritten as U' = π^T M_Q(v_j, -1) U M_Q(v_j, -1)^T π (recall the matrices E and J defined in Definition <ref>).
The claim follows.
We establish a corresponding version of Lemma <ref> for ε = 1.
Suppose that v_j is a proper vertex in the COQ Q.
Let U be a unipotent companion of Q with linear order < such that v_j < v_k for v_k ∈(v_j).
Then the matrix M_Q(v_j, 1) U M_Q(v_j, 1)^T is a unipotent companion of μ_v_j(Q).
We reduce to Lemma <ref> by introducing the opposite linearly ordered quiver R^op of another linearly ordered R.
The quiver of R^op has the same vertices as R, but each arrow is reversed (v → w becomes v ← w) and the linear ordering is reversed (v < w becomes w < v).
It is easy to see that if U_R is a unipotent companion of R, then U_R^T is a unipotent companion of R^op (note that U_R^T is upper triangular in the reversed linear order).
We also have M_R(v_j, 1) = M_R^op(v_j, -1).
Now we compute:
M_Q(v_j, 1) U M_Q(v_j, 1)^T = ( M_Q(v_j, 1) U^T M_Q(v_j, 1)^T )^T
= ( M_Q^op(v_j, -1) U^T M_Q^op(v_j, -1)^T )^T.
Lemma <ref> implies that M_Q^op(v_j, -1) U^T M_Q^op(v_j, -1)^T is a unipotent companion of Q^op.
Thus its transpose is a unipotent companion of Q, as desired.
Let Q be the leftmost COQ depicted in Figure <ref>.
The unipotent companions of Q appear in Example <ref>, let U be the rightmost unipotent companion corresponding to the linear order v_4 < v_1 < v_2 < v_3.
Vertex v_2 is proper (and both Q and μ_v_2(Q) appear in the center of Figure <ref>).
We compute the unipotent companion M_Q(v_2, -1) U M_Q(v_2, -1)^T of μ_v_2(Q):
[ 1 0 0 0; 0 1 1 0; 0 0 -1 0; 0 0 0 1 ][ 1 3 2 1; 0 1 -1 -1; 0 0 1 0; 0 0 0 1 ][ 1 0 0 0; 0 1 1 0; 0 0 -1 0; 0 0 0 1 ]^T
= [ 1 5 -2 1; 0 1 0 -1; 0 -1 1 0; 0 0 0 1 ].
This matrix is upper triangular with the linear order v_4 < v_2 < v_1 < v_3 (thus we have swapped the 2nd and 3rd rows and columns), in agreement with Lemma <ref>.
If we instead take U' to be the leftmost unipotent companion depicted in Example <ref> (linear order v_1 < v_2 < v_3 < v_4), then we may apply Lemma <ref>.
We find another unipotent companion M_Q(v, 1) U' M_Q(v, 1)^T:
[ 1 0 0 0; 0 -1 0 0; 0 0 1 0; 0 2 0 1 ][ 1 -1 -1 -3; 0 1 0 -2; 0 0 1 -1; 0 0 0 1 ][ 1 0 0 0; 0 -1 0 0; 0 0 1 0; 0 2 0 1 ]^T
= [ 1 1 -1 -5; 0 1 0 0; 0 0 1 -1; 0 -2 0 1 ].
This matrix is upper triangular with order v_1 < v_3 < v_4 < v_2.
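These products can be reproduced mechanically; the sketch below (0-based indices, rows and columns in the order v_4 < v_1 < v_2 < v_3) recomputes the first congruence above, and the second (Lemma <ref> with M_Q(v, 1)) can be checked in the same way.

import numpy as np

U = np.array([[1, 3, 2, 1], [0, 1, -1, -1], [0, 0, 1, 0], [0, 0, 0, 1]])   # order v_4 < v_1 < v_2 < v_3
M = np.array([[1, 0, 0, 0], [0, 1, 1, 0], [0, 0, -1, 0], [0, 0, 0, 1]])    # M_Q(v_2, -1) in the same order
print(M @ U @ M.T)
# [[1, 5, -2, 1], [0, 1, 0, -1], [0, -1, 1, 0], [0, 0, 0, 1]],
# which becomes upper triangular after swapping rows/columns 1 and 2
# (i.e., passing to the order v_4 < v_2 < v_1 < v_3), as noted above.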
We next establish that certain subquivers in a mutation-acyclic quiver are acyclic.
Suppose Q is mutation-acyclic, and v is a vertex in Q.
Then the subquiver of Q supported by (v) (resp. (v)) is acyclic.
This theorem generalizes an example provided in <cit.>.
As Q is mutation-acyclic, by Theorem <ref>, Q has an admissible quasi-Cartan companion A = (a_pq) (Definition <ref>).
Thus every subquiver of Q also has an admissible quasi-Cartan companion.
(By restricting the rows and columns of A.)
Thus we may reduce to the case that v is a source (resp. sink), and v is adjacent to all other vertices in Q.
If (v) (resp. (v)) contains an oriented cycle then it also contains an oriented chordless cycle (by a standard subdivision argument, any chord results in a smaller oriented cycle; cf. Lemma <ref>).
Assume for contradiction that the undirected simple graph K_Q consists only of a chordless forward-oriented cycle 𝒪 = (w_0 - ⋯ - w_ℓ = w_0) and the source vertex v with edges w_i - v (the case with v a sink is identical).
Then an odd number of the a_w_i w_i+1 are positive.
In particular, at least one a_w_i w_i+1 is positive.
Without loss of generality, say a_w_0 w_1 > 0.
Note that A remains an admissible quasi-Cartan companion if we simultaneously toggle the signs of the row and column corresponding to w_i.
By repeatedly toggling signs, we may assume that a_w_i w_i+1 < 0 for all i>0: toggle w_2, …, w_ℓ-1 in turn to make a_w_1 w_2, …, a_w_ℓ-2 w_ℓ-1 negative; admissibility of the oriented cycle 𝒪 then forces a_w_ℓ-1 w_0 < 0 as well.
However, each of the chordless cycles (w_i - w_i+1 - v - w_i) in K_Q are not oriented, and so any admissible companion must satisfy an even number of the inequalities a_w_i w_i+1>0, a_w_i+1 v>0, and a_w_i v>0.
In particular, one of a_w_0 v or a_w_1 v is positive, without loss of generality say a_w_1 v > 0.
As a_w_1 w_2 < 0, we must have a_w_2 v > 0 for the chordless cycle (w_1 - w_2 - v - w_1) to have an even number of positive signs, and by induction we find that a_w_i v > 0 for all i.
But then the chordless cycle (w_0 - w_1 - v - w_0) has three positive entries in A, a contradiction.
Suppose Q is mutation-acyclic, and v is a vertex in Q.
Partition the other vertices of Q into three sets (v), (v) and the set of vertices S which are not adjacent to v.
Consider the subquivers supported by (v) ∪ S (resp. (v) ∪ S).
Essentially the same argument as in Theorem <ref> implies that every oriented cycle in these subquivers contains at most one vertex (and thus no arrows) in (v) (resp. (v)).
Thus, for example, the quiver in Figure <ref> is not mutation-acyclic.
The following elementary lemma lets us convert a cycle in the underlying unoriented graph (Definition <ref>) into a chordless cycle, while keeping a given vertex on an oriented path.
Fix a vertex v in a quiver Q.
Let 𝒪 be a cycle in K_Q, which contains the path u → v → w for some vertices u,w.
Then there exists a chordless cycle that is supported by a subset of the vertices of 𝒪 and contains a path u' → v → w' for some vertices u', w', not necessarily distinct from u,w.
We induct on the size of 𝒪.
If 𝒪 is chordless (i.e., if 𝒪 only contains u,v, and w), then there is nothing to prove.
Otherwise, 𝒪 contains a chord.
If 𝒪 contains a chord which does not involve v, then we get a shorter chordless cycle by including the chord edge, and removing vertices and edges so that u,v, and w remain.
Suppose instead that 𝒪 contains a chord with the vertex v, without loss of generality say v → w' for a vertex w' in 𝒪 and distinct from u,w.
We can replace the edge v → w by v → w' and remove vertices so that u, v and w' remain (and w does not).
The claim follows by induction.
Finally, we give a new way to check that two COQs are wiggle equivalent.
Fix a quiver Q and two linear orderings of its vertices <, and <'.
Let U=(u_vw) and U'=(u'_vw) be the unipotent companions of the linearly ordered quivers (Q, <) and (Q, <') respectively.
If u_vw = u'_vw for every pair of vertices v,w, then the COQs associated to (Q, <) and (Q, <') are wiggle equivalent.
Note that the condition u_vw = u'_vw for each pair of vertices is not the same as requiring that the matrices U and U' are equal when they are written in their own respective linear orderings.
For example, the vertex v in the first subscript may refer to a different row than in the second.
By Theorem <ref>, it suffices to check that both COQs have the same winding numbers.
From (<ref>), it suffices to identify the arrows which “wrap around” the respective linear orderings (and, if they differ, their orientations).
For any two vertices v,w we have v < w if and only if u_vw≠ 0.
Thus the same arrows contribute regardless of if we use the linear ordering < or <', and so the winding numbers agree.
Let Q be the acyclic quiver described in Figure <ref>.
Consider the two unipotent companions from different linear orderings of Q below.
U=
[ 1 3 2 1; 0 1 -1 -1; 0 0 1 0; 0 0 0 1 ]
U' =
[ 1 3 1 2; 0 1 -1 -1; 0 0 1 0; 0 0 0 1 ]
v_4 < v_1 < v_2 < v_3 v_4 < v_1 < v_3 < v_2
Each entry of U matches U'; for example, the 2 in the first row and third column of U corresponds to u_v_4, v_2 and agrees with the entry in the first row and fourth column of U'.
Thus U and U' are wiggle equivalent, indeed they are related by the wiggle (v_2 v_3).
§ PROOF OF THEOREM <REF>
We begin by establishing and recalling notation for this section.
Using the notation from Definition <ref>, suppose our initial quiver is acyclic with (mutable) vertices v_1, …, v_n.
Fix a quiver Q̃_t ∈ [Q_0], with mutable part Q_t.
Thus C_t denotes the C-matrix of Q̃_t, and B_t denotes the B-matrix of Q_t.
Finally, recall from Definition <ref> the associated quasi-Cartan companions A_t_0 and A_t = (a_ij;t) = C_t^T A_t_0 C_t.
We associate to each t ∈𝕋_n an additional matrix
U_t = (u_vw;t) = (A_t - B_t)/2.
Observe that u_vv;t=1 and U_t^T - U_t = B_t.
Let Q_0 be the acyclic quiver of type E_6 shown (as a COQ) in Figure <ref>, and let t ∈ 𝕋_n be the vertex such that
t_0 3— t_1 2— t_2 5— t_3 4— t_4 2— t_5 1— t_6=t.
In this case,
B_t = [ 0 0 0 1 0 -1; 0 0 -1 -1 1 1; 0 1 0 0 -1 -1; -1 1 0 0 0 0; 0 -1 1 0 0 0; 1 -1 1 0 0 0 ], C_t =
[ -1 0 0 0 0 1; 0 0 0 -1 0 1; 0 -1 0 0 0 1; 0 -1 1 0 0 0; 0 0 0 0 -1 0; 0 0 0 0 0 1 ],
and therefore
A_t = [ 2 0 0 -1 0 -1; 0 2 -1 -1 -1 1; 0 -1 2 0 1 -1; -1 -1 0 2 0 0; 0 -1 1 0 2 0; -1 1 -1 0 0 2 ], U_t = [ 1 0 0 -1 0 0; 0 1 0 0 -1 0; 0 -1 1 0 1 0; 0 -1 0 1 0 0; 0 0 0 0 1 0; -1 1 -1 0 0 1 ].
All of the above is written using the linear order v_1 < v_2 < ⋯ < v_6.
We will construct a family of linear orderings such that U_t is upper triangular when written using one of them (see Lemma <ref>).
To facilitate the construction, we first create several new directed graphs from Q_t.
Fix a vertex v_j of Q_t.
Let _v_j() be the directed graph obtained by performing the following operations on Q_t:
* if v_j is red (resp., green), then for each oriented 2-path v_i → v_j → v_k with v_i,v_k both green (resp., red) in Q_t, add an arrow v_k v_j→ v_i labeled with the vertex v_j;
* reverse all arrows from a red vertex directed toward a green vertex. Equivalently, by Theorem <ref>, reverse all arrows v → w with a_v w;t > 0.
Continuing with Example <ref>, we see from the columns of C_t that v_3, v_6 are green while v_1, v_2, v_4, and v_5 are red in Q_t.
Therefore each _v_i() will reverse the arrows v_2 → v_6 and v_5 → v_3 (agreeing with the two positive off-diagonal entries of A_t), and we only add a new arrow to _v_i() if i ∈{2,3,6}.
Each directed graph _v() is acyclic. (In particular, there are no oriented 2-cycles; _v() is an acyclic quiver with some marked arrows.)
Furthermore, the subquiver of Q_t supported by all red (resp., green) vertices is acyclic.
By construction, there are no directed paths from a red vertex to a green vertex in _v().
So any oriented cycle in _v() must have all vertices of the same color (red or green).
We first consider oriented cycles which do not use any arrows labeled v, i.e. oriented cycles in which all have the same color.
Let _t be constructed from Q̃_t as in Definition <ref>.
By construction of _t, the set (v) (resp., (v)) is precisely the red (resp., green) vertices in .
As _0 is acyclic, by Corollary <ref>, _t is mutation-acyclic.
By Theorem <ref>, (v) (resp., (v)) is an acyclic subquiver of _t.
Thus there are no oriented cycles of all red (resp., green) vertices in .
Without loss of generality, suppose v is red.
The case where v is green is similar.
Then we do not add any arrows labeled v between red
vertices, so we have shown there are no oriented cycles of red
vertices.
Further, any directed cycle involving green
vertices must use at least one arrow labeled v (those created in Step 1 above).
As all arrows labeled v in _v() are oriented from (v) towards (v), any oriented cycle must also involve at least one arrow which is not labeled v.
We will now argue that any other cycles in _v() imply a problematic cycle 𝒪 in the underlying unoriented simple graph K_ of .
Suppose there is an oriented cycle w_0 v→ w_1 →⋯→ w_k = w_0 in _v() that involves just one arrow w_0 v→ w_1 labeled v.
By construction of _v(), there is a path w_0 ← v ← w_1 in .
Thus there is a cycle 𝒪 = (w_0 - v - w_1 - w_2- ⋯ - w_k) in K_.
If 𝒪 is not chordless then replace it with a chordless cycle containing v by Lemma <ref>.
Because the subquiver of green vertices in is acyclic, any chord between the green vertices w_i, w_j has the same orientation as the directed path it replaced.
Therefore 𝒪 is a chordless cycle which is not oriented.
But the only edge on 𝒪 which has a positive entry in A_t is w_0 - v, contradicting Theorem <ref>.
So there are no cycles in _v() with a single arrow labeled v.
Suppose there is an oriented cycle in _v() with at least 2 arrows labeled v, say
w_0 v→ w_1 →⋯→ w_j v→ w_j+1→⋯→ w_k = w_0
with j>1 and none of the arrows on the path w_1 → w_2 →⋯→ w_j labeled v (the other arrows may or may not be labeled v).
Thus 𝒪 = (v - w_1 - ⋯ - w_j - v) is a cycle in K_ which is not oriented.
After applying Lemma <ref> if necessary, we may assume 𝒪 is a chordless cycle.
But the only edge on 𝒪 which has a positive entry in A_t is w_j - v, contradicting Theorem <ref>.
Thus _v() is acyclic.
A linear extension of _v() is a linear order < on the vertices such that v_i < v_j whenever v_i → v_j (it does not matter whether the arrow is labeled v).
Let σ_v be a cyclic ordering associated to a linear extension of _v().
Continuing with Examples <ref> and <ref>,
there are several possible choices for the cyclic orderings σ_v for each fixed v, though all of them result in wiggle equivalent COQs (, σ_v).
We could take
σ_v_1 = σ_v_2 = σ_v_3 = (v_6, v_1, v_4, v_3, v_2, v_5), and
σ_v_4 = σ_v_5 = σ_v_6 = (v_6, v_3, v_1, v_4, v_2, v_5).
The matrix U_t=(u_v_i v_j;t) is upper triangular when written with a linear extension < of _v(). Thus U_t is a unipotent companion of (Q_t, σ_v).
Suppose v_i,v_j are vertices such that u_v_i v_j;t≠ 0.
We want to show that v_i < v_j.
Because |a_v_i v_j;t| = |b_v_i v_j;t|, and u_v_i v_j;t = a_v_i v_j;t - b_v_i v_j;t/2≠ 0, we have a_v_i v_j;t = -b_v_i v_j;t.
If a_v_i v_j;t>0 then v_i ← v_j (as b_v_i v_j;t < 0).
By Definition <ref>, the graph _v() has an arrow v_i → v_j.
Thus v_i < v_j in this case.
Otherwise a_v_i v_j;t<0 and v_i → v_j in _v(), so we have v_i < v_j.
Continuing with Example <ref>, we find that U_t is a unipotent companion of (, σ_v), for any vertex v.
As suggested by Example <ref>, if we use the linear order v_6 < v_3 < v_1< v_4< v_2< v_5, then
U_t = [ 1 -1 -1 0 1 0; 0 1 0 0 -1 1; 0 0 1 -1 0 0; 0 0 0 1 -1 0; 0 0 0 0 1 -1; 0 0 0 0 0 1 ].
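The claims in this example can be checked mechanically; the sketch below re-enters B_t and A_t from the earlier E_6 example, forms U_t = (A_t - B_t)/2, and verifies that it becomes upper triangular after permuting rows and columns into the order v_6 < v_3 < v_1 < v_4 < v_2 < v_5 (0-based indices).

import numpy as np

B = np.array([[0, 0, 0, 1, 0, -1], [0, 0, -1, -1, 1, 1], [0, 1, 0, 0, -1, -1],
              [-1, 1, 0, 0, 0, 0], [0, -1, 1, 0, 0, 0], [1, -1, 1, 0, 0, 0]])
A = np.array([[2, 0, 0, -1, 0, -1], [0, 2, -1, -1, -1, 1], [0, -1, 2, 0, 1, -1],
              [-1, -1, 0, 2, 0, 0], [0, -1, 1, 0, 2, 0], [-1, 1, -1, 0, 0, 2]])
U = (A - B) // 2
assert np.array_equal(U.T - U, B)        # U is a candidate companion for B_t

p = [5, 2, 0, 3, 1, 4]                   # order v_6 < v_3 < v_1 < v_4 < v_2 < v_5
Up = U[np.ix_(p, p)]
assert np.array_equal(Up, np.triu(Up))   # upper triangular in that order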
The COQs (Q_t, σ_v), (Q_t, σ_w) are wiggle equivalent for all vertices v,w.
By Lemma <ref>, the matrix U_t is a unipotent companion of both (Q_t, σ_v) and (Q_t, σ_w).
The claim follows from Lemma <ref>.
The COQ (Q_t, σ_v) is proper.
It suffices to check that each vertex q is proper in the COQ (Q_t, σ_q) by Lemma <ref>.
Let < be a linear extension of _q().
We consider all oriented paths p → q → r in cases, based on the color (green or red) of p,q, and r in .
As noted in Lemma <ref>, the subquiver of all red or all green vertices is acyclic.
We have chosen our order < to be compatible with these acyclic subquivers.
Thus if all three of p,q,r are green (resp., red) then p → q → r is a right turn at q.
A similar argument applies if p or r is a different color than the other two vertices.
This leaves us with the alternating patterns: red-green-red or green-red-green.
By construction of <, if p,r are both a different color than q, then there is an arrow r q→ p in _q(), and so r<p.
Thus the path p → q → r is again a right turn at q.
Continuing with Examples <ref> and <ref>, Figure <ref> shows the COQ (, σ_v_1).
Note that every oriented path of length 2 is a right turn.
Let σ_t = σ_v_1 be a cyclic ordering constructed from Q_t as in Definition <ref>.
By Lemma <ref>, U_t is a unipotent companion of the COQ (Q_t, σ_t).
By Lemma <ref>, (Q_t, σ_t) is proper.
It remains to show that these COQs (as t ∈𝕋_n varies) are all proper mutation equivalent.
Fix an arbitrary t' i— t.
It suffices to show that the COQ (μ_v_i(Q_t), σ_t') is in the wiggle equivalence class μ_v_i((Q_t, σ_t)). Let < be a linear extension of _v_i().
Say that v_i is green (resp., red) in Q_t.
For every vertex w ∈(v_i) (resp., (v_i)), we have v_i < w (resp. w < v_i).
By Proposition <ref>, we have
B_t' = M_Q_t(v_i, ε) B_t M_Q_t(v_i, ε)^T,
A_t' = M_Q_t(v_i, ε) A_t M_Q_t(v_i, ε)^T
for ε = 1 (resp., -1).
Thus
U_t' = 1/2 ( M_Q_t(v_i, ε) A_t M_Q_t(v_i, ε)^T - M_Q_t(v_i, ε) B_t M_Q_t(v_i, ε)^T )
= M_Q_t(v_i, ε) U_t M_Q_t(v_i, ε)^T.
In particular, U_t' is exactly the unipotent companion of μ_v_i((Q_t, σ_t)) produced by Lemma <ref> (resp. <ref>).
The theorem then follows from Lemma <ref>.
§ APPLICATIONS
As noted in Theorem <ref>, the integral congruence class of a unipotent companion is a proper mutation invariant of COQs.
If a COQ is totally proper, then every mutation is proper and so we have a mutation invariant of the underlying quiver.
A quiver Q has at most one totally proper cyclic ordering (up to wiggle equivalence).
There is an efficient algorithm that computes a cyclic ordering σ_Q such that if Q has a totally proper cyclic ordering, then (Q, σ_Q) is totally proper.
If a quiver Q is mutation equivalent to a totally proper quiver Q', then the integral congruence class of a unipotent companion of the COQ (Q, σ_Q) will agree with that of (Q', σ_Q').
In particular, their Alexander polynomials and Markov invariants (Definition <ref>) will agree.
By Theorem <ref>, we now have many new totally proper quivers.
Let us discuss a few examples.
Suppose Q is an acyclic quiver whose underlying unoriented simple graph is a chordless cycle 𝒪.
Then a cyclic ordering σ of the vertices of Q is totally proper if and only if _σ(𝒪) = 0.
Note that Corollary <ref> makes no assumption about the multiplicities of the arrows.
As an illustration, we compute some mutation invariants for a few families.
Let Ã(r, ℓ) denote a quiver with n = r + ℓ vertices v_1, …, v_n which has r locations with a single arrow v_i → v_i+1 and ℓ locations with a single arrow v_i ← v_i+1 (see <cit.>).
While there are many ways to place the arrows, all are related by sink and/or source mutations.
Thus we may assume that Ã(r, ℓ) has arrows
v_1 → v_2 →⋯→ v_r ← v_r+1←⋯← v_n← v_1.
For r, ℓ > 0, the quiver Ã(r, ℓ) is acyclic, and thus totally proper.
One totally proper cyclic ordering is σ_rℓ = (v_1, …, v_r-1, v_n, v_n-1, …, v_r+1, v_r).
The Alexander polynomial of (Ã(r, ℓ), σ_rℓ) is
Δ_Ã(r, ℓ)(t)= (t^ℓ - (-1)^ℓ) (t^r - (-1)^r).
Every quiver mutation equivalent to (Ã(r, ℓ), σ_r ℓ) has integrally congruent unipotent companions, and in particular a matching Alexander polynomial.
Thus, if r=3 and ℓ=2 then one unipotent companion is
U = [ 1 -1 0 -1 0; 0 1 -1 0 0; 0 0 1 0 -1; 0 0 0 1 -1; 0 0 0 0 1 ],
and we can compute the Alexander polynomial
Δ_Ã(3,2)(t) = det(t U - U^T) = t^5 - t^3 + t^2 - 1.
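Both expressions can be compared directly with a short symbolic computation (a sketch).

import sympy as sp

t = sp.symbols('t')
U = sp.Matrix([[1, -1, 0, -1, 0],
               [0, 1, -1, 0, 0],
               [0, 0, 1, 0, -1],
               [0, 0, 0, 1, -1],
               [0, 0, 0, 0, 1]])
print(sp.expand((t * U - U.T).det()))                     # t**5 - t**3 + t**2 - 1
print(sp.expand((t**2 - (-1)**2) * (t**3 - (-1)**3)))     # the closed form with r = 3, l = 2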
We can expand Example <ref> by considering quivers whose simple undirected graphs are cycles but have multiple arrows between vertices.
Choose nonnegative integers r, ℓ and positive integers a_1, …, a_r+ℓ.
Let C(r, ℓ) denote the quiver with n = r + ℓ vertices v_i, and arrows
v_1 a_1→ v_2 a_2→⋯a_r-1→ v_r a_r← v_r+1a_r+1←⋯a_n-1← v_na_n← v_1.
For r,ℓ > 0, the cyclic ordering σ_r ℓ is totally proper.
We compute the Alexander polynomials for several small values of r, ℓ.
Recall that for a 4-vertex quiver Q we can write the Alexander polynomial as
Δ_Q(t) = (t-1)^4 + M_Q · t(t-1)^2 + det(B_Q) · t^2.
In particular,
Δ_C(4,0)(t) = (t-1)^4 + (a_1^2 + a_2^2 + a_3^2 + a_4^2-a_1a_2a_3a_4) t(t-1)^2
+ ( a_1^2a_3^2 + a_2^2a_4^2 - 2 a_1 a_2 a_3 a_4) t^2;
Δ_C(3,1)(t) = (t-1)^4 + ( a_1^2 + a_2^2 + a_3^2 + a_4^2 + a_1a_2a_3a_4 ) t (t-1)^2
+ ( a_1^2a_3^2 + a_2^2a_4^2 + 2 a_1 a_2 a_3 a_4 ) t^2;
Δ_C(2,2)(t) = (t-1)^4 + ( a_1^2 + a_2^2 + a_3^2 + a_4^2 ) t(t-1)^2
+ ( a_1^2a_3^2 + a_2^2a_4^2 - 2a_1a_2a_3a_4 ) t^2.
For a 5-vertex quiver Q, we can write the Alexander polynomial as
Δ_Q(t) = (t-1)^5 + M_Q · t(t-1)^3 + d · t^2 (t-1)
for some integer coefficient d. In particular,
Δ_C(4,1)(t) = (t-1)^5 + (a_1^2 + a_2^2 + a_3^2 + a_4^2 + a_5^2 + a_1 a_2 a_3 a_4 a_5) t (t-1)^3
+ ( a_1^2 a_3^2 + a_1^2 a_4^2 + a_2^2 a_4^2 + a_2^2 a_5^2 + a_3^2 a_5^2 - 3 a_1a_2a_3a_4a_5) t^2(t - 1);
Δ_C(3,2)(t) = (t-1)^5 + ( a_1^2 + a_2^2 + a_3^2 + a_4^2 + a_5^2) t (t-1)^3
+ (a_1^2a_3^2 + a_1^2a_4^2 + a_2^2a_4^2 + a_2^2a_5^2 + a_3^2a_5^2 - a_1a_2a_3a_4a_5) t^2(t - 1).
These examples suggest a pattern for the Markov invariant, agreeing with Proposition <ref>. Without loss of generality, assume r > 0. Then
M_C(r, ℓ) =
∑_i^n a_i^2 if ℓ≥ 2;
∑_i^n a_i^2 + ∏_i^n a_i if ℓ =1;
∑_i^n a_i^2 - ∏_i^n a_i if ℓ =0.
Recall Lagrange's four squares theorem, which states that every nonnegative integer is the sum of at most four squares (of integers).
Thus there is a totally proper 4-vertex COQ with Markov invariant x for every x ≥ 0.
However some integers are not the sum of exactly four positive squares.
For example, 2^2k+1, or 29 (see <cit.>).
Thus any quiver whose associated Markov invariant is one of these values must not be mutation equivalent to C(2, 2) (for any positive values a_i).
We have the following corollaries of Proposition <ref>.
Let Q be a connected, totally proper and mutation-acyclic COQ on n vertices.
Then the Markov invariant M_Q ≥ n-1. Further, if M_Q = n-1 then Q is mutation equivalent to a quiver whose underlying undirected graph is a tree (with no parallel arrows).
Let Q be a totally proper and mutation-acyclic COQ on n vertices.
Then the Markov invariant and the determinant of the exchange matrix satisfy the following inequality:
det(B_Q) ≤ (2 M_Q)^n/2.
It suffices to consider acyclic COQs Q with vertices v_1, …, v_n.
By Proposition <ref>, ∑_i<j b_v_i v_j^2 ≤ M_Q.
Recall the Frobenius norm ||B_Q||_F = (∑_i,j b_v_i v_j^2 )^1/2 of B_Q (see also the Hilbert-Schmidt norm or Schur norm), thus
||B_Q||_F^2 ≤ 2 M_Q.
As det(B_Q) = 0 if n is odd, we assume n is even.
As B_Q is skew-symmetric, its eigenvalues are all imaginary and come in signed pairs, say they have norms |λ_1|, …, |λ_n/2|.
Let λ = max( |λ_j|).
Then det(B_Q) ≤ λ^n.
As B_Q is skew-symmetric the spectral norm ρ(B_Q) = λ.
It is a classical fact that the spectral norm is a lower bound on the Frobenius norm: λ≤ ||B_Q||_F.
Therefore we have
det(B_Q) ≤ λ^n ≤ ||B_Q||_F^n ≤ (2 M_Q)^n/2.
Corollaries <ref> and <ref> give short proofs that certain quivers are not mutation-acyclic.
The Somos sequences are integer sequences, the first 5 of which are associated to cluster algebras with particularly symmetric quivers.
The Somos-4 quiver S is shown in Figure <ref>.
The distinguished cyclic ordering σ_S = (v_1, v_4, v_2, v_3) (see <cit.>[Remark 11.11]) is the only potentially totally proper ordering for S.
This quiver is not mutation-acyclic though, which is quickly determined by computing the Markov invariant of the COQ (S, σ_S) and applying Corollary <ref>:
M_S = 1^2 + 1^2 + 2^2 + 3^2 + 2^2 + 1^2 - 6 - 6 + 2 + 2 - 12 = 0.
Fix an integer a ≥ 1.
Example <ref> generalizes to the 4-vertex quiver aS, which has the same vertices as S, and a arrows for every arrow in S (thus, for example, there are 3a arrows from v_2 to v_3).
Each has a nonpositive Markov invariant.
Continuing with Example <ref>, consider the COQ C(4,0) with cyclic ordering (v_1, v_2, v_3, v_4), shown below.
[figure: the COQ C(4,0) with cyclic ordering (v_1, v_2, v_3, v_4)]
Some such quivers are mutation-acyclic, for example when a_i=1 for all i then C(4,0) has type D_4.
Others are not.
Warkentin showed in <cit.>[Example 6.11] that if a_1=a_3≥ 2 and a_2=a_4≥ 2 then this quiver is not mutation-acyclic (by a precise description of the mutation class).
The same can be observed by Corollary <ref>, noting that M_C(4,0)≤ 0 for such quivers.
Suppose that a_4 = a_1 a_2 a_3. Then M_C(4,0) = a_1^2 + a_2^2 + a_3^2 ≥ 3.
Thus we cannot determine whether the quiver is mutation-acylic or not from Corollary <ref>.
However,
det(B_C(4,0)) = (a_1 a_3 + a_1 a_2^2 a_3)^2 = (a_2^2 + 1)^2 (a_1 a_3)^2.
So we have det(B_C(4,0)) > (2 M_C(4,0))^2 whenever (for example) a_2 ≥ max(a_1, a_3) and a_1 a_3 ≥ 6.
Thus, for these values of a_i, the quiver C(4,0) is again not mutation-acyclic by Corollary <ref>.
Consider the quivers associated with the elliptic root system of double extended type E_7 or E_8, denoted E_7^(1,1) or E_8^(1,1) and shown in Figure <ref>.
(Note that the quiver in Figure <ref> is mutation equivalent to E_8^(1,1).)
These quivers have finite mutation classes, and can be shown to be totally proper by exhaustive computation.
(We emphasize the exhaustion; E_8^(1,1) has 5739 quivers, up to isomorphism, in its class.)
These quivers are not mutation-acyclic.
This has been shown by brute force, or by representation theory <cit.>[Section 2.3].
We sketch a new proof.
The canonical candidate cyclic orderings σ_E_7^(1,1) and σ_E_8^(1,1) for the quivers shown in Figure <ref> are (v_1, …, v_9) and (v_1, …, v_10) respectively.
(While (E_7^(1,1), σ_E_7^(1,1)) and (E_8^(1,1), σ_E_8^(1,1)) are totally proper, we do not rely on this fact.)
So we treat E_7^(1,1) and E_8^(1,1) as COQs with their respective cyclic ordering.
Their Markov invariants are:
M_E_7^(1,1) = 10 + 4 - 6 = 8, M_E_8^(1,1) = 11 + 4 - 6 = 9.
By Corollary <ref>, the only acyclic quivers that E_7^(1,1) (resp., E_8^(1,1)) could be mutation equivalent to are tree quivers (with no parallel arrows).
Up to isomorphism, there are 47 (unoriented) trees with 9 vertices and 106 trees with 10 vertices.
All orientations of each of these trees are mutation equivalent.
By Theorem <ref>, all cyclic orderings of a tree are wiggle equivalent.
Thus any quiver mutation equivalent to a tree on 9 (resp., 10) vertices must have one of the 47 (resp., 106) Alexander polynomials. (Actually the list is slightly smaller, as some non-isomorphic trees have the same Alexander polynomial.)
So it suffices to check if Δ_E_7^(1,1)(t) and Δ_E_8^(1,1)(t) appear in these lists.
A (still tedious, but straightforward and human verifiable) computation shows that they are not.
The computation can be further simplified using the work of Amanda Schwartz, see Remark <ref>.
Choose positive integers a,b,c.
Let Q be the 4-vertex COQ below:
[Figure: the 4-vertex COQ Q.]
Then M_Q = a^2 + 2 b^2 + 2c^2 - 2 abc.
Let Q' be the subquiver supported by v_1, v_2,v_3, thus M_Q' = a^2 + b^2 + c^2 - abc.
Note that M_Q = 2 M_Q' - a^2.
Thus M_Q < 0 whenever M_Q' < a^2/2.
(This last inequality is generally satisfied for mutation-infinite quivers; as M_Q' is constant on the mutation class, but the number of arrows grows arbitrarily large, we can find many quivers with some weight a much larger than M_Q'.)
Thus by Corollary <ref>, such Q cannot be mutation-acyclic.
The COQ Q constructed in Example <ref> can be constructed in another way.
Start with any 3-vertex quiver Q'; create a copy with all arrows reversed; pick an arrow v → w in Q' and identify v in Q' with w in the copy, and likewise w in Q' with v in the copy.
In general, whenever we have two quivers with a common subquiver we can identify that common subquiver to create a larger quiver.
The Markov invariant of the resulting quiver will contain all the terms from both of their respective Markov invariants, generally giving fairly simple inequalities for when the result cannot be mutation-acyclic.
We next illustrate a way of combining Corollary <ref> with another mutation invariant <cit.>, the multiset of GCDs of all the integers in the columns (or rows) of the B-matrix.
We construct a COQ as in Example <ref> with a=b=2 and c=4.
Thus M_Q = 12 ≥ 3, and we cannot use Corollary <ref> to conclude that Q is not mutation-acyclic.
However, note that the GCD of the entries of B_Q is 2.
Thus if Q is mutation-acyclic, it must be mutation equivalent to a connected acyclic quiver with the same GCD and M_Q=12.
There are two such quivers, both trees with 2 arrows between each pair of neighboring vertices.
The B-matrices of these acyclic quivers have determinant 0 and 16, but det(B_Q) = 144.
Thus Q is not mutation-acyclic.
Finally, we note that Lemmas <ref> and <ref> extend to the admissible quasi-Cartan companion A_U = U + U^T (discussed in <cit.>[Remark 13.2]).
Fix a totally proper COQ Q and a proper vertex v_j.
Let U_Q be a unipotent companion with linear order < so that v_i < v_j (resp., v_j < v_i) for all v_i ∈(v_j) (resp., (v_j)).
Then M_Q(v_j, ϵ) (U_Q + U_Q^T) M_Q(v_j, ϵ)
is an admissible quasi-Cartan companion of μ_v_j(Q) when ϵ = -1 (resp., 1).
BBRAcyclicHered
A. Buan, B. Marsh, and I. Reiten,
Cluster mutation via quiver representations.
Comment. Math. Helv. 83 (2008), 143–177.
BGZsymmetrizable
M. Barot, C. Geiss, and A. Zelevinsky,
Cluster algebras of finite type and positive symmetrizable matrices,
J. London Math. Soc. 73 (2006), 545–564.
BHJ-opensource
W. Bley, T. Hofmann, and H. Johnston,
Computation of lattice isomorphisms and the integral matrix similarity problem,
Forum Math. Sigma 10 (2022), Paper No. e87, 36 pp.
CaoLiCmats
P. Cao and F. Li,
Uniform column sign-coherence and the existence of maximal green sequences,
J. Algebr. Comb. 50 (2019), 403–417.
casals-binary
R. Casals,
A binary invariant of matrix mutation,
,
to appear in J. Comb. Algebra.
QPs2signcoherence
H. Derksen, J. Weyman, and A. Zelevinsky,
Quivers with potentials and their representations II: applications to cluster algebras.
J. Amer. Math. Soc. 23 (2010), 749–790.
EHO-magma
B. Eick, T. Hofmann, and E. A. O'Brien,
The conjugacy problem in GL(n,ℤ),
J. Lond. Math. Soc. 100 (2019), 731–756.
ErvinRedSize
T. Ervin,
All connected quivers have an unrestricted red size of n-1 or n,
.
ervinPrefork
T. Ervin,
New hereditary and mutation-invariant properties arising from forks,
Electron. J. Combin. 31 (2024), no. 1.16, 1–30.
ErvinComms
T. Ervin, private communication, June 2024.
COQ
S. Fomin and S. Neville,
Cyclically ordered quivers,
.
LMC
S. Fomin and S. Neville,
Long mutation cycles,
.
cats1
S. Fomin, M. Shapiro, and D. Thurston,
Cluster algebras and triangulated surfaces. Part I: Cluster complexes,
Acta Math. 201 (2008), 83–146.
CAtextbook1-3
S. Fomin, L. Williams, and A. Zelevinsky,
Introduction to cluster algebras. Chapters 1–3,
.
CA1
S. Fomin and A. Zelevinsky,
Cluster algebras I: Foundations,
J. Amer. Math. Soc. 15 (2002), 497–529.
KellerApp
B. Keller,
Quiver mutation in Java.
Java applet available at <https://webusers.imj-prg.fr/~bernhard.keller/quivermutation/> (2006).
KellerE8
B. Keller and I. Reiten,
Acyclic Calabi-Yau categories,
Compos. Math. 144.5 (2008), 1332–1348.
Nakanishi-Zelevinsky
T. Nakanishi and A. Zelevinsky,
On tropical dualities in cluster algebras,
Algebraic groups and quantum groups, 217–226,
Contemp. Math. 565,
American Mathematical Society, Providence, RI, 2012.
SchwartzPC
A. Schwartz,
private communication, June 2024.
Seven3x3
A. I. Seven,
Mutation classes of skew-symmetrizable 3 × 3 matrices,
Proc. Amer. Math. Soc. 141 (2013), 1493–1504.
SevenAMatrices
A. I. Seven,
Cluster algebras and symmetric matrices,
Proc. Amer. Math. Soc. 143 (2015), 469–478.
Seven-congruence
A. Seven and İ. Ünal,
Congruence invariants of matrix mutation,
.
A000534
N. J. A. Sloane and J. H. Conway,
Numbers that are not the sum of 4 nonzero squares,
Entry in The On-Line Encyclopedia of Integer Sequences,
<https://oeis.org/A000534>.
HughPC
H. Thomas,
private communication, August 2024.
VatneTypeD
D. Vatne,
The mutation class of D_n quivers,
Comm. Algebra 38 (2010), 1137–1146.
Warkentin
M. Warkentin,
Exchange graphs via quiver mutation,
Ph.D. thesis,
Technische Universität Chemnitz,
2014,
<https://nbn-resolving.org/urn:nbn:de:bsz:ch1-qucosa-153172>.
| Quivers are finite directed graphs without oriented cycles of length 1 or 2.
Mutations are operations that transform a quiver, based on a choice of a vertex.
These notions are foundational in the study of cluster algebras <cit.>.
A mutation invariant is a characteristic of a quiver that is preserved under mutations.
Mutation invariants are helpful for deciding whether two quivers are mutation equivalent or not,
i.e., whether there is a sequence of mutations that transforms one quiver into the other.
See <cit.> for examples of known mutation invariants.
A cyclically ordered quiver (COQ) is a pair (Q, σ) where Q is a quiver and σ a cyclic ordering of its vertices.
Cyclically ordered quivers were introduced in <cit.> to develop new powerful mutation invariants.
A mutation in a COQ (Q, σ) transforms Q by the usual mutation rule,
while simultaneously changing the cyclic ordering σ in a prescribed way.
We note that mutations of COQs are only allowed at the vertices that satisfy a certain properness condition.
It is this condition that ultimately enables the introduction of new mutation invariants.
In this paper, we prove that for one important class of quivers, the properness requirement can be lifted,
so that mutations at all vertices are allowed and all invariants developed in <cit.> become true mutation invariants.
Specifically, we show that in any mutation-acyclic quiver Q
(i.e., a quiver that is mutation equivalent to an acyclic quiver),
every vertex v is proper, for a particular canonical cyclic ordering σ on Q.
After a mutation at v, we obtain a new COQ (Q',σ'), with canonical cyclic ordering σ',
that again has the same property: all the vertices are proper, so we can mutate at any one of them,
and the process continues.
This theorem yields new mutation invariants of mutation-acyclic quivers,
as well as new tools for proving that various quivers are not mutation-acyclic.
One such quiver is shown in Figure <ref>.
We give a short proof that this quiver is not mutation-acyclic in Example <ref>.
As mentioned above, mutation at a vertex v in a COQ (Q,σ)
is only allowed if v satisfies a combinatorial condition of “properness.”
Informally, v is proper if every 2-arrow oriented path through v
travels “clockwise,” i.e., in the direction of the cyclic ordering σ.
A COQ is proper if every vertex v in it is proper
(possibly after applying a sequence of transpositions called wiggles).
Thus, we can mutate at any vertex in a proper COQ (Q,σ)
and get a new COQ (Q',σ')—but (Q',σ') will not necessarily be proper.
If it is the case that applying any sequence of mutations to (Q,σ) results in a proper COQ,
then we say that (Q,σ) is totally proper.
We have shown in <cit.> that
a quiver Q can be upgraded to a totally proper COQ (Q,σ)
in at most one way (up to wiggles).
Moreover, the (essentially unique) candidate cyclic ordering σ=σ_Q can be constructed efficiently.
We say that a quiver Q is totally proper if the COQ (Q, σ_Q) is totally proper.
As shown in <cit.>,
totally proper quivers have a powerful mutation invariant, which we recall next.
This invariant is constructed as follows.
Input: a totally proper quiver Q.
Step 1. Construct the canonical cyclic ordering σ=σ_Q.
Step 2. “Tear” σ into a linear ordering <.
Step 3. Construct the skew-symmetric exchange matrix B=B_Q, with rows and columns ordered according to <.
Step 4. Construct the unipotent companion U, the unique
unipotent upper-triangular matrix such that B=U^T - U.
Output: the integral congruence class of U, i.e.,
the set {G U G^T | G ∈ GL_n(ℤ)}.
We have shown in <cit.> that wiggles, cyclic shifts of the linear ordering,
and (proper) mutations of COQs preserve the integral congruence class of the unipotent companion.
While the resulting invariant of proper mutations is very powerful,
it is not easy to use in practice, since
the problem of deciding whether two upper-triangular matrices are congruent over GL_n(ℤ) seems to be rather difficult.
In <cit.>, we bypassed this difficulty as follows.
It is well known (and easy to see) that the integral congruence class of a matrix U∈_n()
uniquely determines the conjugacy class of its cosquare U^-T U.
It follows that whenever two COQs are related by proper mutations,
the cosquares of their respective unipotent companions must be conjugate in GL_n(ℤ).
This conjugacy condition can be verified efficiently,
though the algorithm is quite nontrivial <cit.>.
The cosquare U^-T U can be used to construct other invariants of proper mutations,
such as the monic characteristic polynomial of U^-T U,
which we call the Alexander polynomial of the COQ Q.
(While much simpler to compute, the Alexander polynomial is less powerful
than the conjugacy class of U^-T U:
integer matrices may have the same characteristic polynomial while
not being conjugate in GL_n(ℤ).)
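For readers who want to experiment with this pipeline, the following minimal Python/SymPy sketch implements Steps 3–4 and the two derived invariants; the helper names are illustrative (not from the text), and the example matrix is a hypothetical 3-vertex acyclic quiver v_1 → v_2 → v_3 with single arrows.

import sympy as sp

def unipotent_companion(B: sp.Matrix) -> sp.Matrix:
    # The unique unipotent upper-triangular U with B = U^T - U:
    # ones on the diagonal and U[i, j] = -B[i, j] for i < j.
    n = B.shape[0]
    U = sp.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            U[i, j] = -B[i, j]
    return U

def cosquare(U: sp.Matrix) -> sp.Matrix:
    return U.T.inv() * U  # U^{-T} U

def alexander_polynomial(B: sp.Matrix, t=sp.Symbol("t")):
    # Monic characteristic polynomial of the cosquare of the unipotent companion.
    return cosquare(unipotent_companion(B)).charpoly(t).as_expr()

# Hypothetical example, with b_ij = #{arrows i -> j} - #{arrows j -> i}:
B = sp.Matrix([[0, 1, 0], [-1, 0, 1], [0, -1, 0]])
print(alexander_polynomial(B))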
The main result of this paper is the following theorem that settles <cit.>.
Any mutation-acyclic quiver is totally proper.
It suffices to show that any acyclic quiver is totally proper.
Furthermore, as noted in <cit.>, any acyclic quiver has a proper cyclic ordering
obtained by “closing up” any linear ordering compatible with the orientations of the quiver's arrows.
(All such cyclic orderings are related by wiggles.)
Our proof of Theorem <ref> shows that these cyclic orderings are totally proper.
In the course of proving Theorem <ref>,
we establish a few results that may be of independent interest.
In particular, Theorem <ref> gives a simple combinatorial constraint on the orientations of arrows in mutation-acyclic quivers.
Both Theorem <ref> and the ensuing Lemma <ref> assert that certain subquivers of a mutation-acyclic quiver must be acyclic.
Should someone wish to compute with COQs and unipotent companions, Lemmas <ref> and <ref> generalize and remove conditions from <cit.>.
This can speed up and simplify the computations.
An alternative proof of Theorem <ref> was independently discovered by Hugh Thomas, using categorification <cit.>.
§.§ Overview of the paper
Section <ref> establishes notation and reviews some material that does not involve COQs.
Much of this section is devoted to restating the results of A. Seven <cit.>, who constructed a distinguished symmetric matrix A associated with an arbitrary mutation-acyclic quiver.
Section <ref> is a condensed summary of the necessary results and notation from <cit.>.
It contains new examples, and Proposition <ref> gives an alternative description of the Markov invariant.
In Section <ref> we establish a handful of technical lemmas.
Lemmas <ref> and <ref> make it easier to compute the unipotent companion of a COQ after a proper mutation.
Theorem <ref> generalizes <cit.> using the results of A. Seven <cit.>.
Section <ref> contains the proof of Theorem <ref>.
We begin by defining U=(A-B)/2, where A comes from Seven's construction and B is the usual exchange matrix.
The fact that the integral congruence class of U is a mutation invariant follows quickly from Proposition <ref>.
The bulk of the effort is then spent showing that U is indeed a unipotent companion.
In Section <ref>, we give some applications and corollaries of Theorem <ref>.
This includes a description of invariants for some small quivers, inequalities satisfied by the coefficients of the Alexander polynomial for mutation-acyclic quivers (Corollary <ref>), and several examples of quivers that are shown to be not mutation-acyclic.
§.§ Acknowledgements
I would like to thank my advisor, Sergey Fomin, for his guidance, comments, and suggestions;
Grayson Moore, for reading and commenting on an early draft;
and Danielle Ensign for her software assistance.
I am also grateful to Roger Casals, Tucker Ervin, and Hugh Thomas for stimulating discussions.
I have used Magma and for various computations.
This research was supported in part through computational resources and services provided by Advanced Research Computing at the University of Michigan, Ann Arbor. | null | null | null | null | null |
http://arxiv.org/abs/2409.17495v1 | 20240926030732 | Human Mobility Modeling with Limited Information via Large Language Models | [
"Yifan Liu",
"Xishun Liao",
"Haoxuan Ma",
"Brian Yueshuai He",
"Chris Stanford",
"Jiaqi Ma"
] | cs.AI | [
"cs.AI",
"cs.SI"
] |
§ ABSTRACT
Understanding human mobility patterns has traditionally been a complex challenge in transportation modeling. Due to the difficulties in obtaining high-quality training datasets across diverse locations, conventional activity-based models and learning-based human mobility modeling algorithms are particularly limited by the availability and quality of datasets. Furthermore, current research mainly focuses on the spatial-temporal travel pattern but lacks an understanding of the semantic information between activities, which is crucial for modeling the interdependence between activities. In this paper, we propose an innovative Large Language Model (LLM) empowered human mobility modeling framework. Our proposed approach significantly reduces the reliance on detailed human mobility statistical data, utilizing basic socio-demographic information of individuals to generate their daily mobility patterns. We have validated our results using the NHTS and SCAG-ABM datasets, demonstrating the effective modeling of mobility patterns and the strong adaptability of our framework across various geographic locations.
§ INTRODUCTION
Understanding and accurately generating human mobility patterns has long been an essential challenge in the field of transportation research. Its implications extend far beyond traffic management, influencing city development, urban planning, public health policies, and even retail strategies <cit.>. An accurate human mobility pattern generation can lead to more efficient transportation systems, better-designed urban spaces, and improved quality of life for civilians.
Over the past decades, traditional activity-based models (ABMs) have significantly shaped our understanding of human mobility by simulating daily activities based on individual and household socio-economic characteristics. These models, emerging prominently since 1999, have been utilized extensively by governmental bodies like the Southern California Association of Governments (SCAG) to analyze and predict local mobility patterns <cit.>. While ABMs are adept at incorporating complex behavioral and economic dynamics, they require extensive local data inputs and rely on numerous assumptions about human activity patterns and economic behaviors. Parallel to ABM advancements, the development of data-driven approaches, particularly those leveraging neural networks, has gained momentum in capturing dynamic mobility patterns. These methods utilize large datasets from sources such as mobile phones or GPS, analyzing spatio-temporal trends to model mobility trends <cit.>. Despite their effectiveness, these approaches present challenges, including the substantial requirements for civilians' mobility data which raises privacy concerns and the difficulty in adapting to sudden changes in urban settings. Additionally, the interpretability of these models often remains limited, posing challenges in validating and understanding their predictions in real-world applications <cit.>.
With the advancement of computational power and the rise of large-scale knowledge-based AI, especially through Large Language Models (LLMs) or Vision Language Models (VLMs), new opportunities have emerged in the field of human mobility modeling. These models, such as GPT-3.5 and its successors <cit.>, have demonstrated remarkable capabilities in understanding and generating human-like text across diverse domains <cit.>. A key strength of LLMs lies in their ability to understand and generate complex sequences with strong interdependencies and interpretability<cit.>. Trained on diverse textual data, LLMs can incorporate a wide range of human experiences and behaviors, potentially leading to more nuanced and varied human mobility modeling compared with conventional methods, as daily routines often involve intricate chains of activities with subtle interrelations.
Building upon the concept of "activity chain," which represents the daily activity sequence of individual agents <cit.>, our proposed framework leverages the strong inference and reasoning abilities of LLMs to generate agents' daily activity chains based on provided socio-demographic information, high-level statistics from public resources, and structured guidelines. Our method guides LLMs in modeling agents' mobility patterns that align with their socio-demographic information and represent real-world logical patterns. Our approach overcomes the limitations of existing methods and shows promising results, achieving a Jensen-Shannon Divergence (JSD) as low as 0.011 when compared with the National Household Travel Survey dataset (NHTS) and the synthetic results from SCAG's ABM model (SCAG-ABM).
Empowered with LLMs, our approach significantly reduces the reliance on extensive historical data and assumptions about human behavior. This innovation opens new possibilities for understanding and generating complex human mobility patterns across different geographic locations. By understanding human dynamics through micro-simulation of daily activities, our method could potentially enhance traffic demand modeling and support multi-scale, multi-model transportation simulations, providing the foundation for advanced urban planning <cit.>.
§.§ Contributions
Our study makes several key contributions to the field of human pattern modeling compared with existing literature:
* We demonstrate the capability of LLMs to generate the location-based mobility pattern of a human being using only his or her social-demographic information and accessible statistical data of the region. This approach significantly reduces dependency on extensive and often unavailable mobility or trajectory datasets.
* We introduce a semantic method to address the mobility pattern generation problem, compared to traditional location-based trajectory generation. This approach allows for a more interpretable human mobility modeling and enables context-aware generation that better reveals the underlying dependency of real-world human travel behaviors.
* We are the first to employ LLMs for the task of activity chain generation. This innovative use of LLMs opens new avenues for modeling complex behavioral patterns with fewer data requirements and higher scalability than traditional methods.
§ RELATED WORKS
§.§ Human Mobility Modeling
Efforts to understand human mobility patterns have evolved over decades. In the 1940s, the "Law of Intervening Opportunities" was proposed, which suggested that travel between two places is motivated by socio-economic opportunities <cit.>. In recent years, the advent of GPS systems and personal electronic devices with tracking functions has led to an abundance of activity patterns and trajectory data. This has enabled sophisticated data collection and the rapid evolution of generative models aimed at reconstructing human mobility patterns. The generation of activity chains has emerged as one of the most representative tasks in this field.
Activity-Based Models (ABM) represent a significant advancement in transportation planning, focusing on modeling human mobility patterns like travel behavior including simulating the daily activities of individuals <cit.>. These models utilize detailed data such as demographics, land use, transportation networks, and travel surveys to construct intricate activity chains that represent the sequence and type of activities performed by individuals <cit.>. For example, SimAGENT <cit.> was applied by the Southern California Association of Governments (SCAG) to analyze regional travel behaviors and assess the impacts of transportation policies. Despite their detailed output, ABMs come with substantial limitations. They require extensive data collection and sophisticated modeling of transportation networks and travel behaviors, making them resource-intensive. Additionally, the heavy reliance on assumptions about travel patterns often restricts their applicability to regions outside the original model setup, posing challenges for their adaptation to different geographic or demographic contexts.
In recent years, with advancements in computational hardware and the proliferation of trajectory data collected through mobile devices or social media platforms <cit.>, learning-based methods have emerged as an alternative solution that is easier to implement. Techniques such as deep learning using raw position data <cit.>, Graph Convolutional Networks (GNNs) with activity history <cit.>, and transformer architecture <cit.> have shown promising results with training on mobility or trajectory data collections. However, the performance of learning-based models heavily depends on the quantity and quality of available data. This dependency highlights a significant challenge: human mobility data is often expensive or difficult to access, frequently requiring Non-Disclosure Agreements.
The limitations of previous methods in mobility modeling are often rooted in their reliance on extensive and accurate datasets, or numerous assumptions about human behavior and mobility patterns. Such dependencies can introduce biases or inaccuracies when applied across diverse geographic locations or demographic groups <cit.>. Consequently, there is a pressing need to explore the potential of training-free tools like LLMs in synthesizing human mobility patterns using more accessible and open-source data sources. This approach could potentially overcome the data acquisition challenges discussed earlier while retaining the benefits of advanced modeling techniques.
§.§ Large Language Models
Recent advancements in artificial intelligence have seen LLMs, trained on trillions of tokens from diverse data sources, emerge as potent tools capable of overcoming many of these limitations. These models leverage the transformer architecture, enabling them to excel in tasks ranging from personal assistant and legal advising to vehicle diver <cit.>. The inherent flexibility and generalization capabilities of LLMs, derived from their vast and varied training datasets, allow them to adapt quickly to new and unseen scenarios with minimal additional input <cit.>.
In our research, we exploit these capabilities by employing an LLM-based approach to model human mobility patterns. This method significantly reduces the reliance on complete, location-specific training data, enabling effective activity chain generation even in regions where detailed mobility data is scarce. By utilizing LLMs, we bypass the need for extensive pre-training or supplementary feature engineering, making our framework uniquely suited for rapid deployment across different geographic contexts without the necessity for extensive dataset reconfiguration.
§ PROBLEM STATEMENT
Building upon the concept of activity chain <cit.>, the problem addressed in this study is defined as generating the daily activity chain for individual agents based on their socio-demographic information. For each agent i with a socio-demographic attribute collection D_i = {d_i^1, d_i^2, …, d_i^n}, we aim to generate a daily activity chain in which each activity is defined by its type A, start time T_s, and end time T_e. The output activity chain C_i for agent i, consisting of m activities, can be represented as C_i = {[A_i^1, T_s,i^1, T_e,i^1], [A_i^2, T_s,i^2, T_e,i^2], …, [A_i^m, T_s,i^m, T_e,i^m]}.
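To make this input/output structure concrete, one possible in-code representation is sketched below; the field names and types are illustrative assumptions, not taken from the paper.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Activity:
    activity_type: str   # A, one of the aggregated categories (e.g. "home", "work", "eat out")
    start_time: str      # T_s, e.g. "08:30"
    end_time: str        # T_e, e.g. "17:00"

@dataclass
class Agent:
    demographics: Dict[str, str] = field(default_factory=dict)    # D_i, e.g. {"gender": "female"}
    activity_chain: List[Activity] = field(default_factory=list)  # C_i, the generated daily chain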
§ METHODOLOGY
The architecture of our proposed framework is shown in Figure <ref>. The model inputs consist of social-demographic data for each agent, including attributes like gender, employment status, and education status. This data equips our framework with the necessary context to generate realistic and representative daily activity chains, leveraging LLMs' robust inferential capabilities to generate logical activity chains based on the agent's profile.
§.§ Agent Demographic
The demographic information of each agent forms the input of the model, helping to tailor the activity chain generation to the specific characteristics of the agent. In order to best utilize the LLMs' comprehension and interpretation of these attributes, as shown in Figure <ref>, we represent the agent demographic data into natural language expressions, which ensures a deeper contextual understanding by the LLMs, enabling more accurate and relevant activity chain generation that reflects the nuanced profiles of individual agents.
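A minimal sketch of this conversion step is shown below; the attribute names and phrasing are illustrative assumptions rather than the exact template used in the prompt.

def demographics_to_text(demo: dict) -> str:
    # Render an agent's socio-demographic record as a short natural-language profile.
    return (
        f"This person is a {demo['age']}-year-old {demo['gender']} who is "
        f"{demo['employment_status']} and holds {demo['education']}. "
        f"The household has {demo['household_size']} members and owns "
        f"{demo['vehicle_count']} vehicle(s)."
    )

# Example with hypothetical values:
profile = demographics_to_text({
    "age": 34, "gender": "woman", "employment_status": "employed full-time",
    "education": "a bachelor's degree", "household_size": 3, "vehicle_count": 1,
})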
§.§ Large Language Model
We provide the LLMs with a structured system prompt that guides the generation of activity chains. The components of the system prompt are designed to provide comprehensive context and clear instructions, ensuring the generated outputs are both logically reasonable and aligned with the demographic data. The structured input consists of the following elements:
* Task Description: This component provides the overview of the task, directing the LLM to generate a sequence of daily activities based on the provided social-demographic information of agents.
* High-Level Statistical Data: To augment the LLM's contextual understanding and improve its generation accuracy, we incorporate high-level statistical data that is readily accessible from public sources. This includes demographic profiles from national censuses or governmental databases, transport usage statistics from public transit authorities, and economic indicators such as median household income from financial institutions. These statistics provide a brief overview of general mobility patterns and activity preferences, facilitating informed generations by the LLMs about activity types and timings. Such easily accessible data offers a macroscopic view of societal behaviors and trends and helps LLMs generate realistic and contextually appropriate activity chains.
* Guidelines for Generation: To ensure the generated activity chains are realistic and diverse, guidelines outline the expected structure and realism of the outputs. These include maintaining logical sequences of activities and appropriate time allocations, which aid the model in generating feasible daily schedules.
* Few-Shot Examples: A set of example activity chains is also provided to illustrate the desired format and to familiarize the LLMs with characteristics such as the length of activities, frequency of different activity types, and the logical sequence of activities within a day. These examples serve as practical templates that guide LLMs in accurately constructing activity chains by showcasing typical patterns and transitions.
Figure <ref> provides an example of the system prompt. These components collectively ensure that the LLM has a clear understanding of the task requirements and the contextual background needed to generate accurate and representative activity chains.
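The assembly of these four components into a single system prompt can be sketched as follows; this is a schematic reconstruction, and the actual wording shown in Figure <ref> may differ.

def build_system_prompt(task_description: str, statistics: str,
                        guidelines: str, few_shot_examples: list) -> str:
    # Concatenate the four components of the structured system prompt.
    examples = "\n\n".join(few_shot_examples)
    return (
        f"{task_description}\n\n"
        f"High-level regional statistics:\n{statistics}\n\n"
        f"Guidelines for generation:\n{guidelines}\n\n"
        f"Example activity chains:\n{examples}"
    )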
§.§ Activity Chain
The output of our framework consists of activity chains for each agent, detailing the activity type, start time, and end time. Leveraging the capabilities of LLMs, this approach transcends traditional data-intensive modeling techniques, offering a flexible and scalable method to generate daily human activities with minimal input requirements.
§ DATASET
§.§ National Household Travel Survey Dataset
The 2017 National Household Travel Survey (NHTS) dataset <cit.>, serves as the reference dataset for our study. This dataset provides an extensive snapshot of travel behaviors throughout different states in the United States, encompassing a wide array of social-demographic information such as sex, gender, and employment status of individual agents, etc. Additionally, it details their daily activities, categorizing each by type, start time, and end time, which are critical for our analysis. As illustrated in Table <ref>, we aggregate the activities into 15 types considering the location of the activities, each depicting a unique facet of daily life.
§.§ Activity-Based Model Dataset from Southern California Association of Governments
In our evaluation framework, we incorporate the synthetic results from the Activity-Based Model (ABM) from the Southern California Association of Governments (SCAG), which synthesizes activity patterns and travel demand across a region inhabited by approximately 26 million people <cit.>. SCAG’s synthetic results, built on extensive datasets and a detailed modeling process, serve as another reference specific to the California area in our analysis. They provide weekday activity chains that include activity types, start times, and end times; activity types were aggregated into the same categories as for the NHTS dataset, shown in Table <ref>.
§ EXPERIMENT AND RESULTS
We evaluated our approach using the NHTS dataset in the California area by randomly sampling 500 agents and generating their daily activity chains based on their socio-demographic data. These chains were then validated against the comprehensive daily activity records from the NHTS dataset and the SCAG dataset. In our experiments, we utilized two large language models: OpenAI's GPT-4 and Meta's Llama2-70b, with temperature settings of 1.2 and 1.0, respectively, comparing their performance in accurately simulating daily human activities.
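For reference, a single generation call with the OpenAI Python client could look like the sketch below; the paper does not provide its client code, so this setup is assumed, and build_system_prompt and demographics_to_text refer to the illustrative helpers sketched earlier.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate_activity_chain(system_prompt: str, agent_profile: str,
                            model: str = "gpt-4", temperature: float = 1.2) -> str:
    # Temperature 1.2 for GPT-4 (1.0 for Llama2-70b), matching the settings reported above.
    response = client.chat.completions.create(
        model=model,
        temperature=temperature,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": agent_profile},
        ],
    )
    return response.choices[0].message.content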
The evaluation metrics will compare the distributions of activity type, start time, end time, duration, and the number of daily activities. We employed Jensen-Shannon Divergence (JSD) <cit.> as shown in Equation <ref> to quantify the differences between the generated activity chains and the reference activity chains from the NHTS and the SCAG dataset.
JSD(P ‖ Q) = 1/2∑_x ∈ X[P(x) log(P(x)/M(x))]
+ 1/2∑_x ∈ X[Q(x) log(Q(x)/M(x))]
Where M=(P+Q)/2. In the JSD equation, P denotes the distribution of activity patterns derived from the generated activity chains, and Q denotes the distribution of activity patterns obtained from actual ground truth activity chains. The measured activity pattern includes the distribution of activity type, start/end time, duration, and length of the activity chain.
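A direct implementation of this metric for two binned distributions is sketched below; the log base is not stated in the text, so the natural logarithm is assumed.

import numpy as np

def jsd(p, q):
    # Jensen-Shannon divergence between two histograms defined over the same bins.
    p = np.asarray(p, dtype=float); p = p / p.sum()
    q = np.asarray(q, dtype=float); q = q / q.sum()
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0
        return float(np.sum(a[mask] * np.log(a[mask] / b[mask])))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)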
We analyzed the JSD values between our models and the reference datasets, as detailed in Table <ref> and Table <ref>, and between the two reference datasets themselves, presented in Table <ref>. A JSD value closer to 0 indicates a more accurate approximation with the reference dataset's distribution.
§.§ Activity Type
The distribution of activity type shown in Figure <ref>d demonstrates GPT-4's close alignment with both NHTS and SCAG datasets, particularly in frequent categories like home and work-related activities. The JSD scores corroborate this observation: GPT-4 achieves exceptionally low scores (0.016 with NHTS and 0.051 with SCAG), indicating a high level of accuracy in replicating these distributions. Llama-2-70b, though competitive with scores of 0.047 with NHTS and 0.074 with SCAG, shows slight deviations in infrequent activities like visiting, healthcare, and religious activities.
§.§ Start and End Times
The start and end time distributions from our approach, illustrated in Figure <ref>a, align closely with those from the reference datasets, displaying a common trend where activities intensify during daytime hours. Both models capture peaks at the start and close of typical workday hours, reflecting common active times. Specifically, both GPT-4 and Llama2-70b demonstrate a capability to replicate these peaks, although Llama2-70b shows a marginally better precision in this aspect, as indicated by its lower JSD scores (0.011 for start and 0.013 for end times with NHTS; similarly low with SCAG). GPT-4, while slightly higher in JSD scores (0.032 for start and 0.039 for end times with NHTS; 0.039 and 0.038 with SCAG), still competently captures the general trends.
However, it is important to consider potential biases inherent in the reference datasets due to the difference between the survey-based dataset (NHTS) and the synthetic dataset generated by a choice-based model (SCAG-ABM). The SCAG and NHTS datasets themselves do not exhibit identical distributions of start and end times, differing with a JSD of 0.006 for the start time and 0.005 for the end time. These discrepancies highlight the need for careful interpretation of model outputs and suggest a potential area for refining data preprocessing or model training to better account for regional variations in activity patterns.
§.§ Activity Duration
The distribution of activity durations, shown in Figure <ref>b, illustrates differences between the datasets and the model outputs. The NHTS dataset exhibits a high frequency of short-duration activities, which may indicate a bias toward capturing brief, incidental activities. Conversely, the SCAG dataset displays a greater prevalence of medium-length activities, such as work or school sessions and home activities that typically occur at the beginning and end of the day, suggesting a focus on more structured daily routines.
The model outputs, particularly those from GPT-4, align more closely with the SCAG dataset, reflecting a strong capability in capturing the broader, more sustained activities that dominate daily life. GPT-4's performance, with JSD scores of 0.016 with NHTS and a JSD score of 0.014 with SCAG, demonstrates its adeptness at modeling activity durations with high accuracy. This alignment, especially without being explicitly informed of the distribution characteristics of SCAG's activity durations, underscores the robust inferential and reasoning capabilities of our framework.
Llama-2-70b, while effective, shows a broader spread in the duration plots with a JSD of 0.028 with NHTS and 0.020 with SCAG, indicating a slightly less precise capture of the typical duration profiles, particularly the peak durations.
§.§ Activity Chain Length
Figure <ref>c shows the distribution of activity chain lengths across the models and datasets. The similarity between the NHTS and SCAG datasets in terms of chain length distribution is notable, with both exhibiting a prevalence of shorter activity chains, primarily ranging from 3 to 6 activities per chain. GPT-4 closely matches this pattern, demonstrating its capability to generate realistic activity sequences within this range, as evidenced by its JSD scores of 0.057 with NHTS and a particularly low 0.040 with SCAG.
However, both GPT-4 and Llama-2-70b, show limitations in generating longer activity sequences, with Llama-2-70b displaying higher JSD scores (0.116 with NHTS and 0.095 with SCAG). This suggests challenges in accurately modeling more complex daily routines that include longer chains of activities. The limited ability of LLMs to extend beyond medium-length chains can be partly attributed to the lack of detailed distributional information in the training data for specific geographic areas, highlighting a potential area for further research and model training enhancement.
§.§ Activity pattern in different social group
As shown in Figure <ref>, our analysis of activity start and end times for specific social groups, namely students and workers, revealed distinct patterns effectively captured by the LLMs, albeit with some variations across models and datasets. For students, the activity patterns showed that GPT-4 closely matched the NHTS dataset, whereas Llama-2-70b aligned more with the SCAG dataset. This suggests that the models can successfully identify and replicate the characteristic activity patterns of students. Additionally, the differences observed between GPT-4 and Llama-2-70b in capturing these patterns may also stem from variations in their training datasets or methodologies, further emphasizing the need to consider the training background of each model when assessing their outputs.
For the worker group, both the NHTS and SCAG datasets displayed similar activity patterns, characterizing the typical workday with peaks around the start and end times, reflective of morning and evening commutes. Both GPT-4 and Llama-2-70b adeptly mirrored these trends, though they tended to underrepresent activities during the mid-day period. This could suggest a model sensitivity to the less pronounced activity patterns typically occurring during working hours.
§.§ Activity pattern in different location
As shown in Figure <ref>, we also compare the activity start and end times across different activities in different location scopes (California and nationwide). We observed distinct patterns for routine and non-routine activities based on data from our approaches focusing on the California area, the SCAG dataset from California, and the NHTS dataset from the National wide scope. For routine activities such as `Home' and `Work', our models displayed a strong alignment with both the California-specific SCAG and the national NHTS datasets, indicating little variation between regional and national patterns for these activities. This consistency suggests that our LLM-based approach can effectively capture and replicate routine activity trends across different datasets.
In contrast, for non-routine activities like `Eat out', our approach's outputs more closely followed the SCAG dataset, reflecting California-specific dining habits and times, which differ from national averages as captured in the NHTS dataset. The capability of our approach to align closely with region-specific data for non-routine activities suggests it can be a valuable tool for modeling human mobility patterns that are sensitive to local context, further affirming its applicability across diverse geographical areas.
§ LIMITATIONS
While our approach demonstrates robust modeling capabilities, we acknowledge several limitations that could influence the outcomes and interpretations of the results. The reference datasets, namely NHTS and SCAG, utilized for validating our model's performance may contain inherent biases due to their data acquisition methodologies. These biases can affect the reliability of these datasets as benchmarks, as they might not accurately represent the full spectrum of mobility patterns across different populations or regions.
Our models' ability to generate realistic activity chain lengths could be enhanced by incorporating more detailed and granular data. Currently, the lack of depth in the available data restricts our models' capacity to simulate more complex activity patterns, particularly for longer activity chains that require a nuanced understanding of daily human behaviors.
The differences in architecture and training datasets of the LLMs, namely GPT-4 and Llama-2-70b, result in inconsistent patterns in the activity chains generated. These inconsistencies pose challenges for validation, especially when the ground truth datasets might also exhibit biases, making it difficult to ascertain the accuracy and reliability of the model outputs.
§ CONCLUSION AND FUTURE WORK
This study introduced a novel approach that efficiently employs LLMs to generate human mobility patterns using minimal social-demographic data. Our approach, utilizing GPT-4 and Llama2-70b alongside the NHTS and SCAG datasets, has proven capable of accurately simulating daily activities. GPT-4 has shown exceptional proficiency in matching activity types and durations, while Llama2-70b has excelled in the precision of timing generations. The low JSD value across both models underscores their robust ability to capture complex activity patterns with significantly less data dependency than traditional models.
These achievements have substantial implications for urban planning and smart city applications, illustrating the potential of LLMs to refine transportation modeling. The adaptability of our framework to different geographic and demographic settings, without extensive retraining, presents a promising avenue for broader implementation in policy-making and urban development.
Despite notable achievements, our models exhibited limitations in accurately reproducing the distribution of activity chain lengths, and the reference datasets may carry inherent biases. Future efforts will focus on enhancing the models' ability to simulate extended and multiday activity chains by integrating more granular data that capture a comprehensive range of daily human activities. This enhancement will involve employing richer datasets and developing more sophisticated fine-tuning techniques, such as training mixture-of-experts (MOE) models specifically for transportation or human mobility. Additionally, we will refine our evaluation methods to better recognize and adjust for biases in ground truth datasets. This may include employing advanced statistical methods or utilizing alternative, more nuanced data sources for human mobility to ensure a more accurate and robust analysis.
Our goal is to extend the current day-to-day generations to multi-day activity chain forecasts, thereby capturing the cyclical nature of human behaviors throughout the week. By doing so, we aim to set a new benchmark in mobility modeling accuracy, providing more detailed insights for urban planning and transportation policy-making across diverse environments.
§ ACKNOWLEDGEMENTS
Supported by the Intelligence Advanced Research Projects Activity (IARPA) via the Department of Interior/Interior Business Center (DOI/IBC) contract number 140D0423C0033. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DOI/IBC, or the U.S. Government.
http://arxiv.org/abs/2409.17710v1 | 20240926102537 | Surface Scattering Expansion for the Casimir-Polder Interaction of a Dielectric Wedge | [
"Thorsten Emig"
] | quant-ph | [
"quant-ph",
"cond-mat.stat-mech"
] | null | null | null | null | null | null |
|
http://arxiv.org/abs/2409.17252v1 | 20240925180838 | Where is the Super-Virial Gas? The Supply from hot inflows | [
"Manami Roy",
"Kung-Yi Su",
"Smita Mathur",
"Jonathan Stern"
] | astro-ph.GA | [
"astro-ph.GA"
] |
Corresponding author: Manami Roy ([email protected])
Manami Roy (ORCID 0000-0001-9567-8807): Center for Cosmology and Astro Particle Physics (CCAPP), The Ohio State University, 191 W. Woodruff Avenue, Columbus, OH 43210, USA; Department of Astronomy, The Ohio State University, 140 W. 18th Ave., Columbus, OH 43210, USA
Kung-Yi Su (ORCID 0000-0003-1598-0083): Black Hole Initiative, Harvard University, 20 Garden St., Cambridge, MA 02138, USA
Smita Mathur (ORCID 0000-0002-4822-3559): Center for Cosmology and Astro Particle Physics (CCAPP), The Ohio State University, 191 W. Woodruff Avenue, Columbus, OH 43210, USA; Department of Astronomy, The Ohio State University, 140 W. 18th Ave., Columbus, OH 43210, USA
Jonathan Stern (ORCID 0000-0002-7541-9565): School of Physics & Astronomy, Tel Aviv University, Tel Aviv 69978, Israel
§ ABSTRACT
In an effort to understand the presence of super-virial gas detected in the Milky Way, we present our findings from isolated galaxy simulations of Milky Way-like systems using GIZMO with the FIRE-2 (Feedback In Realistic Environments) stellar feedback model. These simulations unveil the presence of a significant super-virial temperature (T > 6×10^6 K) gas component within 20 kpc of the galactic center. This super-virial gas has a mass of 1-2×10^7 M_⊙ and is found close to the disk, where typical gas densities are 0.004-0.01 cm^-3. We find that some of the virial gas (T ∼ 10^6 K) forms a rotating hot inflow, where gravitational energy is converted to heat mainly via compressive heating. This process causes gas infalling close to the rotation axis to reach super-virial temperatures just before cooling and joining the disk. Stellar feedback heating accounts for less than 1% of the super-virial gas, indicating its minimal influence despite expectations. Even in scenarios with no stellar feedback effects considered, abundant super-virial gas persists, highlighting the dominance of alternative heating mechanisms. We also show that cosmic rays do not have a significant effect on heating the gas to super-virial temperatures. Our study illuminates the intricate dynamics of hot virial and super-virial gas surrounding Milky Way-like galaxies, emphasizing the prominent role of infall-driven and compressive heating processes in shaping thermal evolution.
§ INTRODUCTION
Galaxies are the building blocks of our universe, and how galaxies form and evolve is an active area of research. A spiral galaxy has two main components: a star-forming disk and a diffuse gaseous halo surrounding the disk. This diffuse gaseous halo, known as the circumgalactic medium (CGM) <cit.>, plays a crucial part in galaxy evolution. The CGM is the habitat for large-scale gas flows that fundamentally regulate the evolution of the galaxy by providing fresh and recycled gaseous fuel for star formation and regulating the galaxy's interactions with other galaxies. A comprehensive understanding of the CGM stands to illuminate key unresolved questions pertaining to the processes driving galaxy formation and evolution.
Unfortunately, the CGM is very hard to detect in emission due to its very low density. However, from absorption lines in the spectra of background quasars, we can infer about the physical properties of the CGM. When light from a background quasar passes through the CGM of a galaxy, it suffers absorption from the metals and ions in the CGM. Absorption studies made by COS spectrograph onboard HST along with XMM-Newton and Chandra has enabled us to detect this multiphase diffuse gas around Milky Way (MW) mass galaxy in UV [e.g. <cit.>] and X-ray [e.g. <cit.>] respectively. In the X-ray band, the MW CGM has been detected in X-ray emission as well [e.g. <cit.>.]
Until 2019, these observations led to a picture in which the CGM is multi-phase in nature and has three main components: warm-hot gas at the virial temperature (T∼10^6-6.5 K), warm gas (T∼10^5-5.5 K), and cool gas (T<10^4 K). However, this picture was modified by the recent discovery of the super-virial component (T∼6×10^6-2×10^7 K; <cit.>) in the MW CGM,
using absorption lines from XMM-Newton observations towards the blazar 1ES 1553+113.
Since then, there have been multiple detections of this hotter super-virial gas in both emission and absorption <cit.>. In emission, <cit.> detected the super-virial component towards the same sightline of the blazar 1ES 1553+113 using XMM-Newton, and <cit.> found this component along with the virial component along four sightlines from Suzaku and Chandra. Later, <cit.> also detected this super-virial gas in emission in the MW CGM towards the blazar Mrk 421 and five other sightlines near the blazar field. More recently, <cit.> detected Si XIV Kα and S XVI Kα lines, associated with a gas phase at super-virial temperature, in the CGM of the MW, using stacked X-ray spectral observations towards multiple extragalactic sightlines.
This super-virial component has not only been detected towards particular sightlines; all-sky surveys from HaloSat and eROSITA have also found it abundantly across the sky. <cit.> utilized data from the HaloSat all-sky survey to study the emission originating from the MW's halo and detected the super-virial gas. <cit.>, in their study of the MW halo within the eFEDS field employing eROSITA, also detected this component.
However, gas at these temperatures defies the predictions of galaxy formation simulations (e.g., <cit.>). Its discovery raises several questions. Where is this gas situated: in the ISM, the inner CGM, or the outer CGM? How is this gas formed: through feedback, or through some other heating mechanism such as cosmic ray (CR) heating? What is the nature of this gas: is it as diffuse as the volume-filling virial-temperature gas, or is it denser?
In this paper, we address the above questions in light of an idealized simulation. We investigate a suite of simulations of MW-type hosts in which we focus on this super-virial gas component. The paper is structured as follows. In Section <ref>, we discuss the methodology of our simulations, describing the initial conditions of our simulation setup (Section <ref>) along with our definition of super-virial CGM gas (Section <ref>). In Section <ref>, we present the results of our analysis, where we discuss the location (Section <ref>), density (Section <ref>), amount (Section <ref>), and origin (Section <ref>) of the super-virial gas in the CGM. We also discuss whether feedback and cosmic ray heating have any effect on the origin of this gas in Sections <ref> and <ref>. We discuss the limitations of our work and planned future work in Section <ref>. Finally, we summarize our results in Section <ref>.
§ METHODOLOGY
Our simulations utilize GIZMO[A public version of this code is available at http://www.tapir.caltech.edu/~phopkins/Site/GIZMO.html] <cit.>, in its meshless finite mass (MFM) mode, which is a Lagrangian mesh-free Godunov method capturing the advantages of grid-based and smoothed-particle hydrodynamics (SPH) methods. Numerical implementation details and extensive tests are presented in a series of methods papers for, e.g., hydrodynamics and self-gravity <cit.>, magnetohydrodynamics <cit.>, anisotropic conduction and viscosity <cit.>, and cosmic rays <cit.>.
In all simulations except for the runs with no feedback, we incorporate the FIRE-2 model, an implementation of the Feedback In Realistic Environments (FIRE) approach. This encompasses treatments of the interstellar medium (ISM), star formation, and stellar feedback, the details of which are given in <cit.> along with extensive numerical tests.
Our simulations cover a broad temperature range from 10 to 10^10 K, accounting for various cooling mechanisms including photoelectric and photoionization heating, collisional, Compton, fine structure, recombination, atomic, and molecular cooling.
Star formation is facilitated through a sink particle approach, allowed only in molecular gas regions with densities surpassing 100 cm^-3, where self-shielding and local self-gravitation dominate. Upon formation, star particles are treated as cohesive stellar populations, inheriting the metallicity of their parent gas particles. Feedback mechanisms, including supernovae and mass-loss rates, are derived from IMF-averaged values calculated from STARBURST99 <cit.> with a Kroupa initial mass function (IMF) <cit.>.
The stellar feedback model encompasses diverse processes: (1) Radiative feedback, involving photo-ionization, photo-electric heating, and single and multiple-scattering radiation pressure tracked in five bands (ionizing, FUV, NUV, optical-NIR, IR), (2) Continuous stellar mass loss and injection of mass, metals, energy, and momentum through OB and AGB winds, and (3) Type II and Ia supernovae (including both prompt and delayed populations), injecting appropriate mass, metals, momentum, and energy into the surrounding gas based on tabulated rates.
All the simulations integrate magnetohydrodynamics (MHD), fully anisotropic conduction, and viscosity, employing the Spitzer-Braginskii coefficients to capture the relevant physics.
The model for cosmic ray (CR) treatment includes streaming, which occurs at the local Alfvén speed or sound speed, whichever is larger, incorporating an appropriate streaming loss term that thermalizes energy, as described in <cit.>. Diffusion is modeled with a fixed diffusivity ∼3×10^28 cm^2 s^-1, alongside adiabatic energy exchange between gas and CR pressure, and includes hadronic and Coulomb losses (following <cit.>). We consider a single energy bin for GeV CRs, which dominate the pressure, and treat them in the ultra-relativistic limit. Both streaming and diffusion are fully anisotropic along magnetic field lines. CRs are injected by supernovae (SNe), with 10% of each SN's energy being transferred into CRs, consistent with studies like <cit.>. Details of the cosmic ray physics are given in <cit.>.
§.§ Initial conditions
The initial setup closely follows the detailed specifications outlined in <cit.>. To further stabilize the host CGM, the simulation region is expanded to 3 times the virial radius, and the simulations are run adiabatically (no cooling or star formation) for 4.5 Gyr to relax any initial transients before the satellites are placed into the CGM.
The simulation properties are summarized in table 1 of <cit.>. In this paper, we only focus on the m12 halo, which corresponds to a MW-like halo of mass ∼1.8×10^12 M_⊙.
We consider only the runs with 20 satellites of 2×10^9 M_⊙ DM halo mass (20xm09). We vary different physical properties to test the impact of the stellar feedback and cosmic rays. We summarize all of our runs in Table <ref>.
We initialize the dark matter (DM) halo, bulge, black hole, and gas+stellar disk of m12 by following <cit.>.
The initial metallicity decreases from solar (Z=0.02) to Z=0.001 with radius as Z=0.02 (0.05+0.95/(1+(r/20 kpc)^1.5)). Magnetic fields are initialized azimuthally with |B|=0.03 μG/(1+(r/20 kpc)^0.375). For our cooling flow run, we adopt the same initial condition but with a cooling flow solution with a mass inflow rate of roughly 5 M_⊙ yr^-1, a sonic radius of 1 kpc, and a circularization radius of 10 kpc <cit.>.
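For concreteness, the two initial radial profiles quoted above can be evaluated with a couple of short helper functions. The Python sketch below is only an illustration of those formulas; the function and array names are ours and are not part of the simulation code.

import numpy as np

def initial_metallicity(r_kpc):
    # Declines from ~solar (Z = 0.02) at the centre to Z ~ 0.001 at large radii.
    return 0.02 * (0.05 + 0.95 / (1.0 + (r_kpc / 20.0) ** 1.5))

def initial_bfield_muG(r_kpc):
    # Azimuthal magnetic field strength in micro-Gauss.
    return 0.03 / (1.0 + (r_kpc / 20.0) ** 0.375)

r = np.logspace(0, 2.5, 50)   # 1-300 kpc
Z = initial_metallicity(r)    # dimensionless metal mass fraction
B = initial_bfield_muG(r)     # micro-Gauss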
We initialize CRs in the high CR runs such that the CR energy density is in equipartition with the thermal energy density. In this case, we decrease the temperature by half, keeping the density the same, and put the removed thermal energy into CRs, so that the pressure gradient is maintained.
In the low CR runs, CRs are initialized such that the CR energy density is in equipartition with the magnetic energy density, which is ∼5 orders of magnitude lower than the thermal energy density.
We initialize the satellites similarly to the host described above, though without a CGM gas halo. Their properties are also summarized in table 1 of <cit.>. More details of the initial setup are described in the Methodology section of <cit.>.
§.§.§ Gas temperature of the initial condition
All the runs have a similar initial setup except for the high CR run. We check that our initial setups for the different runs do not contain any super-virial gas, confirming that the initial temperature is no more than 4×10^6 K in all the runs. In the high CR run, the initial temperature profile is even lower in order to maintain hydrostatic pressure balance, not exceeding 2×10^6 K. The variation in the initial conditions is useful to test whether the super-virial gas can be found even with a cooler halo, and whether the results are sensitive to the initial setup. We also confirm that the initial density range is similar in all the cases, with the virial gas having densities of 10^-4 to 10^-3 cm^-3.
§.§ Defining Super-virial Gas
Observationally, gas with temperature 6×10^6-2×10^7 K has been identified as the super-virial phase. For this analysis, we adopt a temperature cut of 6×10^6 K and refer to gas above this threshold as `super-virial'.
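In practice, this definition amounts to a simple cut on the particle data. The minimal Python sketch below assumes that particle temperatures, galactocentric radii, and masses have already been read from a snapshot into NumPy arrays; the array and function names are our own.

import numpy as np

T_SUPERVIRIAL = 6.0e6   # super-virial temperature threshold [K]

def supervirial_mass(temp_K, r_kpc, mass_Msun, r_max_kpc=20.0):
    # Total gas mass above the super-virial cut and inside r_max_kpc.
    mask = (temp_K > T_SUPERVIRIAL) & (r_kpc < r_max_kpc)
    return mass_Msun[mask].sum()

# e.g., M_sv_20 = supervirial_mass(temp, radius, mass, r_max_kpc=20.0)
#       M_sv_10 = supervirial_mass(temp, radius, mass, r_max_kpc=10.0)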
§ RESULTS
§.§ Where is the super-virial gas?
This section investigates the distribution of super-virial gas within the simulation box. We present the mass-weighted 2-d PDFs of gas temperature as a function of position and density in the left and right panels of Figure <ref>, respectively. The 2-d PDFs are shown for the four different runs at the 1 Gyr snapshot of the simulation: no CR (1st row), no CR and no feedback (2nd row), low CR (3rd row), and high CR (bottom row).
In all the 2-d distribution plots, we excise a region of 6 gas scale radii around each satellite to exclude the ISM of the satellites, since we choose this radius to be the one beyond which the gravitational pull from the satellites is negligible. Note, however, that we do not exclude satellite particles which get stripped from the satellites. The horizontal line in Figure <ref> denotes the super-virial temperature cut of 6×10^6 K; gas with temperature above this threshold is called `super-virial'.
In the right bar of each 2-d histogram plot in Figure <ref>, we show the 1-d histograms of mass-weighted temperature at two different positions (left panels) and densities (right panels), denoted by the similarly colored vertical lines in the 2-d histograms. For example, in the left panel of Figure <ref>, there is a red vertical line at 20 kpc and the corresponding histogram in the right bar, showing the temperature distribution at 20 kpc. We see that this distribution does not extend above 6×10^6 K. On the other hand, the temperature distribution shown by the black histogram (corresponding to 6.5 kpc) extends far beyond 6×10^6 K. Therefore, we conclude that the majority of the super-virial gas is concentrated within a 20 kpc radius across all of the simulation runs.
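The 2-d PDFs described above are, in essence, mass-weighted 2-d histograms built after masking out a sphere of 6 gas scale radii around each satellite. A minimal sketch of this construction, with hypothetical array names, is:

import numpy as np

def exclude_satellites(pos_kpc, sat_centers_kpc, sat_scale_radii_kpc):
    # Keep particles farther than 6 gas scale radii from every satellite.
    keep = np.ones(len(pos_kpc), dtype=bool)
    for center, r_s in zip(sat_centers_kpc, sat_scale_radii_kpc):
        keep &= np.linalg.norm(pos_kpc - center, axis=1) > 6.0 * r_s
    return keep

def mass_weighted_pdf(x, temp_K, mass, x_bins, logT_bins):
    # Mass-weighted 2-d histogram of log T versus x (radius or density).
    H, _, _ = np.histogram2d(x, np.log10(temp_K),
                             bins=[x_bins, logT_bins], weights=mass)
    return H / H.sum()

# keep = exclude_satellites(pos, sat_pos, sat_r_scale)
# pdf  = mass_weighted_pdf(radius[keep], temp[keep], mass[keep],
#                          np.linspace(0, 100, 101), np.linspace(3, 8, 101))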
We also checked whether this gas is present in an earlier snapshot, at t=0.5 Gyr, and we find it in all cases except the high CR one, where the super-virial gas is absent at 0.5 Gyr but appears by 1 Gyr. This is mainly because the high CR run has a cooler initial halo and therefore takes a longer time to get heated.
In summary, we investigate the distribution of supervirial gas within the simulation box, initially examining its temperature and density distributions. Excluding regions around satellites, the analysis reveals that the majority of super-virial gas, exceeding 6×10^6 K, is concentrated within a 20 kpc radius across all runs. Notably, the high CR runs exhibit delayed appearance of super-virial gas at 1 Gyr due to cooler initial halo conditions, highlighting sensitivity to initial setup and heating dynamics.
§.§ How dense is the super-virial gas?
In this section, we scrutinize the density distribution of the super-virial gas. In the right panel of Figure <ref>, we present 2D histograms illustrating the relationship between temperature and density, weighted by mass. It shows the 2-d probability distribution function (PDF) of gas mass as a function of the temperature and density of the gas at t=1 Gyr for the no CR (1st row), no CR and no feedback (2nd row), low CR (3rd row), and high CR (bottom panel) cases. In these plots, we excluded the region around the satellites as before.
Notably, in all the cases, the density of the super-virial gas is approximately 3×10^-(2-1) cm^-3, since the gas lies close to the disk, as shown in the left panel of Figure <ref>.
§.§ How much super-virial gas is there in the galaxy?
Next, we quantify the mass of super-virial gas present in the galaxy. Given that the bulk of the super-virial gas resides within a 20 kpc radius, we depict the temporal evolution of the super-virial gas mass within 20 kpc and 10 kpc using solid and dashed lines, respectively, in Figure <ref>. Here, we do not exclude the region around the satellites, since we would like to check whether satellites also contribute to this gas phase. However, as the satellites are in circular orbits between 100 kpc and 150 kpc, the regions around them are automatically cut out when we calculate the super-virial gas mass inside 20 kpc and 10 kpc. Key insights from this analysis include:
* Mass: The mass of super-virial gas fluctuates within the range of 1-2×10^7 M_⊙ within 10 kpc throughout the entire 1.5 Gyr simulation period, with a 2 times larger mass within 20 kpc. With the CGM gas mass being 2×10^11 M_⊙, the super-virial gas is only 0.1-1% of the total CGM gas; it is therefore sub-dominant in the baryonic budget and does not contribute to the missing baryon problem.
* Effect of Feedback: The presence of the super-virial gas could have been a direct result of heating by stellar feedback. Therefore, if we do not include feedback in our simulations, we might expect no (or a significantly smaller amount of) super-virial gas. Surprisingly, however, the simulation with no feedback displays a similar, in fact slightly higher, super-virial gas mass compared to all the other runs. This shows that in our simulations, stellar feedback is not predominantly responsible for generating the super-virial gas.
* Effect of Cosmic Rays: CR heating, through streaming or Coulomb losses, could be one of the heating sources of the super-virial gas. CRs can also drive outflows that heat the gas. We investigate this with two runs with CRs, one with low CR and one with high CR. Minimal disparities exist between the CR and no CR runs. The runs incorporating CRs exhibit slightly more pronounced mass fluctuations compared to runs without CRs, with the high CR run showing even larger fluctuations. Hence, in our simulations, CRs are not predominantly responsible for heating the gas to the super-virial phase.
* Effect of Resolution: We have conducted higher-resolution simulations (10 times higher resolution compared to our low-resolution run), represented by dashed lines in Figure <ref>. Our anticipation was that these high-resolution (hr) runs would capture denser super-virial gas compared to the low-resolution (lr) counterparts. We observe that the super-virial gas mass in the hr runs is qualitatively similar to that in the lr runs. This consistency suggests that our findings remain robust across varying levels of resolution.
* Effect of Other Satellite Runs: We also check the other satellite runs described in <cit.>, with m10 and m08 satellites, to test whether the satellite distribution has a significant effect on our results (not shown in the figure). We find no significant difference in the super-virial gas mass budget between the different satellite distributions.
In summary, the temporal evolution of the super-virial gas mass within the 20 and 10 kpc radii reveals fluctuations between 1-2×10^7 M_⊙, with a doubling of mass within the broader radius, constituting only 0.1-1% of the total CGM gas mass. Surprisingly, the simulation without feedback displays a similar (in fact slightly higher) super-virial gas mass, suggesting that the super-virial gas does not predominantly originate from feedback heating. Minimal disparities in the super-virial gas mass fluctuations are observed between runs with and without cosmic rays, with the high CR run showing slightly more pronounced fluctuations.
§.§ Where is the super-virial gas coming from?
The question now is what mechanism heats this gas. As noted above, CR heating and feedback heating do not seem to be responsible for heating to super-virial temperatures in our simulations. Therefore, we track the gas particles that reached their maximum temperature at a simulation time of 1 Gyr, and calculate the past temperature, density, and spherical radius of all such gas particles (host and satellite). We find that there are three sets of gas particles that contribute to the super-virial phase, two from the host and one from the satellites. One set has an initial temperature similar to that of the virial gas, ∼2×10^6 K (set-1 hereafter), whereas the other set has the initial temperature of cold gas, ∼10^4 K (set-2 hereafter). There is another set (set-3 hereafter) of gas particles that initially come from the satellites and are later found at super-virial temperatures. We plot the median and dispersion of the properties of these particles for each case, and list the properties of set-1, set-2, and set-3 particles in the subsections below.
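Because GIZMO assigns persistent particle IDs, these histories can be reconstructed by matching IDs between snapshots. The sketch below is schematic: it assumes that per-snapshot arrays of IDs, temperatures, hydrogen number densities, and galactocentric radii have already been loaded into dictionaries, and all names are our own.

import numpy as np

def track_history(target_ids, snapshots):
    # snapshots: list of dicts with keys 'ids', 'temp', 'nH', 'r', one per output time.
    # Returns median and 16th/84th percentile histories of (T, nH, r).
    med, p16, p84 = [], [], []
    for snap in snapshots:
        sel = np.isin(snap['ids'], target_ids)
        props = np.vstack([snap['temp'][sel], snap['nH'][sel], snap['r'][sel]])
        med.append(np.median(props, axis=1))
        p16.append(np.percentile(props, 16, axis=1))
        p84.append(np.percentile(props, 84, axis=1))
    return np.array(med), np.array(p16), np.array(p84)

# target_ids could be, e.g., the IDs of all particles that are super-virial
# at the 1 Gyr snapshot:
# target_ids = snap_1gyr['ids'][snap_1gyr['temp'] > 6e6]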
§.§.§ Set-1: Host gas particles, T_ ini∼2×10^6K
In Figure <ref>, blue curves show the time evolution of the temperature (top row), density (2nd row), spherical radius (3rd row), and scale height (bottom row) of host gas particles with initial temperature T_ini ∼ 2×10^6 K, which reached their maximum temperature at a simulation time of 1 Gyr. These gas particles have an initial temperature of ∼2×10^6 K in all the cases except for the high CR run, where the initial temperature is ∼1×10^6 K. This is because in the high CR case the initial halo has a lower thermal pressure and, in turn, a lower temperature. The figure shows that the gas heats up slowly over time and exceeds 6×10^6 K at t=1 Gyr.
In the second row of Figure <ref>, we plot the past density of these super-virial gas particles. All of them start from 10^-3 cm^-3, which is a typical density of the diffuse virial gas. Following their rise to super-virial temperatures at 1 Gyr, they cool down, reaching a density of ∼1 cm^-3.
The high CR run again starts from a lower density, 5×10^-4 cm^-3, than the other runs, because in this case we are tracking gas particles from a larger radius, as we mention in the next paragraph. Eventually, however, they reach the same final density as in all the other runs. Therefore, there is almost a factor of 2000 and 1000 increase in density in the high CR run and in all the other cases, respectively.
In the third row of Figure <ref>, we plot the time evolution of the position of these super-virial gas particles. All of them fall from the outer halo (40-90 kpc) into the central 10 kpc region. In all the other cases the particles fall from 40-50 kpc, whereas in the high CR run the gas particles fall from a little farther away, 80-90 kpc.
As the gas particles fall inwards, they start heating up, since the infall releases gravitational potential energy which can be converted to thermal energy and slowly heats the gas. Until 0.9 Gyr, the temperature only increases to 3-4×10^6 K. At 0.9 Gyr, these gas particles roughly reach 10 kpc, which is the radius of the Galactic disc. At that point, they get compressed as they go from a spherical to a disc geometry, as shown in the bottom row of Figure <ref>. Due to this compression, the gas is heated to over 6×10^6 K and becomes super-virial. Therefore, it is clear that set-1 gas particles are infalling onto the disc and are heated by compression, which is why we see an enhancement of both density and temperature. Once they heat up to the super-virial phase, they cool down immediately, because they become rotationally supported and radiate their thermal energy away after joining the disc, as shown by the time evolution of the spherical radius r, which does not decline further and remains constant after t = 1 Gyr. Note that the smooth evolution of the density and temperature curves suggests that the heating is compressive rather than due to shocks, as shown by the tracks of individual particles in Figure <ref>.
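The compressive heating invoked here is simply the pdV term of the first law of thermodynamics. Per unit mass of gas, du/dt|_comp = (P/ρ^2) dρ/dt = -(P/ρ) ∇·v, so that for nearly adiabatic compression T ∝ ρ^(γ-1) = ρ^(2/3) for γ=5/3. This scaling is only indicative, since it neglects the radiative losses that limit the actual temperature rise; it simply illustrates why a smooth increase in density translates into a smooth, shock-free increase in temperature.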
§.§.§ Set-2: Host gas particle, T_ ini∼10^4K
In Figure <ref>, we plot the time evolution of the temperature (top row), density (2nd row), and spherical radius (bottom row) of the host gas particles with initial temperature T_ini ∼ 10^4 K, which reached their maximum temperature at a simulation time of 1 Gyr. There are very few gas particles in this set, except in the high CR case, and they are absent in the no feedback case. These gas particles repeatedly cool and heat over time, fluctuating between 10^4 K and 10 K, until 0.9 Gyr, after which the gas heats up and exceeds 6×10^6 K at about 1 Gyr. These gas particles are probably heated by stellar feedback; hence they are absent in the no feedback case and are more numerous in the high CR case due to the additional heating from CR feedback by the larger amount of CRs.
In the middle row of Figure <ref>, we plot the past density of these gas particles. All of them start from 10^-2 cm^-3 and reach 10^2 cm^-3 right before heating up at 1 Gyr. Their density then drops back down to 10^-2 cm^-3 once they are heated by feedback. In the bottom row of Figure <ref>, we plot the past position of these super-virial gas particles. All of them remain in the 10 kpc to 20 kpc region.
Thus, we see that these set-2 gas particles represent the cold ISM gas, which gets colder and denser over time and may form stars, and is then suddenly heated by stellar feedback. However, the set-2 gas particles are fewer than 0.01 times the set-1 gas particles. As we can see in Figure <ref>, the fraction of the total super-virial host gas mass is ∼1% due to feedback and ∼99% due to compressive heating. This implies that the mass in the super-virial phase traced by set-2 gas particles is smaller by a factor of ∼100; hence set-2 gas particles contribute very little to the super-virial gas mass budget.
§.§.§ Set-3: Particles Initially Coming From Satellites
There is another set of gas particles that can heat up to super-virial temperatures, as shown in Figure <ref>. The gas from the satellites gets stripped and falls inward. As these particles accrete onto the galactic disc, they heat up due to the loss of gravitational potential energy and compression, just like the set-1 gas particles.
We show in Figure <ref> that the fraction in this set is initially zero but starts growing on the time-scale of satellite gas infall, around 1.2 Gyr. It starts as low as 0.1% of the super-virial budget, but slowly rises to about 10% at 1.5 Gyr. Therefore, at later times, stripped satellite gas can also contribute significantly to the super-virial phase.
§.§ What distinguishes gas which heats up to super-virial temperatures?
Orange curves in Figure <ref> show the time evolution of the temperature (top row), density (2nd row), spherical radius (3rd row), and scale height (bottom row) of infalling virial gas particles that do not get heated to super-virial temperatures. As we can see, they are also falling inwards, and as they do so, their temperature and density increase.
In Figure <ref>, we investigate the probability distribution function of the angle (θ) to the galaxy rotation axis at t = 0 Gyr (left panel) and at maximum temperature (right panel), for all infalling gas particles with a virial temperature beyond 20 kpc. In the left panel, we can see that the gas particles that heat up to super-virial temperatures start closer to the rotation axis, whereas the gas particles that remain at the virial temperature start further from the rotation axis. This implies that infalling virial gas which originates close to the rotation axis reaches higher temperatures. This is consistent with the behavior of rotating hot inflows (or `rotating cooling flows'), where the hot CGM phase inflows and spins up on a cooling timescale <cit.>. In these accretion flows, gas has lower densities near the rotation axis, due to rotation support (see figure 4f in ). When the hot inflow approaches the disk, it transitions from a spherical geometry to a disk geometry. During this change in geometry, gas particles near the rotation axis suffer smaller radiative losses (which scale as density squared, see figure 4k there) and experience larger compressive heating rates (which scale inversely with density, see figure 4l there).
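The angle θ used here can be computed directly from the particle positions and the direction of the galaxy rotation axis. A minimal sketch follows; the array names are ours, and the rotation axis is, for illustration only, assumed to be taken from the angular momentum of the cold disc gas.

import numpy as np

def angle_to_rotation_axis(pos_kpc, j_hat):
    # Angle (degrees) between each particle's position vector and the
    # galaxy rotation axis, defined by the unit angular-momentum vector j_hat.
    r_hat = pos_kpc / np.linalg.norm(pos_kpc, axis=1, keepdims=True)
    cos_theta = np.clip(r_hat @ j_hat, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

# L_disc = np.sum(np.cross(pos_disc, vel_disc) * mass_disc[:, None], axis=0)
# j_hat  = L_disc / np.linalg.norm(L_disc)
# theta0 = angle_to_rotation_axis(pos_initial, j_hat)   # left panel
# thetaT = angle_to_rotation_axis(pos_at_Tmax, j_hat)   # right panel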
Supporting the rotating hot inflow interpretation, the right panel of Figure 7 shows that the θ distribution when the temperature is maximal is close to 90 degrees, distinctly different from the initial θ distribution. This means that the super-virial gas particles have indeed transitioned from a spherical to a disk geometry. Further support for this interpretation is that the gas cools immediately after reaching super-virial temperatures (Figure <ref>), as expected in rotating hot inflows which cool immediately onto the ISM after transition into a disk geometry <cit.>.
Therefore, some of the virial-temperature gas falls in from the outer CGM (50-60 kpc) to the inner CGM (10-20 kpc). All of this inflowing gas loses gravitational energy and is heated by a factor of 1.5-2.
When it reaches the inner CGM, the gas gets further heated up by another factor of 1.5-2 due to compressive heating when the hot inflow changes from a spherical to a disc geometry. This change in geometry is instigated by rotation support in the hot inflow.
Immediately thereafter, the hot inflow cools and joins the ISM disc.
Specifically, inflowing virial gas closer to the rotation axis of the galaxy suffers less radiative cooling and more compressive heating due to lower gas density near the axis, and therefore heats up more.
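These two factors provide a simple consistency check: starting from a virial temperature of roughly 2×10^6 K, heating by a factor of 1.5-2 during the infall and a further factor of 1.5-2 during the spherical-to-disc compression gives T_max ≈ 2×10^6 K × (1.5-2) × (1.5-2) ≈ 4.5-8×10^6 K, which brackets the 6×10^6 K threshold adopted for the super-virial phase.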
§.§ What amount of infalling virial gas heats up to super-virial phase?
To investigate how much virial gas is infalling, we calculate a cumulative distribution function (CDF) of the final position (at t=1 Gyr) of the gas particles which start at the virial temperature beyond 20 kpc at t=0 Gyr. In the left panel of Figure <ref>, we show that only 10% of these gas particles enter within 20 kpc by t=1 Gyr. For the gas particles which are within 20 kpc at t=1 Gyr, we construct a CDF of the temperature distribution in the right panel of Figure <ref>, where we see that only 1-2% of this gas heats up to super-virial temperatures.
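The CDFs in Figure <ref> are simple rank statistics of the tracked particle sample; a minimal sketch, with hypothetical array names, is:

import numpy as np

def cdf(values):
    # Sorted values and the corresponding cumulative fraction of particles.
    x = np.sort(values)
    frac = np.arange(1, len(x) + 1) / len(x)
    return x, frac

# Fraction of initially-virial gas (beyond 20 kpc at t = 0) found inside
# 20 kpc at t = 1 Gyr:
# r_final, frac = cdf(radius_1gyr[initially_virial])
# frac_inside_20 = np.interp(20.0, r_final, frac)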
§.§ Fate of this super-virial gas
This super-virial gas also eventually cools down once it becomes rotationally supported, as we see in Figures <ref>, <ref>, and <ref>. In these figures, the grey-shaded region denotes the time evolution of the gas after it becomes super-virial. We can see that all the super-virial gas cools down in the later snapshots, consistent with the rotating hot inflow solution discussed in <cit.>. However, the cooled gas is immediately replenished by the heating of continuously inflowing virial gas through the same process. Therefore, we see a roughly constant amount of super-virial gas over time.
To summarize, we investigate the mechanisms driving the heating of virial gas within the galaxy, finding that gas particles exhibit distinct heating behaviors based on their initial conditions and interactions. Set-1 gas particles, originating from the halo and closer to the rotation axis, heat up as they fall in, converting gravitational energy to heat mainly via compressive heating. This process causes gas infalling close to the rotation axis to reach super-virial temperatures just before cooling and joining the disk. In contrast, set-2 gas particles, likely representing the cold ISM gas, heat up sporadically, potentially due to stellar feedback, and contribute minimally to the super-virial gas mass budget. Additionally, gas particles stripped from satellites and accreting onto the disc also heat up via compressive heating, further contributing to the super-virial phase. Some virial gas particles fail to heat up to super-virial temperatures because they fall in farther from the rotation axis and hence suffer more radiative cooling and less compressive heating.
§ DISCUSSION, LIMITATIONS AND FUTURE WORK
We also investigate the temperature profile in a run where the virial-temperature phase of the CGM is initialized with a cooling flow rather than hydrostatic equilibrium, using the cooling flow solution from <cit.>. We find that our cooling flow run also produces super-virial gas within 20 kpc, and its mass is of the same order as in the other runs (Figure <ref>). This makes sense, since HSE initial conditions are expected to develop into a cooling flow within a cooling time <cit.>.
The bottom panel of Figure <ref> and the right panel of Figure <ref> clearly indicate that the super-virial gas is extra-planar, just above the ISM disk. It will thus be very interesting to investigate future X-ray observations of edge-on galaxies, which we will discuss in a follow-up paper. In future work, we will also post-process the simulation snapshots to produce observables such as X-ray emission and absorption maps. This will both facilitate direct comparison with observed maps and assist in distinguishing between emission and absorption origins.
It is fascinating to see that conventional CR/feedback heating does not produce this super-virial phase as anticipated, whereas gravitational and compressive heating are the dominant heating mechanisms in our idealized GIZMO simulations with the FIRE stellar feedback model. However, this conclusion still depends on the feedback prescription used in the simulation. With a different feedback prescription <cit.>, one may get a different result. For example, the idealized simulation in the recent work of <cit.> reproduced this temperature component, likely caused by stellar feedback heating, but with a very high star formation rate unlike the MW. Another recent work by <cit.> concludes that the super-virial gas is a result of stellar feedback in their idealized simulation. Additionally, we do not include AGN feedback, which can have a larger impact than stellar feedback in terms of heating the gas <cit.>. For example, a recent work by <cit.> examines the physical properties of gas within the CGM surrounding 132 MW-like central galaxies at redshift z=0, using data from the cosmological simulation TNG50, which is part of the IllustrisTNG project. They found that energy from supermassive black hole (SMBH)-driven kinetic winds heats the gas to super-virial temperatures (exceeding 10^6.5 to 10^7 K). Therefore, it would be useful to explore several different simulations and investigate similar questions for comparison. This will not only give us an overall picture of this newly found super-virial phase, but can also constrain the varying feedback mechanisms in different simulations.
Another limitation of our work is that it does not consider the galaxy in a cosmological context. Our future work will explore the same questions with different cosmological simulations, such as the FIRE zoom-in runs <cit.>, HESTIA <cit.>, TNG <cit.>, and EAGLE <cit.>. These insights will deepen our understanding of galactic gas dynamics and provide valuable implications for broader inquiries into galaxy evolution and structure formation.
§ CONCLUSIONS
Several observations have recently established that the MW contains gas at temperatures higher than its virial temperature. However, the location, origin, and properties of this gas are still unknown. In this paper, we analyze a suite of idealized simulations of a MW-like galaxy to investigate this newly discovered gas phase, and we find this super-virial gas in our simulations. Our main findings are the following:
* Most of the super-virial gas is within 20 kpc from the center of the galaxy (left panel of Figure <ref>), which implies that it is situated mostly in the extended disc region, not in the CGM.
* This super-virial gas phase is found close to the disk, where typical gas densities are ≤ 10^-2 cm^-3 (right panel of Figure <ref>, 3rd panel of Figure <ref>, and bottom panel of Figure <ref>).
* The mass of this gas is roughly constant, in the range 1-2×10^7 M_⊙. This phase is retained in the simulations even at 1.5 Gyr (Figure <ref>).
* We also investigate the effect of stellar feedback in heating the gas. We see no change in the super-virial gas mass in the no feedback run (Figure <ref>). We do see that some cold 10^4 K gas in the central galaxy is heated to the super-virial phase, which is absent in the no feedback run. However, feedback heating contributes less than 1% of the super-virial gas, implying that this heating mechanism is sub-dominant, at least for the FIRE feedback prescription (Figure <ref>).
* We also explore the effect of CRs on the heating of the gas. With high and low CR initial setups, we find no change in the super-virial gas mass, implying that CRs have little role in heating the gas to super-virial temperatures (Figure <ref>).
* While investigating the origin of the super-virial gas, we find that the dominant contribution to the heating is a rotating inflow of the virial CGM phase, as in the model of <cit.>. In this model, gravitational energy is converted to heat mainly via compressive heating in the inflow. We show that this process causes the 1-2% of infalling gas closest to the rotation axis to reach super-virial temperatures just before cooling and joining the disk (Figs. <ref>, <ref>). Gas inflowing near the rotation axis has a lower density and thus suffers smaller cooling losses and is more compressively heated than gas farther from the axis, allowing it to reach higher temperatures. Gas stripped from satellites also flows onto the central galaxy and heats up similarly (Figure <ref>).
In conclusion, we identify compressive heating of the hot inflow as a significant contributor to the super-virial gas in the Milky Way. However, several caveats require further investigation in future studies, including more detailed comparisons with observational data (see Section <ref>).
§ ACKNOWLEDGMENTS
We also thank FIRE collaboration for useful discussions and suggestions. MR acknowledges support from the Center for Cosmology and particle physics fellowship at Ohio State University. MR acknowledges ACCESS allocations PHY240003. MR also acknowledges Aspen Center for Physics and Simons Foundation as this work was performed in part at the Aspen Center for Physics, which is supported by a grant from the Simons Foundation (1161654, Troyer). KS acknowledges support from the Black Hole Initiative at Harvard University, which is funded by grants from the John Templeton Foundation and the Gordon and Betty Moore Foundation, and acknowledges ACCESS allocations TG-PHY220027 and TG-PHY220047 and Frontera allocation AST22010. JS was supported by the Israel Science Foundation (grant No. 2584/21). The computations in this work were run at facilities supported by the Bridges, University of Pittsburgh.
aasjournal
| Galaxies are the building blocks of our universe and how galaxies form and evolve is an active area of research. A spiral galaxy has two main components, a star-forming disk and a diffuse gaseous halo surrounding the disk. This diffuse gaseous halo, known as Circumgalactic medium (CGM) <cit.>, plays a crucial part in galaxy evolution. The CGM is the habitat for large-scale gas flows that fundamentally regulate the evolution of the galaxy by providing fresh and recycled gaseous fuel for star formation and regulating the galaxy's interactions with other galaxies. A comprehensive understanding of the CGM stands to illuminate key unresolved questions pertaining to the processes driving galaxy formation and evolution.
Unfortunately, the CGM is very hard to detect in emission due to its very low density. However, from absorption lines in the spectra of background quasars, we can infer about the physical properties of the CGM. When light from a background quasar passes through the CGM of a galaxy, it suffers absorption from the metals and ions in the CGM. Absorption studies made by COS spectrograph onboard HST along with XMM-Newton and Chandra has enabled us to detect this multiphase diffuse gas around Milky Way (MW) mass galaxy in UV [e.g. <cit.>] and X-ray [e.g. <cit.>] respectively. In the X-ray band, the MW CGM has been detected in X-ray emission as well [e.g. <cit.>.]
Until 2019, these observations led to a picture in which the CGM is multi-phase in nature and has three main components: warm-hot gas at the virial temperature (T∼10^6-6.5 K), warm gas (T∼10^5-5.5 K), and cool gas (T<10^4 K). However, this picture was modified by the recent discovery of the super-virial component (T∼6×10^6-2×10^7 K; <cit.>) in the MW CGM,
using absorption lines from XMM-Newton observations towards the blazar 1ES 1553+113.
Since then, there have been multiple detections of this hotter super-virial gas in both emission and absorption <cit.>. In emission, <cit.> detected the super-virial component towards the same sight-line of Blazar IES 1553+113 using XMM-Newton, and <cit.> found this component along with virial component along four sightlines from Suzaku and Chandra. Later, <cit.> also detected this super-virial gas in the MW CGM in emission towards the Blazar Mrk 421 and 5 other sightlines near the Blazar field. Even in a recent study by <cit.> detected SiXIV Kα and SXVI Kα, associated with a gas phase with super-virial temperature, in the CGM of the MW, with the use of stacked X-ray spectral observations towards multiple of different extragalactic sightlines.
This super-virial component has not only been detected towards some particular sightlines, but all-sky surveys from Halo-Sat and eROSITA have also found this component abundantly in all over the sky. <cit.> utilized data from the HaloSat all-sky survey to delve into the emissions originating from the MW's halo and detected the supervirial gas. <cit.>, in their study of the MW halo within the eFEDDS field employing eROSITA, also detected this component.
However, the gas at these temperatures defies the predictions of galaxy formation simulations (e.g., <cit.>). Its discovery begs the questions like: where is this gas situated; in the ISM, inner CGM or outer CGM? How this gas is formed; through feedback or some other heating mechanisms like cosmic ray (CR) heating? What is the nature of this gas; is it as diffuse as the volume filling virial temperature gas or is it more or less dense?
In this paper, we will address above questions in light of an idealized simulation. We investigate a suite of simulations of MW-type hosts where we focus on these super-virial gas components. The paper is structured as follows. In Section <ref>, we discuss the methodology of our simulation, where we describe the initial conditions of our simulation setup (in Section <ref>) along with our definition of super-virial CGM gas for our analysis (in Section <ref>). In Section <ref>, we demonstrate our results from our analysis of the simulations, where we discuss the location (Section <ref>), density (in Section <ref>), amount (Section <ref>), and origin (Section <ref>) of the super-virial gas in the CGM. We also discuss if feedback and cosmic rays heating have any effect on the origin of this gas in Section <ref> and <ref>. We discuss the limitations of our work and planned future work in Section <ref>. Finally, we summarize our results and discuss future work in Section <ref>. | null | Our simulations utilize GIZMO[A public version of this code is available at phopkins/Site/GIZMO.html <cit.>, in its meshless finite mass (MFM) mode, which is a Lagrangian mesh-free Godunov method, capturing the advantages of grid-based and smoothed-particle hydrodynamics (SPH) methods. Numerical implementation details and extensive tests are presented in a series of methods papers for, e.g., hydrodynamics and self-gravity <cit.>, magnetohydrodynamics <cit.>, anisotropic conduction and viscosity <cit.>, and cosmic rays <cit.>.
In all simulations except for the runs with no feedback, we incorporate the FIRE-2 model, an implementation of the Feedback In Realistic Environments (FIRE) approach. This encompasses treatments of the interstellar medium (ISM), star formation, and stellar feedback, the details of which are given in <cit.> along with extensive numerical tests.
Our simulations cover a broad temperature range from 10 to 10^10 K, accounting for various cooling mechanisms including photoelectric and photoionization heating, collisional, Compton, fine structure, recombination, atomic, and molecular cooling.
Star formation is facilitated through a sink particle approach, allowed only in molecular gas regions with densities surpassing 100 cm^-3, where self-shielding and local self-gravitation dominate. Upon formation, star particles are treated as cohesive stellar populations, inheriting the metallicity of their parent gas particles. Feedback mechanisms, including supernovae and mass-loss rates, are derived from IMF-averaged values calculated from STARBURST99 <cit.> with a Kroupa initial mass function (IMF) <cit.>.
The stellar feedback model encompasses diverse processes: (1) Radiative feedback, involving photo-ionization, photo-electric heating, and single and multiple-scattering radiation pressure tracked in five bands (ionizing, FUV, NUV, optical-NIR, IR), (2) Continuous stellar mass loss and injection of mass, metals, energy, and momentum through OB and AGB winds, and (3) Type II and Ia supernovae (including both prompt and delayed populations), injecting appropriate mass, metals, momentum, and energy into the surrounding gas based on tabulated rates.
All the simulations integrate magnetohydrodynamics (MHD), fully anisotropic conduction, and viscosity, employing the Spitzer-Braginski coefficients to capture the relevant physics.
The model for cosmic ray (CR) treatment includes streaming, which occurs at the local Alfvén speed or sound speed, whichever is larger, incorporating an appropriate streaming loss term that thermalizes energy, as described in <cit.>. Diffusion is modeled with a fixed diffusivity ∼3×10^28 cm^2 s^-1, alongside adiabatic energy exchange between gas and CR pressure, and includes hadronic and Coulomb losses (following <cit.>). We consider a single energy bin for GeV CRs, which dominate the pressure, and treat them in the ultra-relativistic limit. Both streaming and diffusion are fully anisotropic along magnetic field lines. CRs are injected by supernovae (SNe), with 10% of each SNe energy being transferred into CRs, consistent with studies like <cit.>. For details on cosmic ray physics is demonstrated in <cit.>.
§.§ Initial conditions
The initial setup closely follows the detailed specifications outlined in <cit.>. To further stabilize the host CGM, the simulation region is expanded to 3 times the virial radius, and the simulations are run adiabatically (no cooling or star formation) for 4.5 Gyr to relax any initial transients before the satellites are placed into the CGM.
The simulation properties are summarized in table 1 of <cit.>. In this paper, we only focus on the m12 halo, which corresponds to a MW-like halo of mass ∼1.8×10^12 M_⊙.
We consider only the runs with 20 satellites of 2×10^9 M_⊙ DM halo mass (20xm09). We vary different physical properties to test the impact of the stellar feedback and cosmic rays. We summarize all of our runs in Table <ref>.
We initialize the dark matter (DM) halo, bulge, black hole, and gas+stellar disk of m12 by following <cit.>.
The initial metallicity goes down from solar (Z=0.02) to Z=0.001 with radius as Z=0.02 (0.05+0.95/(1+(r/20 kpc)^1.5)). Magnetic fields are initialized azimuthal with | B|=0.03 μ G/(1+(r/20 kpc)^0.375). For our cooling flow run, we have the same initial condition but with a cooling flow solution with cooling flow of roughly 5 M⊙ yr^-1, a sonic radius of 1 kpc and a circularized radius of 10 kpc <cit.>.
We initialize CRs in the high CR runs such that the CR energy density is in equipartition with thermal energy density. In this case, we decrease the temperature by half, keeping the density same, and put the rest of the energy in CR. So the pressure gradient is maintained.
In the low CR runs, CRs are initialized such that the CR energy density is in equipartition with the magnetic energy density, which is ∼5 orders of magnitude lower than the thermal energy density.
We initialize the satellites similarly as the host described above, though without a CGM gas halo. The properties are also summarized in table 1 of <cit.>. More details of initial set up is described in Methodology section of <cit.>.
§.§.§ Gas temperature of the initial condition
All the runs have a similar initial setup except for the high CR run. We check that our initial setups for the different runs do not contain any super-virial gas, confirming that the initial temperature is no more than 4×10^6 K in all the runs. In the high CR run, the initial temperature profile is even lower in order to maintain hydrostatic pressure balance, not exceeding 2×10^6 K. The variation in the initial conditions is useful to test whether the super-virial gas can be found even with a cooler halo, and whether the results are sensitive to the initial setup. We also confirm that the initial density range is similar in all the cases, with the virial gas having densities of 10^-4 to 10^-3 cm^-3.
§.§ Defining Super-virial Gas
Observationally, the gas with temperature 6×10^6-2×10^7K has been identified as super-virial phase. For this analysis, we use temperature cut of 6×10^6K for super-virial gas. Therefore, we consider gas with temperature above this threshold to be called as `super-virial'. | §.§ Where is the super-virial gas?
This section delves into the investigation of the distribution of supervirial gas within the simulation box. We present the 2-d PDFs of temperature of the gas weighted by its mass with the variation of its position and density, in the left and right panels of Figure <ref> respectively. The 2d-PDFs denote the four different runs at 1 Gyr snapshot of the simulation for no CR (1st row), no CR and no Feedback (2nd row), low CR (3rd row), and high CR (bottom panel) cases.
In all the 2-d distribution plots, we did a radial cut of 6 times of the gas scale radius region around the satellites to exclude the ISM of the satellites, as we choose this radius of the satellite to be the radius beyond which the gravitational pull from the satellites is negligible. Note, however, that we do not exclude satellite particles which get stripped from the satellites. The horizontal line in Figure <ref> denotes the super-virial temperature cut of 6×10^6K; gas with temperature above this threshold is called `super-virial'.
In the right bar of each 2-d histogram plot in Figure <ref>, we show the 1-d histograms of mass-weighted temperature at two different positions (left panels) and density (right panels) denoted by the similarly colored vertical lines in the 2-d histograms. For example, in the left panel of Figure <ref>, there is a red vertical line at 20 kpc and the corresponding histogram in the right bar, showing the temperature distribution at 20 kpc. We see that the distribution not extends above 6×10^6 K. On the other hand, the temperature distribution shown by the black histogram (corresponding to 6.5 kpc) extends far beyond 6×10^6 K. Therefore, we conclude that the majority of the super-virial gas is concentrated within a 20 kpc radius across all of the simulation runs.
We also checked if we can see this gas at an earlier snapshot at t=0.5 Gyr and we found it for all of cases except for the high CR one, where the super-virial gas is not there at 0.5 Gyr, but appears at 1 Gyr. This is mainly due to the fact that the high CR runs have cooler initial halo, therefore taking longer time to get heated.
In summary, we investigate the distribution of supervirial gas within the simulation box, initially examining its temperature and density distributions. Excluding regions around satellites, the analysis reveals that the majority of super-virial gas, exceeding 6×10^6 K, is concentrated within a 20 kpc radius across all runs. Notably, the high CR runs exhibit delayed appearance of super-virial gas at 1 Gyr due to cooler initial halo conditions, highlighting sensitivity to initial setup and heating dynamics.
§.§ How dense is the super-virial gas?
In this section, we scrutinize the density distribution of the super-virial gas. In the right panel of Figure <ref>, we present 2D histograms illustrating the relationship between temperature and density, weighted by mass. It shows the temperature evolution of 2-d probability distribution function (PDF) of gas mass as a function of the temperature and density
of the gas at t=1 Gyr for no CR (1st row), no CR and no Feedback (2nd row), low CR (3rd row), and high CR (bottom panel) cases. In these plots, we excluded the region around the satellites like before.
Notably, in all the cases, the density of the super-virial gas is approximately 3×10^-(2-1) cm^-3, since the gas lies close to the disk, as shown in the left panel of Figure <ref>.
§.§ How much super-virial gas is there in the galaxy?
Next, we quantify the mass of super-virial gas present in the galaxy. Given that the bulk of the super-virial gas resides within a 20 kpc radius, we depict the temporal evolution of super-virial gas masses within 20 kpc and 10 kpc using solid and dashed lines, respectively, in Figure <ref>. Here, we do not exclude the region around the satellites, as we would like to check if satellites also contributes to this gas phase. However, as the satellites are in circular orbits between 100kpc and 150kpc, the regions around them are automatically cut out while we calculate super-virial gas mass inside 20 kpc and 10 kpc. Key insights from this analysis include:
* Mass: The mass of super-virial gas fluctuates within the range of 1-2×10^7 M_⊙ throughout the entire 1.5 Gyr simulation period within 10 kpc, with 2 times larger mass within 20kpc. CGM gas mass being 2×10^11 M_⊙, the supervirial gas is only 0.1-1% of the total CGM gas, hence sub-dominant in the baryonic budget and does not contribute to the missing baryon problem.
* Effect of Feedback: The presence of the super-virial gas could have been a direct result of heating by stellar feedback. Therefore, if we don't include feedback in our simulations, we might expect no (or significantly smaller amount of) super-virial gas. Surprisingly, however, the simulations with no feedback display a higher and similar amount of mass of super-virial gas among all the runs. This shows that in our simulations, the stellar feedback is not predominantly responsible for generating the super-virial gas.
* Effect of Cosmic Rays: CR heating by streaming heating or Coulomb heating can be one of the heating source of the super-virial gas. It can also drive outflow that can heat the gas. We investigate this with two runs with CRs, one with low CR and one with high CR. Minimal disparities exist between CR and no CR runs. The runs incorporating CRs exhibit slightly more pronounced mass fluctuations compared to runs without CRs, with the high CR run showing an even larger fluctuations. Hence, in our simulations, the CR is not predominantly responsible for heating the gas to super-virial phase.
* Effect of Resolution:
We have conducted higher-resolution simulations (10 times higher resolution compared to our low resolution run), represented by dashed lines in Figure <ref>. Our anticipation was that these high-resolution (hr) runs would capture denser,super-virial gas compared to the low-resolution (lr) counterparts. We observe that the super-virial gas mass in the hr runs qualitatively similar to that in the lr runs. This consistency suggests that our findings remain robust across varying levels of resolution.
* Effect of Other Satellite Runs: We also check the other satellite runs described in <cit.>, with m10 and m08 satellites to check whether the satellite distribution has a significant effect on our results (not shown in the figure). We find that there is no significant difference in the super-virial gas mass budget with the difference in satellite distribution.
In summary, we find that the temporal evolution of super-virial gas mass within 20 and 10 kpc radii reveals fluctuations between 1-2×10^7 M_⊙, with a doubling of mass within the broader radius, constituting only 0.1-1% of the total CGM gas mass. Surprisingly, simulations without feedback display higher and more consistent super-virial gas masses, suggesting super-virial gas does not predominantly originate from feedback heating processes. Minimal disparities in super-virial gas mass fluctuations are observed between runs with and without cosmic rays, with higher CR runs showing slightly more pronounced fluctuations.
§.§ Where is the super-virial gas coming from?
Now the question is what is the mechanism by which this gas is heated. As noted above, CR heating and feedback heating do not seem to be responsible for heating to super-virial temperatures in our simulations. Therefore, we track the host gas particles with initial temperature T_ ini∼ 2 × 10^6 K, which reached their maximum temperature at a simulation time of 1 Gyr. We calculate the past temperature, density, and spherical radius of all such gas particles (host and satellite). We find that there are three sets of gas particles that contribute to the super-virial phase, with two sets from the host and one from the satellites. One set has initial temperature similar to the virial gas 2×∼10^6K (set-1 hereafter) whereas other set has initial temperature of a cold gas 10^4K (set-2 hereafter). There are another set (set-3 hereafter), which are gas particles, initially coming from satellites and found to be at super-virial temperature. We plot median and dispersion of properties of all these particles for each cases. We list the properties of set-1, set-2 and set-3 particles below :
§.§.§ Set-1: Host gas particles, T_ ini∼2×10^6K
In Figure <ref>, blue curves show the time evolution of the temperature (top row), density (2nd row), spherical radius (3rd row), and scale height (bottom row) of host gas particles with initial temperature T_ ini∼ 2 × 10^6 K, which reached their maximum temperature at a simulation time of 1 Gyr. These gas particles have initial temperature ∼2×10^6K in all the case except for high CR run where the initial temperature was ∼1×10^6K. This is because for high CR case, the initial halo has a lower thermal pressure, and in turn a lower temperature-halo. The Figure shows that gas heats up slowly over time and reaches over 6×10^6K at t=1 Gyr.
In the second row of Figure <ref>, we plot the past density of these super-virial gas particles. All of them start from 10^-3 cm^-3, which is a typical density of the diffuse virial gas. Following their rise to super-virial temperatures at 1 Gyr, they cool down, reaching a density of ∼1 cm^-3.
The high CR run again starts from a lower density, 5×10^-4 cm^-3, than the other runs, because in this case we are tracking gas particles from a larger radius, as we mention in the next paragraph. Eventually, however, they reach the same final density as in all the other runs. Therefore, there is almost a factor of 2000 and 1000 increase in density in the high CR run and in all the other cases, respectively.
In the third row of Figure <ref>, we plot the time evolution of the position of these super-virial gas particles. All of them fall from the outer halo (40-90 kpc) into the central 10 kpc region. In all the other cases the particles fall from 40-50 kpc, whereas in the high CR run the gas particles fall from a little farther away, 80-90 kpc.
As the gas particles fall inwards, they start heating up as this infall changes the gravitational potential energy, which can be converted to thermal energy and slowly heat up the gas. Until 0.9 Gyr, the temperature only increases to 3-4×10^6K. At 0.9 Gyr, these gas particles roughly reach 10 kpc which is the radius of Galactic disc. At that point, they get compressed by going from spherical to disc geometry, as shown by bottom-most row of Figure <ref>. And due to this compression, the gas is heated to over 6×10^6K and becomes super-virial gas. Therefore, it is clear that set-1 gas particles are infalling onto the disc and they are heated up by compression, and hence we see enhancement of density and temperature. Once they heat up to the super-virial phase, they cool down immediately because they become rotationally supported and radiate thermal energy away after they join the disc, as shown by the time evolution of the spherical radius r, which does not decline and remains constant after t = 1 Gyr. Please note that the smooth evolution of the density and temperature curves suggests the heating is compressive rather than shocks, as shown by tracking of individual particles in Figure <ref>.
§.§.§ Set-2: Host gas particle, T_ ini∼10^4K
In Figure <ref>, we plot the time evolution of the temperature (top row), density (2nd row), and spherical radius (bottom row) of the host gas particles with initial temperature T_ ini∼ 10^4 K, which reached their maximum temperature at a simulation time of 1 Gyr. There are very few gas particles in this set, except in the high-CR case, and none in the no-feedback case. These gas particles get alternately colder and hotter over time, fluctuating between 10^4 K and 10 K, until 0.9 Gyr, after which the gas heats up and reaches over 6×10^6 K at about 1 Gyr. These gas particles are probably heated by stellar feedback; hence they are absent in the no-feedback case and are more numerous in the high-CR case due to the additional heating from CR feedback by the larger amount of CRs.
In the middle row of Figure <ref>, we plot the past density of these gas particles. All of them start from 10^-2 cm^-3 and reach 10^2 cm^-3 right before heating up at 1 Gyr. Their density then drops back down to 10^-2 cm^-3 once they are heated by feedback. In the bottom row of Figure <ref>, we plot the past position of these super-virial gas particles. All of them remain in the 10 kpc to 20 kpc region.
Thus, these set-2 gas particles represent the cold ISM gas, which gets colder and denser over time and may form stars, and is then suddenly heated by stellar feedback. However, the number of set-2 gas particles is less than 0.01 times that of the set-1 gas particles. As we can see in Figure <ref>, of the total super-virial host gas mass, ∼1% is due to feedback and ∼99% is due to compressive heating. This implies that the mass in the super-virial phase traced by set-2 gas particles is smaller by a factor of ∼100; hence set-2 gas particles contribute very little to the super-virial gas mass budget.
§.§.§ Set-3: Particles Initially Coming From Satellites
There is another set of gas particles that can heat up to super-virial temperatures, as shown in Figure <ref>. The gas from satellites gets stripped and falls inward. As these particles accrete onto the galactic disc, they heat up due to the loss of gravitational potential energy and compression, just like the set-1 gas particles.
We showed in Figure <ref> that the fraction in this set is initially zero but starts to grow on the time-scale of satellite gas infall, around 1.2 Gyr. It starts as low as 0.1% of the super-virial budget, but slowly rises to as much as 10% at 1.5 Gyr. Therefore, at later times, satellite gas can also contribute significantly to the super-virial phase.
§.§ What distinguishes gas which heats up to super-virial temperatures?
Orange curves in Figure <ref> show the time evolution of the temperature (top row), density (2nd row), spherical radius (3rd row), and scale height (bottom row) of infalling virial gas particles that do not get heated to super-virial temperatures. As we can see, they are also falling inwards, and as they do so, the temperature and density of the gas increase.
In Figure <ref>, we investigate the probability distribution function of the angle (θ) to the galaxy rotation axis at t = 0 Gyr (left panel) and at maximum temperature (right panel), for all infalling gas particles that start at the virial temperature beyond 20 kpc. In the left panel, we can see that the gas particles that heat up to super-virial temperatures are closer to the rotation axis, whereas the gas particles that remain at the virial temperature are further from the rotation axis. This implies that infalling virial gas which originates close to the rotation axis reaches higher temperatures. This is consistent with the behavior of rotating hot inflows (or `rotating cooling flows'), where the hot CGM phase flows in and spins up on a cooling timescale <cit.>. In these accretion flows, gas has lower densities near the rotation axis due to rotation support (see figure 4f in ). When the hot inflow approaches the disk, it transitions from a spherical geometry to a disk geometry. During this change in geometry, gas particles near the rotation axis suffer smaller radiative losses (which scale as density squared, see figure 4k there) and experience larger compressive heating rates (which scale inversely with density, see figure 4l there).
Supporting the rotating hot inflow interpretation, the right panel of Figure 7 shows that the θ distribution when the temperature is maximal is close to 90 degrees, distinctly different from the initial θ distribution. This means that the super-virial gas particles have indeed transitioned from a spherical to a disk geometry. Further support for this interpretation is that the gas cools immediately after reaching super-virial temperatures (Figure <ref>), as expected in rotating hot inflows which cool immediately onto the ISM after transition into a disk geometry <cit.>.
Therefore, some of the virial-temperature gas is infalling from the outer CGM (50-60 kpc) to the inner CGM (10-20 kpc). All of this inflowing gas loses gravitational energy and is heated by a factor of 1.5-2.
When it reaches the inner CGM, the gas gets further heated up by another factor of 1.5-2 due to compressive heating when the hot inflow changes from a spherical to a disc geometry. This change in geometry is instigated by rotation support in the hot inflow.
Immediately thereafter, the hot inflow cools and joins the ISM disc.
Specifically, inflowing virial gas closer to the rotation axis of the galaxy suffers less radiative cooling and more compressive heating due to lower gas density near the axis, and therefore heats up more.
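The density scalings invoked here can be made concrete with a toy comparison. The sketch below is illustrative only: the normalizations are arbitrary, and it merely contrasts a specific (per-gram) radiative loss rate that grows linearly with density against a specific compressive heating rate that, as quoted above, scales inversely with density.

```python
# Toy illustration of why low-density gas near the rotation axis is the gas
# that reaches super-virial temperatures (arbitrary normalizations).
import numpy as np

n = np.logspace(-4, -2, 5)        # illustrative number densities [cm^-3]
cooling_per_gram = n / n[-1]      # radiative losses per gram ~ n   (Lambda*n^2 / rho)
heating_per_gram = n[-1] / n      # compressive heating per gram ~ 1/n (as quoted)

for ni, c, h in zip(n, cooling_per_gram, heating_per_gram):
    print(f"n = {ni:.1e} cm^-3   heating/cooling ratio ~ {h / c:10.1f}")
# The ratio grows as 1/n^2: the lowest-density gas (closest to the rotation
# axis) is heated far more effectively than it can radiate.
```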
§.§ What amount of infalling virial gas heats up to super-virial phase?
To investigate how much virial gas is infalling, we calculate a cumulative distribution function (CDF) of the final position (at t = 1 Gyr) of the gas particles which start at the virial temperature beyond 20 kpc at t=0 Gyr. In the left panel of Figure <ref>, we show that only 10% of these gas particles enter within 20 kpc by t=1 Gyr. With the gas particles which are within 20 kpc at t=1 Gyr, we make a CDF of the temperature distribution in the right panel of Figure <ref>, where we see that only 1-2% of this gas heats up to super-virial temperatures.
§.§ Fate of this super-virial gas
This super-virial gas also eventually cools down once it is supported by rotation, as we see in Figures <ref>, <ref>, and <ref>. In these figures, the grey-shaded region denotes the time evolution of the gas after it becomes super-virial. We can see that all the super-virial gas cools down in later snapshots, consistent with the hot rotating inflow solution discussed in <cit.>. But the cooled gas is immediately replenished by the heating of continuously inflowing virial gas through the same process. Therefore, we see a roughly constant amount of super-virial gas over time.
To summarize, we investigate the mechanisms driving the heating of virial gas within a galaxy, finding that gas particles exhibit distinct heating behaviors depending on their initial conditions and interactions. Set-1 gas particles, originating from the halo close to the rotation axis, heat up as they fall in, converting gravitational energy to heat mainly via compressive heating. This process causes gas infalling close to the rotation axis to reach super-virial temperatures just before cooling and joining the disk. In contrast, set-2 gas particles, likely representing the cold ISM gas, heat up sporadically, potentially due to stellar feedback, and contribute minimally to the super-virial gas mass budget. Additionally, gas particles stripped from satellites and accreting onto the disc also heat up via compressive heating, further contributing to the super-virial phase. Some virial gas particles fail to reach super-virial temperatures because they fall in farther from the rotation axis and hence suffer more radiative cooling and less compressive heating.
http://arxiv.org/abs/2409.17248v1 | 20240925180519 | Sign changes along geodesics of modular forms | [
"Dubi Kelmer",
"Alex Kontorovich",
"Christopher Lutsko"
] | math.NT | [
"math.NT",
"11M36, 11E45"
] |
Where is the Super-Virial Gas? The Supply from hot inflows
[
September 28, 2024
==========================================================
§ ABSTRACT
Given a compact segment, β, of a cuspidal geodesic on the modular surface, we study the number of sign changes of cusp forms and Eisenstein series along β. We prove unconditionally a sharp lower bound for Eisenstein series along a full density set of spectral parameters.
Conditioned on certain moment bounds, we extend this to all spectral parameters, and prove similar theorems for cusp forms.
The arguments rely in part on the authors' mean square bounds <cit.>, and on removing the assumption of the Lindelöf hypothesis from recent work of Ki <cit.>.
§ INTRODUCTION
Let Γ < _2()
be a discrete, cofinite group
acting on the upper half-plane by fractional linear transformations.
Given a real-valued automorphic function f : Γ→, we denote by Z_f its zero set, which separates the space into connected nodal domains. A key question in the analysis of f is to consider the number of nodal domains. Of particular interest, with applications to quantum chaos, is to study the number of nodal domains of
eigenfunctions of the hyperbolic Laplace-Beltrami operator,
as the eigenvalue goes to infinity. Henceforth we work specifically with the modular group Γ : = _2(); the proofs below can be generalized to congruence subgroups, as long as they include reflection symmetries.
Recall the spectral decomposition of L^2(Γ) into cusp forms and Eisenstein series.
The Eisenstein series for the modular group Γ is given by
E(z,s) : = ∑_γ∈Γ_∞\Γ Im(γ z)^s,
where Γ_∞ is the stabilizer of ∞ in Γ.
This series converges absolutely for (s)>1, and has meromorphic continuation for all s∈. For s=1/2+it, the function
E_t(z):=E(z,1/2+it)
is an eigenfunction of the Laplacian (as well as all Hecke operators), and has Laplace eigenvalue =1/4+t^2 .
Moreover, a Maass cusp form is a function ϕ: → satisfying
* Δϕ +λϕ=0, λ = λ_ϕ >0,
* ϕ(γ z)=ϕ(z), γ∈Γ,
* and ϕ∈ L^2 (Γ) with L^2 norm 1.
Given such a cusp form ϕ, we write its eigenvalue as λ_ϕ =1/4 + t_ϕ^2.
A heuristic argument of Bogomolny and Schmit <cit.> gives a very precise prediction for the asymptotic number N^Ω(ϕ) of nodal domains of a Maass cusp form ϕ in a compact domain Ω⊆Γ, namely that N^Ω(ϕ) grows like a constant times λ_ϕ, as λ_ϕ→∞. While their prediction is supported by numerics, it seems currently out of reach, and even the weaker claim that N^Ω(ϕ)→∞ as λ_ϕ→∞ is
not currently known unconditionally (and may not be true for general surfaces, see <cit.>).
The space Γ has an orientation reversing isometry, σ(x+iy)=-x+iy.
We say that a nodal domain is inert if it is preserved by σ, and split if it is paired with another domain. We denote by N_ in(f) and N_ sp(f) the number of inert and split domains.
Let δ⊂_Γ denote the set of fixed points of σ, which is naturally partitioned as
δ=δ_1∪δ_2∪δ_3,
with δ_1={iy: y≥ 1}, δ_2={1/2+iy:y≥√(3)2} and δ_3={x+iy: 0<x<1/2, x^2+y^2=1}.
It was then observed in <cit.> that for an even cusp form (i.e., a cusp form satisfying ϕ(σ z)=ϕ(z)), one can bound N_ in(ϕ) by counting the number of sign changes of ϕ along δ, or more generally, along a non-empty compact segment β⊆δ. Explicitly, given a segment β⊆δ, let K^β(ϕ) denote the number of sign changes of ϕ along β, and N_ in^β(ϕ) the number of nodal domains intersecting β; then
1+1/2 K^β(ϕ)≤ N^β_ in(ϕ)≤ |Z_ϕ∩β|.
It is thus possible to reduce the problem of studying the number of (inert) nodal domains to studying the number of sign changes/zeros.
For this problem,
<cit.> proved,
assuming the Lindelöf hypothesis for the
L-functions attached to ϕ,
that,
given a compact geodesic segment β in δ_1 or δ_2,
t_ϕ^ν≪ |Z_ϕ∩β| ≪ t_ϕ,
for any ν < 1/12. (Note that
the upper bound here is unconditional and follows from general complexification techniques <cit.>.) In addition, these techniques can be applied to give a similar, although still conditional, lower bound for the same problem for Eisenstein series. Following this, Jang and Jung <cit.> used arithmetic quantum unique ergodicity to prove qualitatively that the number of nodal domains goes to ∞ with the eigenvalue. Moreover, Jung and Young <cit.> proved an unconditional but weaker lower bound for Eisenstein series with ν < 1/51.
Recently, Ki <cit.> proved an essentially sharp (in the exponent) lower bound for both Maass forms and the Eisenstein series, conditional on both the Lindelöf hypothesis for the associated L-function and a fourth moment bound along β. Explicitly, Ki shows that for any >0,
|Z_f∩β| ≫_ t_f^1-,
where f is either a Maass form or the Eisenstein series
(Ki's technique can also be applied to sign changes, K^β(f)). Our Theorem <ref> recovers this sharp lower bound for Eisenstein series without the assumption of the Lindelöf hypothesis, and in
Theorem
<ref> we also remove the assumption on the fourth moment bound by restricting to a full-density subset of forms. Moreover, Theorems <ref> and <ref> show similar results for cusp forms,
conditioned on an L^2 estimate for L-functions (namely, Conjecture <ref>).
While we specialize to the modular surface, we can extend this work to congruence subgroups with reflection symmetries. In addition, we specialize our analysis to the central line z=iy, but
this
can also be
extended
to any cuspidal geodesic, see Remark <ref>.
§.§ Main results
The main goal of
this paper is to prove the same bound as Ki's (<ref>) for the Eisenstein series, without assuming the Lindelöf hypothesis.
Let β=i[a,b] be a compact segment of the imaginary line, and
suppose that there is some p>2 such that for all >0,
(∫_a^b |E_t(iy)|^p y y)^1/p≪_ t^,
as t→∞.
Then
for any >0,
K^β(E_t) ≫_
t^1-,
as t→∞.
Explicitly, what we show is that a bound of order t^ϵ for the L^p norm implies a lower bound of order t^1-ϵ' for K^β(E_t) for any ϵ'>8pϵ/(p-2) (see <ref>). In particular, a sufficiently strong subconvex bound for the sup norm of E_t of order t^ν with ν<1/8 is already sufficient to obtain a non-trivial lower bound for K^β(E_t). We note, however, that with the current best known bound for the sup norm of E_t we can only take ν>1/3, which is not sufficient to get an unconditional improvement here.
While (<ref>) is beyond the reach of current techniques, it follows from the sup-norm conjecture. We can show that the sup norm bounds do hold for a full-density set of spectral parameters; here we say that ⊂ is of full density to mean that |∩ [T,2T]|/T→ 1 as T→∞. This yields the following unconditional estimate.
Let β=i[a,b] be a compact segment of the imaginary line. For any m>17 there is a set =(m)⊆ of full density, such that for all
t ∈,
K^β(E_t) ≥t/(log t)^m.
The key insight in the proof of Theorem <ref> is to show that, rather than the Lindelöf hypothesis, one can make do with an estimate on the L^2 norm of the L-function associated to the Eisenstein series, which translates to a fourth moment estimate on the Riemann zeta function. For Maass forms, we can make the same simplification. However, while the L^2 estimate for the associated L-function is certainly weaker than the Lindelöf hypothesis and is known in many instances, it is still not known in the precise setup needed
in our context.
In fact, such estimates also appear in the study of restricted quantum unique ergodicity for Maass forms and would be of interest there (see <cit.>). We state the requisite L^2 estimate below as Conjecture <ref>. Assuming this conjecture holds, we can prove the analogues of Theorem <ref> and Theorem <ref> in the context of cusp forms:
Fix β=i[a,b] a compact segment of the imaginary line, and
assume that there is some p>2 such that for any even Hecke cusp form ϕ and any >0,
(∫_a^b |ϕ(iy)|^p y y)^1/p≪_ t_ϕ^.
Further, assume Conjecture <ref>.
Then for any >0,
K^β(ϕ) ≫_ t_ϕ^1-.
Once again, we can prove the sup-norm conjecture for ϕ for a set of forms of full density, as follows.
Fix β, a compact segment of the imaginary line i[a,b], and assume Conjecture <ref>. For any >0, there is a full density set =()⊆ such that
for all
j ∈,
K^β(ϕ_j) ≥ t_ϕ_j^1-.
In Ki's recent paper, he establishes the aforementioned conditional version of Theorem <ref>, conditional on the Lindelöf hypothesis for the appropriate L-function. Furthermore, he proves a completely unconditional statement for all cusp forms (or Eisenstein series) <cit.>, however this theorem applies only to certain geodesics with real part outside of exceptional intervals. Thus he is unable to say anything about the special case of the imaginary line.
Nodal domains: Studying sign changes is intrinsically interesting; moreover, this problem is related to the study of nodal domains. Let N(f) denote the number of nodal domains of f on Γ. Using a heuristic argument and the random wave model, Bogomolny and Schmit <cit.> conjectured the following asymptotic law for N(f) when f is an even Maass cusp form:
[Bogomolny-Schmit Conjecture]
Let {ϕ_n}_n∈ be the even Maass cusp forms ordered according to eigenvalue. Then
N(ϕ_n) ∼2/π (3√(3)-5)n
as n →∞.
By a general argument of Courant <cit.> we know that
N(ϕ) ≤ n_ϕ = 1/24 t_ϕ^2(1+o(1)),
however a lower bound is far more elusive. One approach is to split the nodal domains into inert and split domains (those either fixed or paired off by σ respectively). In this case, one can lower bound the number of inert domains intersecting iβ, N_in^β(ϕ) (and thus the total number of inert domains) by
K^β(ϕ) ≤ N_in^β(ϕ).
Unfortunately, we can bound the total number of inert domains intersecting the imaginary axis by N_in < t_ϕlog t_ϕ. Hence this approach has no hope of proving the Bogomolny-Schmit conjecture. That said, our Theorem <ref> gives an improved (conditional) lower bound, and Theorem <ref> gives the same lower bound for almost all cusp forms.
Note that for certain real Riemann surfaces, Zelditch showed logarithmic growth of the number of nodal domains, along a full-density sequence of eigenvalues, see <cit.>. Using the bound (<ref>), our Theorem <ref> (conditionally) produces nearly linear growth.
As stated, the above theorems concern the geodesic z=iy. In fact, the proof below works for any cuspidal geodesic x+iy with x= p/q a rational number. For this, we require estimates on the second moment of the series
∑_na_f(n)e(nx)/n^s
and a lower bound on the L^2-norm of the Eisenstein series/cusp form along β={x+iy: a<y<b}.
The lower bound is proved in <cit.> for Eisenstein series, and in <cit.> (although this is only proved for the lines x=0 and x=1/2) for cusp forms.
For the estimates on the twisted L series, we split into congruence classes modulo q using Dirichlet characters. This allows us to write the L function as
1/ϕ(q)∑_a q e_q(aq) ∑_χχ(a) ∑_na_f(n)χ(n)/n^s.
Now for cusp forms, bounding the inner twisted L-function requires us to extend Conjecture <ref> to these. For the Eisenstein series, this requires known estimates for the 4th moment of Dirichlet L-functions <cit.>.
§.§ Proof strategy
For both Eisenstein series and Maass forms, the proofs of Theorems <ref>, <ref>, <ref> and <ref> follow the same strategy. The starting point is <cit.>, wherein Ki conditionally proves the inequality (<ref>) for all cusp forms (the method also applies to Eisenstein series).
The first key idea in our proof is a modification of Ki's argument, allowing us to replace the full strength of the Lindelöf hypothesis with corresponding bounds on the second moment of the associated L-function. For cusp forms, this is Conjecture <ref>, while for the Eisenstein series, this boils down to fourth moment estimates on the Riemann zeta function which are well-known (see <ref>).
The second point (necessary only for the proofs of Theorems <ref> and <ref>) is that, while the sup norm bound remains open for both cusp forms and the Eisenstein series, it is known on average over the spectral parameter. For the Eisenstein series, the authors <cit.> proved a mean square bound which implies the sup norm bound for almost all spectral parameters (see <ref>). For cusp forms,
a simple argument using the pre-trace formula gives similar bounds on average
(see <ref>).
The proof of Theorem <ref> is identical, but (<ref>) allows one to avoid appealing to sup norm bounds on average. We focus on Theorems <ref> and <ref> below since the proofs include one extra step.
§.§ Notation
We use standard Vinogradov notation that f≪ g if there is a constant C>0 so that f(x)≤ C g(x) for all x.
§.§ Acknowledgements
We thank Valentin Blomer, Henryk Iwaniec, and Matt Young for many insightful discussions.
This paper was written while the second-named author was visiting Princeton University; he would like to express his gratitude for their hospitality.
§ PRELIMINARIES
§.§ Littlewood's sign changes lemma
A key analytic ingredient in Ki's proof is <cit.>, which is a variant of a theorem of Littlewood <cit.> controlling the number of zeros of a real-valued function. While Ki's formulation (as well as Littlewood's) discusses the number of zeros, we note that the argument actually controls the number of sign changes. For the sake of completeness, we include the proof of this result below.
Given a real valued function f on the interval I=[a,b] let M_p(f) denote the L^p(I) norm:
M_p(f) : = (1/|I|∫_If(y)^p y )^1/p.
The following is a slight variant of <cit.>.
Let f be a real valued function defined on an open interval containing I=[a,b].
Let N∈ be sufficiently large so that f is defined on [a,b+η] with η=|I|/N, and define
J(f, η) = 1/|I|∫_I ∫_0^η f(y+ v) v y.
Suppose that there is some c∈(0,1) such that M_1(f)≥ c M_2(f) and that
J(f, η) < c^3 η M_2(f)/16.
Then the number of sign changes, K^I(f), of f on I satisfies
K^I(f) ≥c^2 N/8.
By scaling and shifting f we may assume that I=[0,1] and η=1/N.
For any 1≤ m≤ N let I_m=[m-1/N,m/N), and define
J_m(f,η):=∫_I_m|∫_0^η f(y+t)dt|dy.
Let _1={1≤ m≤ N: f changes sign on [m-1/N,m/N+η]}
and let _2 be its complement. Since in any interval I_m with m∈_1 there is at least one sign change of f, we have that K^I(f)≥ |_1|=N-|_2|.
Let E=∪_m∈_2 I_m so that |E|=|_2|/N and the result will follow by showing that |E|<1-c^2/8. We assume now that |E|>1-c^2/8 and proceed by contradiction.
Let H:={y∈ I: |f(y)|≥c M_2(f)/2}; then the assumption M_1(f)≥ c M_2(f) implies that |H|≥ c^2/4.
Indeed, we can estimate
c M_2(f) ≤ M_1(f) ≤ ∫_H|f|+∫_H^c|f| ≤ |H|^1/2M_2(f)+(1-|H|)c M_2(f)/2.
Setting X=√(|H|), from the above display we see that X^2-(2/c)X+1≤ 0, hence X> c/2 and so |H|>c^2/4.
For any m∈_2 we have that
J_m^*(f,η):=∫_I_m∫_0^η |f(y+t)|dtdy=J_m(f,η). We can estimate on one hand
∑_m∈_2J_m^*(f,η)=∑_m∈_2J_m(f,η)≤ J(f,η)<c^3 η M_2(f)/16.
On the other hand, we have
∑_m∈_2J_m^*(f,η)=∫_0^η∫_E |f(y+t)|dydt=∫_0^η∫_E_t |f(y)|dydt,
where E_t is the shift of E by t. By our assumption |E_t|=|E|>1-c^2/8 and since
c^2/4≤ |H|=|H∩ E_t|+|H∩ E_t^c|<|H∩ E_t|+c^2/8,
using our bounds on |H|, we also have that |H∩ E_t|>c^2/8.
We can thus bound
∫_0^η∫_E_t |f(y)|dydt≥∫_0^η∫_E_t∩ H |f(y)|dydt> c^3 η M_2(f)/16,
in contradiction.
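To see the mechanism of the lemma in a concrete case, the following numerical sketch (not part of the proof, and with an arbitrarily chosen test function) applies it to f(y)=cos(2πNy) on I=[0,1] with η=1/N: the moments M_1, M_2 are of order one, while J(f,η) is tiny because each inner integral runs over a full period, so the hypotheses hold with c≈0.9 and the lemma certifies on the order of N sign changes.

```python
# Numerical illustration of the sign-change lemma on a toy test function
# (not part of the proof; the test function is an arbitrary choice).
import numpy as np

N, eta = 40, 1.0 / 40
f = lambda t: np.cos(2 * np.pi * N * t)

y = np.linspace(0.0, 1.0, 20001)               # grid on I = [0, 1]
v = np.linspace(0.0, eta, 401)                 # grid on [0, eta]

M1 = np.mean(np.abs(f(y)))                     # ~ 2/pi  ~ 0.64
M2 = np.sqrt(np.mean(f(y) ** 2))               # ~ 1/sqrt(2) ~ 0.71
c = M1 / M2                                    # ~ 0.90, so M_1 >= c M_2 holds

inner = eta * np.mean(f(y[:, None] + v[None, :]), axis=1)   # int_0^eta f(y+v) dv
J = np.mean(np.abs(inner))                     # tiny: full periods cancel

print(f"M1 = {M1:.3f}, M2 = {M2:.3f}, J = {J:.1e}, threshold c^3*eta*M2/16 = {c**3*eta*M2/16:.1e}")
print(f"lemma guarantees >= {c**2 * N / 8:.0f} sign changes; the actual number is {2 * N}")
```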
§.§ Preparation for Eisenstein series
We now collect a number of results regarding the Eisenstein series and its L-function that will be needed in our proof.
In previous work, the authors
proved the following mean square bounds on the Eisenstein series:
Given a compact region Ω⊆ there is c>0 such that for all z∈Ω
1/T∫_T^2TE(z, 1/2 + it ) ^2 t ≤ c log^4(T).
For any compact set Ω and any a>2 there is a set =_Ω,a⊆ satisfying that
* |∩ [T,2T]|=T(1+O(1/log(T)^2a-4))
* For any t∈ for any z∈Ω we have |E_t(z)| ≤log(t)^a.
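The passage from the mean square bound of the theorem to the full-density corollary is a Chebyshev-type counting argument: if the average of |E_t(z)|^2 over t∈[T,2T] is ≪ log^4 T, then the measure of t with |E_t(z)|>log(t)^a is ≪ T log(T)^{4-2a}, which is o(T) for a>2. The toy simulation below only illustrates this counting step, with synthetic values standing in for |E_t(z)|; nothing about the Eisenstein series beyond the assumed mean-square normalization is used.

```python
# Chebyshev-type counting behind the full-density corollary (toy model with
# synthetic data; only the mean-square normalization ~ log(T)^4 is imposed).
import numpy as np

rng = np.random.default_rng(0)
T, a = 10_000, 3
t = np.arange(T, 2 * T).astype(float)
x = np.abs(rng.standard_normal(T)) * np.log(T) ** 2   # mean square ~ log(T)^4

exceptional = np.mean(x > np.log(t) ** a)              # fraction with x_t > log(t)^a
chebyshev   = np.mean(x ** 2) / np.log(T) ** (2 * a)   # Chebyshev/Markov upper bound
print(f"exceptional fraction = {exceptional:.2e}  <=  Chebyshev bound = {chebyshev:.2e}")
```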
The next result is a lower bound on M_2(E_t) given in <cit.> that is needed in order to apply Lemma <ref>.
M_2(E_t) : = (1/b-a∫_a^b E_t(iy)^2 y )^1/2≫ (log T)^1/2
for any t∈ and any fixed segment (a,b).
The final result we need concerns the size of the L-function of the Eisenstein series on the critical line, which can be written explicitly in terms of the Riemann zeta function. Recall that the Lindelöf hypothesis predicts that, for any >0 and all t ∈, one has |ζ(1/2 + it)|= O((1+|t|)^). While the Lindelöf hypothesis is far beyond the reach of modern technology, there are some results concerning moment bounds on the zeta function which will suffice for our purposes. The following classical theorem was proven by Heath-Brown.
There is κ>0 such that for any T large one has
1/T∫_0^Tζ(1/2+ it)^4 t = P_4(log(T))+O(T^-κ),
with P_4(x) a polynomial of degree 4.
§.§ Preparation for Maass forms
We now collect the corresponding results we need to apply the argument for Maass forms.
The first result regards the sup norm of Maass forms. While we cannot prove the conjectured sup norm bounds for Maass forms, we can prove the following mean square bounds on them, which imply the sup norm bounds on average. While this result is not new (see e.g. <cit.>), we include a proof for the sake of completeness.
There is C>0 such that for all z in a compact set Ω⊂ we have the following bounds:
∑_t_ϕ_j≤ Tϕ_j(z)^2 ≤ C T^2
We recall some well known results on the pre-trace formula and refer to <cit.> for more details.
Given a point pair invariant k(z,w)=k(sinh^2(d(z,w))) with d(z,w) the hyperbolic distance and k∈ C^∞_c(ℝ^+), its spherical transform is defined as H(s)=∫_ℍ^2 k(z,i) Im(z)^s dμ(z). By <cit.> the point pair invariant can be recovered from H(s) as follows: Let h(r)=H(1/2+ir) and let g(u)=1/2π∫_-∞^∞ h(r)e^-irudr denote its Fourier transform; then, defining the auxiliary function Q∈ C^∞_c(ℝ^+) by g(u)=Q(sinh^2(u/2)), we have that k(t)=-1/π∫_t^∞dQ(r)/√(r-t). We also recall that
k(0)=1/2π∫_0^∞ h(r)rtanh(π r)dr (see <cit.>).
Given any such point pair invariant we have the pre-trace formula
∑_γ∈Γ k(z,γ z)=∑_j h(t_ϕ_j)|ϕ_j(z)|^2+1/2π∫_ℝ h(t)| E(z,1/2+it)|^2dt.
Now, fix a smooth even compactly supported function g(u)∈ C^∞_c((-1,1)) with Fourier transform h(r)≥ 0 for r∈ and h(r)≥1/2 for |r|≤ 1. For any T≥ 1 let g_T(u)=Tg(Tu) so that h_T(r)=h(r/T) and k_T(z,w) the corresponding point pair invariant.
Since g_T(u) is supported on (-1/T,1/T) the point pair invariant k_T(z,w) is supported on the set
{(z,w)| d(z,w)≤1/T} with d(z,w) the hyperbolic distance. Since Γ acts properly discontinuously on ℍ^2, for any fixed z there is δ=δ(z)>0 such that d(z,γ z)≥δ for any γ∈Γ with γ z≠ z. In particular, taking T_0≥sup_z∈Ω 1/δ(z), for any T≥ T_0 we have that
k_T(z,γ z)=0 if γ z≠ z. Hence for any T≥ T_0 we have
∑_j h(t_ϕ_j/T)|ϕ_j(z)|^2+1/2π∫_ℝ h(t/T)| E(z,1/2+it)|^2dt = |Γ_z|k_T(0).
Since h(t)≥ 0 is positive we can bound
∑_t_ϕ_j≤ T |ϕ_j(z)|^2 ≤ 2|Γ_z|k_T(0)
= |Γ_z|/π∫_0^∞ h(r/T)rtanh(π r)dr≪ T^2.
For any compact set Ω⊆^2 and any >0 there is a set =_Ω,⊆ satisfying that
* |∩ [T,2T]|=T(1+O(T^-))
* For any j∈ for any z∈Ω we have |ϕ_j(z)| ≤ t_ϕ_j^.
Once again, the lower bound we need for M_2(ϕ) is known, this time having been proved by Ghosh, Reznikov and Sarnak <cit.>.
M_2(ϕ) : = (1/b-a∫_a^b ϕ(iy)^2 y )^1/2≫ 1
for any segment (a,b).
The final ingredient we need is an estimate for the L-function associated to the cusp form ϕ, which we now describe.
Given a cusp form ϕ, we consider the Fourier expansion
∑_n ≠ 0ρ_ϕ(n) y^1/2 K_it_ϕ (2πn y) e(nx),
where K is the K-Bessel function. Furthermore, we let λ_ϕ(n) = ρ_ϕ(n)/ρ_ϕ(1) denote the eigenvalues of the Hecke operators.
With the Fourier coefficients in hand we define the associated L-function
L_ϕ(s) : = ∑_n=1^∞λ_ϕ(n)/n^s.
The following conjecture gives a mean square bound for this L-function.
Let ϕ be a Maass form with spectral parameter t_ϕ. There exists a δ>0 such that, for 2T≤ t_ϕ≤ T^1+δ and every >0, we have
1/T∫_T^2TL_ϕ(1/2+it)^2 t ≪ t_ϕ^,
as T→∞.
Such an estimate clearly follows from the Lindelöf hypothesis, and we note that for the range 2T > t_ϕ the estimate (<ref>) is known (see <cit.>). While it is possible that our range 2T≤ t_ϕ≤ T^1+δ is also within reach of current technology we were not able to establish it and thus leave it as an open conjecture.
§ PROOF OF THEOREM <REF>
We start by proving Theorem <ref>. The proof for cusp forms is more or less identical; we explain the major differences in <ref>. The proof for both is an application of Theorem <ref> for which we require a lower bound on M_2(·) (see Proposition <ref>) and an upper bound on J(·).
§.§ Upper bound on J
Rather than work with E(z,s) it is more convenient to work with
f_s(z) = 1/√(y)E(z,s),
Since y is bounded away from 0 and ∞ on β, any statement about the nodal lines of f_s holds equally well for E(·, s).
Thanks to Theorem <ref>, our goal is now to bound
J_f(s) = J(f_s,(a,b), η) := 1/b-a∫_a^b ∫_0^η f_s(i(y+v)) v y
on the line s = 1/2+it.
For any compact interval (a,b) ⊂_>0 and η = t^δ-1 for some δ >0, we have that there exists an >0 such that
J_f(s) ≪1/t^2 + t^-η + 1/t^1-δ_1
for any choice of δ_1 >0.
First, Fourier expand the Eisenstein series: <cit.>
E(z,s) = y^s + φ(s) y^1-s + 4 √(y)/θ(s)∑_n=1^∞η_s-1/2(n) K_s-1/2(2π n y) cos(2π n x)
with θ(s) = π^-sΓ(s) ζ(2s) and φ(s) = θ(1-s) θ(s)^-1, and where
η_t(n) = ∑_ab=n(a/b)^t.
With that, we define the L-function
L(t,r) = ∑_n ≥ 1η_it(n)/n^r
It's well-known that this L-function can be related to the Riemann zeta function:
L(t,r) = ∑_n ≥ 11/n^r+it∑_d| n d^2it
= ∑_d ≥ 1 d^2it∑_n ≡ 0 d 1/n^r+it
= ∑_d ≥ 1 d^2it∑_n ≥ 11/n^r+it d^r+it
=ζ(r+it)ζ(r-it).
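The identity above can be checked numerically in the region of absolute convergence (the text then uses it on the critical line by continuation). The following sketch, with arbitrarily chosen values of t and r and mpmath assumed available, compares the truncated Dirichlet series built from the divisor sums η_it(n) against the product of zeta values.

```python
# Numerical check of L(t, r) = zeta(r + it) * zeta(r - it) for Re(r) > 1
# (illustrative parameter values; the Dirichlet series is truncated at N).
import mpmath as mp

mp.mp.dps = 20
t, r, N = mp.mpf(3), mp.mpf("2.5"), 2000

def eta_it(n, t):
    """eta_{it}(n) = sum over factorizations n = a*b of (a/b)^{it}."""
    return mp.fsum((mp.mpf(a) / (n // a)) ** (1j * t)
                   for a in range(1, n + 1) if n % a == 0)

series  = mp.fsum(eta_it(n, t) / mp.mpf(n) ** r for n in range(1, N + 1))
product = mp.zeta(r + 1j * t) * mp.zeta(r - 1j * t)
print(series)    # truncated Dirichlet series
print(product)   # zeta(r+it)*zeta(r-it); the two agree up to the truncation error
```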
Now using <cit.> we can relate J_f(1/2+it)^2 to this L function. More specifically, if we set s=1/2+it we can write
J_f(1/2+it)^2 = (1/b-a∫_a^b ∫_0^η f_s(i(y+v)) v y)^2
≪(1/b-a∫_a^b (y+η)^s+1/2 - y^s+1/2/s+1/2 +φ(s)(y+η)^3/2-s - y^3/2-s/3/2-s y)^2
+ (1/b-a∫_a^b ∫_0^η[4 /θ(s)∑_n=1^∞η_it(n) K_it(2π n (y+v)) e(nx) ] v y)^2
≪1/t^2- + (t),
where
(t) = (1/b-a∫_a^b ∫_0^η[4 /θ(s)∑_n=1^∞η_it(n) K_it(2π n (y+v)) e(nx) ] v y)^2
From here we can expand the K-Bessel function <cit.>, that is
K_it(z) = (z/2)^it/4π i∫_(c)Γ(ν) Γ(ν-it) (z/2)^-2νν,
and set c=1/4, yielding
(t) ≪∫_a^b ∫_0^η∫_(1/4)[ (y+v)^-2ν+it/θ(1/2 + it)Γ(ν) Γ(ν-it) ∑_n=1^∞η_it(n) (π n)^-2ν+it] ν v^2 y
≪∫_a^b ∫_0^η∫_(1/4)[ (y+v)^-2ν/θ(1/2 + it)Γ(ν+it/2) Γ(ν-it/2) ∑_n=1^∞η_it(n) (π n)^-2ν] ν v^2 y
≪∫_a^b ∫_(1/2)I(η,ν,y) γ(ν,t) L(t,ν)^2 ν y
where γ(ν,t) = Γ(ν+it/2) Γ(ν- it/2)/θ(1/2 + it)π^-ν and I(η,ν,y) : = ∫_0^η (y+v)^-ν v.
Write ν = 1/2+ ir, then we can bound the integral, I by
I(η,ν,y) = ∫_0^η (y+v)^-1/2 e^ -i log(y+v)r v
≪min(η, 1/r)
(note here that we will take η∼ t^δ-1 and (a,b) is fixed).
Via Stirling's formula the γ-factor can be bounded by
γ(ν,t) = Γ(1/2 + ir +it/2) Γ(1/2 + ir -it/2)/θ(1/2 + it)π^-1-ir
≪ e^-π (t+r)/4 e^-πr-t/4/e^-π t/2ζ(1+2it)1/((1+r-t)(r+t))^1/4.
Note we can bound ζ(1+2it) ≫1/log(t)^7 using standard techniques (see <cit.>).
First consider the region (ν) > t log t:
∫_a^b ∫_tlog tI(1/2+r,t,y) γ(1/2+r,t) L_0(t,1/2+r)^2 ν y≪
t^∫_tlog te^-π (r-t)/r^3L(t,1/2+r)^2 ν .
Using (<ref>) and the convexity bound on the zeta function ζ(1/2 + it) ≪ t^1/4 we then have
∫_a^b ∫_tlog tI(1/2+r,t,y) γ(1/2+r,t) L(t,1/2+r)^2 ν y≪
t^∫_tlog te^-π r/2/r^2ν≪1/t^2 .
This leaves us with the contribution for (ν) < tlog t which we can bound by
(t) ≪ t^η^2 ∫_0^1/η1/(r^2-t^2)^1/2L(t,1/2+ir)^2 r
+
t^-1/2η^2 ∫_1/η^t (1+r-t)^-1/2L(t,1/2+ir)^2 r
+
1/t^2+1/2-δ_1∫_t ^tlog t (1+r-t)^-1/2L(t,1/2+ir)^2 r.
The first term can be bounded using the triangle inequality, and for the latter terms we can shift r by t
(t) ≪ t^-1η^2 ∫_0^1/ηL(t,1/2+ir)^2 r+
t^-1/2η^2 ∫_0^t- 1/η (1+ r)^-1/2L(t,1/2+i(t-r))^2 r +
+ 1/t^2+1/2-δ_1∫_0 ^tlog t-t (1+r)^-1/2L(t,1/2+i(r+t))^2 r,
Both terms inside the brackets can be bounded similarly; since the second term integrates over a slightly longer segment, we focus on that one.
Dyadically decomposing the integral yields
∫_0 ^tlog t-t (1+r)^-1/2L(t,1/2+i(r+t))^2 r
≪∫_0^1L(t,1/2+i(r+t))^2 r
+∑_k=0^log(t^1+)∫_ 2^k ^2^k+1 r^-1/2L(t,1/2+i(r+t))^2 r.
Now we can use (<ref>) to relate this to ζ:
≪∫_0^1ζ(1/2+ir) ζ(1/2+ir+2it)^2 r
+∑_k=0^log(t^1+)1/2^k/2∫_ 2^k ^2^k+1ζ(1/2+ir) ζ(1/2+ir+2it)^2 r
≪(∫_0^1ζ(1/2+ir)^4 r ∫_0^1ζ(1/2+ir+2it)^4 r)^1/2
+∑_k=0^log(t^1+)1/2^k/2(
∫_ 2^k ^2^k+1ζ(1/2+ir)^4 r∫_ 2^k ^2^k+1ζ(1/2+ir+2it)^4 r
)^1/2
Finally, applying the fourth moment bound (<ref>), we conclude
≪(∫_0^3tζ(1/2+ir)^4 r)^1/2
+t^∑_k=0^log(t^1+)(
∫_ 0 ^tlog tζ(1/2+ir)^4 r
)^1/2≪ t^1/2+.
Rather than work with E_t(z) it is more convenient to work with
f_t(z) = 1/√(y)E_t(z),
Since y is bounded away from 0 and ∞ on β, any statement about zeros or nodal lines of f_t holds equally well for E_t.
Thanks to Theorem <ref>, our goal is now to bound
J(f_t,η) := 1/b-a∫_a^b ∫_0^η f_t(i(y+v)) v y.
Fix an interval (a,b) ⊂_>0. For all 2/t<η <1 and all sufficiently large t≥ 10,
J(f_t,η) ≪η ( log(t)^9/√(η t) +log(t)^7/t^κ/2)+ (log(t))^9/t
First, Fourier expand the Eisenstein series: <cit.>
E(z,s) = y^s + φ(s) y^1-s + 4 √(y)/θ(s)∑_n=1^∞η_s-1/2(n) K_s-1/2(2π n y) cos(2π n x)
with θ(s) = π^-sΓ(s) ζ(2s) and φ(s) = θ(1-s) θ(s)^-1, and where
η_t(n) = ∑_ab=n(a/b)^t.
With that, we define the L-function
L(t,ν) = ∑_n ≥ 1η_it(n)/n^ν
It's well-known that this L-function can be related to the Riemann zeta function:
L(t,ν) = ∑_n ≥ 11/n^ν+it∑_d| n d^2it
= ∑_d ≥ 1 d^2it∑_n ≡ 0 d 1/n^ν+it
= ∑_d ≥ 1 d^2it∑_n ≥ 11/n^ν+it d^ν+it
=ζ(ν+it)ζ(ν-it).
Following <cit.> we can relate J(f_t,η)^2 to this L-function. Specifically, we can write
J(f_t,η)^2 = (1/b-a∫_a^b ∫_0^η f_t(i(y+v)) v y)^2
≪(1/b-a∫_a^b (y+η)^1+it - y^1+it/1+it +φ(1/2+it)(y+η)^1-it - y^1-it/1-it y)^2
+ (1/b-a∫_a^b ∫_0^η[4 /θ(s)∑_n=1^∞η_it(n) K_it(2π n (y+v)) e(nx) ] v y)^2
≪1/t^2 + (t),
where
(t) = (1/b-a∫_a^b ∫_0^η[4 /θ(s)∑_n=1^∞η_it(n) K_it(2π n (y+v)) e(nx) ] v y)^2.
From here we can expand the K-Bessel function <cit.>, that is,
K_it(z) = (z/2)^it/4π i∫_(c)Γ(ν) Γ(ν-it) (z/2)^-2νν,
and set c=1/4, yielding
(t) ≪∫_a^b ∫_0^η∫_(1/4)[ (y+v)^-2ν+it/θ(1/2 + it)Γ(ν) Γ(ν-it) ∑_n=1^∞η_it(n) (π n)^-2ν+it] ν v^2 y
= ∫_a^b ∫_0^η∫_(1/4)[ (y+v)^-2ν/θ(1/2 + it)Γ(ν+it/2) Γ(ν-it/2) ∑_n=1^∞η_it(n) (π n)^-2ν] ν v^2 y
≪∫_a^b ∫_(1/2)I(η,ν,y) γ(ν,t) L(t,ν)^2 ν y
where γ(ν,t) = Γ(ν+it/2) Γ(ν- it/2)/θ(1/2 + it)π^-ν and I(η,y;ν) : = ∫_0^η (y+v)^-ν v.
We now estimate the inner integral.
Write ν = 1/2+ ir, and using the invariance under r↦ -r it is enough to estimate the integral
∫_0^∞I(η,y,1/2+ir) γ(1/2+ir,t) L(t,1/2+ir)^2 r.
Noting that η<1 and that the interval (a,b) is fixed, we can bound the integral, I by
I(η,y; 12+ir) = ∫_0^η (y+v)^-1/2 e^ -i log(y+v)r v
≪min(η, 1/|r|).
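The two regimes of this estimate are easy to check numerically. The sketch below (illustrative choices of y, η and r, with mpmath assumed available) evaluates the oscillatory integral directly and compares it with min(η,1/|r|); the agreement is up to a bounded constant factor, which is all that the ≪ asserts.

```python
# Numerical check of |I(eta, y; 1/2 + ir)| << min(eta, 1/|r|) up to a constant
# (illustrative choices of y, eta and r).
import mpmath as mp

mp.mp.dps = 15
y, eta = mp.mpf(1), mp.mpf("0.05")

def I(r):
    # \int_0^eta (y+v)^{-1/2} exp(-i r log(y+v)) dv
    return mp.quad(lambda v: (y + v) ** mp.mpf("-0.5")
                   * mp.exp(-1j * r * mp.log(y + v)), [0, eta])

for r in [1, 10, 100, 1000]:
    bound = min(eta, mp.mpf(1) / r)
    print(f"r = {r:4d}   |I| = {float(abs(I(r))):.2e}   min(eta, 1/r) = {float(bound):.2e}")
```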
Using Stirling's formula, the γ-factor can be bounded by
γ(1/2+ir,t) = Γ(1/2 + ir +it/2) Γ(1/2 + ir -it/2)/θ(1/2 + it)π^-1-ir
≪ e^-π |t+r|/4 e^-πr-t/4/e^-π t/2ζ(1+2it)1/((1+r-t)(r+t))^1/4
≪ (log(t))^7 e^-π |t+r|/4 e^-πr-t/4e^π t/2/ ((1+r-t)(1+|r+t|))^1/4
,
where we used the bound ζ(1+2it) ≫1/log(t)^7 (see <cit.>).
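The exponential decay in this bound comes from the decay of the Gamma function on vertical lines; for instance, the classical identity |Γ(1/2+ix)|^2 = π/cosh(πx) gives |Γ(1/2+ix)| ~ √(2π) e^{-π|x|/2}. A quick numerical check (illustrative only, mpmath assumed available):

```python
# Decay of the Gamma factor on a vertical line: |Gamma(1/2+ix)|^2 = pi/cosh(pi*x).
import mpmath as mp

mp.mp.dps = 25
for x in [1, 5, 20, 50]:
    lhs   = abs(mp.gamma(mp.mpf("0.5") + 1j * x))
    exact = mp.sqrt(mp.pi / mp.cosh(mp.pi * x))
    stirl = mp.sqrt(2 * mp.pi) * mp.exp(-mp.pi * x / 2)   # Stirling-type decay
    print(f"x={x:3d}  |Gamma(1/2+ix)| = {mp.nstr(lhs, 6)}  "
          f"exact = {mp.nstr(exact, 6)}  ~ {mp.nstr(stirl, 6)}")
```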
First when r≥ t we can bound
| I(η,y; 12+ir) γ(1/2+ir,t) |^2 ≪log(t)^14e^-π (r-t)/((1+(r-t))(1+(r+t))^1/2 r^2,
and using the convexity bound ζ(1/2 + it) ≪ t^1/4 for the zeta function we can bound
|L(t,1/2+ir)|^2=|ζ(1/2+i(t+r))ζ(1/2+i(r-t))|^2≪ ((1+(r-t))(1+(r+t))^1/2,
hence in this range
| I(η,y; 12+ir) γ(1/2+ir,t) |^2 |L(t,1/2+ir)|^2 ≪ t^-2log(t)^14 e^-π (r-t),
and we can bound
∫_t^∞I(η,y,1/2+ir) γ(1/2+ir,t) L(t,1/2+ir)^2 r ≪log(t)^14/t^2.
Next for the range r≤1/η≤t/2 we can bound
| I(η,y; 12+ir) γ(1/2+ir,t) |^2 ≪η^2 log(t)^14/t,
to get
∫_0^1/ηI(η,y,1/2+ir) γ(1/2+ir,t) L(t,1/2+ir)^2 r ≪η^2 log(t)^14/t∫_0^1/η |L(t,1/2+ir)|^2 r.
Now use Cauchy-Schwarz for the inner integral together with (<ref>) to bound
∫_0^1/η |L(t,1/2+ir)|^2 r ≪ (∫_0^1/η| ζ(1/2+i(t-r))|^4 r ∫_0^1/η|ζ(1/2+i(t+r))|^4 r)^1/2
≪∫_t-1/η^t+1/η |ζ(1/2+ir)|^4 r
≪ (t+1/η)P_4(log(t+1/η))-(t-1/η)P_4(log(t-1/η))+O(t^1-κ)
≪log(t)^4/η+t^1-κ
to conclude that
∫_0^1/ηI(η,y,1/2+ir) γ(1/2+ir,t) L(t,1/2+ir)^2 r≪η^2(log(t)^18/η t +log(t)^14/t^κ).
Finally, in the range 1/η≤ r≤ t we first bound
| I(η,y; 12+ir) γ(1/2+ir,t) |^2 ≪ (log(t))^14/r^2((1+(t-r))(1+t+r))^1/2,
hence
∫_1/η^t I(η,y,1/2+ir) γ(1/2+ir,t) L(t,1/2+ir)^2 r ≪ (log(t))^14∫_1/η^t |L(t,1/2+ir)|^2/r^2 ((1+(t-r))(1+t+r)))^1/2 r
≪ (log(t))^14/√(t)∫_0^t-1/η|L(t,1/2+i(t-r))|^2/ (t-r)^2(1+r)^1/2 r.
Split the integral into dyadic intervals to estimate
∫_0^t-1/η|L(t,1/2+i(t-r))|^2/ (t-r)^2(1+r)^1/2 r ≪ t^-2∫_0^1 |L(t,1/2+i(t-r))|^2 r
+∑_k=1^log(t-1/η)1/2^k/2(t-2^k)^2∫_2^k-1^2^k|L(t,1/2+i(t-r))|^2 r
We can bound the first integral by
∫_0^1 |L(t,1/2+i(t-r))|^2 r =∫_0^1 |ζ(1/2+ir)|^2|ζ(1/2+i(2t-r)|^2 r
≤( ∫_0^1 |ζ(1/2+ir)|^4 r∫_2t-1^2t |ζ(1/2+ir|^4 r )^1/2
≪ t^1/2 (log t)^2
and for each dyadic interval with A=2^k≤ t we have
∫_A^2A|L(t,1/2+i(t-r))|^2 r ≪∫_A^2A |ζ(1/2+i(2t-r)|^2|ζ(1/2+ir)|^2 r
≪(∫_2t-2A^2t-A |ζ(1/2+ir|^4 r ∫_A^2A|ζ(1/2+ir)|^4 r)^1/2
≪ (Alog^4(t)+t^1-κ)^1/2(Alog^4(t))^1/2≪log^4(t)(A+t^1-κ/2A^1/2).
Hence
∫_0^t-1/η|L(t,1/2+i(t-r))|^2/ (t-r)^2(1+r)^1/2 r ≪ t^-3/2(log t)^2+(log(t))^4 ∑_k=1^log(t-1/η)2^k/2 +t^1-κ/2/(t-2^k)^2
≪ t^-3/2+ (log t)^4∫_1^log(t-1/η)2^u/2 +t^1-κ/2/(t-2^u)^2du
≪ (log t)^4/ t^3/2,
and
∫_1/η^t I(η,y,1/2+ir) γ(1/2+ir,t) L(t,1/2+ir)^2 r ≪ (log(t))^18/t^2.
Combining the three terms and integrating over the outer interval (a,b) we get that
J(f_t,η)^2≪η^2(log(t)^18/η t +log(t)^14/t^κ)+ (log(t))^18/t^2,
and taking a square root concludes the proof.
§.§ Proof of Theorem <ref>
First, by Proposition <ref>, there is a constant C_1 such that
M_2(f_t)≥ C_1
uniformly for all t. Let ω=1/(b-a), N=t/(log t)^2m, and η=1/N.
By Proposition <ref>, there is a constant C_2 so that
J(f_t,η) ≤ C_2 η( log(t)^9-m +log(t)^7/t^κ/2)
Let a>2. Then by Theorem <ref>, there exists a set _β,a⊆ with
|_β, a∩ [T,2T]|=T(1+O(1/log(T)^2a-4)) such that for any t∈_β,a we have that
sup_y∈ [a,b]f_t(iy)≤log(t)^a.
Hence for any t∈_β, a we can bound
M_1(f_t)≥M_2(f_t)/(log t)^a.
Let c=(log t)^-a then M_1(f_t)≥ c M_2(f_t).
Assuming m>9+3a, for all sufficiently large t we can bound
J(f_t,η) ≤c^3/16η M_2(f_t),
and hence by Theorem <ref> we can conclude that
N_β(f_t)≥t/(log t)^2(m+a).
And so the same statement holds for E_t(iy).
§.§ Proof of Theorem <ref>
Assume we have an L^p bound
M_p(f_t)≪_ϵ t^ϵ and use L^p interpolation to bound
M_2(f_t)^2≤ M_1(f_t)^p-2/p-1M_p(f_t)^p/p-1.
This combined with the lower bound M_2(f_t)≫ 1 implies that there is a constant C_1=C_1(ϵ) so that
M_1(f_t)≥ C_1 M_2(f_t) t^-ϵ p/p-2.
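The interpolation step above is the log-convexity of L^q norms (Hölder's inequality applied to |f|^2 = |f|^{2(1-θ)}·|f|^{2θ} with θ = p/(2(p-1))). A quick numerical sanity check of the inequality M_2(f)^2 ≤ M_1(f)^{(p-2)/(p-1)} M_p(f)^{p/(p-1)} on an arbitrary test function (illustrative only):

```python
# Numerical sanity check of the interpolation (log-convexity) inequality
#   M_2(f)^2 <= M_1(f)^((p-2)/(p-1)) * M_p(f)^(p/(p-1))
# on an arbitrary test function (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
p, a, b = 4.0, 1.0, 2.0
y = np.linspace(a, b, 100001)
f = np.abs(np.sin(37 * y) + 0.3 * rng.standard_normal(y.size))   # stand-in for |f_t(iy)|

def M(q):            # normalized L^q norm (average over [a, b])
    return np.mean(f ** q) ** (1.0 / q)

lhs = M(2) ** 2
rhs = M(1) ** ((p - 2) / (p - 1)) * M(p) ** (p / (p - 1))
print(f"M2^2 = {lhs:.4f}  <=  M1^((p-2)/(p-1)) * Mp^(p/(p-1)) = {rhs:.4f}")
```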
Let c=C_1t^- pϵ/p-2 and η=1/N=t^δ-1,
so that c^3η M_2(f_t)≥ C_2 η t^-3 pϵ/p-2.
From the upper bound
J(f_t,η)≤ C_5ηlog(t)^9t^-δ/2,
we see that J(f_t,η)≤c^3/16η M_2(f_t) as long as δ>6 pϵ/p-2 in which case Theorem <ref> implies that
K^β(f_t)≥ C_3 t^1-δ-κϵϵ/p-2
for an appropriate constant C_3>0.
In particular, we see that for any κ>8, for all sufficiently large t we have that
K^β(f_t)≥ t^1-κϵϵ/p-2,
from which the claim follows.
First, by Proposition <ref> we have that, for any s=1/2+it we have
∫_a^b f(y)^2 y ≫ 1.
Furthermore, by Proposition <ref> we have
J_f (s) ≪ t^δ_1-1 + t^-η.
If we set η = t^1-δ for some δ, we can ensure that, for any c ∈ (0,1]
J(f_s,η) < 1/16 c^3 η≪ M_2(f_s).
By Theorem <ref>, there exists a set _T,β⊂[T,2T] such that for all t∈_T,β and any >0, one has
sup_y∈ [a,b]f_s(iy)≪ t^.
Hence, by L^p interpolation, there exists a δ >0 such that
M_1(f_s) ≥ t^-δ M_2(f_s)
Thus, we can apply Theorem <ref> to conclude
N(f_s) ≫ t^1-δ.
Hence the same statement holds for E(·, s) on the line (z)=0.
§ PROOF OF THEOREM <REF>
The proof for cusp forms follows along nearly identical lines. Once again, to apply Theorem <ref> we require a lower bound on M_2(·) (see Proposition <ref>) and an upper bound on J(·).
§.§ Upper bound on J
For the bound on J(ϕ,η) we again renormalize
f(z) = 1/√(y)ϕ(z).
Hence our goal is to bound
J(f,η) := 1/b-a∫_a^b ∫_0^η f(i(y+v)) v y,
as follows.
For any compact interval (a,b) ⊂_>0 and η∈ (2/t,1), for any >0 we have that
J(f,η) ≪ηt_ϕ^/√(η t_ϕ)+t^-1 .
As for the Eisenstein series, we can again Fourier expand the Maass form,
f(iy)= ∑_n ≠ 0ρ_ϕ(n) K_it_ϕ (2πn y),
with ρ_ϕ(n)=ρ_ϕ(1)λ_ϕ(n),
and use the integral equation of the K-Bessel function to relate J(f,η) to the L-function (<ref>).
That is, we have that
J(f,η)^2 = (1/b-a∫_a^b ∫_0^η[ρ_ϕ(1)∑_n=1^∞λ_ϕ(n) K_it(2π n (y+v)) e(nx) ] v y)^2
≪∫_a^b ∫_(1/2)I(η,y; ν) γ(ν,t) L_ϕ(ν)^2 ν y
where γ(ν,t) = ρ_ϕ(1) Γ(ν+it/2) Γ(ν- it/2) π^-ν and I(η,y; ν) : = ∫_0^η (y+v)^-ν v.
We have the bound (<ref>) for I(η,y; 1/2+ir) as before and
using the bound ρ_ϕ(1) ≪ t_ϕ^ e^π t_ϕ/2 <cit.>, and Stirling's formula, we can similarly bound
γ(1/2+ir,t) ≪ t_ϕ^ e^-π |t_ϕ+r|/4 e^-πr-t_ϕ/4e^π t/2/ ((1+r-t_ϕ)(1+|r+t^ϕ|))^1/4.
We can again reduce the inner integral to the range 0<r<∞ and split it into three ranges
∫_0^∞I(η,y; 1/2+ir) γ(1/2+ir,t) L_ϕ(1/2+ir)^2 r= _0^1/η+_1/η^t_ϕ+_t_ϕ^∞.
For the last range, r≥ t_ϕ, we can use the convexity bound |L_ϕ(1/2+ir)|≪ (1+r+t_ϕ)^1/4+ (see, e.g., <cit.>) to estimate
_t_ϕ^∞ ≪ t_ϕ^-2∫_t_ϕ^∞ e^-π (r-t_ϕ) (1+r+t_ϕ)^1/2+/ ((1+r-t_ϕ)(1+|r+t^ϕ|))^1/2 r
≪ t_ϕ^-2∫_0^∞ e^-π r (1+r+2t_ϕ)^1/2+/(1+ r)(r+2t^ϕ|))^1/2 r≪ t_ϕ^-2.
In the first range when r≤ 1/η, we have that
_0^1/η ≪ t_ϕ^-1η^2 ∫_0^1/η |L_ϕ(1/2+ir)|^2 r
≪η^2 t_ϕ^/η t_ϕ.
Finally, for 1/η<r<t_ϕ, split into dyadic intervals and apply Conjecture <ref>:
_1/η^t_ϕ ≪ t_ϕ^∫_1/η^t_ϕ| L_ϕ(1/2+ir)|^2/((r+t_ϕ)(t_ϕ-r) )^1/2 r^2 r
≪ t_ϕ^∑_k=log(1/η)^log(t_ϕ)1/((2^k+t_ϕ)(t_ϕ-2^k) )^1/2 2^2k∫_2^k-1^2^k| L_ϕ(1/2+ir)|^2 r
≪ t_ϕ^2∑_k=log(1/η)^log(t_ϕ)1/((2^k+t_ϕ)(t_ϕ-2^k) )^1/2 2^k
≪ t_ϕ^2∫_log(1/η)^log(t_ϕ)1/((2^u+t_ϕ)(t_ϕ-2^u) )^1/2 2^udu
≪ t_ϕ^2∫_1/η^t_ϕ1/((v+t_ϕ)(t_ϕ-v) )^1/2 v^2dv
≪ t_ϕ^2-2∫_1/η t^11/ ((1+v)(1-v) )^1/2 v^2 dv≪ t_ϕ^2-2.
Integrating over (a,b) we see that
J(f,η)^2≪η^2 t_ϕ^/η t_ϕ+t_ϕ^ -2,
and taking square roots concludes the proof.
The proof for cusp forms follows nearly identical lines. Once again, to apply Theorem <ref> we require a lower bound on M_2(·) and an upper bound on J(·).
§.§ Lower bound
The lower bound was originally proven by Ghosh, Reznikov and Sarnak <cit.>.
M_2(ϕ) : = (1/b-a∫_a^b ϕ(iy)^2 y )^1/2≫ 1
for any segment (a,b).
§.§ Upper bound on J
For the bound on J(ϕ) we again renormalize
f(z) = 1/√(y)ϕ(z).
Hence our goal is to bound
J(f,η) := 1/b-a∫_a^b ∫_0^η f(i(y+v)) v y.
for which we have the following bound.
For any compact interval (a,b) ⊂_>0 and η = t^δ-1 for some δ >0, we have that there exists an >0 such that
J(f,η) ≪1/t^2 + t^-η + 1/t^1-δ_1
for any choice of δ_1 >0.
As for the Eisenstein series, we can again Fourier expand the Maass form, and use the integral equation of the K-Bessel function to relate J(f,η) to the L function (<ref>)
∑_n ≠ 0ρ_ϕ(n) y^1/2 K_it_ϕ (2πn y),
J(f,η)^2 = (1/b-a∫_a^b ∫_0^η[ρ_ϕ(1)∑_n=1^∞λ_ϕ(n) K_it(2π n (y+v)) e(nx) ] v y)^2
≪∫_a^b ∫_(1/2)I(η,ν,y) γ(ν,t) L_ϕ(ν)^2 ν y
where γ(ν,t) = ρ_ϕ(1) Γ(ν+it/2) Γ(ν- it/2) π^-ν and I(η,ν,y) : = ∫_0^η (y+v)^-ν v.
From here, (<ref>) and (<ref>) both also hold for this I function and γ factor (for which we use that ρ_ϕ(1) ≪ t_ϕ^ e^π t_ϕ/2). From here we again use Cauchy-Schwarz and the second moment bound (Lemma <ref>) for the L-function to conclude the proof.
As above, let f_j(z)=y^-1/2ϕ_j(z) and let t_j=t_ϕ_j.
Fix >0 and let _={j∈: sup_z∈β|f_j(z)| ≤ t_j^/16}. By Corollary <ref>, we have that
_ is of full density.
Now for any j∈_, let c_j=t_j^-/16, let N_j=t_j^1-/2, and fix ω=1/|β| and η_j=N_j^-1, so that η_j=N_j/ω (b-a). Since |f_j(iy)|≤ c_j^-1 for any y∈ [a,b], we have that
M_1(f_j)≥ c_jM_2(f_j). By Proposition <ref> there is an absolute constant C_1 so that
M_2(f_j)≥ C_1. Let '</16; then by Proposition <ref>, there is a constant C_2>0 so that
J(f_j,η_j) ≤ C_2 η_jt_j^'-/4 ≤ c_j^3/16η_j M_2(f_j),
when t_j is sufficiently large. Hence by Theorem <ref>, we have that
N_β(f_j)≥c_j^2N_j/10(ω+2)≥ t_j^1-,
as claimed.
http://arxiv.org/abs/2409.17588v1 | 20240926070714 | DualCoTs: Dual Chain-of-Thoughts Prompting for Sentiment Lexicon Expansion of Idioms | [
"Fuqiang Niu",
"Minghuan Tan",
"Bowen Zhang",
"Min Yang",
"Ruifeng Xu"
] | cs.CL | [
"cs.CL"
] |
Improving Fast Adversarial Training via Self-Knowledge Guidance
Chengze Jiang, Junkai Wang, Minjing Dong, Jie Gui, Senior Member, IEEE, Xinli Shi, Senior Member, IEEE, Yuan Cao, Yuan Yan Tang, Life Fellow, IEEE, James Tin-Yau Kwok, Fellow, IEEE
C. Jiang, J. Wang, and X. Shi are with the School of Cyber Science and Engineering, Southeast University, Nanjing 210000, China (e-mail: [email protected]; [email protected]; [email protected]).
M. Dong is with the Department of Computer Science, City University of Hong Kong. (e-mail: [email protected]).
J. Gui is with the School of Cyber Science and Engineering, Southeast University and with Purple Mountain Laboratories, Nanjing 210000, China (e-mail: [email protected]).
Y. Cao is with the School of Information Science and Engineering, Ocean University of China, Qingdao 266100, China, (e-mail: [email protected]).
Y. Tang is with the Department of Computer and Information Science, University of Macau, Macau 999078, China (e-mail: [email protected]).
J. T. -Y. Kwok is with the Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong, China (e-mail: [email protected]).
September 28, 2024
===============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
Idioms represent a ubiquitous vehicle for conveying sentiments in the realm of everyday discourse, rendering the nuanced analysis of idiom sentiment crucial for a comprehensive understanding of emotional expression within real-world texts. Nevertheless, the existing corpora dedicated to idiom sentiment analysis considerably limit research in text sentiment analysis. In this paper, we propose an innovative approach to automatically expand the sentiment lexicon for idioms, leveraging the capabilities of large language models through the application of Chain-of-Thought prompting. To demonstrate the effectiveness of this approach, we integrate multiple existing resources and construct an emotional idiom lexicon expansion dataset (called EmoIdiomE), which encompasses a comprehensive repository of Chinese and English idioms. We then design the Dual Chain-of-Thoughts (DualCoTs) method, which combines insights from linguistics and psycholinguistics, to demonstrate the effectiveness of using large models to automatically expand the sentiment lexicon for idioms. Experiments show that DualCoTs is effective in idiom sentiment lexicon expansion in both Chinese and English. For reproducibility, we will release the data and code upon acceptance.
§ INTRODUCTION
Idioms are commonly used to imply a specific evaluation or emotional stance towards the things they represent <cit.>.
Published sentiment lexicons suggest that idioms typically carry sentiment. In the case of English idioms, SLIDE <cit.> analyzed a lexicon corpus of 5,000 frequently used idioms selected from Wiktionary, where over 40% of the entries were labeled as sentiment-bearing. For Chengyu (成语), a type of frequently used Chinese idioms, out of the 40,000 commonly recognized items, 13,000 were annotated with at least one emotion in the Chinese Affective Lexicon Ontology (CALO) <cit.>.
Previous research has examined how idioms can impact sentiment analysis.
In English, <cit.> studied the role of idioms in automated sentiment analysis approaches and demonstrated that including idioms as features improves traditional sentiment analysis results. <cit.> have described an automated method to enhance sentiment analysis with idiom-based features.
As for Chinese Chengyu, <cit.> proposed an unsupervised framework that utilizes Chinese idiom resources to develop a domain-independent sentiment classifier. This classifier was subsequently enhanced for domain-specific sentiment classification using labeled data.
However, the sentiment lexicons for idioms have limited entries and may become outdated due to infrequent updates. This means that many recognized idioms and newly coined idioms are not included, and their sentiments are not annotated. Consequently, it is important to investigate how we can automatically predict the sentiment labels of idioms in order to enhance existing idiom sentiment lexicons.
With the advancement of large language models (LLMs) such as Bloom <cit.>, GPT4 <cit.>, ChatGLM <cit.>, and Llama <cit.>, their capabilities in reasoning, planning, and interaction are becoming increasingly strong.
An example of this is seen in ChatGPT <cit.>, which outperforms human crowd-workers in various annotation tasks such as relevance, stance, topics, and frames detection.
In large language models, the ability to reason naturally emerges through a method called Chain-of-Thought (CoT).
CoT prompts LLMs to generate a coherent series of intermediate reasoning steps leading to the final answer to the original question.
CoT-based techniques have been successfully applied to sentiment analysis. <cit.> introduces the Three-hop Reasoning (THOR) CoT framework, which mimics the human-like reasoning process for implicit sentiment analysis (ISA).
THOR achieves impressive performance improvements in both supervised and zero-shot setups.
Considering the significance of sentiment lexicon expansion for idioms in the fields of sentiment analysis and opinion mining, we propose using existing resources and more refined CoTs to effectively elicit LLMs for expanding sentiment lexicons for idioms.
In the fields of linguistics and psycholinguistics, the dual idiom representation model suggests that idiomatic expressions can be viewed as both “long words” and compositional phrases <cit.>.
This model has prompted the exploration of two primary approaches for representing and processing idioms: the compositional approach <cit.> and the noncompositional approach.
The compositional approach places its emphasis on the individual constituents comprising an idiom, dissecting it into its constituent words, and treating it as a sequence of interconnected lexical elements. In contrast, the noncompositional approach perceives idiomatic expressions as holistic entities, wherein the meaning emerges from the idiom as an integrated whole, transcending the sum of its individual parts.
Building upon the dual idiom representation model, we introduce the Dual Chain-of-Thoughts (DualCoTs) prompting method.
DualCoTs consist of two distinct chains: the literal chain and the etymological chain.
The literal chain aims to simulate a literal understanding of idiomatic expressions by treating them as compositional phrases. It focuses on the surface forms and analyzes the constituent words of the idiom.
Conversely, the “etymological” chain explicitly incorporates directives to delve into the origin and historical context of idiomatic expressions.
By doing so, LLMs are encouraged to perceive and interpret idiomatic expressions as integrated units, thereby capturing the essence of their idiomatic nature.
These two chains, the literal and etymological, are combined to predict the sentiment associated with a particular idiom. By incorporating both approaches, the DualCoTs method provides a comprehensive and nuanced analysis of idiomatic expressions.
To facilitate the verification of our proposed methods and to benchmark sentiment lexicon expansion for idioms, we construct an emotional idiom lexicon expansion dataset (EmoIdiomE) covering both English and Chinese.
EmoIdiomE pairs each idiom with multiple sentences to indicate its usage.
The idioms are divided into Train, Dev and Test splits.
Therefore, the dataset can be used for supervised learning.
There is also an extra Unlabelled split for human evaluation.
The dataset can be used not only for sentiment lexicon expansion but also for emotion lexicon expansion.
In this paper, however, we focus on sentiment lexicon expansion.
We conduct experiments over EmoIdiomE using both prompt-learning and CoT-based prompting methods.
Our CoT-based prompting methods take advantage of idioms' properties (fixedness, origins) and integrate instructions accordingly to address these different aspects.
Through our experiments, we discover that idioms' properties have a discernible impact on the performance of the CoTs methodology.
To address this issue more comprehensively, we add a control set to analyze in greater depth how each property of idioms influences the results.
Furthermore, we apply our method to unlabelled idioms and have the predictions evaluated by human annotators. The predictions achieve notably high accuracy, underscoring the efficacy of our approach for sentiment lexicon expansion of idioms.
In summary, our work makes the following contributions:
(1) We construct the EmoIdiomE dataset, which can be used not only for sentiment lexicon expansion of idioms but also finds application in the broader domain of sentiment analysis tasks.
The dataset covers two languages, Chinese and English, thus rendering it applicable for a wide range of cross-lingual research endeavors.
(2) We propose the DualCoTs prompting method, specifically designed for idioms, to expand idiom sentiment lexicons and address the existing limitations.
(3) We demonstrate DualCoTs' effectiveness in sentiment judgment through experiments, and show its applicability to expanding sentiment lexicons for unlabeled idioms.
§ RELATED WORK
§.§ Automatic Construction of Sentiment Lexicons
In recent years, the construction of sentiment lexicons has evolved with diverse approaches tailored to various domains and contexts. For instance, <cit.> developed an optimization framework that integrates different information sources to learn a sentiment lexicon that adapts to domain specifics and contextual aspects of unlabeled opinion texts. Additionally, <cit.> explored unsupervised sentiment analysis by leveraging emotional signals such as emoticons and product ratings, which underscore the feasibility of extracting sentiment data without direct supervision.
Further developments in deep learning have refined the methods used for constructing sentiment lexicons. Notably, phrase-level sentiment classification techniques <cit.> and neural architectures that integrate sentiment supervision at both the document and word levels <cit.> have markedly improved the quality and utility of sentiment-aware embeddings. In addition, domain-specific methodologies, such as the emoji sentiment lexicon developed by <cit.> and the topic-adaptive sentiment lexicon by <cit.>, cater to the variability of sentiment expressions across different topics and domains, thus providing more sophisticated tools for specialized sentiment analysis applications. Furthermore, <cit.> have introduced the idioms with emotions dataset, employing pre-training models in supervised learning settings to categorize idioms effectively.
§.§ Chain-of-Thought Prompting
A Chain-of-Thought <cit.> prompts LLMs to generate a coherent series of intermediate reasoning steps that lead to the final answer to the original question, thereby naturally eliciting the reasoning abilities of these models.
Chain-of-Thought prompting methods have been applied across various fields in natural language processing, and new techniques have been proposed to tackle specific challenges that prevent LLMs from operating rationally. To enhance the reasoning ability of LLMs, <cit.> first samples a diverse set of reasoning paths and then selects the most consistent answer by marginalizing out all possible reasoning paths. In sentiment analysis, the Three-hop Reasoning (THOR) <cit.> CoT framework simulates the human-like reasoning process for implicit sentiment analysis (ISA). Moreover, <cit.> leverage the CoT approach to analyze the implicit aspects and opinions in texts, thereby facilitating a more nuanced analysis of sentiments.
§ DATASET CONSTRUCTION
§.§ Existing Datasets
The construction of a lexicon expansion dataset requires the acquisition of appropriate training data that includes both the contextual usage of idioms and corresponding affective lexicons. To meet this requirement, we have pursued distinct strategies for Chinese and English idioms:
For Chinese idioms, we utilize the ChID<cit.>, which provides contexts from reading comprehension, along with the CALO<cit.> for emotion and sentiment annotations. For English idioms, we combine resources from Idioment<cit.>, IdiomParaphrases<cit.>, and SLIDE<cit.> for sentiment annotations and retrieve contexts from large corpora such as the British National Corpus(BNC) <cit.> and the One Billion Word Corpus.
CALO The Chinese Affective Lexicon Ontology (CALO) was developed to support Affective Computing (AC) in the Chinese language. The creation of CALO was influenced by mainstream emotional classification research <cit.>, combined with traditional Chinese emotional categories. However, the category `enjoyment' (乐) did not adequately describe some positive emotions such as `respect' (尊敬) and `belief' (相信). Therefore, an additional category, `good'(好), was introduced. In total, CALO contains seven main categories, each with several subcategories graded by intensity levels 1,3,5,7,9. In this paper, we refer to the seven main categories as coarse-grained emotions and the twenty-one subcategories as fine-grained emotions. The mapping between coarse-grained and fine-grained emotions is detailed in Appendix <ref>.
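For orientation, the seven coarse-grained categories can be listed as below (the English glosses are our own paraphrase; the full coarse-to-fine mapping is the one referenced in Appendix <ref>).

```python
# Seven coarse-grained CALO emotion categories; each splits into fine-grained
# subcategories (21 in total), and entries carry an intensity in {1, 3, 5, 7, 9}.
CALO_COARSE = {
    "乐": "enjoyment",
    "好": "good (e.g. respect, belief, praise)",
    "怒": "anger",
    "哀": "sadness",
    "惧": "fear",
    "恶": "disgust",
    "惊": "surprise",
}
```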
ChID ChID is a large-scale Chinese Idiom Dataset built to facilitate the study of Chengyu comprehension using deep learning models.
The dataset was created in the “cloze” style.
The text includes novels and essays from the Internet and news articles.
To construct the candidate answer set for each masked Chengyu, the authors considered synonyms, near-synonyms and other Chengyu either irrelevant or opposite in meaning to the ground truth Chengyu.
Idioment <cit.> collected a set of 580 idioms that are relevant to sentiment analysis, i.e. the ones that can be mapped to an emotion.
A total of 2900 annotations were collected for all 580 idioms with 5 annotations per idiom.
A total of 8610 annotations were collected for all sentences in the corpus with at least 3 annotations per sentence.
IdiomParaphrases <cit.> annotated 1.4K idiom paraphrase pairs for the task of short-text paraphrase identification with idiomatic expressions.
The idioms are collected from a site[<http://www.usingenglish.com>] for English learners.
Each idiom has a unique description giving a clear explanation of the idiom’s meaning.
Although there are no sentiment labels in this dataset, we use it to expand our vocabulary of idioms and treat these entries as unlabelled data.
SLIDE Sentiment Lexicon of IDiomatic Expressions is an English corpus on sentiment annotation of idiomatic multiword expressions collected using crowdsourcing.
It collects 10 annotations for each idiom and the aggregated label is shown to have good agreement with expert annotations.
SLIDE is much larger than previous idiom lexicons which includes 5,000 frequently occurring idioms, as estimated from a large English corpus.
The idioms were selected from Wiktionary, and over 40% of them were labeled as sentiment-bearing.
§.§ EmoIdiomE Dataset
To validate the effectiveness of our proposed method in the automated construction of sentiment lexicons, we construct a dataset, named EmoIdiomE, encompassing idioms in both the Chinese and English languages.
During the dataset construction process, we notice that the number of passages per idiom is heavily skewed, ranging from a few to hundreds.
This disparity in passage counts could potentially lead to overfitting to more commonly used idioms.
To mitigate this issue, we have taken measures to construct a more balanced dataset, where each idiom is coupled with at most K passages.
If the number of passages for an idiom is less than K, all the passages are used.
In this work, we choose K=1,4,8,16.
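Concretely, this balancing step amounts to a per-idiom downsampling pass. The sketch below is our own illustration (function and variable names are not from the released code):

```python
import random
from collections import defaultdict

def cap_passages(records, k, seed=0):
    """Keep at most k passages per idiom; idioms with fewer than k keep all of them."""
    by_idiom = defaultdict(list)
    for idiom, passage in records:
        by_idiom[idiom].append(passage)

    rng = random.Random(seed)
    balanced = []
    for idiom, passages in by_idiom.items():
        kept = passages if len(passages) <= k else rng.sample(passages, k)
        balanced.extend((idiom, p) for p in kept)
    return balanced

# Example: build the K=4 variant of the corpus.
# balanced_corpus = cap_passages(all_idiom_passage_pairs, k=4)
```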
For both the English language (EN) and the Chinese language (ZH) in this dataset, the three splits Train, Dev and Test contain only idioms that have labels from existing lexicons.
We also include all the Unlabelled idioms in this dataset for further usage.
ZH To construct an affection-oriented dataset for Chinese Idioms, we reuse the training split of ChID which contains 520k passages covering 3848 distinct idioms.
Among the 3848 idioms, there are 2864 idioms which have been labeled in CALO.
Only one fine-grained emotion type, `NK (envy, 妒忌)', is not covered in ChID.
Therefore, we further add 12 `NK' idioms with 156 passages[We manually crawled them from literature works.] to the dataset.
EN For English idioms, we combine Idioment, IdiomParaphrases and SLIDE.
However, these datasets do not provide sentences that contain the idioms.
As a result, we extract sentences for each idiom from two corpora, BNC and One Billion Word <cit.>.
Similar to the practices used by SLIDE, we preprocess idioms that have personal pronouns or possessive adjectives when retrieving sentences.
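A minimal sketch of this retrieval step is shown below; the pronoun-handling pattern is our own illustration of the kind of preprocessing described above, not SLIDE's exact rules.

```python
import re

# Placeholder possessives in idiom entries (e.g. "on one's toes") are expanded to
# concrete pronouns before searching the corpus.
POSSESSIVES = r"(?:my|your|his|her|its|our|their)"

def idiom_to_pattern(idiom: str) -> re.Pattern:
    escaped = re.escape(idiom.lower())
    escaped = escaped.replace(re.escape("someone's"), POSSESSIVES)
    escaped = escaped.replace(re.escape("one's"), POSSESSIVES)
    return re.compile(r"\b" + escaped + r"\b")

def retrieve_sentences(idiom: str, sentences, limit=16):
    pattern = idiom_to_pattern(idiom)
    return [s for s in sentences if pattern.search(s.lower())][:limit]
```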
Table <ref> provides a comprehensive overview of our EmoIdiomE dataset. This dataset encompasses a total of 7,828 idioms. Specifically, there are 2,854 labeled data entries for Chinese idioms, along with 1,006 unlabeled entries. In the case of English idioms, there are 4,974 labeled data entries and 1,309 unlabeled entries. The dataset also comprises a substantial number of example sentences, totaling 741,135 instances. The extensive coverage of both idioms and example sentences underscores the dataset's versatility and potential for broad applicability in idiom-related research, allowing it to address a wide spectrum of issues and challenges associated with idioms.
§ DUAL CHAIN-OF-THOUGHTS PROMPTING
In this section, we begin by introducing several baseline methods derived from prompt learning and CoT-based prompting. Furthermore, we present a pre-trained model approach, which serves as a point of comparison with the CoT methodology. Following this, we proceed to introduce our instructed prompting approach for the DualCoTs method.
§.§ Baseline Methods
Direct Inquiry This method involves directly soliciting sentiment predictions from LLMs based on the idiomatic expressions they have learned, without context beyond the idiom's surface form.
Idiom Inquiry This approach enhances sentiment prediction by informing LLMs that a phrase is an idiom, prompting them to retrieve relevant background knowledge and insights.
Usage Inquiry This method solicits sentiment assessment of an idiom within a contextually provided example sentence, facilitating a better understanding of the idiom's commonly used meaning.
Origin Inquiry Understanding an idiom's origins, often rooted in historical or cultural contexts, is crucial, and this inquiry method focuses on enhancing LLMs' comprehension of idioms for improved sentiment predictions.
Origin and Usage Inquiry This comprehensive method combines inquiries into an idiom's origin and usage, sequentially prompting LLMs to provide a deeper understanding of the idiom's sentiment in context.
PromptT5 To contrast with the CoT method, we have compared it to a well-performing prompt learning <cit.> method. Prompt learning directly adapts pre-trained language models (PLMs) to downstream tasks using either generation-based or mask-filling-based approaches, and can achieve promising performances on various tasks.
We use the OpenPrompt <cit.> toolkit to implement the prompt model for sentiment lexicon expansion for idioms.
Our pipeline consists of a template and a verbalizer.
We use the template “<pre text><idiom><post text>. <idiom> is <mask>”, where
the token <pre text> stands for the original text before the idiom and <post text> stands for the original text after the idiom.
Our verbalizer uses the label words positive, negative and neutral.
For English, we use the T5 <cit.> model[<https://huggingface.co/t5-base>].
For Chinese, we use a T5 model[<https://huggingface.co/uer/t5-v1_1-base-chinese-cluecorpussmall>] released by <cit.>.
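A minimal sketch of this prompt-learning pipeline is given below, assuming the standard OpenPrompt ManualTemplate/ManualVerbalizer interface; training details (optimizer, data loading) are omitted.

```python
from openprompt.plms import load_plm
from openprompt.prompts import ManualTemplate, ManualVerbalizer
from openprompt import PromptForClassification

# Backbone PLM (t5-base for English; a Chinese T5 checkpoint is used for ZH).
plm, tokenizer, model_config, WrapperClass = load_plm("t5", "t5-base")

# Template "<pre text><idiom><post text>. <idiom> is <mask>":
# text_a holds the original sentence containing the idiom, text_b holds the idiom itself.
template = ManualTemplate(
    tokenizer=tokenizer,
    text='{"placeholder":"text_a"} {"placeholder":"text_b"} is {"mask"}',
)

# Verbalizer mapping the <mask> prediction onto the three sentiment classes.
verbalizer = ManualVerbalizer(
    tokenizer,
    classes=["positive", "negative", "neutral"],
    label_words={"positive": ["positive"],
                 "negative": ["negative"],
                 "neutral": ["neutral"]},
)

model = PromptForClassification(plm=plm, template=template, verbalizer=verbalizer)
```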
§.§ DualCoTs
Our Dual Chain-of-Thoughts (DualCoTs) serves to emulate the dual representation model of idioms using large language models. Within the DualCoTs framework, two distinctive chains operate in tandem:
(i) The literal chain is to directly elicit judgement of sentiment from LLMs with surface forms of the idioms.
(ii) The etymological chain employs idiom-aware instructions before prompting LLMs for their sentiment assessments. This added layer of instruction aims to enhance the LLMs' understanding of the idiomatic context and its potential impact on sentiment analysis.
To investigate whether LLMs rely on the usage of idioms when making these judgements, we further extend each chain by injecting instructions to provide example sentences containing the idiom.
Specifically, as illustrated in Figure <ref>, there are two separate chains with different instructions.
Within the literal chain, we have divided the process into two steps.
(1) We initiate the process by employing a usage inquiry approach to prompt LLMs to generate multiple example sentences that effectively convey the sentimental implications of the idiom. This initial step aims to elucidate the extent of knowledge retained by LLMs regarding the idiom.
(2) Subsequently, leveraging the example sentences provided in the first step, we proceed to extract the sentimental connotations conveyed by the idiom within each sentence. This two-step methodology enables a more comprehensive understanding of the idiomatic expressions and their associated sentiments across various contextual applications.
Within the etymological chain, we have divided the process into three steps.
(1) Commencing with an inquiry about the origins of the idiom, we explicitly engage with LLMs to prompt them for insights into the idiom's historical or cultural roots. This initial step serves to acquaint LLMs with the idiomatic nature of the provided phrase and its original usage as an idiom.
(2) Building upon the knowledge of the idiom's origins, we prompt LLMs to provide practical examples that stem from these origins. This step is instrumental in deepening LLMs' understanding of the idiom's usage and strengthens their handling of idiomatic expressions.
(3) Drawing from the knowledge of the idiom's origins and the provided examples, we inquire about the emotional meanings conveyed by the idiom, thus capturing its emotional nuances within an idiomatic context. This three-step process collectively aims to deepen LLMs' comprehension of the idiom's unique characteristics and its emotional dimensions.
Each chain generates five sentiment predictions. The ultimate sentiment prediction is determined through a voting mechanism that combines the results obtained from both chains.
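A high-level sketch of the two chains and the final vote is given below. Here `ask` stands for any chat-style LLM call (e.g., gpt-3.5-turbo), and the prompt wording paraphrases the steps above, so both should be read as illustrative rather than as the exact prompts used.

```python
from collections import Counter

def literal_chain(idiom, ask, n_samples=5):
    """Literal chain: elicit an example sentence, then judge the conveyed sentiment."""
    labels = []
    for _ in range(n_samples):
        sentence = ask(f"Write an example sentence that uses the phrase '{idiom}'.")
        labels.append(ask(
            f"Sentence: {sentence}\n"
            f"In this sentence, does '{idiom}' convey positive, negative or neutral sentiment?"))
    return labels

def etymological_chain(idiom, ask, n_samples=5):
    """Etymological chain: origin -> origin-grounded example -> sentiment."""
    labels = []
    for _ in range(n_samples):
        origin = ask(f"'{idiom}' is an idiom. Briefly describe its origin.")
        example = ask(f"Origin: {origin}\nGive an example usage of '{idiom}' based on this origin.")
        labels.append(ask(
            f"Origin: {origin}\nExample: {example}\n"
            f"Does the idiom '{idiom}' convey positive, negative or neutral sentiment?"))
    return labels

def dualcots(idiom, ask):
    # Responses are assumed to be normalized to one of: positive / negative / neutral.
    votes = literal_chain(idiom, ask) + etymological_chain(idiom, ask)
    return Counter(votes).most_common(1)[0][0]   # majority vote over both chains
```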
§ EXPERIMENTS
In this section, we perform comprehensive experiments on our EmoIdiomE dataset. In addition to EmoIdiomE, we test several methods on the Idioment dataset to assess our method's generalizability. Furthermore, we present the results of our method when applied to unlabelled data, thus substantiating the efficacy of our approach in the automatic expansion of idiom sentiment lexicons.
We conduct experiments with ChatGPT (gpt-3.5-turbo[<https://platform.openai.com/docs/models/gpt-3-5>]), LLaMA (Llama 2-70b[<https://huggingface.co/meta-llama/Llama-2-70b-chat-hf>]) and ChatGLM (ChatGLM-6b[<https://huggingface.co/THUDM/chatglm-6b>]), which are popular and powerful LLMs. We include ChatGLM-6B primarily because of its strong performance on Chinese.
§.§ Sentiment Lexicon Expansion for Idioms
Intrinsic Evaluation on Labelled Data
We evaluate how each method performs on the Test split of EmoIdiomE and list all experimental results in Table <ref>.
We first include results for PromptT5 using one or four sentences per idiom.
Then we list all the CoT-based methods, Direct Inquiry, Usage Inquiry, Idiom Inquiry, Origin Inquiry and Origin & Usage Inquiry.
Our DualCoTs is shown at the bottom.
Generally speaking, we have the following main findings:
(1) PromptT5 with supervision remains competitive with prompting methods over LLMs, although the gap is very small.
Using more example sentences for prompt learning helps improve the accuracy of sentiment expansion.
(2) The use of example sentences and the introduction of idiom origin in the prompt method show significant improvements, which also serve as evidence of the effectiveness of our DualCoTs.
(3) The size of the large language model is a significant factor in prompting performance.
ChatGPT and LLaMA work consistently better than ChatGLM-6B. The reason LLaMA can outperform ChatGPT might be that LLaMA was fine-tuned using only English data, which could lead to better consistency and, consequently, better results than ChatGPT.
It is worth noting the difference between Chinese idioms and English idioms in EmoIdiomE.
Chinese idioms are all Chengyu, a commonly used idiom type, which have higher fixedness and can be easily recognized.
English idioms tend to be more flexible <cit.>: they can have multiple senses and may only be potential idiomatic expressions <cit.>.
We also notice that, for ChatGPT, the Idiom Inquiry surpasses the Origin Inquiry by a large margin.
We suspect this is because the test set of EmoIdiomE contains many idioms with no clear origins.
For example, the test set of EmoIdiomE contains frequently used terms like “come down with” and “come in” which have no clear origins and ChatGPT cannot offer related information through Origin Inquiry for the judgement of their sentiment.
To further verify this, we use Idioment as a control set in which idioms have clearer origins compared with EmoIdiomE.
Combining the results on the ZH test set of EmoIdiomE with those on Idioment, we can conclude that for idioms that have a strong idiomatic background and clearer origins, DualCoTs achieves consistent improvement over other CoT-based prompting methods.
Evaluation on Unlabelled Data
We randomly select 50 idioms for each language from the Unlabelled split of EmoIdiomE.
Then we use the best method DualCoTs over gpt-3.5-turbo to predict their sentiments.
Two annotators are asked to evaluate the predictions.
The prediction accuracy and annotation consistency are listed in Table <ref>.
The annotation results indicate that DualCoTs can be used for automatically expanding sentiment lexicons for idioms with high reliability.
The method has potential usage in lexicon construction to reduce annotation costs.
Human Evaluation on New Idioms
To test the performance of DualCoTs on out-of-domain phrase sentiment prediction, we use recent internet buzzwords that do not exist in our vocabulary and manually assess the predicted sentiment.
These newly coined idioms emerged online only recently, so LLMs are unlikely to have seen their annotations or even their usages.
We give two examples below.
For Chinese language, we utilize the idiom “心满离” (heart, full, leave) which is a recent buzzword online.
Although LLMs cannot determine the exact origin of this idiom, they are able to develop a certain understanding of it.
Here is an example sentence for the idiom, “这部电影的结尾令人感动,主人公与自己的过去告别,但他的心中却充满了一种心满离的满足感” (The ending of this movie is moving, as the protagonist bids farewell to their past, yet their heart is filled with a sense of “心满离” satisfaction), which conveys positive sentiment that aligns with the sentiment associated with internet buzzwords.
For English language, we utilize the idiom Composter Syndrome[<https://www.urbandictionary.com/author.php?author=peepeecomposter%20>].
Currently, Composter Syndrome refers to a psychological phenomenon experienced by individuals who have an intense desire to efficiently organize and manage composting processes.
In our DualCoTs, when exploring Composter Syndrome through the etymological chain, LLMs cannot provide information related to idioms.
However, in the literal Chain, LLMs tend to associate Composter Syndrome with positive impressions and provide an example sentence: “Her `composter syndrome' became evident when she proudly showed off her meticulously organized composting system, complete with color-coded bins and a spreadsheet tracking decomposition rates.”
This example sentence conveys a positive sentiment just as the idiom Composter Syndrome in internet discourse.
§.§ Discussion over Generated Origins
Large language models suffer a lot from factual consistency issues <cit.>.
As our method relies much on the generation ability of LLMs, it is inevitable for the method to use unfaithfully generated text for inference.
To control the risk this introduces, we investigate the most probable types of misinformation that might interfere with the reasoning chains in our method.
In our view, generating origins is more challenging for LLMs, as these resources are scarce even online.
The generated origins, see Figure <ref>, usually contain two major types of information, the background of the idiom and its idiomatic meaning.
We find that the background information has low factual consistency in most cases.
For example, the background information for “如花似玉” (as pretty as flower and jade) generated from ChatGPT is the book “《史记·卷七十三·孔子世家》” (Records of the Grand Historian, Volume 73: The House of Confucius).
But in fact, it should be “《诗·魏风·汾沮洳》” (Poetry: Airs of Wei – Fen Marshlands).
However, the idiomatic meaning of the idiom is accurate.
Considering that the sentiment judgement of an idiom depends more on its idiomatic meaning, our method is still robust against such misinformation.
§ CONCLUSION
In this paper, we explore the task of automatic expansion of sentiment lexicons for idioms using Chain-of-Thought prompting over large language models.
Through the integration of diverse existing resources, we construct the EmoIdiomE dataset, which stands out as one of the most comprehensive datasets in terms of idiom coverage. With an extensive collection of idiomatic examples, this dataset holds the potential to address a wide array of issues within the realm of idioms.
Our innovative chain-of-thought methodology, DualCoTs, has been adequately assessed through practical experiments that validate its effectiveness in the task of automatically expanding sentiment lexicons for idioms.
§ LIMITATIONS
Our DualCoTs have made significant advancements in the field of Sentiment Lexicon Expansion of Idioms. However, it is important to acknowledge the limitations we have encountered during its implementation.
Firstly, we have observed that the origins generated by LLMs may lack factual consistency, as discussed in Section <ref>.
This may lead to potential inaccuracies in the judgment of idioms.
Ensuring the reliability and accuracy of the generated information remains a challenge.
Secondly, we have found that LLMs face difficulties in providing origins for newly-coined idioms. This restricts DualCoTs from handling these types of expressions.
Lastly, LLMs have shown biases when dealing with flexible idioms. In the absence of contextual cues, the generated contexts may lean towards a particular sentiment.
Addressing these biases and ensuring a more balanced sentiment analysis is a significant challenge for DualCoTs.
§ ETHICS STATEMENT
The datasets utilized in this study are sourced from openly accessible repositories, ensuring transparency and compliance with open data practices. The models employed are open-source and also include services from LLMs provided by OpenAI and Meta AI. In deploying these models, we adhere strictly to their terms and policies, ensuring that our research practices uphold the ethical standards set forth by these organizations.
§ CALO CLASSIFICATION
§ CASE ENGLISH TRANSLATION
奉为圭臬 (Hold up as an authoritative standard): The idiom “奉为圭臬” can be traced back to ancient China during the Shang dynasty. In the Shang period, there was an instrument used by the Shang kings in sacrificial ceremonies called a “圭”, which was a jade-made, horn-shaped tool used to measure the weight and size of objects. Due to its authoritative and standardized nature, the “圭” was regarded as an important standard or criterion. Over time, the idiom “奉为圭臬” was formed, which is used to describe a certain standard or criterion that is universally recognized and respected, becoming an unshakable authority.
如花似玉 (As beautiful as a flower and as pure as jade): The idiom “如花似玉” first appeared in Records of the Grand Historian, Volume 73: The House of Confucius. The original text reads, "Her appearance is elegant and refined, as radiant as a flower, as beautiful as jade." It means to describe a woman's beauty, comparing her to the brilliance of flowers and the purity of jade.
http://arxiv.org/abs/2409.17624v1 | 20240926081921 | HGS-Planner: Hierarchical Planning Framework for Active Scene Reconstruction Using 3D Gaussian Splatting | [
"Zijun Xu",
"Rui Jin",
"Ke Wu",
"Yi Zhao",
"Zhiwei Zhang",
"Jieru Zhao",
"Zhongxue Gan",
"Wenchao Ding"
] | cs.RO | [
"cs.RO"
] |
Recognizing Lawyers as AI Creators and Intermediaries in Contestability
Mark Riedl
September 28, 2024
=======================================================================
§ ABSTRACT
In complex missions such as search and rescue, robots must make intelligent decisions in unknown environments, relying on their ability to perceive and understand their surroundings. High-quality and real-time reconstruction enhances situational awareness and is crucial for intelligent robotics. Traditional methods often struggle with poor scene representation or are too slow for real-time use. Inspired by the efficacy of 3D Gaussian Splatting (3DGS), we propose a hierarchical planning framework for fast and high-fidelity active reconstruction. Our method evaluates completion and quality gain to adaptively guide reconstruction, integrating global and local planning for efficiency. Experiments in simulated and real-world environments show our approach outperforms existing real-time methods.
§ INTRODUCTION
In tasks such as search and rescue or target finding, which rely on active exploration, robots must preserve as much geometric and texture information from the environment as possible to support effective decision-making <cit.>. Online active reconstruction plays a crucial role in these missions by enabling robots to construct and update environmental models in real time, allowing them to navigate and adapt more efficiently in complex and unknown environments.
However, conventional active reconstruction methods<cit.> that fuse sensor data across space and time capture only coarse structures and struggle with rich scene details and novel view evaluation. Recently, Neural Radiance Field (NeRF)<cit.>-based methods <cit.> have gained popularity for their high-fidelity scene representation and efficient memory usage. However, NeRF’s inherent volumetric rendering process requires dense sampling of every pixel, resulting in long training times and poor real-time performance<cit.>. Additionally, its use of implicit neural representations makes it challenging to evaluate reconstruction quality accurately in real time. In fact, active reconstruction systems demand quick responses and the ability to dynamically make decisions based on real-time reconstruction quality. NeRF’s computational bottlenecks make it unsuitable for scene representation in active reconstruction, especially in scenarios that require real-time responses.
Compared to NeRF, 3D Gaussian Splatting (3DGS)<cit.> offers a more efficient explicit representation, reducing computational complexity and better suiting online active reconstruction<cit.>. Additionally, the Gaussian map's real-time integration of new data gives it the potential to provide immediate feedback on the rendering quality of new views. However, despite these notable advantages, the application of 3DGS for active reconstruction in unknown environments is still largely unexplored.
Though high-quality reconstruction can be achieved using 3DGS, active reconstruction with a 3D Gaussian representation faces three main challenges. First, efficiently and accurately evaluating novel view quality without ground truth is crucial for guiding robot motion planning, but it remains challenging. Second, while efficiency is critical to active reconstruction, Gaussian maps can only represent occupied areas, posing a challenge for efficiently reconstructing unobserved regions. Third, effectively integrating Gaussian map data into closed-loop motion planning is essential for active reconstruction, yet how to do so is still an open question.
To address the above problems, we propose an efficient 3D Gaussian-based real-time planning framework for active reconstruction. To the best of our knowledge, our framework is the pioneering work exploring 3DGS representation for online active reconstruction. Firstly, we introduce Fisher Information, which represents the expectation of observation information and is independent of ground truth<cit.>, to evaluate novel view quality gain in online reconstruction. Secondly, we improve exploration efficiency in 3D Gaussian representation by integrating unknown voxels into the splatting-based rendering process, allowing us to assess new viewpoints' coverage of unexplored areas. Thirdly, we use Gaussian map data to adaptively select viewpoints, which balances reconstruction quality and efficiency, and integrate it into an active planning framework. Our experimental results confirm that our framework supports efficient and high-quality online reconstruction.
To summarize, our contributions are:
* To the best of our knowledge, we propose the first online adaptive hierarchical autonomous reconstruction system using 3DGS.
* We design a novel viewpoint selection strategy based on reconstruction coverage and quality and implement it within an autonomous reconstruction framework.
* We conduct extensive simulation and real-scene experiments to validate the effectiveness of the proposed system.
§ RELATED WORK
§.§ High-fidelity Reconstruction Representation
Various scene representations are used for reconstruction, including meshes, planes, and surfel clouds. Recently, Neural Radiance Field (NeRF) <cit.> has gained prominence due to its photorealistic rendering capabilities. NeRF methods can be categorized into three types: implicit, hybrid representation, and explicit. Implicit methods <cit.> are memory-efficient but face challenges such as catastrophic forgetting and significant computational overhead in larger scenes. Hybrid representation methods<cit.> integrate the benefits of implicit MLPs with structural features, significantly improving scene scalability and precision. The explicit method introduced in <cit.> directly embeds map features within voxels, bypassing the use of MLPs, which allows for faster optimization.
Although NeRF excels in photorealistic reconstruction <cit.>,
its ray sampling approach leads to high computational costs, making it impractical for real-time autonomous reconstruction<cit.>. In contrast, 3DGS<cit.> facilitates real-time rendering of novel views through its fully explicit representation and innovative differential splatting rendering, which has been utilized in real-time SLAM, allowing the scene reconstruction from RGB-D images<cit.>.
§.§ Active Reconstruction System
The active reconstruction system integrates data acquisition into the decision-making loop, guiding robots in data collection tasks<cit.>. These systems can be categorized by their scene representation: voxel-based methods<cit.>, surface-based methods<cit.>, neural network-based methods<cit.> and 3D Gaussian-based methods<cit.>.
Voxel methods<cit.> use compact grids for efficient space representation, while surface-based methods<cit.> focus on geometric details. However, both largely neglect color and texture details. Neural network-based methods, such as NeurAR<cit.> and Naruto<cit.>, combine NeRF with Bayesian models for view planning but are computationally intensive, causing frequent delays. 3D Gaussian Splatting (3DGS) offers high-fidelity scene representation and fast data fusion, but its application in active reconstruction is still rare. GS-Planner <cit.> combines 3DGS with voxel maps but lacks effective information gain evaluation and relies on random sampling, reducing efficiency and risking local optima.
§ METHOD
§.§ Problem Statement and System Overview
This study aims to efficiently explore unknown and spatially constrained 3D environments and reconstruct high-quality 3D models using a mobile robot by generating a trajectory composed of a sequence of paths and viewpoints<cit.>. In previous greedy-based NBV methods<cit.>, the path design seeks to identify the trajectory leading to the next optimal view. However, from a global perspective, this approach always converges on local optima, reducing reconstruction efficiency significantly. We design a hierarchical autonomous reconstruction framework through a novel viewpoint selection criterion, selecting a series of optimal viewpoints for global and local path planning, enabling rapid and high-fidelity reconstruction.
As illustrated in Fig. <ref>, our proposed hierarchical autonomous reconstruction framework consists of two main components. The 3D Gaussian Representation module reconstructs high-fidelity scenes and offers real-time evaluations of potential future viewpoints by leveraging 3DGS’s efficient data fusion and online rendering capabilities. These evaluations encompass gains in both coverage information and reconstruction quality. The Active Reconstruction Planning module is divided into two subcomponents: global planning and local planning. Global planning generates a path that enhances exploration efficiency and avoids local optima, while local planning identifies optimal viewpoints through view sampling and adaptive selection, developing a local path. Finally, the global and local paths are merged into an exploration path that guides the robot's movement.
§.§ 3D Gaussian Representation
We use SplaTam <cit.>, a 3D Gaussian-based SLAM method, for online 3D Gaussian Splatting reconstruction. The scene is represented as numerous isotropic 3D Gaussians, each characterized by eight parameters: center position ξ∈ℝ^3, RGB color r ∈ℝ^3, radius μ∈ℝ, and opacity ρ∈ℝ. The opacity function π of a point α∈ℝ^3, computed from each 3D Gaussian, is defined as follows:
π (α, ρ) = ρexp(-|α-ξ|^2/2 μ^2).
We adopt a differentiable approach to render the images to optimize the Gaussian parameters for scene representation. The final rendered RGB color R_pix and depth D_pix can be mathematically formulated as the alpha blending of N sequentially ordered points that overlap the pixel,
R_pix = ∑_i=1^N r_iπ_i∏_j=1^i-1(1-π_j),
D_pix = ∑_i=1^N d_iπ_i∏_j=1^i-1(1-π_j).
where d_i is the depth of the i-th 3D Gaussian center, corresponding to the z-coordinate of its center position in the camera coordinate system.
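For illustration, the alpha-blending formulas above can be evaluated per pixel as in the following minimal Python sketch (not the authors' implementation; the function name and the assumption that the per-Gaussian opacities π_i have already been evaluated at this pixel by the splatting step are ours):

```python
import numpy as np

def composite_pixel(colors, depths, opacities):
    """Alpha-blend N front-to-back sorted Gaussian contributions at one pixel.
    colors: (N, 3) RGB per Gaussian, depths: (N,), opacities: (N,) values pi_i."""
    rgb, depth = np.zeros(3), 0.0
    transmittance = 1.0                      # running product of (1 - pi_j)
    for r_i, d_i, pi_i in zip(colors, depths, opacities):
        w = pi_i * transmittance             # weight of the i-th Gaussian
        rgb += w * r_i
        depth += w * d_i
        transmittance *= (1.0 - pi_i)
    return rgb, depth                        # R_pix and D_pix

# toy example with two overlapping Gaussians
print(composite_pixel(np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]),
                      np.array([2.0, 3.0]), np.array([0.6, 0.5])))
```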
§.§ Reconstruction Coverage Gain Evaluation
To improve the efficiency and completeness of scene reconstruction, we implemented an evaluation of reconstruction coverage gain for candidate viewpoints. Calculating the increase in reconstructed areas from a new viewpoint requires considering occupied and unobserved regions. Yet, under the 3D Gaussian representation, we can only recognize the former, making it difficult to determine which regions have yet to be observed. To solve this problem, similar to GS-Planner<cit.>, we maintain a voxel map to represent unobserved volume and integrate it into the splatting rendering. However, unlike GS-Planner, we employ a more streamlined calculation method that leverages uniform voxel volumes to achieve model-consistent pixel-level reconstruction coverage gain within the 3DGS rendering process.
Specifically, given a set of 3D Gaussians and a viewpoint pose, we first sort the Gaussians from front to back by depth. Then, using the ordered 3D Gaussians, we can efficiently render depth images by alpha-compositing the splatted 2D projection of each Gaussian sequentially in pixel space. During rendering, by integrating the unobserved voxels from the maintained voxel map into the Gaussian map, we can determine whether an unobserved region exists between adjacent Gaussians. Considering both the uniform volume of each voxel and the inherent opacity attribute of the Gaussians, we can evaluate the visibility gain of unobserved regions for each viewpoint by utilizing a transmittance weight, which can be expressed as:
V_pix=∑_i=1^nV∏_j=1^m_i(1-α_j).
where n is the number of unobserved voxels along the ray, m_i is the number of 3D Gaussians lying in front of the i-th unobserved voxel, α_j is the splatted opacity of the j-th such Gaussian, ∏_j=1^m_i(1-α_j) is the transmittance weight, and V is the uniform volume of an unobserved voxel.
Leveraging the fast splatting-based rendering, the Reconstruction Coverage evaluation process runs in parallel with the reconstruction process, resulting in highly efficient overall computation. To illustrate the coverage evaluation process more intuitively, we provide Fig. <ref>.
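A hypothetical sketch of this pixel-level evaluation is given below: the ray is traversed front-to-back while interleaving splatted Gaussians and unobserved voxels, and each unobserved voxel contributes the uniform voxel volume V weighted by the transmittance accumulated from the Gaussians in front of it (the list-based ray representation is an illustrative simplification):

```python
def coverage_gain(ray_elements, voxel_volume):
    """ray_elements: front-to-back list of ('g', alpha) Gaussians or ('v', None) voxels."""
    gain, transmittance = 0.0, 1.0
    for kind, alpha in ray_elements:
        if kind == 'v':                       # unobserved voxel: add weighted volume V
            gain += voxel_volume * transmittance
        else:                                 # 3D Gaussian: attenuate the ray
            transmittance *= (1.0 - alpha)
    return gain                               # V_pix for this pixel

# one Gaussian (alpha = 0.7) in front of two unobserved voxels
print(coverage_gain([('g', 0.7), ('v', None), ('v', None)], voxel_volume=0.001))
```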
§.§ Reconstruction Quality Gain Evaluation
To enhance reconstruction quality and accuracy, we employ Fisher Information to quantify the quality gains from novel viewpoints, leveraging its independence from ground truth<cit.>.
The primary goal of neural rendering is to minimize the negative log-likelihood (NLL) between rendered and ground truth images, described by:
-logℙ(Ψ|𝐱,𝐰)=(Ψ-f(𝐱,𝐰))^T(Ψ-f(𝐱,𝐰)).
where 𝐱 is the camera pose, Ψ the corresponding image, 𝐰 the model parameters, and f(𝐱,𝐰) the rendering model. Under the regularity conditions<cit.>, Fisher Information for Eq. <ref> is defined as the Hessian of the log-likelihood function concerning 𝐰:
ℐ(𝐰)=-𝔼_ℙ(Ψ|𝐱,𝐰)[∂^2logℙ(Ψ|𝐱,𝐰)/∂𝐰^2|𝐰]=𝐇^''[Ψ|𝐱,𝐰],
where 𝐇^''[Ψ|𝐱,𝐰] is the Hessian of Eq. <ref>. In the evaluation process, we obtain an initial estimate of the parameters 𝐰^* from the training set D^train of already-acquired views. The purpose of our quality evaluation is then to identify the viewpoints 𝐱_i^acq ∈ D^candidate that maximize the Information Gain<cit.> relative to D^train, where D^candidate represents the collection of candidate viewpoints:
ℐ[𝐰^*;{Ψ_i^acq}|{𝐱_i^acq},D^train]
=H[𝐰^*|D^train]-H[𝐰^*|{Ψ_i^acq},{𝐱_i^acq},D^train],
where H[·] is the entropy<cit.>.
Considering the log-likelihood form in Eq. <ref>, specifically the rendering loss, the entropy difference in the R.H.S. of Eq. <ref> only depends on H[𝐰^*|{Ψ_i^acq},{𝐱_i^acq},D^train], then the Hessian can be approximated using just the Jacobian matrix of f(𝐱,𝐰)<cit.>:
𝐇”[Ψ|𝐱,𝐰^*]=∇_𝐰f(𝐱;𝐰^*)^T∇_𝐰f(𝐱;𝐰^*).
As expected, the trace of Eq. <ref> can be computed without ground truths {Ψ_i^acq}, as Fisher Information is independent of observations. Furthermore, with the Laplace approximation<cit.>, Eq. <ref> can be approximated by considering only diagonal elements and adding a log-prior regularizer λ I:
𝐇”[Ψ|𝐱,𝐰^*]≈diag(∇_𝐰f(𝐱,𝐰^*)^T∇_𝐰f(𝐱,𝐰^*))+λ I.
As with the coverage evaluation, we integrate the quality evaluation into the splatting-based rendering for computational efficiency.
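For illustration, a minimal Python sketch of this diagonal approximation is given below. The dense Jacobian array, the regularizer value, and the use of the trace of the approximated Hessian as a single ground-truth-free score are illustrative simplifications; in practice the Jacobian comes from the differentiable splatting renderer:

```python
import numpy as np

def fisher_diag(jacobian, lam=1e-2):
    """Diagonal Laplace approximation diag(J^T J) + lambda for J = d f(x, w*) / d w.
    jacobian: (num_pixels, num_params)."""
    return np.sum(jacobian ** 2, axis=0) + lam

def quality_gain(jacobian, lam=1e-2):
    # trace of the approximated Hessian, computable without ground-truth images
    return float(np.sum(fisher_diag(jacobian, lam)))

J = np.random.default_rng(0).normal(size=(64, 10))   # toy: 64 pixels, 10 parameters
print(quality_gain(J))
```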
§.§ Adaptive Hierarchical Planning
To avoid local optima in the exploration path, inspired by TARE<cit.>, we propose an adaptive hierarchical planning framework that combines global planning with adaptive local planning to improve the efficiency of scene reconstruction. The entire scene is divided into two regions: the local space 𝒞 for local planning, and the space outside 𝒞, which is partitioned into evenly sized cuboid subspaces for global planning.
§.§.§ Global planning
Each cuboid subspace is classified into three states based on the voxel map mentioned in Sec. <ref>: "reconstructed" (only observed voxels), "reconstructing" (both observed and unobserved voxels), and "unreconstructed" (only unobserved voxels). In global planning, only "reconstructing" subspaces are taken into account. The goal is to find a global path Γ_global that traverses all "reconstructing" subspaces, connecting their centers and the robot's current location. To achieve this, similar to <cit.>, we construct a sparse random roadmap in the traversable space expanded from the past trajectory. We then apply A* search on the roadmap to find the shortest paths among the subspaces and the current pose, followed by solving a Traveling Salesman Problem (TSP)<cit.> to obtain Γ_global.
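For illustration, a minimal sketch of this ordering step is shown below; the pairwise costs are assumed to come from the A* searches on the roadmap, and a greedy nearest-neighbour heuristic stands in for the TSP solver actually used:

```python
def order_subspaces(cost):
    """cost: (n+1) x (n+1) matrix of A* path lengths; index 0 is the robot pose,
    indices 1..n are the centers of the "reconstructing" subspaces."""
    unvisited = set(range(1, len(cost)))
    tour, current = [], 0
    while unvisited:
        nxt = min(unvisited, key=lambda j: cost[current][j])  # closest next subspace
        tour.append(nxt)
        unvisited.remove(nxt)
        current = nxt
    return tour   # visiting order defining the global path

print(order_subspaces([[0, 5, 2, 9], [5, 0, 4, 3], [2, 4, 0, 7], [9, 3, 7, 0]]))
```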
§.§.§ Adaptive local planning
Due to the trade-off between efficiency and efficacy in reconstruction, we design adaptive local planning to adjust the weights of these two aspects dynamically. Similar to the global planning approach, we use the A* algorithm combined with a TSP solver to perform local path planning after selecting the best views. The complete best-view selection algorithm is listed in Alg. <ref>.
Specifically, we first calculate the intersection points between the global path and the local horizon and uniformly sample viewpoints within the local region (Lines 1-2). Then, we combine these intersections and sampled points and assess a comprehensive 360-degree information gain for each (Lines 3-4). This information gain comprises two components, coverage gain and quality gain, which are weighted according to the proportion of observed areas within the local region:
G=G_C+λ_o G_Q
where G is the final information gain, G_C is the coverage gain, G_Q is the quality gain, and λ_o is the proportion of observed voxels within the local region. Subsequently, leveraging the 360-degree information gain, we select viewpoints that exceed a threshold of information gain and use a sliding-window technique to identify the optimal yaw angles (Lines 5-12). Finally, we obtain the exploration path by connecting the global and local paths. The reconstruction is complete when all cuboid subspaces are "reconstructed" and no more viewpoints are selected.
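A hedged Python sketch of this adaptive selection is given below; the number of yaw sectors, the field-of-view window, and the gain threshold are illustrative values rather than the settings used in our experiments:

```python
import numpy as np

def select_viewpoints(cov_gain, qual_gain, lambda_o, fov_sectors=6, thresh=1.0):
    """cov_gain, qual_gain: (num_views, num_sectors) per-sector gains for each candidate."""
    gain = cov_gain + lambda_o * qual_gain            # G = G_C + lambda_o * G_Q, per sector
    selected = []
    for v, sectors in enumerate(gain):
        if sectors.sum() < thresh:                    # discard uninformative viewpoints
            continue
        # sliding window over sectors (with wrap-around) to find the best yaw
        ext = np.concatenate([sectors, sectors[:fov_sectors - 1]])
        windows = [ext[i:i + fov_sectors].sum() for i in range(len(sectors))]
        best = int(np.argmax(windows))
        yaw = 2.0 * np.pi * best / len(sectors)
        selected.append((v, yaw))
    return selected

cov = np.random.default_rng(1).random((3, 12))
qual = np.random.default_rng(2).random((3, 12))
print(select_viewpoints(cov, qual, lambda_o=0.4))
```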
§ RESULTS
§.§ Implementation details
We run our active reconstruction system on a desktop PC with a 2.9 GHz Intel i7-10700 CPU and an NVIDIA RTX 3090 GPU, using the Autonomous Exploration Development Environment<cit.> for simulation. The simulated vehicle is equipped with an RGB-D sensor and a VLP-16 LiDAR, providing real-time RGB-D images at 1200×680 resolution with a 5-meter range, and uses LOAM<cit.> for localization. The maximum velocity is limited to 1.0 m/s, and the depth data includes uniform noise of 2 cm.
We validate our method through simulations in three complex Matterport3D (MP3D)<cit.> scenes: 17DRP, 2t7WU, and Gdvg with the local planner range of 6 m × 6 m and the resolution of voxel map integrated into Gaussian map to 0.1 m. Viewpoints are sampled with a minimum distance of 1.5 m to avoid excessive overlap.
Similar to <cit.> and <cit.>, we evaluate our method in terms of effectiveness and efficiency. We adopt scene quality metrics from NARUTO<cit.>: Accuracy (cm), Completion (cm), and Completion Ratio (the percentage of points in the reconstructed mesh with Completion under 5 cm). We extract geometric centroids from the Gaussian spheres to simulate mesh vertices due to the absence of a standard method for converting 3DGS into a mesh. For these metrics, about 300k points are sampled from the surfaces. For efficiency, similar to <cit.>, we evaluate the per-step planning time T_P (seconds) and the path length P.L. (meters). For each planning cycle during the reconstruction, T_P is divided into the viewpoint sampling and evaluation time T_VE, the local path planning time T_LP, and the global planning time T_GP, i.e., T_P=T_VE+T_LP+T_GP, with average T_GP times of approximately 0.017 s (17DRP), 0.018 s (2t7WU), and 0.015 s (Gdvg). In TARE, the time taken to evaluate viewpoints corresponds to the time required to update the information about the areas they cover.
§.§ Efficacy of the Method
Following <cit.>, we evaluate our method's efficacy based on its validity and efficiency. We create variants of our method with the 3D Gaussian representation: V1 (TARE <cit.>), V2 (coverage evaluation only), V3 (quality evaluation only), and V4 (both evaluations, without the adaptive strategy). Our method proves highly effective and more efficient than the other approaches.
1) Quality evaluation in real-time reconstruction: Fig. <ref> shows that quality evaluation closely matches actual losses, even without ground truth. Highlighted areas on the loss map indicate regions with lower reconstruction quality, aligning with our quality gain evaluation.
2) Novel view evaluation criterion: We use V1, which applies hierarchical planning, as a baseline and compare it against V2 (coverage only), V3 (quality only), and V4 (both) to verify the efficacy of our evaluation criterion. The metrics in Table <ref> show that evaluating coverage and quality improves reconstruction. However, V2 results in low-quality reconstruction as it overlooks complex details, while V3 yields poor completeness by only refining already-covered areas and neglecting unobserved regions.
3) Adaptive hierarchical planning: To validate the adaptive hierarchical planning, we establish V4 as our baseline. In V4, combining the two evaluation tasks without an adaptive strategy noticeably hampers the speed of scene exploration and may result in local optima, especially when dealing with intricate reconstruction details. The introduction of adaptive hierarchical planning (Ours) ensures efficient exploration while maintaining reconstruction quality, preventing the process from getting stuck in local optima.
§.§ Comparison with existing reconstruction methods
We benchmark two recent works: NARUTO <cit.>, based on view information gain fields, and GS-Planner <cit.>, using 3D Gaussian reconstruction. The metrics in Table <ref> show that our framework outperforms both in planning efficiency and reconstruction quality. NARUTO neglects uncovered areas, reducing exploration efficiency, while GS-Planner often gets stuck in local optima during exploration. Fig. <ref> and the metrics in Table <ref> highlight our method's superior reconstruction. We refer readers to the supplementary video for more visual results and the reconstruction process. We implemented the GS-Planner algorithm on a mobile vehicle. Fig. <ref> compares the trajectories of our method and GS-Planner in scene 2t7WU, showing that GS-Planner's focus on smaller areas reduces overall efficiency. Our framework achieves more efficient scene reconstruction.
§.§ Robot experiments in real scene
We implemented our proposed framework on a UGV equipped with a Realsense D435i depth camera and an Ouster LiDAR to perform real-scene reconstruction. FAST-LIO<cit.> provides the localization. Since we use an Ackermann-steering vehicle, we replace the A* algorithm with Kino-A* to ensure the path meets kinematic constraints. The detailed process is shown in the supplementary video.
§ CONCLUSIONS
In this paper, we developed a hierarchical planning framework for efficient and high-fidelity active reconstruction with 3DGS. We introduced Fisher Information to evaluate reconstruction quality and assessed coverage gain by integrating the Voxel and Gaussian maps. We also designed a novel viewpoint selection strategy within hierarchical planning. Extensive experiments show our method's superior performance. For future work, we aim to extend our research to swarm robotics in large-scale scenes.
unsrt
| In tasks such as search and rescue or target finding, which rely on active exploration, robots must preserve as much geometric and texture information from the environment as possible to support effective decision-making <cit.>. Online active reconstruction plays a crucial role in these missions by enabling robots to construct and update environmental models in real time, allowing them to navigate and adapt more efficiently in complex and unknown environments.
However, conventional active reconstruction methods<cit.> that fuse sensor data across space and time only capture coarse structures and struggle with rich scene details and novel view evaluation. Recently, Neural Radiance Field (NeRF)<cit.>-based methods <cit.> have gained popularity for their high-fidelity scene representation and efficient memory usage. However, NeRF's inherent volumetric rendering process requires dense sampling of every pixel, resulting in long training times and poor real-time performance<cit.>. Additionally, its use of implicit neural representations makes it challenging to evaluate reconstruction quality accurately in real time. In fact, active reconstruction systems demand quick responses and the ability to make decisions dynamically based on real-time reconstruction quality. NeRF's computational bottlenecks therefore make it unsuitable for scene representation in active reconstruction, especially in scenarios that require real-time responses.
Compared to NeRF, 3D Gaussian Splatting (3DGS)<cit.> offers a more efficient explicit representation, reducing computational complexity and better suiting online active reconstruction<cit.>. Additionally, the Gaussian map's real-time integration of new data gives it the potential to provide immediate feedback on the rendering quality of new views. However, despite these notable advantages, the application of 3DGS for active reconstruction in unknown environments is still largely unexplored.
Though high-quality reconstruction can be achieved using 3DGS, active reconstruction with a 3D Gaussian representation faces three main challenges. First, efficiently and accurately evaluating novel view quality without ground truth is crucial for guiding robot motion planning, but it remains challenging. Second, while efficiency is critical to active reconstruction, Gaussian maps can only represent occupied areas, posing a challenge for efficiently reconstructing unobserved regions. Third, effectively integrating Gaussian map data into closed-loop motion planning is essential for active reconstruction, yet how to do so is still an open question.
To address the above problems, we propose an efficient 3D Gaussian-based real-time planning framework for active reconstruction. To the best of our knowledge, our framework is the pioneering work exploring 3DGS representation for online active reconstruction. Firstly, we introduce Fisher Information, which represents the expectation of observation information and is independent of ground truth<cit.>, to evaluate novel view quality gain in online reconstruction. Secondly, we improve exploration efficiency in 3D Gaussian representation by integrating unknown voxels into the splatting-based rendering process, allowing us to assess new viewpoints' coverage of unexplored areas. Thirdly, we use Gaussian map data to adaptively select viewpoints, which balances reconstruction quality and efficiency, and integrate it into an active planning framework. Our experimental results confirm that our framework supports efficient and high-quality online reconstruction.
To summarize, our contributions are:
* To the best of our knowledge, we propose the first online adaptive hierarchical autonomous reconstruction system using 3DGS.
* We design a novel viewpoint selection strategy based on reconstruction coverage and quality and implement it within an autonomous reconstruction framework.
* We conduct extensive simulation and real-scene experiments to validate the effectiveness of the proposed system. | §.§ High-fidelity Reconstruction Representation
Various scene representations are used for reconstruction, including meshes, planes, and surfel clouds. Recently, Neural Radiance Field (NeRF) <cit.> has gained prominence due to its photorealistic rendering capabilities. NeRF methods can be categorized into three types: implicit, hybrid representation, and explicit. Implicit methods <cit.> are memory-efficient but face challenges such as catastrophic forgetting and significant computational overhead in larger scenes. Hybrid representation methods<cit.> integrate the benefits of implicit MLPs with structural features, significantly improving scene scalability and precision. The explicit method introduced in <cit.> directly embeds map features within voxels, bypassing the use of MLPs, which allows for faster optimization.
Although NeRF excels in photorealistic reconstruction <cit.>,
its ray sampling approach leads to high computational costs, making it impractical for real-time autonomous reconstruction<cit.>. In contrast, 3DGS<cit.> facilitates real-time rendering of novel views through its fully explicit representation and innovative differential splatting rendering, which has been utilized in real-time SLAM, allowing the scene reconstruction from RGB-D images<cit.>.
§.§ Active Reconstruction System
The active reconstruction system integrates data acquisition into the decision-making loop, guiding robots in data collection tasks<cit.>. These systems can be categorized by their scene representation: voxel-based methods<cit.>, surface-based methods<cit.>, neural network-based methods<cit.> and 3D Gaussian-based methods<cit.>.
Voxel methods<cit.> use compact grids for efficient space representation, while surface-based methods<cit.> focus on geometric details. However, both largely neglect color and texture details. Neural network-based methods, such as NeurAR<cit.> and Naruto<cit.>, combine NeRF with Bayesian models for view planning but are computationally intensive, causing frequent delays. 3D Gaussian Splatting (3DGS) offers high-fidelity scene representation and fast data fusion, but its application in active reconstruction is still rare. GS-Planner <cit.> combines 3DGS with voxel maps but lacks effective information gain evaluation and relies on random sampling, reducing efficiency and risking local optima. | §.§ Problem Statement and System Overview
This study aims to efficiently explore unknown and spatially constrained 3D environments and reconstruct high-quality 3D models using a mobile robot by generating a trajectory composed of a sequence of paths and viewpoints<cit.>. In previous greedy-based NBV methods<cit.>, the path design seeks to identify the trajectory leading to the next optimal view. However, from a global perspective, this approach always converges on local optima, reducing reconstruction efficiency significantly. We design a hierarchical autonomous reconstruction framework through a novel viewpoint selection criterion, selecting a series of optimal viewpoints for global and local path planning, enabling rapid and high-fidelity reconstruction.
As illustrated in Fig. <ref>, our proposed hierarchical autonomous reconstruction framework consists of two main components. The 3D Gaussian Representation module reconstructs high-fidelity scenes and offers real-time evaluations of potential future viewpoints by leveraging 3DGS’s efficient data fusion and online rendering capabilities. These evaluations encompass gains in both coverage information and reconstruction quality. The Active Reconstruction Planning module is divided into two subcomponents: global planning and local planning. Global planning generates a path that enhances exploration efficiency and avoids local optima, while local planning identifies optimal viewpoints through view sampling and adaptive selection, developing a local path. Finally, the global and local paths are merged into an exploration path that guides the robot's movement.
§.§ 3D Gaussian Representation
We use SplaTam <cit.>, a 3D Gaussian-based SLAM method, for online 3D Gaussian Splatting reconstruction. The scene is represented as numerous isotropic 3D Gaussians, each characterized by eight parameters: center position ξ∈ℝ^3, RGB color r ∈ℝ^3, radius μ∈ℝ, and opacity ρ∈ℝ. The opacity function π of a point α∈ℝ^3, computed from each 3D Gaussian, is defined as follows:
π (α, ρ) = ρexp(-|α-ξ|^2/2 μ^2).
We adopt a differentiable approach to render the images to optimize the Gaussian parameters for scene representation. The final rendered RGB color R_pix and depth D_pix can be mathematically formulated as the alpha blending of N sequentially ordered points that overlap the pixel,
R_pix = ∑_i=1^N r_iπ_i∏_j=1^i-1(1-π_j),
D_pix = ∑_i=1^N d_iπ_i∏_j=1^i-1(1-π_j).
where d_i is the depth of the i-th 3D Gaussian center, corresponding to the z-coordinate of its center position in the camera coordinate system.
§.§ Reconstruction Coverage Gain Evaluation
To improve the efficiency and completeness of scene reconstruction, we implemented an evaluation of reconstruction coverage gain for candidate viewpoints. Calculating the increase in reconstructed areas from a new viewpoint requires considering occupied and unobserved regions. Yet, under the 3D Gaussian representation, we can only recognize the former, making it difficult to determine which regions have yet to be observed. To solve this problem, similar to GS-Planner<cit.>, we maintain a voxel map to represent unobserved volume and integrate it into the splatting rendering. However, unlike GS-Planner, we employ a more streamlined calculation method that leverages uniform voxel volumes to achieve model-consistent pixel-level reconstruction coverage gain within the 3DGS rendering process.
Specifically, given a set of 3D Gaussians and a viewpoint pose, we first sort the Gaussians from front to back by depth. Then, using the ordered 3D Gaussians, we can efficiently render depth images by alpha-compositing the splatted 2D projection of each Gaussian sequentially in pixel space. During rendering, by integrating the unobserved voxels from the maintained voxel map into the Gaussian map, we can determine whether an unobserved region exists between adjacent Gaussians. Considering both the uniform volume of each voxel and the inherent opacity attribute of the Gaussians, we can evaluate the visibility gain of unobserved regions for each viewpoint by utilizing a transmittance weight, which can be expressed as:
V_pix=∑_i=1^nV∏_j=1^m_i(1-α_j).
where n is the number of unobserved volumes along the ray, m_i is the number of the related 3D Gaussians before the i-th unobserved voxel Gaussian, ∏_j=1^m_i(1-α_j) is the transmittance weight, V represents the same unobserved voxel volume.
Leveraging the fast splatting-based rendering, the Reconstruction Coverage evaluation process runs in parallel with the reconstruction process, resulting in highly efficient overall computation. To illustrate the coverage evaluation process more intuitively, we provide Fig. <ref>.
§.§ Reconstruction Quality Gain Evaluation
To enhance reconstruction quality and accuracy, we employ Fisher Information to quantify the quality gains from novel viewpoints, leveraging its independence from ground truth<cit.>.
The primary goal of neural rendering is to minimize the negative log-likelihood (NLL) between rendered and ground truth images, described by:
-logℙ(Ψ|𝐱,𝐰)=(Ψ-f(𝐱,𝐰))^T(Ψ-f(𝐱,𝐰)).
where 𝐱 is the camera pose, Ψ the corresponding image, 𝐰 the model parameters, and f(𝐱,𝐰) the rendering model. Under the regularity conditions<cit.>, Fisher Information for Eq. <ref> is defined as the Hessian of the log-likelihood function concerning 𝐰:
ℐ(𝐰)=-𝔼_ℙ(Ψ|𝐱,𝐰)[∂^2logℙ(Ψ|𝐱,𝐰)/∂𝐰^2|𝐰]=𝐇^''[Ψ|𝐱,𝐰],
where 𝐇^''[Ψ|𝐱,𝐰] is the Hessian of Eq. <ref>. In the evaluation process, we can obtain the initial estimation of parameters 𝐰^* by using {Ψ_i^acq} as the training set D^train. Our quality evaluation purpose is to identify the viewpoints that can maximize the Information Gain<cit.> among the viewpoints 𝐱_i^acq ∈ D^candidate in comparison to D^train, where D^candidate represents the collection of candidate viewpoints:
ℐ[𝐰^*;{Ψ_i^acq}|{𝐱_i^acq},D^train]
=H[𝐰^*|D^train]-H[𝐰^*|{Ψ_i^acq},{𝐱_i^acq},D^train],
where H[·] is the entropy<cit.>.
Considering the log-likelihood form in Eq. <ref>, specifically the rendering loss, the entropy difference in the R.H.S. of Eq. <ref> only depends on H[𝐰^*|{Ψ_i^acq},{𝐱_i^acq},D^train], then the Hessian can be approximated using just the Jacobian matrix of f(𝐱,𝐰)<cit.>:
𝐇”[Ψ|𝐱,𝐰^*]=∇_𝐰f(𝐱;𝐰^*)^T∇_𝐰f(𝐱;𝐰^*).
As expected, the trace of Eq. <ref> can be computed without ground truths {Ψ_i^acq}, as Fisher Information is independent of observations. Furthermore, with the Laplace approximation<cit.>, Eq. <ref> can be approximated by considering only diagonal elements and adding a log-prior regularizer λ I:
𝐇”[Ψ|𝐱,𝐰^*]≈diag(∇_𝐰f(𝐱,𝐰^*)^T∇_𝐰f(𝐱,𝐰^*))+λ I.
Like coverage reconstruction, we integrate quality evaluation into splatting-based rendering for computational efficiency.
§.§ Adaptive Hierarchical Planning
To avoid local optima in the exploration path, inspired by TARE<cit.>, we propose an adaptive hierarchical planning framework, which combines global planning with adaptive local planning to improve the efficiency of scene reconstruction. The entire scene is divided into two regions: the local space 𝒞 for local planning and the space outside 𝒞 which is partitioned into evenly cuboid subspaces for global planning.
§.§.§ Global planning
Each cuboid subspace is classified into three states based on the voxel map mentioned in Sec. <ref>: "reconstructed" (only observed voxels), "reconstructing" (both observed and unobserved voxels), and "unreconstructed" (only unobserved voxels). In global planning, only "reconstructing" subspaces are taken into account. The goal is to find a global path Γ_global that traverses all "reconstructing" subspaces, connecting their centers and the robot's current location. To achieve this, similar to <cit.>, we construct a sparse random roadmap in the traversable space expanded from the past trajectory. We then apply A* search on the roadmap to find the shortest paths among the subspaces and the current pose, followed by solving a Traveling Salesman Problem (TSP)<cit.> to obtain Γ_global.
§.§.§ Adaptive local planning
Due to the trade-off between efficiency and efficacy in reconstruction, we design adaptive local planning to adjust the weights of these two aspects dynamically. Similar to the global planning approach, we use the A* algorithm combined with a TSP solver to perform local path planning after selecting the best views. The complete best-view selection algorithm is listed in Alg. <ref>.
Specifically, we first calculate the intersection points between the global path and the local horizon and uniformly sample viewpoints within the local region (Lines 1-2). Then, we combine these intersections and sampled points and assess a comprehensive 360-degree information gain for each (Line 3-4). This information gain comprises two components: coverage gain and quality gain, which are weighted relative to the proportion of observed areas within the local region:
G=G(C)+λ_o G_(Q)
where G is the final information gain, G(C) is the coverage gain, G_(Q) is the quality gain, λ_o is the proportion of observed voxels within the local region. Subsequently, leveraging the 360-degree information gain, we select viewpoints that exceed a threshold of information gain and use a sliding-window technique to identify the optimal yaw angles(Lines 5-12). Finally, we obtain the exploration path by connecting the global and local paths. The reconstruction is complete when all cuboid subspaces are "reconstructed" and no more viewpoints are selected. | §.§ Implementation details
We run our active reconstruction system on a desktop PC with a 2.9 GHz Intel i7-10700 CPU and an NVIDIA RTX 3090 GPU, using the Autonomous Exploration Development Environment<cit.> for simulation. The simulated vehicle is equipped with an RGB-D sensor and a VLP-16 LiDAR, providing real-time RGB-D images at 1200×680 resolution with a 5-meter range, and uses LOAM<cit.> for localization. The maximum velocity is limited to 1.0 m/s, and the depth data includes uniform noise of 2 cm.
We validate our method through simulations in three complex Matterport3D (MP3D)<cit.> scenes: 17DRP, 2t7WU, and Gdvg with the local planner range of 6 m × 6 m and the resolution of voxel map integrated into Gaussian map to 0.1 m. Viewpoints are sampled with a minimum distance of 1.5 m to avoid excessive overlap.
Similar to <cit.> and <cit.>, we evaluate our method in terms of effectiveness and efficiency. We adopt scene quality metrics from NARUTO<cit.>: Accuracy (cm), Completion (cm), and Completion Ratio (the percentage of points in the reconstructed mesh with Completion under 5 cm). We extract geometric centroids from the Gaussian spheres to simulate mesh vertices due to the absence of a standard method for converting 3DGS into a mesh. For these metrics, about 300k points are sampled from the surfaces. For efficiency, similar to <cit.>, we evaluate the per-step planning time T_P (seconds) and the path length P.L. (meters). For each planning cycle during the reconstruction, T_P is divided into the viewpoint sampling and evaluation time T_VE, the local path planning time T_LP, and the global planning time T_GP, i.e., T_P=T_VE+T_LP+T_GP, with average T_GP times of approximately 0.017 s (17DRP), 0.018 s (2t7WU), and 0.015 s (Gdvg). In TARE, the time taken to evaluate viewpoints corresponds to the time required to update the information about the areas they cover.
§.§ Efficacy of the Method
Following <cit.>, we evaluate our method's efficacy based on its validity and efficiency. We create variants of our method with 3D Gaussian representation: V1 (TARE <cit.>), V2 (Coverage evaluation only), V3 (Quality evaluation only), and V4 (On both without an adaptive strategy). Our method proves highly effective and more efficient than other approaches.
1) Quality evaluation in real-time reconstruction: Fig. <ref> shows that quality evaluation closely matches actual losses, even without ground truth. Highlighted areas on the loss map indicate regions with lower reconstruction quality, aligning with our quality gain evaluation.
2) Novel view evaluation criterion: We use V1, which applies hierarchical planning, as a baseline and compare it against V2 (coverage only), V3 (quality only), and V4 (both) to verify the efficacy of our evaluation criterion. The metrics in Table <ref> show that evaluating coverage and quality improves reconstruction. However, V2 results in low-quality reconstruction as it overlooks complex details, while V3 yields poor completeness by only refining already-covered areas and neglecting unobserved regions.
3) Adaptive hierarchical planning: To validate the adaptive hierarchical planning, we establish V4 as our baseline. Combining these two tasks noticeably hampers the speed of scene exploration and may result in local optima, especially when dealing with intricate reconstruction details. The introduction of adaptive hierarchical planning (Ours) ensures efficient exploration while maintaining reconstruction quality, preventing the process from getting stuck in local optima.
§.§ Comparison with existing reconstruction methods
We benchmark two recent works: NARUTO <cit.>, based on view information gain fields, and GS-Planner <cit.>, using 3D Gaussian reconstruction. The metrics in Table <ref> show that our framework outperforms both in planning efficiency and reconstruction quality. NARUTO neglects uncovered areas, reducing exploration efficiency, while GS-Planner often gets stuck in local optima during exploration. Fig. <ref> and the metrics in Table <ref> highlight our method's superior reconstruction. We refer readers to the supplementary video for more visual results and the reconstruction process. We implemented the GS-Planner algorithm on a mobile vehicle. Fig. <ref> compares the trajectories of our method and GS-Planner in scene 2t7WU, showing that GS-Planner's focus on smaller areas reduces overall efficiency. Our framework achieves more efficient scene reconstruction.
§.§ Robot experiments in real scene
We implemented our proposed framework on an UGV equipped with Realsense Depth Camera D435i and Ouster Lidar to perform the real scene reconstruction. FAST-LIO<cit.> provides the localization. Since we use an Ackermann-steering vehicle, we replace the A* algorithm with Kino-A* to ensure the path meets kinematic constraints. The detailed process will be shown in the supplementary video. | null | null |
http://arxiv.org/abs/2409.17718v1 | 20240926104249 | Optical control of spin-splitting in an altermagnet | [
"Sangeeta Rajpurohit",
"Revsen Karaalp",
"Yuan Ping",
"Liang Z. Tan",
"Tadashi Ogitsu",
"Peter E. Blöchl"
] | cond-mat.str-el | [
"cond-mat.str-el"
] |
[email protected]
Molecular Foundry, Lawrence Berkeley National Laboratory, USA
Advanced Light Source, Lawrence Berkeley National Laboratory, USA
Department of Materials Science and Engineering, University of Wisconsin-Madison, Madison, Wisconsin 53706, United States
Department of Physics, University of Wisconsin-Madison, Madison, Wisconsin 53706, United States
Department of Chemistry, University of Wisconsin-Madison, Madison, Wisconsin 53706, United States
Molecular Foundry, Lawrence Berkeley National Laboratory, USA
Lawrence Livermore National Laboratory, Livermore, USA
Institute for Theoretical Physics, Clausthal University of Technology, Germany
Institute for Theoretical Physics, Georg-August-Universität Göttingen, Germany
§ ABSTRACT
Manipulating and controlling the band structure and the spin-splitting
in the newly discovered class of magnetic materials known as 'altermagnets'
is highly desirable for their application in spintronics. Based on real-time simulations for
an interacting multiband tight-binding model, we propose optical excitations as an
effective way to selectively control the spin-splitting of an altermagnet.
The consistent treatment of electronic interactions and electron-phonon
coupling in the model allows for a systematic study of the effect of
these interactions on the spin-splitting of the altermagnet
in the ground state as well as in the excited state. Our simulations reveal that optical excitations
modify the band structure and thus lead to significant changes in the spin-splitting within 50 fs.
The relative spin-splitting in the conduction band grows up to four times in the
optically excited altermagnet. We disentangle the roles of Coulomb U and J
in the enhancement of the spin-splitting in the photoexcited state. Our study elucidates the potential
for exploiting optical control of spin-splitting gaps to obtain
desirable properties in altermagnets on the fastest possible timescales.
Optical control of spin-splitting in an altermagnet
Peter E. Blöchl
September 28, 2024
===================================================
Altermagnetism, which has emerged as a new class of magnetism
besides ferromagnetism and antiferromagnetism,
has drawn a lot of attention in condensed matter physics
<cit.>.
Altermagnets exhibit properties of both ferromagnets (FMs) and
antiferromagnets (AFMs). Similarly to AFMs, they have zero net
magnetization with local magnetic moments alternating in real space.
However, like FMs, they break Kramers's degeneracy in
reciprocal space in the absence of spin-orbit coupling (SOC).
Instead of spatial translation and spin-inversion, the opposite spin
sublattices in an altermagnet are connected by spatial rotation.
Due to the breaking of time-reversal symmetry in
momentum space, there are several unconventional anomalous
magnetic responses predicted in altermagnets, such as the anomalous
Hall effect <cit.> and the
magneto-optical Kerr effect <cit.>, which have
been verified by experiments.
The spin-splitting in altermagnets originates from
anisotropic exchange interactions. Many materials with
the potential to host altermagnetism have been identified
<cit.>. However, the predicted spin-splitting
of most of them is less than 0.50 eV. A few exceptions are
RuO_2, CrSb and MnTe. The maximum spin-splitting of 1.4 eV is
predicted for RuO_2, followed by CrSb and MnTe with
spin-splittings of 1.2 eV and 1.1 eV, respectively
<cit.>.
After the discovery of altermagnetism, most research efforts
have focused on identifying promising materials. Although spin-splitting
plays a pivotal role in most of the predicted applications
of altermagnet, including spintronics, the possible ways to manipulate
and control the spin-splitting gap in these materials remain largely
unexplored. Advances in ultrafast science have resulted in promising
routes to manipulate and control the properties of materials
<cit.>. Optical tuning of spin-splitting
gaps can offer exciting opportunities to modify the
altermagnetic properties on ultrafast timescales.
In this study, we demonstrate the realization of altermagnetism
with spin-splitting of d-wave symmetry in the presence of orbital
ordering using a multiband interacting tight-binding (TB) model.
Our study shows that spin-splitting in the altermagnetic ground state strongly depends on
Coulomb U, but remains relatively unaffected by Coulomb exchange
J. Coulomb J only affects the local magnetic moment. The dependence of
spin-splitting gaps on Coulomb parameters suggests that these can be manipulated
in the altermagnetic state on ultrafast timescales via
intense photoexcitation. Our real-time simulations based on
the TB-model reveal significant changes in the band structure
and spin-splitting of the altermagnet induced following the
optical excitation. Notably, the simulations predict substantial
alterations in spin-splitting in the conduction band,
which develop fully within 75 fs. Furthermore, increasing
the light-pulse intensity further amplifies the spin-splitting.
Our findings demonstrate that optical excitations
offer an effective approach to modify spin-splitting
in altermagnets with correlated electrons.
Firstly, we construct a 2D interacting TB-model that describes the generic features of
altermagnetism driven by orbital ordering in strongly correlated oxides.
We start with a square planar lattice of corner-sharing oxygen octahedra.
Each such oxygen octahedron hosts a transition-metal (TM) ion in its center.
For each TM site R we consider the d-orbitals of e_g symmetry, d_x^2-y^2
and d_3z^2-r^2 and two octahedral distortion modes.
The potential energy E_pot[{ψ_σ,α,R,n},{Q_i,R}]
of the proposed interacting TB-model is expressed in terms of one-electron
wave functions |ψ_n⟩=∑_σ,α,R |χ_σ,α,R⟩ψ_σ,α,R,n
and octahedral distortion modes Q_i,R. The basis set {|χ_σ,α,R⟩}
consists of local spin orbitals with spin σ∈{↑,↓} and orbital
character α∈{d_x^2-y^2, d_3z^2-r^2}.
The potential energy
E_pot = E_e +E_ph +E_e-ph
consists of the energy E_e of the electronic subsystem, the energy E_ph of the
phonon subsystem, and the electron-phonon (el-ph) interaction energy E_e-ph.
Electronic subsystem: The electronic energy E_e=E_hop+E_coul consists
of the kinetic energy E_hop and the Coulomb interaction E_coul.
The kinetic energy is expressed as
E_hop = ∑_R,R',σ,n
f_n ∑_α,α'ψ_σ,α,R,nT_α,α',R,R'ψ^*_σ,α',R,n
where the hopping-matrix elements T_α,α',R,R' contribute only onsite and
nearest-neighbor terms between the TM-sites. The f_n are the occupations of the
one-particle wave functions |ψ_n⟩. The hopping matrix elements along
x and y directions are defined as
T^x_R,R' =
-1/4t_hop([ 3 -√(3); -√(3) 1 ])
T^y_R,R' =
-1/4t_hop([ 3 +√(3); +√(3) 1 ])
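For illustration, a minimal Python sketch of the kinetic term alone is given below: it builds the two hopping matrices in the (d_x^2-y^2, d_3z^2-r^2) basis and a simple Bloch matrix for a square lattice with unit lattice constant; the Coulomb, phonon, and two-sublattice parts of the model are deliberately omitted here:

```python
import numpy as np

t_hop = 0.585                                  # eV, the reference value quoted below
s3 = np.sqrt(3.0)
Tx = -0.25 * t_hop * np.array([[3.0, -s3], [-s3, 1.0]])   # hopping along x
Ty = -0.25 * t_hop * np.array([[3.0,  s3], [ s3, 1.0]])   # hopping along y

def h_kin(kx, ky):
    """Bloch matrix of the nearest-neighbour kinetic term on a square lattice (a = 1)."""
    return 2.0 * np.cos(kx) * Tx + 2.0 * np.cos(ky) * Ty

print(np.linalg.eigvalsh(h_kin(np.pi, 0.0)))   # the two kinetic e_g bands at X = (pi, 0)
```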
The Coulomb interaction E_coul
E_coul = 1/2(U-3J_xc)∑_R(∑_σ,αρ_σ,α,σ,α,R)^2
- 1/2(U-3J_xc)∑_R
∑_σ,α,σ',β
|ρ_σ,α,σ',β,R|^2
+ 1/2J_xc∑_R∑_σ,σ'(-1)^σ-σ'∑_k∈{x,z}
× [
(∑_α,βρ_σ,α,σ',β,Rσ^(k)_βα)
(∑_α,βρ_-σ,α,-σ',β,Rσ^(k)_βα)
+(∑_αρ_σ,α,σ',α,R)
(∑_αρ_-σ,α,-σ',α,R)],
is conveniently expressed in terms of the on-site terms of the one-particle-reduced density matrix
ρ̂ with the matrix elements defined as
ρ_σ,α,R,σ',α',R=∑_n f_nψ_σ,α,R,nψ^*_σ',α',R,n.
Phononic subsystem:
The phononic subsystem consists of two octahedral modes of oxygen around
every TM-site R: a Jahn-Teller (JT) mode Q_2, R and
a breathing mode Q_1,R <cit.>. These phonon modes are defined
by the displacement vectors of oxygen ions from their equilibrium positions.
The JT mode is an asymmetric expansion
of an octahedron in the plane, i.e. expansion along x and contraction along the y-direction.
The breathing mode is the isotropic expansion of the oxygen octahedron.
The restoring energy of this phononic
sub-system E_ph is expressed as
E_ph = ∑_R( 1/2k_JTQ^2_2,R
+ 1/2k_brQ^2_1,R)
where K_JT and K_br are the restoring force constants of
the JT and breathing mode. Due to the shared oxygen ions, the octahedral
distortions are highly cooperative.
Electron-phonon coupling:
We consider strong coupling between the eg-electron at TM-sites to the local
modes Q_1,R and Q_2,R <cit.> (see SI).
In 3d TM oxides like manganites and nickelates, this type of el-ph interaction is
another mechanism besides the Coulomb interaction, which causes long-range orbital
ordering <cit.>. The el-ph coupling E_el-ph in the
model is defined as
E_e-ph =
∑_R,σ∑_α,βρ_σ,α,σ,β,R
M^Q_β,α(Q_1,R,Q_2,R).
Here g_JT and g_br are the el-ph coupling constants and 𝐌^Q(Q_1,R,Q_2,R) is defined as
𝐌^Q(Q_1,R,Q_2,R)=
([ -g_brQ_1,R g_JTQ_2,R; g_JTQ_2,R -g_brQ_1,R ])
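As a small illustrative sketch, the local matrix 𝐌^Q and the splitting of the two local e_g levels it produces can be evaluated as follows (the distortion amplitudes are arbitrary example values; the coupling constants are the reference values quoted below):

```python
import numpy as np

g_br, g_jt = 2.988, 2.113                     # eV/Angstrom, reference values quoted below

def m_q(q1, q2):
    """Local electron-phonon matrix M^Q for breathing mode q1 and JT mode q2 (in Angstrom)."""
    return np.array([[-g_br * q1,  g_jt * q2],
                     [ g_jt * q2, -g_br * q1]])

print(np.linalg.eigvalsh(m_q(0.05, 0.10)))    # local e_g levels split by 2 * g_jt * q2
```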
The TB-model can describe the strong inequivalent local crystal environment
caused by local el-ph coupling and electronic interactions for two sublattices with
opposite spins, which reduces the symmetry as needed for alternating spin-splitting
of the energy bands in reciprocal space.
We take model parameters from our previous study
as a reference <cit.>.
The reference model parameters are g_br = 2.988 eV/Å, K_br = 10.346
eV/Å^2, g_JT = 2.113 eV/Å, K_JT = 5.173
eV/Å^2, U/t=4.29, U/J=2.63 and t_hop = t = 0.585 eV. In the present study,
we are interested in the effect of electronic interactions and correlations on the
altermagnetic properties; we therefore vary U, J and t_hop while keeping the other
parameters fixed.
Altermagnetic ground-state:
We investigate the region of the phase diagram
with the altermagnetic ground-state, shown in
Figure <ref>-a. We consider one electron per TM-site.
Figure <ref>-b displays
the changes in the average local mode ⟨ Q_1,R⟩
and the magnetic moment ⟨|m_R|⟩ at the TM-sites
in the altermagnetic ground-state as a function of U/t and
U/J for U=2.50 eV. For U/J<2.5 and U/t<2.0,
the system exhibits a non-magnetic metallic phase characterized by
TM-sites with zero magnetic and orbital moment. As U/J and U/t
increase, the system undergoes
metallic to an insulating altermagnetic phase transition. In the
altermagnetic phase, the TM-sites
have finite magnetic moments and eg-orbital polarization.
These local magnetic moments are arranged in
antiferromagnetic (AFM) order. The local eg-orbital polarization
forms long range staggered orbital ordering as shown in Figure
<ref>-a. This altermagnetic state remains stable for
2.5<U/J<6.0 and 2.0<U/t<6.0.
Figure <ref>-c shows the
density of states of the altermagnetic
ground-state. In the ground state, all TM-sites
have one eg-electron and are JT active. The JT-effect
lifts the degeneracy of the local e_g-orbitals.
The lower filled e_g-state |Θ_l,R⟩ located
between -1.5 and 0 eV is described by the linear combination
|Θ_l,R⟩ = -sin(γ)|d_x^2-y^2⟩±cos(γ)|d_3z^2-r^2⟩
of the e_g-orbitals d_x^2-y^2 and d_3z^2-r^2 at site
R with γ=60^∘.
The corresponding unoccupied state |Θ_u,R⟩ is
|Θ_u,R⟩ = cos(γ)|d_x^2-y^2⟩±sin(γ)|d_3z^2-r^2⟩.
The upper sign in front of the |d_3z^2-r^2⟩-state
describes an orbital polarization along x and the lower sign
describes an orbital polarization along the y direction.
Figure <ref>-e shows the spin-polarized band structure of
the calculated altermagnet state. Clearly, the spin degeneracy of the
bands is lifted in momentum space, with the largest spin-splitting
gap at high-symmetry points X=(π, 0) and Y=(0, π).
The spin-splitting is present for both the valence band and the conduction band.
The magnitude of the spin-splitting for the valence band,
indicated by Δ^s_A in Figure <ref>-e (black) is
approximately 0.57 eV. The spin-splitting of 1.10 eV
in the lower conduction band, indicated by Δ^s_B (blue), is significantly higher
than the corresponding values of Δ^s_C=0.07 eV (green) and Δ^s_D=0.45 eV (red)
in the upper conduction band. The spin-polarized bands
with opposite spins are related by π/2
rotations in the momentum space.
Figure <ref>
shows the magnitude of the spin-splitting gap as a function of U/t and U/J.
The spin-splitting gap Δ^s_A of the valence band and Δ^s_B of
the lower conduction band decreases with U/t. The spin-splitting in
the upper conduction band Δ^s_D
increases with U/t, while Δ^s_C decreases with U/t.
The spin-splitting gaps remain largely unaffected by U/J, which
only weakly affects the magnetic moments.
The above study of the phase diagram suggests that the properties of the
altermagnetic state in correlated electrons systems are sensitive
to the Coulomb U and J parameters. The strength of electronic
interactions and correlations can be altered through material composition
or external means, such as applying an electromagnetic field or varying
the temperature. In the following sections, we use real-time simulations
based on the proposed TB-model to demonstrate optical excitations as
an effective way to modify the properties of altermagnet on ultrafast
timescales.
Photo-excitation study with rt-TDDFT:
To study the evolution of spin-splitting under photo-excitation
in the altermagnetic state described above, we employ a simulation
framework similar to real-time time-dependent density-functional
theory (rt-TDDFT) formalism <cit.> based on the model defined in Eq.<ref>.
The one-particle wavefunctions of eg-electrons are propagated according to
the time-dependent Schrödinger equation. For the sake of simplicity, the atoms
are kept frozen while studying the photoexcitation. The photoexcitation is
described by an explicit time-dependent vector potential A⃗(t)=e⃗_s ωIm(A_oe^-iω t)g(t),
which is coupled to the electrons using the Peierls substitution method <cit.>.
Here, A_o is the amplitude of the vector potential and
ω is angular frequency.
The envelope g(t) of the light pulse is a Gaussian with a width of 30 fs.
The polarization e⃗_s of light is along the x̂+ŷ or the
x̂-ŷ direction. The unit cell consists of
N=4 TM-sites and 8 oxygen atoms. The simulations are performed
with periodic boundary conditions on a 24x24 k-point grid.
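For illustration, a minimal Python sketch of this driving field is given below; the pulse centre, the amplitude A_o, the interpretation of the 30 fs width as the Gaussian standard deviation, and the normalization of the polarization vector are illustrative assumptions:

```python
import numpy as np

HBAR = 0.6582  # eV * fs

def vector_potential(t, photon_energy_ev=2.39, A_o=0.1, t0=60.0, sigma=30.0):
    """A(t) = e_s * omega * Im(A_o * exp(-i*omega*t)) * g(t), Gaussian envelope g."""
    omega = photon_energy_ev / HBAR                  # angular frequency in 1/fs
    g = np.exp(-0.5 * ((t - t0) / sigma) ** 2)       # 30 fs wide Gaussian envelope
    amplitude = omega * np.imag(A_o * np.exp(-1j * omega * t)) * g
    e_s = np.array([1.0, 1.0]) / np.sqrt(2.0)        # polarisation along x + y
    return amplitude * e_s

print(vector_potential(60.0))
```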
Absorption Spectra:
The optical absorption of the altermagnetic state as a function of photon
energy ħω is shown in the inset of Figure <ref>-a.
It is expressed in terms of the photon-absorption density D_p, i.e. the total
number of photons absorbed per site. The photon-absorption density is computed
from energy difference before and after the light pulse as
D_p=δ E_pot/ħω N with E_pot from Eq. <ref>.
We attribute the broad absorption peak around ħω_p=2.39 eV to
dipole-allowed inter-site electronic transitions from majority to minority spin
orbitals located on nearest-neighbor TM-sites belonging to opposite spin sublattices.
Spin-splitting in the photo-excited state: Let us examine the changes of
the spin-splitting gaps upon optical excitation of the altermagnet.
We select the photon energy ħω_p=2.39 eV with the maximum optical absorption.
For the discussion ahead, we discriminate between two sets of time-dependent wave functions.
One set of one-particle wave functions with symbol |ψ_n(k⃗,t)⟩ defines the
time-dependent Slater determinant. These wave functions are obtained via the time-dependent
Schrödinger equation with the Hamiltonian ĥ(k⃗,t). The second set of one-particle
wave functions with symbol |ϕ_n(k⃗,t)⟩ is obtained with the same instantaneous
Hamiltonian ĥ(k⃗,t), but using the time-independent Schrödinger equation.
The eigenvalues of this Hamiltonian are used to calculate the band structure and spin-splitting gaps Δ^s.
The instantaneous occupations f_m(k⃗,t) of the eigenstates of the Hamiltonian are obtained
by projecting the time-dependent wave functions |ψ_n(k⃗,t)⟩ onto the
eigenstates |ϕ_m(k⃗,t)⟩ of the Hamiltonian, i.e. f_m(k⃗,t)=∑_q=1^N_e|⟨ψ_q(k⃗,t)|ϕ_m(k⃗,t)⟩|^2.
The ratio of photo-excited electrons is
N^exci=1/2∑_k⃗|f_m(k⃗,t_f)-f_m(k⃗,t_i)|/∑_k⃗f_m(k⃗,t_i)
where t_i is a time before and t_f is a time after the light pulse.
The spin-splitting gaps Δ^s(t) are extracted from the eigenvalues
ϵ_n(k⃗,t) of the instantaneous electronic Hamiltonian.
The z-component of the local magnetic moment for site R is
μ_z,R(t)=∑_α,n(
|ψ_↑,α,R,n(t)|^2 -|ψ_↓,α,R,n(t)|^2) .
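A minimal Python sketch of this post-processing is given below: the propagated states are projected onto the instantaneous eigenstates to obtain the occupations f_m, from which the photoexcited fraction N^exci follows; the toy dimensions and random test states are illustrative:

```python
import numpy as np

def occupations(psi, phi):
    """psi: (dim, N_e) propagated states; phi: (dim, dim) instantaneous eigenstates (columns)."""
    overlaps = phi.conj().T @ psi                   # <phi_m | psi_q>
    return np.sum(np.abs(overlaps) ** 2, axis=1)    # f_m = sum_q |<psi_q|phi_m>|^2

def excited_fraction(f_initial, f_final):
    """Occupation arrays per k-point, before (t_i) and after (t_f) the pulse."""
    num = sum(np.abs(ff - fi).sum() for fi, ff in zip(f_initial, f_final))
    den = sum(fi.sum() for fi in f_initial)
    return 0.5 * num / den                          # N^exci

rng = np.random.default_rng(0)
phi = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))[0]
psi = phi[:, :2]                                    # two electrons in the lowest eigenstates
print(occupations(psi, phi))                        # -> approximately [1, 1, 0, 0]
```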
Figures <ref>-a and <ref>-c show the evolution of the local magnetic moments μ_R and of the
relative spin-splittings Δ^s(t)/Δ^s(t=0) for light pulses with three different intensities,
corresponding to N^exci=8.50%, 18.30%, and 20.30% photoexcited electrons.
The optical excitations are charge-transfer transitions between neighboring TM-sites.
Because the two sites participating in the charge transfer are antiferromagnetic,
the excitation reduces their magnetic moments, which is evident in Figure <ref>-a (blue and red).
The spin-splitting gaps respond instantaneously, i.e. during the photoexcitation process:
Two of the spin-splitting gaps, namely Δ^s_B and Δ^s_C respond strongly
to optical excitation. For N^exci=18.30% we find a change of 0.35 eV of Δ^s_B
and 0.41 eV of Δ^s_C. The other spin-splittings are small in comparison with a
change of 0.01 eV in Δ^s_A and 0.08 eV in Δ^s_D.
The spin-splitting gap with the largest response to photoexcitation, namely Δ^s_C,
is between states that were nearly degenerate initially. This spin-splitting gap, Δ^s_C,
is in the upper conduction band, where most of the excited electrons are located.
Initially, the spin-splitting Δ^s_C falls off to zero, and then rises again with
an opposite sign. For the more intense excitations, i.e. with
N^exci = 18.30% and 20.30%, Δ^s_C(t) becomes nearly
four times its equilibrium value within 50 fs, see Figure <ref>-c (green).
Figure <ref>-b shows the relative change in spin-splitting
Δ^s(t_f)/Δ^s(t=0) at the high-symmetry point X at t_f=50 fs
as a function of photon energy ħω. The largest changes
Δ^s(t_f)/Δ^s(t=0) of the spin-splitting gaps occur for photon
energies near the bottom of the broad absorption peak, namely at ħω_p=2.39 eV.
Let us rationalize the findings described above on the level of local orbitals.
For the sake of the argument, we consider here only the diagonal elements of the
one-particle reduced density matrix. The photo excitation with photon energy at
the absorption maximum transfers electrons from the lower, majority spin orbital of
one site to the upper, minority spin orbital of a neighboring site.
This implies that, on each site, the occupation change δ n_l,↓,R of the
filled orbital equals -N^exci, while the occupation δ n_u,↑,R of
the upper, minority-spin orbital changes by +N^exci.
Figure <ref>-a shows the electron distribution in the photo-excited state
at time t=75 fs for N^exci=20.30%.
The excited electrons predominantly occupy the upper conduction band on the
X-M and the Y-M high-symmetry lines in reciprocal space.
The orbital energies are obtained using Janak's theorem<cit.>, as
the derivative of total energy with respect to occupations n_σα (see SI for more information).
Hence, the occupation changes δ n_α,σ,R result in a
response δϵ̅_β,σ',R of the orbital energies of the form
δϵ̅_α,σ,R=∑_β,σ'∂^2 E_coul/∂ n_α,σ,R∂ n_β,σ',Rδn_β,σ',R.
It turns out that the same orbitals, which experience the largest occupation
changes, experience also the largest energy level shifts, namely δϵ̅_l,↓,R=-δϵ̅_u,↑,R=(U-2J)N^exci.
The band structure changes according to the k-dependent orbital weights, i.e. δϵ_n(k⃗)=∑_α,σ,R|⟨ψ_n(k⃗)|Θ_α,σ,R⟩|^2 δϵ̅_α,σ,R.
Let us investigate the changes in the spin-splitting gaps as a function of the
photoexcited electron population N^exci. The most significant changes in
relative spin-splitting with increasing N^exci are observed for Δ^s_C,
see Figure <ref>-b. Remarkably, Δ^s_C increases up to four times
its initial value at high N^exci.
Our study demonstrates that spin-splittings of altermagnets can be modified
and controlled by optical excitations. These changes occur on ultrafast time scales.
Promising candidates to realize this effect are 3d transition-metal compounds.
The temporal change in spin-splittings predicted here can experimentally be
probed by time-resolved spin angle-resolved photoemission spectroscopy (SARPES)
<cit.>.
In conclusion, we demonstrate that optical excitations significantly modify
the spin-splitting gaps in an altermagnet with correlated electrons.
We study this effect by employing real-time simulations based on an interacting
tight-binding model. The modification in spin-splitting is driven by the
photoinduced changes in the band structure. The magnitude of changes
in spin-splittings is strongly sensitive to the intensity of the light-field.
Our findings suggest that light fields can be an effective tool for modifying
the altermagnetic properties of strongly correlated systems on ultrafast timescales.
Acknowledgments:
Theory and simulation were supported by the Computational Materials Sciences
(CMS) Program funded by the US Department of Energy, Office of Science,
Basic Energy Sciences, Materials Sciences and Engineering Division. Data analysis
was provided by the User Program of the Molecular Foundry, supported by the Office
of Science, Office of Basic Energy Sciences, of the U.S.
Department of Energy under Contract No. DE-AC02-05CH11231.
This work was performed under the auspices of the U.S. Department of
Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
S.R and T.O. are supported by the Computational Materials Sciences Program
funded by the US Department of Energy, Office of Science, Basic
Energy Sciences, Materials Sciences and Engineering Division.
This work is funded in parts by the Deutsche Forschungsgemeinschaft
(DFG, German Research Foundation) 217133147/SFB1073, projects B03 and C03.
| null | null | null | null | null | null |
http://arxiv.org/abs/2409.17798v1 | 20240926124736 | Swarm-LIO2: Decentralized, Efficient LiDAR-inertial Odometry for UAV Swarms | [
"Fangcheng Zhu",
"Yunfan Ren",
"Longji Yin",
"Fanze Kong",
"Qingbo Liu",
"Ruize Xue",
"Wenyi Liu",
"Yixi Cai",
"Guozheng Lu",
"Haotian Li",
"Fu Zhang"
] | cs.RO | [
"cs.RO"
] |
Swarm-LIO2: Decentralized, Efficient LiDAR-inertial Odometry for UAV Swarms
September 28, 2024
=============================================================================
§ ABSTRACT
Aerial swarm systems possess immense potential in various aspects, such as cooperative exploration, target tracking, search and rescue. Efficient, accurate self and mutual state estimation are the critical preconditions for completing these swarm tasks, which remain challenging research topics. This paper proposes Swarm-LIO2: a fully decentralized, plug-and-play, computationally efficient, and bandwidth-efficient LiDAR-inertial odometry for aerial swarm systems.
Swarm-LIO2 uses a decentralized, plug-and-play network as the communication infrastructure. Only bandwidth-efficient and low-dimensional information is exchanged, including identity, ego-state, mutual observation measurements, and global extrinsic transformations. To support the plug-and-play of new teammate participants, Swarm-LIO2 detects potential teammate UAVs and initializes the temporal offset and global extrinsic transformation all automatically. To enhance the initialization efficiency, novel reflectivity-based UAV detection, trajectory matching, and factor graph optimization methods are proposed. For state estimation, Swarm-LIO2 fuses LiDAR, IMU, and mutual observation measurements within an efficient ESIKF framework, with careful compensation of temporal delay and modeling of measurements to enhance the accuracy and consistency. Moreover, the proposed ESIKF framework leverages the global extrinsic for ego-state estimation in case of LiDAR degeneration or refines the global extrinsic along with the ego-state estimation otherwise. To enhance the scalability, Swarm-LIO2 introduces a novel marginalization method in the ESIKF, which prevents the growth of computational time with swarm size.
Extensive simulation and real-world experiments demonstrate the broad adaptability to large-scale aerial swarm systems and complicated scenarios, including GPS-denied scenes, degenerated scenes for cameras or LiDARs.
The experimental results showcase the centimeter-level localization accuracy which outperforms other state-of-the-art LiDAR-inertial odometry for a single UAV system.
Furthermore, diverse applications demonstrate the potential of Swarm-LIO2 to serve as reliable infrastructure for various aerial swarm missions. In addition, we open-source all the system designs on GitHub to benefit society: https://github.com/hku-mars/Swarm-LIO2github.com/hku-mars/Swarm-LIO2.
Aerial Swarms, LiDAR Perception, Localization, Sensor Fusion
§ INTRODUCTION
In recent years, multi-robot systems, especially aerial swarm systems, have exhibited great potential in many fields, such as collaborative autonomous exploration<cit.>, target tracking<cit.>, search and rescue<cit.>, etc. Thanks to their great team cooperation capability, swarm systems can complete various missions in complex scenarios, even in degenerated environments for a single robot.
For a single robot system, well-developed state estimation techniques provide accurate ego-state estimation <cit.>, serving as a critical precondition for a wide variety of autonomous tasks such as trajectory planning<cit.> and motion control<cit.>. For robotic swarm systems, state estimation plays an equally significant role<cit.>, where each robot needs to estimate the state of the self UAV (i.e., ego-state estimation) as well as the other teammate UAVs (i.e., mutual state estimation). Accurate and robust estimation of ego and mutual states is crucial for the robot swarms to collaborate on a task.
Over the past few decades, multiple sensors and devices have been adopted to achieve reliable state estimation for robotic swarm systems.
GPS and RTK-GPS are commonly used for self-localization in outdoor environments, as reported in previous studies <cit.>. For GPS-denied environments, motion capture systems <cit.> and anchor-based Ultra-WideBand (UWB) systems <cit.> have been utilized for state estimation in multi-robot systems. These methods <cit.> often rely on the stationary ground station, resulting in a centralized system that is prone to single-point-of-failure. In more recent research, cameras have become a popular choice in multi-robot systems due to their lightweight design, low cost, and rich color information. These camera-based systems are often complemented by an Inertial Measurement Unit (IMU) and an anchor-free UWB to provide more robust state estimation <cit.>.
However, cameras are vulnerable to inadequate illumination and lack direct depth measurements, leading to high computational complexity in computing 3D measurements.
Although the complementary anchor-free UWB can provide distance measurements, it is susceptible to multi-path effects and obstacle occlusion in the environment, which decreases the overall system accuracy.
In recent years, 3-D light detection and ranging (LiDAR) sensors have gained popularity in state estimation due to their ability to provide direct, accurate 3D measurements over a long range and various illumination conditions. While traditional mechanical spinning LiDARs are often expensive and heavy, recent advancements in LiDAR technology have introduced cost-effective and lightweight LiDARs that are suitable for deployment on mobile robots, particularly unmanned aerial vehicles (UAVs). These LiDARs have not only enabled the development of autonomous navigation systems using LiDARs for UAVs <cit.>, but opened up new possibilities for state estimation in swarm systems.
Leveraging the above LiDAR advantages, this paper aims to develop a fully decentralized, plug-and-play, computationally efficient, and bandwidth-friendly state estimation method for aerial swarm systems based on LiDAR-inertial measurements. Fully decentralized means no master agent exists in any module of the whole system from communication hardware to algorithm software, which avoids single-point-of-failure. Plug-and-play means that an agent can automatically join the swarm and easily collaborate with other teammates before or in the middle of a mission. The system must also be computationally efficient and bandwidth-efficient, since the limited payload capacity of UAVs imposes significant constraints on the computational resources and network bandwidth.
We decompose the task of aerial swarm state estimation into two key online modules, initialization and state estimation.
In the initialization module, each UAV needs to detect and identify possible new teammate UAVs, and calibrate temporal offsets and global extrinsic transformations with the found teammates.
To achieve simple but effective teammate detection, reflective tapes are attached to each UAV, making teammate UAVs easily detectable from LiDAR reflectivity measurements. This teammate detection is conducted in real-time at each LiDAR scan measurement, enabling the detection of new teammate UAVs even in the middle of a mission. For each detected new UAV, its identification and global extrinsic transformation are obtained through trajectory matching, while the global extrinsic with the rest of teammate UAVs found on the network are swiftly calibrated through a factor graph optimization. Moreover, by exchanging low-dimensional data via a decentralized Ad-Hoc network, teammate monitoring and temporal calibration can be fulfilled efficiently and in a fully decentralized manner.
In the state estimation module, each UAV in the swarm system performs real-time, robust, and precise ego-state estimation as well as mutual state estimation. Estimating the full state of all teammates in each UAV is computationally demanding. Thus, we propose to estimate on each UAV only the ego-state, meanwhile refining the global extrinsic transformations w.r.t. (with respect to) the teammates. The ego-state and global extrinsic transformations are estimated efficiently within an Error State Iterated Kalman Filter (ESIKF) framework, by tightly fusing LiDAR point-cloud measurements, IMU measurements, and observed teammate locations (, mutual observation measurements), which are enhanced by careful measurement modeling and temporal compensation. In each step of the state estimation, we marginalize the extrinsic states of all teammate UAVs not observed in the LiDAR. This state marginalization along with a degeneration evaluation method prevents the state dimension (hence computational complexity) from growing with the swarm size, effectively enhancing the scalability of our system to larger swarms.
This paper is extended from our previous work<cit.>, which proposed the general framework of swarm LiDAR-inertial odometry. Compared to the previous work <cit.>, this paper proposes five crucial extensions:
* Factor graph optimization for efficient teammate identification and global extrinsic calibration, which largely decreases the complexity and energy consumption of the swarm initialization. Specifically, the number of flights required in the initialization of a swarm with N UAVs is reduced from O(N) to O(1).
* A novel state marginalization strategy and a LiDAR degeneration evaluation method that alleviate the computational burden and enhance the swarm scalability. The marginalization reduces the growth rate of the state estimation complexity from cubic to sub-linear.
* Detailed measurement modeling and carefully designed temporal compensation of the mutual observation measurements, to compensate for the temporal mismatch due to asynchronous sensor measurements among different UAVs.
* Comprehensive simulation and real-world experiments verifying the effectiveness of Swarm-LIO2, e.g., support of large swarm scales (tested with 5 UAVs in the real world as shown in Fig. <ref> and 40 UAVs in simulation), robustness to degenerated scenes, and support for dynamic changes of swarm size with online joining or dropping out of any teammate UAVs.
* An open source implementation of the proposed system, termed as Swarm-LIO2, including source codes of the algorithms and hardware designs of our aerial platforms.
§ RELATED WORKS
In this section, we first review the mainstream frameworks of the state estimation for robotic swarm systems. Then we discuss the existing swarm initialization approaches, which is the core module of swarm state estimation.
§.§ State Estimation for Robotic Swarm
In the past few years, multi-robot systems have flourished and some great collaborative SLAM methods for ground robotic swarm systems have been proposed to achieve robust ego-state estimation and consistent mapping <cit.>.
Chang <cit.> proposed a multi-robot SLAM system in which each robot sends its single-robot odometry result and constructed submap to a centralized base station to perform loop-closure detection and joint pose graph optimization. The wheeled robot platform used in <cit.> is equipped with abundant sensors, including three Velodyne LiDARs and a complete mobile LiDAR scanner system. Similarly, in <cit.>, a centralized computer that receives map information from all robots is needed to implement loop closure correction and global optimization, and the platform for dataset collecting in <cit.> contains five cameras and an Ouster LiDAR. The aforementioned centralized systems are fragile to single-point-of-failure, promoting the development of decentralized methods. In <cit.>, Lajoie proposed a decentralized multi-robot SLAM method in which each robot performs the same computation with only onboard computing resources.
In <cit.>, compact descriptors are exchanged, which could partly decrease the communication load compared to exchanging raw map information, but still, the size of the exchanged environmental information grows rapidly as the travel distance and swarm scale increase. All the aforementioned methods can be categorized as environment feature-based methods, whose obvious weakness is that they are limited to feature-rich environments and require relatively high communication bandwidth.
Compared to ground robots, the restricted payload capacity and endurance of aerial vehicles greatly limit the weight of sensors and the quality of computation units, which further necessitates a swarm with lightweight sensor configurations, efficient algorithms, and low-bandwidth communication. To satisfy these requirements, some aerial swarm systems <cit.> directly utilize independent VIO to achieve ego-state estimation which gives up the inter-UAV data fusion. Although the independent VIO is easy to adopt, the state estimation results suffer from inevitable drift, especially after long-distance running. To mitigate the VIO drift during the online running of swarm systems, in <cit.>, the inter-UAV place recognition results are exchanged among the UAVs. Likewise in <cit.>, the UWB module is incorporated to provide distance constraints as a complementary sensor of the camera. Apart from map and descriptor exchange mentioned above, mutual observation is another important unique feature of swarm systems, which is a lightweight type of information, avoiding large communication and computation load.
In <cit.>, learning-based methods like YOLOv3-tiny are utilized to detect other robots to provide mutual observation measurements for VIO drift compensation.
These vision-based methods usually struggle in low-visibility environments and may fail to provide accurate 3D observation results due to imprecise distance estimation, which narrows down their applications in practical cases, e.g., in large-scale outdoor environments.
By contrast, LiDAR can provide accurate and long-range depth measurements, bringing many new opportunities for swarm state estimation. In <cit.>, 3D LiDAR-based place recognition (loop closure) is widely utilized to improve state estimation accuracy. However, the large communication bandwidth greatly limits the scalability of the swarm systems.
Wasik <cit.> propose a laser-based multi-robot system, in which laser range finders are used on each robot to estimate the distances and angles to other robots. However, the adopted 2D LiDARs do not apply to the UAVs considered in this paper, which fly in 3D spaces. Pritzl <cit.> utilize LiDAR observation measurements to mitigate VIO drift under a non-linear least squares optimization framework.
Compared to the centralized methods <cit.>, our system is fully decentralized which would suffer no single-point-of-failure issue.
Different from the environment feature-based methods <cit.>, our approach fuses the mutual observation measurements under ESIKF framework, leading to quite low communication bandwidth and efficient computation. Compared to the camera-based <cit.> or 2D LiDAR-based methods <cit.>, our method utilizes 3D LiDAR sensor due to its capability of providing accurate and long-range depth measurements with large field of view.
§.§ Swarm Initialization
The critical parts of the initialization of a robotic swarm system typically contain robot detection, robot identification, and global extrinsic calibration.
Robot detection. Learning-based robot detection methods are widely employed for visual-based swarm systems. For instance, Nguyen <cit.> propose a visual-inertial multi-UAV localization system, in which MAVNet is used to detect other teammate robots. Xu <cit.> propose a visual-inertial-UWB mutual state estimation system utilizing YOLOv3-tiny for teammate robot detection. These learning-based detection approaches usually need preliminary network training, resulting in extra time and computation consumption. In <cit.>, each robot is equipped with a circle marker for easy detection. While for the LiDAR-based swarm systems, robot detection is more difficult since LiDAR cannot provide texture and color information. In <cit.>, a local occupancy map is constructed and ray-casting is utilized to detect the dynamic obstacles, which is memory-intensive and time-consuming. In a similar way to <cit.>, our previous work <cit.> attaches reflective tapes to each robot (UAV) and leverages the reflectivity measured by LiDAR sensors to detect teammate robots. This detection method is simple but effective, avoiding cumbersome network training.
Identification and global extrinsic calibration. In a fully decentralized system, under general circumstances, each UAV estimates the states in its own global frame. Thus, after detecting objects that might be teammate robots, each robot needs to identify other teammates and calibrate the corresponding global extrinsic transformations.
In the existing literature, the global extrinsic is usually calibrated offline such as by measuring the distance between the UAVs <cit.>, resulting in quite coarse global extrinsic values. As the flight distance increases, even small deviations in the global extrinsic parameters can lead to significant drift. In <cit.>, the global extrinsic parameters are estimated online but high-quality initial estimations are necessary. Tian <cit.> calibrates the global extrinsic transformations by constructing a truncated least square problem that employs the inter-robot loop closure results and the odometric estimates. This method requires environmental information exchange, leading to a large communication burden.
Chang <cit.> place three reflective tapes (fiducial markers) with known 3D coordinates in the take-off environment. Then the LiDAR sensor on each robot would segment the three markers and thereby compute the global extrinsic by minimizing the distance between all triplets of marker positions. Compared to <cit.>, we attach reflective tapes to the UAVs instead of the environment, which supports initialization outside the take-off area. Different from the one-off calibration <cit.> or the online estimation relying on good initial values <cit.>, we utilize the trajectory matching proposed in our previous work <cit.> and factor graph optimization to calibrate the global extrinsic parameters, and constantly refine them in the subsequent swarm state estimation. The whole process is fully autonomous and no initial value is required.
§ SYSTEM OVERVIEW
In this section, we outline the structure of Swarm-LIO2 and give a brief overview of its modules. Aiming to assist understanding of the proposed system, we define some important notations in Table <ref>.
We use ∘ to compactly represent the rigid transformation of a point 𝐩∈ℝ^3× 1 with pose 𝐓 = (𝐑, 𝐭) ∈ SE(3) as 𝐓∘𝐩≜𝐑𝐩 + 𝐭.
Consider an aerial swarm system consisting of N UAVs, each equipped with a LiDAR and an inertial measurement unit (IMU). To achieve decentralized swarm state estimation, each UAV is required to detect, automatically, all the teammate UAVs in the system and estimate, in real-time, the state of itself (i.e., ego-state estimation) as well as of all other teammates (i.e., mutual state estimation). Performing ego-state and mutual state estimation altogether in one individual UAV is challenging due to the limited onboard computation resources and the high system dimension. Therefore, Swarm-LIO2 estimates the ego-state on each UAV and broadcasts the ego-state among the teammates. Since the ego-state estimation is performed in each UAV's own global reference frame (i.e., the first IMU frame), the extrinsic transformations among all UAV pairs' global frames also need to be calibrated. With the calibrated global extrinsic, the received teammates' ego-state can be projected to the self global reference frame, hence achieving the mutual state estimation (see Fig. <ref>).
Summarizing the above analysis, Swarm-LIO2 has two key modules. The first module is online initialization, in which each UAV detects all the teammates and performs temporal and spatial calibration with the detected teammate. Let i denote the self-UAV and j a detected teammate candidate. Since the computer clocks of different UAVs are usually asynchronous, the temporal offset ^iτ_j between the clocks of any two UAVs i,j should be calibrated, which is essential for the inter-UAV data fusion. Then, the self UAV needs to validate the teammate identity (UAV ID, which is a unique number assigned to each UAV once manufactured) and, if successful, calibrate the global extrinsic transformation ^G_i𝐓_G_j = (^G_i𝐑_G_j,^G_i𝐩_G_j) ∈ SE(3) w.r.t. it.
The second module is state estimation, aiming to estimate in real-time the ego-state of each UAV (e.g., pose, velocity), by fusing the self-LiDAR and IMU data as well as mutual observation measurements. When the estimated ego-state is projected to the teammates' global frames using the global extrinsic transformation, a small extrinsic error occurring in the initialization stage could lead to a large mutual state estimation error if the UAV's travel distance is long. To mitigate this error, Swarm-LIO2 refines the global extrinsic transformations online along with the ego-state.
The two modules of Swarm-LIO2 run in parallel on each UAV of the swarm system, as detailed in Fig. <ref>. For the initialization module (Section <ref>), it further contains three sub-modules that run concurrently. The first sub-module monitors new teammate UAVs on the network and calibrates the temporal offsets ^iτ_j w.r.t. it (Section <ref>). The second sub-module detects new teammates observed in LiDAR point-cloud, and calibrates the global extrinsic transformation ^G_i𝐓_G_j w.r.t. it (Section <ref>). The calibrated global extrinsic are then sent to the third sub-module of the self-UAV and teammate UAVs on the network. Then, the third sub-module receives the global extrinsic from the second sub-module of the self-UAV or teammate UAVs on the network, based on which the global extrinsic w.r.t. teammates not observed in LiDAR point-cloud are calibrated via a factor graph optimization (Section <ref>). Once the global extrinsic ^G_i𝐓_G_j w.r.t. UAV j is calibrated, UAV j is considered as a valid teammate whose state will be added to and estimated in the state estimation module. Meanwhile, the extrinsic ^G_i𝐓_G_j is sent to the state estimation module for further refinement.
For the state estimation module (Section <ref>), it estimates the swarm state, which consists of the ego-state and the global extrinsic transformations w.r.t. all teammates. To reduce the state dimension, Swarm-LIO2 performs a marginalization step (Section <ref>), followed by a degeneration evaluation to assess the degeneration of the current LiDAR measurements indicated by ℐ_i and to perform further marginalization (Section <ref>). The marginalized state is then estimated in an error-state iterative Kalman filter (ESIKF) framework <cit.> by performing state prediction (Section <ref>) and iterative update (Section <ref>) after measurements modeling (Section <ref>). The measurements contain LiDAR point-cloud and mutual observation measurements (i.e., the teammate location observed by the self UAV, denoted by ^b_i𝐩̆_b_j, and the self-location observed by the teammate, denoted by ^b_j𝐩̆_b_i, which is received from the teammate). Mutual observations suffer from temporal mismatch, which is compensated in Section <ref>.
The state estimation results are finally transmitted to other teammate UAVs for their next round of state estimation (Section <ref>).
In Swarm-LIO2, all information is exchanged via a fully decentralized Ad-Hoc network infrastructure under the IEEE 802.11 architecture (i.e., IBSS), which is broadly supported by commonly used WiFi modules and can be configured by programming the WiFi driver <cit.>.
§ SWARM INITIALIZATION
In this section, we introduce the three sub-modules of the swarm initialization process running on each UAV of the swarm system. The first sub-module (Section <ref>) monitors new teammate UAVs on the network and calibrates the temporal offsets w.r.t. them. The second sub-module (Section <ref>) monitors new teammate UAVs from LiDAR measurements, validates their identities, and calibrates the global extrinsic transformations w.r.t. them via trajectory matching. The third sub-module (Section <ref>) monitors the extrinsic updates from the second module or the network, and calibrates the global extrinsic transformations w.r.t. other teammates that are not directly observed by its LiDAR, via a novel factor graph optimization.
§.§ New Teammate Monitoring on Network and Temporal Calibration
The first sub-module detects teammate UAVs on the network, maintains the connection with the found teammates, and performs temporal offset calibration w.r.t. them. To achieve this, each UAV would continuously broadcast its identity information in the Ad-Hoc network, including its UAV ID and Internet Protocol (IP) address, at a fixed frequency of 1Hz. This identity information is commonly called the “heartbeat” packet, used for teammate monitoring and communication status maintenance. The “heartbeat" packet could also be encrypted if necessary to prevent UAV information leakage or cyber-attacks. Upon receiving identity information from a teammate, the self-UAV adds the teammate to its teammate list and maintains the connection status continuously. For each teammate in the teammate list, the self-UAV assigns one of two states: connected or disconnected. After a teammate is added to the teammate list, its corresponding status is initialized as “connected".
If the UAV fails to receive identity information from a teammate for two seconds, the teammate's state is set to “disconnected". Upon receiving the identity information from the disconnected teammate again, the state is switched back to “connected".
After discovering a new teammate on the network, the crucial temporal offset w.r.t. the teammate UAV is calibrated. For each teammate UAV, a decentralized temporal calibration method based on the peer-delay mechanism in Precision Time Protocol (PTP)<cit.> is utilized to acquire the temporal offset corresponding to it.
The self-UAV would send request messages to each teammate UAV and receive response messages from teammates. By leveraging the timestamps of these messages, the self-UAV can calculate the temporal offset w.r.t. each teammate UAV following the principle of PTP <cit.>. To suppress random errors or fluctuations, this process is repeated 30 times, and the average value of ^iτ_j is adopted. Since the clock drift among different UAVs is negligible within the typical UAV flight time (e.g., less than an hour), estimating the temporal offset one time is sufficient for actual swarm tasks, i.e., for any UAV i, once its temporal offset w.r.t. UAV j, ^iτ_j, is obtained, the corresponding temporal calibration is considered as complete and no request-response messages will be communicated with UAV j. If the clock drift is significant, the temporal offset can be estimated constantly at a fixed frequency, e.g., 1 Hz.
This mechanism is performed for both UAVs in each pair of the swarm system and is robust to single-point-of-failure due to the absence of a designated master clock. Each UAV performs temporal offset calibration with every teammate UAV newly found in the network. The calibrated temporal offset ^iτ_j is stored in a Hash table where the key is the UAV ID and the value is the temporal offset.
When a UAV receives any data from a teammate, the data is stamped with the teammate's clock. To use the data for the self-UAV, the received data will have its timestamp modified according to the temporal offset obtained by the fast and efficient lookup of the Hash table.
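To make the offset computation above concrete, the following is a minimal sketch in Python of the request-response offset estimation and the Hash-table caching; the message timestamps and the teammate ID used here are hypothetical, and the actual Swarm-LIO2 network code may differ.

import statistics

def ptp_offset(exchanges):
    # Each exchange is a tuple (t1, t2, t3, t4):
    #   t1: request sent (self clock), t2: request received (teammate clock),
    #   t3: response sent (teammate clock), t4: response received (self clock).
    # Assuming a symmetric link delay d, t2 = t1 + d + offset and t4 = t3 + d - offset,
    # so ((t2 - t1) + (t3 - t4)) / 2 recovers the clock offset of the teammate w.r.t. the self-UAV.
    offsets = [((t2 - t1) + (t3 - t4)) / 2.0 for (t1, t2, t3, t4) in exchanges]
    return statistics.mean(offsets)

# Repeated 30 times as described above, then cached in a Hash table keyed by UAV ID.
temporal_offsets = {}  # {teammate UAV ID: offset ^i tau_j in seconds}
temporal_offsets[2] = ptp_offset([(0.000, 0.151, 0.152, 0.004)] * 30)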
§.§ New Teammate Detection from LiDAR Observations and Extrinsic Calibration
For any teammate in the teammate list, apart from calibrating the temporal offset, each UAV also needs to calibrate the spatial offset, , the extrinsic transformations between the two UAVs' global reference frames. In this section, we calibrate the global extrinsic w.r.t. those teammates observed by the LiDAR on the self-UAV.
We propose a novel reflectivity filtering and cluster extraction-based teammate detection method to easily detect the observed teammate UAVs from LiDAR point-cloud measurements. After accumulating the trajectory of the directly observed objects over a certain time, a trajectory matching-based identification and global extrinsic calibration method is used for fast swarm initialization.
To implement the above method, for each UAV, several reflective tapes are attached to its body, so that it can be easily detected and tracked by other teammates based on the reflectivity information measured by the LiDAR sensor. The detailed implementation of this sub-module is summarized in Alg. <ref>. After receiving a LiDAR scan, we first undistort the raw points following <cit.> and filter out the points on teammate UAVs with whom the initialization has completed (which is explained later in Section <ref>),
to obtain the LiDAR points ^b_i𝒫, which is represented in the current body frame. Then, points with high reflectivity values exceeding a pre-defined threshold, which can be calibrated beforehand on the reflective tapes attached to each UAV, are extracted by 𝚁𝚎𝚏𝚕𝚎𝚌𝚝𝚒𝚟𝚒𝚝𝚢𝙵𝚒𝚕𝚝𝚎𝚛𝚒𝚗𝚐(^b_i𝒫) in Line <ref> of Alg. <ref>. Then in Line <ref>, the high-reflectivity points ^b_i𝒫_h are efficiently clustered by 𝙵𝚊𝚜𝚝𝙴𝚞𝚌𝚕𝚒𝚍𝚎𝚊𝚗𝙲𝚕𝚞𝚜𝚝𝚎𝚛𝚒𝚗𝚐 (FEC) <cit.>, which aims to detect new potential teammate UAVs.
Each detected object, with clustered centroid ^b_i𝐩̆_m, is then tracked by a Kalman filter-based temporary tracker based on the assumption of constant velocity in Line <ref> <cit.>.
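A minimal sketch of the reflectivity filtering and clustering steps is given below in Python with NumPy/SciPy; the reflectivity threshold, cluster radius, and minimum cluster size are illustrative values, and the KD-tree region growing is a simple stand-in for the FEC clustering used in the actual implementation.

import numpy as np
from scipy.spatial import cKDTree

def detect_teammate_candidates(points, refl_thresh=200.0, cluster_radius=0.3, min_pts=5):
    # points: (N, 4) array [x, y, z, reflectivity] in the current body frame.
    high = points[points[:, 3] > refl_thresh, :3]          # reflectivity filtering
    if len(high) == 0:
        return []
    tree = cKDTree(high)
    labels = -np.ones(len(high), dtype=int)
    centroids, cluster_id = [], 0
    for seed in range(len(high)):                           # Euclidean region growing
        if labels[seed] >= 0:
            continue
        labels[seed] = cluster_id
        queue, members = [seed], []
        while queue:
            idx = queue.pop()
            members.append(idx)
            for nb in tree.query_ball_point(high[idx], cluster_radius):
                if labels[nb] < 0:
                    labels[nb] = cluster_id
                    queue.append(nb)
        if len(members) >= min_pts:
            centroids.append(high[members].mean(axis=0))    # candidate teammate centroid
        cluster_id += 1
    return centroids

Each returned centroid plays the role of a clustered object center that is handed to the constant-velocity temporary tracker described above.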
The tracked trajectory of each potential new teammate is accumulated (Line <ref>) for subsequent identification and global extrinsic calibration. To achieve this, all UAVs in the swarm system will exchange their estimated ego-states (in their own global frames) with others. The ego-state of each UAV is estimated by the state estimation module (Section <ref>), which runs in parallel to the initialization module as explained in Section <ref>, based on LiDAR points, IMU data, and mutual observations of teammate UAVs if available.
Let ^G_i𝒯_m = {^G_i𝐩̅_m,κ, κ = 1,⋯, 𝒦} denote the trajectory of the m-th temporary tracker and ^G_j𝒯_j = {^G_j𝐩̆_b_j,κ, κ = 1, ⋯, 𝒦} represent the trajectory received from UAV j, the m-th tracked object is identified as UAV j if the following trajectory matching problem has unique optimal solution with a residual below a certain threshold:
min_^G_i𝐓_G_j∑_κ = 1^𝒦1/2‖^G_i𝐩̅_m,κ - ^G_i𝐓_G_j∘^G_j𝐩̆_b_j,κ‖^2,
Considering possible short-term communication disconnection, some data of ^G_j𝐩̆_b_j might be lost. Thus, we only pick ^G_i𝐩̅_m,κ that has close timestamp with ^G_j𝐩̆_b_j,κ to participate in trajectory matching. Besides, to avoid large computing time due to too much data, we use a sliding window of the most recent 𝒦 positions for matching. We implement the above trajectory matching process as a function 𝚃𝚛𝚊𝚓𝙼𝚊𝚝𝚌𝚑𝚒𝚗𝚐(^G_j𝒯_j, ^G_i𝒯_m) in Line <ref>.
Since no unique transformation can be determined from (<ref>) if the involved trajectories are straight lines <cit.>, the trajectories of those tracked objects are constantly evaluated by 𝚃𝚛𝚊𝚓𝙴𝚡𝚌𝚒𝚝𝚎𝚍(^G_i𝒯_m) in Line <ref>.
Once the condition of 𝚃𝚛𝚊𝚓𝙴𝚡𝚌𝚒𝚝𝚎𝚍(^G_i𝒯_m) is satisfied, the trajectory matching will be performed.
Let ^G_i𝐩̅_m^c represent the centroid of ^G_i𝒯_m, 𝚃𝚛𝚊𝚓𝙴𝚡𝚌𝚒𝚝𝚎𝚍(^G_i𝒯_m) assesses the excitation (shape) of ^G_i𝒯_m by computing the singular values of matrix ℋ∈ℝ^3× 3:
ℋ≜∑_κ=1^𝒦 (^G_i𝐩̅_m,κ - ^G_i𝐩̅_m^c)· (^G_i𝐩̅_m,κ - ^G_i𝐩̅_m^c)^T.
If the second largest singular value is larger than a given threshold, it means the positions of the trajectory ^G_i𝒯_m do not lie on a straight line, which ensures a unique solution in 𝚃𝚛𝚊𝚓𝙼𝚊𝚝𝚌𝚑𝚒𝚗𝚐(^G_j𝒯_j, ^G_i𝒯_m) <cit.>. The matching for temporary tracker trajectory ^G_i𝒯_m is performed with all received teammate UAVs' trajectories ^G_j𝒯_j, j = 1,⋯, N until the matching error is smaller than a given threshold, indicating that the object m is essentially the observation of the teammate with UAV ID j, and the solution of (<ref>) gives an initial estimation of the global extrinsic ^G_i𝐓̆_G_j. Meanwhile, the m-th temporary tracker will be removed from the temporary tracker list and become a teammate tracker that will be used in the state estimation module (Section <ref>). An example of the trajectory matching-based initialization pipeline is illustrated in Fig. <ref>.
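The sketch below, in Python/NumPy, illustrates the excitation test on ℋ and a closed-form solution of the trajectory-matching problem via the Kabsch/Umeyama alignment; the thresholds are illustrative and the actual solver in Swarm-LIO2 may differ.

import numpy as np

def trajectory_excited(traj, sigma_thresh=0.5):
    # traj: (K, 3) tracked positions; excited if the second singular value of H is large,
    # i.e., the positions do not lie on a straight line.
    centered = traj - traj.mean(axis=0)
    s = np.linalg.svd(centered.T @ centered, compute_uv=False)
    return s[1] > sigma_thresh

def match_trajectories(traj_self, traj_mate):
    # Solve min over (R, t) of sum_k || p_self_k - (R p_mate_k + t) ||^2 (Kabsch/Umeyama).
    mu_s, mu_m = traj_self.mean(axis=0), traj_mate.mean(axis=0)
    H = (traj_mate - mu_m).T @ (traj_self - mu_s)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # enforce det(R) = +1
    R = Vt.T @ D @ U.T
    t = mu_s - R @ mu_m
    rmse = np.sqrt(np.mean(np.sum((traj_self - (traj_mate @ R.T + t)) ** 2, axis=1)))
    return R, t, rmse   # accept the match if rmse is below a threshold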
Finally, the extrinsic obtained by trajectory matching ^G_i𝐓̆_G_j is sent to the third sub-module of the self-UAV as well as all teammate UAVs on the teammate list. Note that the sending of ^G_i𝐓̆_G_j occurs only once since after the temporary tracker has been removed, no trajectory matching will be performed to produce the extrinsic ^G_i𝐓̆_G_j in the next cycle.
§.§ Factor Graph Optimization-based Global Extrinsic Calibration of Not Directly Observed Teammates
Apart from the trajectory matching-based identification method detailed above, which was originally presented in our previous work <cit.>, a novel decentralized factor graph optimization method is proposed to calibrate the global extrinsic transformations w.r.t. not directly observed teammates, which expedites the identification and the swarm initialization.
In Swarm-LIO2, each UAV will share the global extrinsic transformations obtained via trajectory matching with all teammate UAVs in the teammate list. Then, each UAV constructs and maintains a factor graph (see Fig. <ref>) where the variables G_i,G_j,⋯ are the global reference frames of all UAVs (including teammates with or without direct observation) and the factors ^G_i𝐓̆_G_j are the global extrinsic transformation between any two UAVs, which could be calibrated by the self-UAV using the trajectory matching or received from teammate UAVs. By fixing the global frame G_i of the self-UAV, it can use the global extrinsic transformations ^G_i𝐓̆_G_j as constraints to solve for the global frames of all other UAVs who are connected to the self-UAV in the factor graph. Subsequently, the global extrinsic between each UAV (with and without direct observations) and the ego-UAV can be deduced as ^G_i𝐓̂_G_j = G_iG_j^-1.
The third sub-module runs after the second sub-module, hence running recurrently at the scan rate too. Specifically, it receives extrinsic ^G_i𝐓̆_G_j from the second sub-module and extrinsic ^G_k𝐓̆_G_l from the network. If the received extrinsic (either from the second sub-module or from the network) corresponds to a new edge that did not exist before in the factor graph, an edge corresponding to this extrinsic will be created in the factor graph. Otherwise, the received extrinsic will be dumped to avoid information reuse. On the other hand, if there are multiple global extrinsic transformations on the same edge, such as the ^G_i𝐓̆_G_j, which is obtained through trajectory matching on the self-UAV i, and ^G_j𝐓̆_G_i, which is obtained through trajectory matching on the teammate UAV (received on the network), the average of these global extrinsic transformations is computed and used as a factor, which can effectively save the number of factors in the factor graph. In case the factor graph is updated, an optimization process is performed using iSAM2<cit.>, and the optimized global extrinsic, if not sent before, is sent to the state estimation module as the initial estimation ^G_i𝐓̂_G_j for the online global extrinsic refinement.
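A minimal sketch of this step is shown below using the GTSAM Python bindings; the noise sigmas are illustrative, the keys are simply UAV IDs, and the convention assumed here is that each variable is the pose of frame G_j expressed in a common reference, so that pinning G_i with a prior and reading out relative poses recovers the global extrinsics. The actual Swarm-LIO2 implementation may differ.

import numpy as np
import gtsam

def solve_global_extrinsics(self_id, pairwise_extrinsics):
    # pairwise_extrinsics: {(i, j): gtsam.Pose3 encoding a calibrated or received ^Gi T_Gj}.
    graph, initial = gtsam.NonlinearFactorGraph(), gtsam.Values()
    prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 1e-6))
    between_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 0.05))
    graph.add(gtsam.PriorFactorPose3(self_id, gtsam.Pose3(), prior_noise))   # pin the self frame G_i
    ids = {self_id}
    for (i, j), T_ij in pairwise_extrinsics.items():
        graph.add(gtsam.BetweenFactorPose3(i, j, T_ij, between_noise))       # one factor per edge
        ids |= {i, j}
    for k in ids:
        initial.insert(k, gtsam.Pose3())            # identity initial guess for every frame
    isam = gtsam.ISAM2()
    isam.update(graph, initial)
    result = isam.calculateEstimate()
    # Relative pose from the (fixed) self frame to every other connected frame.
    return {j: result.atPose3(self_id).between(result.atPose3(j))
            for j in ids if j != self_id}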
Remark 1: As can be seen, the first sub-module runs concurrently with the second and the third sub-modules at different frequencies. The first sub-module runs at 1 Hz, while the second and the third sub-modules run at the scan rate. The recurrent nature of the three sub-modules allows the swarm to discover, identify, and calibrate the global extrinsic w.r.t. new joining UAVs in the middle of a mission.
Remark 2: For each of the three sub-modules, it breaks into two parts. The first parts of the three sub-modules monitor for new teammates on the network, new teammates observed by LiDAR measurements, or new edges in the factor graph, respectively. The second parts conduct the temporal calibration, trajectory matching, and factor graph optimization, respectively. While the first parts run at their respective frequencies, the second parts run only when the first parts detect new teammates or edges.
§ DECENTRALIZED SWARM STATE ESTIMATION
In this section, we will introduce the fully decentralized swarm state estimation module, including mutual state estimation, ESIKF-based ego-state estimation, and global extrinsic refinement.
§.§ Mutual State Estimation
One crucial mission of each UAV in a swarm system is to estimate any other UAV j's state ^G_i𝐱_b_j (including pose ^G_i𝐓_b_j and velocity ^G_i𝐯_b_j) in UAV i's global frame, called mutual state estimation. This is significant for various swarm applications, such as mutual collision avoidance, formation flight, etc. However, estimating the full state ^G_i𝐱_b_j of all the teammate UAVs is a high-dimensional task that is computationally demanding. To reduce the system complexity, we propose to estimate the ego-state only on each UAV, denoted by ^G_i𝐱̅_b_i. The estimated ego-states are exchanged through the network. Then, a UAV i can estimate a teammate j's state, denoted by ^G_i𝐱̅_b_j, by directly projecting the received UAV j's ego-state into UAV i's global frame using the global extrinsic transformation ^G_i𝐓_G_j = (^G_i𝐑_G_j, ^G_i𝐩_G_j):
^G_i𝐓̅_b_j = ^G_i𝐓_G_j^G_j𝐓̆_b_j,
^G_i𝐯̅_b_j = ^G_i𝐑_G_j^G_j𝐯̆_b_j.
where ^G_j𝐓̆_b_j is the received ego-state of UAV j. Note that ^G_j𝐓̆_b_j is denoted as ^G_j𝐓̅_b_j on UAV j (since it is an estimation on UAV j), but carries the breve accent (·)̆ on UAV i (since it acts as a measurement for UAV i).
A problem in the above process is that it requires knowing the global extrinsic transformation ^G_i𝐓_G_j = (^G_i𝐑_G_j, ^G_i𝐩_G_j). Although they can be calibrated by the initialization process described in Section <ref>, possible errors could still remain. We propose to continually refine these extrinsic transformations along with the ego-state estimation in the state estimation module. Denote ^G_i𝐓̅_G_j = (^G_i𝐑̅_G_j, ^G_i𝐩̅_G_j) the refined extrinsic transformation, the mutual state of teammate j can be computed by (<ref>) with ^G_i𝐓_G_j = (^G_i𝐑_G_j, ^G_i𝐩_G_j) being replaced by ^G_i𝐓̅_G_j = (^G_i𝐑̅_G_j, ^G_i𝐩̅_G_j). This mechanism enables smooth and stable mutual state estimation even in situations where frequent mutual observation losses occur due to occlusions or teammates entering and exiting the field of view (FoV).
To refine the extrinsic transformation in the state estimation, for each new teammate j whose extrinsic was calibrated in the initialization module (Section <ref>), we append ^G_i𝐓_G_j = (^G_i𝐑_G_j, ^G_i𝐩_G_j) to the existing state vector of UAV i, so its value can be estimated along with other states in a unified ESIKF framework. Moreover, the calibrated extrinsic from the initialization module immediately serves as the initial estimation ^G_i𝐓_G_j of the appended state component ^G_i𝐓_G_j.
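In code, the mutual state projection above reduces to a single change of frame; the sketch below in Python/NumPy assumes rotation matrices and position/velocity vectors are given as plain arrays, with variable names chosen by us.

import numpy as np

def project_teammate_state(R_gi_gj, p_gi_gj, R_gj_bj, p_gj_bj, v_gj_bj):
    # Project UAV j's received ego-state from its global frame G_j into G_i
    # using the (refined) global extrinsic (R_gi_gj, p_gi_gj).
    R_gi_bj = R_gi_gj @ R_gj_bj                # mutual attitude in G_i
    p_gi_bj = R_gi_gj @ p_gj_bj + p_gi_gj      # mutual position in G_i
    v_gi_bj = R_gi_gj @ v_gj_bj                # mutual velocity in G_i
    return R_gi_bj, p_gi_bj, v_gi_bj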
§.§ State and Covariance Prediction
We first introduce the scheme of the state and covariance prediction. For illustration, we select UAV i as the self-UAV and assume N - 1 teammate UAVs have been found and calibrated in the initialization module. Let τ denote the IMU measurement index during the k-th LiDAR frame, the discrete state transition model is shown below:
𝐱_i,τ+1 = 𝐱_i,τ⊞(Δ t_τ𝐟_i(𝐱_i,τ, 𝐮_i,τ, 𝐰_i,τ)),
where Δ t_τ is the time interval between two consecutive IMU measurements, 𝐱_i,τ denotes the ground-truth of the state at the τ-th IMU measurement of the i-th UAV, whose timestamp is t_i,τ. Furthermore, we use the notation ⊞/⊟ defined in <cit.> to compactly represent the “plus" on the state manifold. Specifically, for the state manifold SO(3) ×ℝ^n in (<ref>), the ⊞ operation and its inverse operation ⊟ are defined as
[ 𝐑; 𝐚 ]⊞[ 𝐫; 𝐛 ]=
[ 𝐑Exp(𝐫); 𝐚+𝐛 ];
[ 𝐑_1; 𝐚 ]⊟[ 𝐑_2; 𝐛 ]=
[ Log(𝐑_2^T 𝐑_1); 𝐚-𝐛 ]
where 𝐑, 𝐑_1, 𝐑_2 ∈ SO(3),𝐫∈ℝ^3 , 𝐚,𝐛∈ℝ^n, Exp(·): ℝ^3 ↦ SO(3) is the exponential map on SO(3)<cit.> and Log(·): SO(3) ↦ℝ^3 is its inverse logarithmic map.
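For reference, a minimal Python sketch of these two operations on SO(3) × ℝ^n is given below, with SciPy's Rotation providing the Exp and Log maps; it is a didactic illustration rather than the actual implementation.

import numpy as np
from scipy.spatial.transform import Rotation

def boxplus(R, a, r, b):
    # (R, a) boxplus (r, b) = (R Exp(r), a + b)
    return R @ Rotation.from_rotvec(r).as_matrix(), a + b

def boxminus(R1, a, R2, b):
    # (R1, a) boxminus (R2, b) = (Log(R2^T R1), a - b)
    return Rotation.from_matrix(R2.T @ R1).as_rotvec(), a - b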
The state vector 𝐱_i, the process noise vector 𝐰_i, and the input 𝐮_i, omitting the time index, are defined as:
𝐱_i ≜[ ^G_i𝐑_b_i^T ^G_i𝐩_b_i^T ^G_i𝐯_b_i^T 𝐛_g_i^T 𝐛_a_i^T ^G_i𝐠^T ⋯ ^G_i𝐑_G_j^T ^G_i𝐩_G_j^T ⋯ ]^T∈ℳ,
𝐰_i ≜[ 𝐧_g_i^T 𝐧_a_i^T 𝐧_ 𝐛_gi^T 𝐧_𝐛_ai^T ]^T,
𝐮_i ≜[ ω_m_i^T 𝐚_m_i^T ]^T,
where j = 1, 2, ⋯, i-1, i+1, ⋯, N.
The discrete state transition function 𝐟_i is defined as:
𝐟_i ≜[ ω_m_i - 𝐛_g_i -𝐧_g_i; ^G_i𝐯_b_i + 1/2(^G_i𝐑_b_i (𝐚_m_i-𝐛_a_i- 𝐧_a_i) +^G_i𝐠)Δ t_τ; ^G_i𝐑_b_i (𝐚_m_i-𝐛_a_i- 𝐧_a_i) +^G_i𝐠; 𝐧_ 𝐛_g i; 𝐧_𝐛_a i; 0_3× 1; ⋮; 0_3× 1; 0_3× 1; ⋮; ]
where ω_m_i, 𝐚_m_i represent the IMU (gyroscope and accelerometer) measurements of UAV i, 𝐧_g_i and 𝐧_a_i are the white noise of IMU measurements, 𝐛_g_i and 𝐛_a_i are the IMU bias modeled as the random walk process with Gaussian noises 𝐧_ 𝐛_g i and 𝐧_𝐛_a i.
The meaning of each element in state vector 𝐱_i is introduced in Table <ref>, the state manifold ℳ is defined in (<ref>) and its dimension is 18+6(N-1).
ℳ≜ SO(3) ×ℝ^15 (dim = 18) × SO(3) ×ℝ^3 ×⋯× SO(3) ×ℝ^3 (dim = 6(N-1))
Following the state model in (<ref>), the state and covariance prediction is implemented under the ESIKF framework once a new IMU measurement is received. More specifically, the state and covariance are predicted following (<ref>) by setting the process noise 𝐰_i,τ to zero. The detailed derivation of the prediction step can be found in <cit.>.
§.§ Error State Iterative State Update
The update step is implemented iteratively at the end time of the new LiDAR scan at t_i,k, fusing point-cloud measurements and mutual observation measurements (if any). In the following sections, we will introduce the measurement model of the point-cloud measurements, and the novel mutual observation measurements, which were not present in <cit.>.
§.§.§ Modeling of Measurements
In the general ESIKF framework, for any measurement 𝐲_k at the k-th round, we can write the measurement model as
𝐲_k= 𝐡(𝐱_k,𝐯_k)
where 𝐡(𝐱_k,𝐯_k) is the measurement model depending on the true state 𝐱_k and the measurement noise 𝐯_k which is assumed to be zero mean multivariate Gaussian noise. For convenience and simplification of the description, we omit the subscript k in the following formulations.
Once receiving a new LiDAR scan, motion compensation will be performed to obtain the undistorted point clouds. Then the point-to-plane distance will be calculated to generate point-cloud constraints. The details of the motion compensation can be referred to <cit.>. The n-th undistorted point of the current scan projected into the body frame is denoted by ^b_i𝐩_n, let 𝐮_n represent the normal vector of the corresponding plane in the global frame G_i, on which lies a point ^G_i𝐪_n. Considering the LiDAR measurement noise 𝐧_p, n of the n-th point, we obtain the measurement model of the n-th point measurement as <cit.>
0 = 𝐮_n^T (^G_i𝐓_b_i∘( ^b_i𝐩_n + 𝐧_p, n) - ^G_i𝐪_n) ≜𝐡_p,n(𝐱_i, 𝐧_p, n)
which defines an implicit measurement equation about the state vector 𝐱_i containing ego-pose ^G_i𝐓_b_i. The normal vector 𝐮_n and the point ^G_i𝐪_n are known vectors, and 𝐧_p,n is point measurement noise, both can be referred to <cit.>.
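A minimal sketch of evaluating this residual for one point at the current ego-pose estimate (with the noise set to zero) is given below in Python/NumPy; the variable names are ours.

import numpy as np

def point_to_plane_residual(R_gi_bi, p_gi_bi, p_body, u_normal, q_plane):
    # Project the undistorted body-frame point into the global frame and take the
    # signed distance along the normal of the matched plane.
    p_world = R_gi_bi @ p_body + p_gi_bi
    return float(u_normal @ (p_world - q_plane))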
Apart from the point-cloud measurements, the mutual observation measurements are also used for state updates, which can be obtained by the teammate tracker that evolved from the temporary tracker in the initialization module. Specifically, with the predicted pose ^G_i𝐓_b_i obtained in Section <ref>, we can acquire the predicted position of each teammate UAV j described in UAV i's body frame as ^b_i𝐩_b_j = (^G_i𝐓_b_i^-1^G_i𝐓̅_G_j) ∘^G_j𝐩̆_b_j. The LiDAR points around the predicted position ^b_i𝐩_b_j will be removed from the LiDAR raw points. The rest of the points ^b_i𝒫 will be used by 𝚁𝚎𝚏𝚕𝚎𝚌𝚝𝚒𝚟𝚒𝚝𝚢𝙵𝚒𝚕𝚝𝚎𝚛𝚒𝚗𝚐(^b_i𝒫) in the initialization module (Section <ref>, Algorithm <ref>) for new teammate detection.
Moreover, the points around the predicted teammate positions will be used for Euclidean clustering to obtain the mutual observation measurement. If a valid object is clustered, the centroid position of the object will be regarded as the actual position of UAV j observed by UAV i, called “active observation measurement" for UAV i w.r.t. UAV j which is denoted by ^b_i𝐩̆_b_j. Each UAV would share this active observation measurement with all teammates and meanwhile receive teammates' ones via the Ad-Hoc network. The received observation measurement from UAV j is referred to as “passive observation measurements" and is denoted by ^b_j𝐩̆_b_i, representing the self-position of UAV i observed by teammate j.
The explicit measurement model of the active observation measurement ^b_i𝐩̆_b_j can be obtained by projecting UAV j's ground-truth position ^G_j𝐩_b_j into UAV i's body frame using the ground-truth global extrinsic ^G_i𝐓_G_j and the ground-truth ego-pose ^G_i𝐓_b_i = (^G_i𝐑_b_i, ^G_i𝐩_b_i) of UAV i.
Further considering that the active observation may have measurement noise 𝐧_ao,ij∼𝒩(0,Σ_ao,ij) due to incomplete point measurements on the teammate UAV j, the model of the active observation measurement is:
^b_i𝐩̆_b_j =
(^G_i𝐓_b_i^-1^G_i𝐓_G_j)∘^G_j𝐩_b_j + 𝐧_ao,ij.
This measurement equation, unfortunately, involves the ground-truth position ^G_j𝐩_b_j of UAV j, which is not a part of the state vector 𝐱_i as defined in (<ref>). To fix this issue, we leverage the estimated ego-position ^G_j𝐩̆_b_j of UAV j and its covariance Σ̆_𝐩_j, both received from UAV j. Then, the ground-truth position of UAV j is modeled as ^G_j𝐩_b_j = ^G_j𝐩̆_b_j + 𝐧_𝐩_j, where the noise 𝐧_𝐩_j∼𝒩(0, Σ̆_𝐩_j). Consequently, the measurement model of the active observation measurement can be derived as:
^b_i𝐩̆_b_j = ( ^G_i𝐓_b_i^-1^G_i𝐓_G_j) ∘(^G_j𝐩̆_b_j + 𝐧_𝐩_j ) + 𝐧_ao,ij≜𝐡_ao,ij(𝐱_i,𝐧_𝐩_j,𝐧_ao,ij)
which defines a valid measurement equation about the state vector 𝐱_i containing ego-pose ^G_i𝐓_b_i and global extrinsic ^G_i𝐓_G_j. The received teammate position ^G_j𝐩̆_b_j is known, and 𝐧_𝐩_j,𝐧_ao,ij are measurement noises.
Similarly, the explicit measurement model of the passive observation measurement ^b_j𝐩̆_b_i for UAV i can be obtained by projecting UAV i's ground-truth position ^G_i𝐩_b_i into UAV j's body frame using the ground-truth global extrinsic ^G_i𝐓_G_j and the ground-truth ego-pose ^G_j𝐓_b_j = (^G_j𝐑_b_j, ^G_j𝐩_b_j) of UAV j. Then considering the measurement noise 𝐧_po,ij∼𝒩(0,Σ_po,ij) of the passive observation measurement, the measurement model is:
^b_j𝐩̆_b_i =
( ^G_j𝐓_b_j^-1^G_i𝐓_G_j^-1) ∘^G_i𝐩_b_i + 𝐧_po,ij.
Since UAV j's ground-truth pose ^G_j𝐓_b_j is not a part of the state vector 𝐱_i as defined in (<ref>), we similarly utilize the estimated ego-pose ^G_j𝐓̆_b_j of UAV j and the covariance Σ̆_𝐓_j received from UAV j, to model the ground-truth pose of UAV j as ^G_j𝐓_b_j = ^G_j𝐓̆_b_j⊞𝐧_𝐓_j, where the noise 𝐧_𝐓_j∼𝒩(0, Σ̆_𝐓_j).
Consequently, the passive observation measurement model is:
^b_j𝐩̆_b_i = ( (^G_j𝐓̆_b_j⊞𝐧_𝐓_j)^-1^G_i𝐓_G_j^-1) ∘^G_i𝐩_b_i + 𝐧_po,ij≜𝐡_po,ij(𝐱_i,𝐧_𝐓_j, 𝐧_po,ij),
which defines a valid measurement equation about the state vector 𝐱_i containing ego-position ^G_i𝐩_b_i and global extrinsic ^G_i𝐓_G_j. The received teammate pose ^G_j𝐓̆_b_j is known, and 𝐧_𝐓_j,𝐧_po,ij are measurement noises.
To sum up, the entire measurement vector 𝐲, the observation function 𝐡 and the observation noise 𝐯 (the subscript k is omitted for simplification) are
𝐲 = [ ⋯,0, ⋯_point measurements , ⋯, ^b_i𝐩̆_b_j^T ,⋯_active observation measurements,⋯, ^b_j𝐩̆_b_i^T ,⋯_passive observation measurements ]^T,
𝐡 = [ ⋯,𝐡_p,n^T , ⋯,⋯, 𝐡_ao,ij^T ,⋯,⋯, 𝐡_po,ij^T ,⋯ ]^T,
𝐯 = [ ⋯,𝐧_p, n^T , ⋯ ,⋯, 𝐧_𝐩_j^T , 𝐧_ao,ij^T , ⋯ ,⋯, 𝐧_𝐓_j^T , 𝐧_po,ij^T ,⋯ ]^T.
§.§.§ Temporal Compensation of Mutual Observation Measurements
For the measurement models (<ref>) and (<ref>) to be valid, the involved states and measurements should be at the same time.
However, due to the asynchronous nature of state estimation among different UAVs and the presence of transmission delays, the states and measurements from different UAVs are usually asynchronous.
Therefore, it is necessary to compensate for the temporal mismatch between the received measurements or states, and the ego-state in the measurement models. While the previous work <cit.> ignores this temporal mismatch, this paper carefully addresses this problem based on a constant velocity model.
For the active observation measurement model (<ref>), the measurement ^b_i𝐩̆_b_j is a cluster of points, which are undistorted and projected to the scan end time t_i,k (see Section <ref>). The received UAV j's position ^G_j𝐩̆_b_j, however, is estimated at timestamp t_j,k in UAV j's system time. To make a valid measurement model at time t_i,k, UAV j's position ^G_j𝐩̆_b_j should be temporally compensated from its time of estimation (i.e., t_j,k) to the time the measurement model is established (i.e., t_i,k), according to a constant velocity model from its estimated velocity ^G_j𝐯̆_b_j:
^G_j𝐩̆_b_j^comp = ^G_j𝐩̆_b_j + ^G_j𝐯̆_b_j (t_i,k-t_j,k + ^iτ_j),
which should be substituted into (<ref>) to supply the original measurement ^b_i𝐩̆_b_j. The resultant measurement model with temporal compensation is hence:
^b_i𝐩̆_b_j = (^G_i𝐓_b_i^-1^G_i𝐓_G_j)∘ ( ^G_j𝐩̆_b_j + ^G_j𝐯̆_b_j ( t_i,k - t_j,k + ^iτ_j ) + 𝐧_𝐩_j ) + 𝐧_ao,ij,
which is a measurement equation about the state 𝐱_i containing ego-pose ^G_i𝐓_b_i and global extrinsic ^G_i𝐓_G_j.
For the passive observation measurements model (<ref>), the passive observation measurement ^b_j𝐩̆_b_i is transmitted from UAV j and is estimated at timestamp t_j,k of UAV j's system time. To establish a valid measurement model at the time indicated by t_j,k, all the states and other measurements in (<ref>) should also be at t_j,k. The received UAV j's state ^G_j𝐓̆_b_j is already stamped with t_j,k, while the ego-position ^G_i𝐩_b_i, which is the state at t_i,k, can be compensated using a constant velocity model as follows:
^G_i𝐩_b_i^comp = ^G_i𝐩_b_i + ^G_i𝐯_b_i (t_j,k -t_i,k - ^iτ_j),
which should be substituted into (<ref>) to supply the original state ^G_i𝐩_b_i. The temporally compensated measurement model is hence
^b_j𝐩̆_b_i = (^G_j𝐓̆_b_j⊞𝐧_𝐓_j)^-1^G_i𝐓_G_j^-1∘ ( ^G_i𝐩_b_i + ^G_i𝐯_b_i (t_j,k - t_i,k - ^iτ_j)) + 𝐧_po,ij,
which is a measurement equation about the state 𝐱_i containing ego-position ^G_i𝐩_b_i, velocity ^G_i𝐯_b_i, and global extrinsic ^G_i𝐓_G_j.
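Both compensations are one-line constant-velocity extrapolations; a minimal Python sketch is given below, where the position and velocity arguments are NumPy arrays, timestamps are in seconds of each UAV's own clock, and tau_ij stands for the calibrated offset ^iτ_j.

def compensate_teammate_position(p_gj_bj, v_gj_bj, t_i_k, t_j_k, tau_ij):
    # Shift UAV j's received position from its stamp t_j_k to the self scan-end time t_i_k.
    return p_gj_bj + v_gj_bj * (t_i_k - t_j_k + tau_ij)

def compensate_ego_position(p_gi_bi, v_gi_bi, t_i_k, t_j_k, tau_ij):
    # Shift the self position from t_i_k to the stamp t_j_k of the passive observation.
    return p_gi_bi + v_gi_bi * (t_j_k - t_i_k - tau_ij)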
§.§.§ State and Covariance Update
Based on the LiDAR point measurement model (<ref>), mutual observation models (<ref>) and (<ref>) with temporal compensation explained previously, we leverage an iterated Kalman filter (ESIKF) <cit.> to update the state repeatedly.
This process will repeat until convergence, then the optimal state estimation and covariance are obtained. The detailed computation of Kalman gain and update steps can be referred to <cit.>.
After the update, covariance re-initialization will be implemented following Section <ref> for the next round of state estimation.
§.§ Marginalization
The dimension of the state defined in (<ref>) would increase linearly with the swarm size, leading to an almost cubic growth of computation complexity of the ESIKF. To address the problem of explosion of state dimension and computational complexity in the previous work <cit.>, we propose a novel marginalization method.
In the flight of aerial swarm systems, due to the restricted detecting range and FoV of the LiDAR sensor, the UAVs are typically unable to observe all teammate UAVs at all times. The observed teammates (either active or passive) have their global extrinsic transformations ^G_i𝐓_G_j persistently excited, as shown in (<ref>) and (<ref>), while others do not. Therefore, we only need to update the global extrinsic of teammate UAVs which can observe the self-UAV (contributing a passive observation measurement) or are observed by the self-UAV (contributing an active observation measurement). This is achieved by a marginalization operation below.
For simplification, we omit the subscripts k and i, which denote the k-th estimation round of UAV i.
After receiving the k-th LiDAR scan, we identify the mutual observations as detailed in Section <ref>. Let 𝒜 denote the set of teammate UAVs that are observed in the current scan and ℬ the set of teammate UAVs that are not. Let 𝐱_1 represent the sub-state consisting of the ego-state and global extrinsic w.r.t. teammates in the set 𝒜, while 𝐱_2 represents the complementary state consisting of global extrinsic w.r.t. teammates in the set ℬ. We get dim(𝐱_1) = 18 + 6K and dim(𝐱_2) = 6(N-1-K), where K represents the number of teammates with mutual observations (i.e., the extrinsics w.r.t. 𝒜 contribute 6K dimensions). Furthermore, in the current round of state estimation, let (𝐱,𝐏) denote the propagated state and covariance after a normal ESIKF prediction step (i.e., Section <ref>). Then, they can be partitioned as:
𝐱∼𝒩( 𝐱, 𝐏 ) = 𝒩( [ 𝐱_1; 𝐱_2 ], [ Σ_11 Σ_12; Σ_21 Σ_22 ] ).
Since 𝐱_2 will not be updated due to the lack of persistent excitation, we marginalize it out from 𝐱, leading to the prior distribution of the two sub-states:
𝐱_1 ∼𝒩(𝐱_1, Σ_11), 𝐱_2 ∼𝒩(𝐱_2, Σ_22).
To update the sub-state 𝐱_1, we notice the measurement model
𝐲 = 𝐡(𝐱,𝐯) = 𝐡(𝐱_1,𝐯_1),
where 𝐲 includes point measurements and mutual observation measurements (both active and passive), which depend only on 𝐱_1. Then, 𝐱_1 can be updated by fusing the prior distribution 𝐱_1 ∼𝒩(𝐱_1, Σ_11) with the measurements 𝐲 by following the normal ESIKF update step (, Section <ref>). Assume the updated state estimate and covariance are 𝐱̅_1 and Σ̅_11 respectively. Then, we have 𝐱_1 ∼𝒩(𝐱̅_1, Σ̅_11) and that the sub-state 𝐱_2 still remains at 𝐱_2 ∼𝒩(𝐱_2, Σ_22). Now that 𝐱_1 and 𝐱_2 are two independent distributions, they should evolve separately in the subsequent ESIKF steps. Specifically, for 𝐱_2, it is subject to its state transition function:
𝐱_2,τ +1 = 𝐱_2,τ
while for 𝐱_1, it is subject to
𝐱_1,τ+1 = 𝐱_1,τ⊞(Δ t_τ𝐟_1(𝐱_1,τ, 𝐮_τ, 𝐰_τ)),
where 𝐟_1(𝐱_1,τ, 𝐮_τ, 𝐰_τ) takes the first 18 + 6K elements of 𝐟(𝐱_τ, 𝐮_τ, 𝐰_τ) in (<ref>).
In the next round of ESIKF, each of the two sub-state will propagate starting from their respective initial distribution, 𝐱_1 ∼𝒩(𝐱̅_1, Σ̅_11) and 𝐱_2 ∼𝒩(𝐱_2, Σ_22), and following their respective state transition function (<ref>) and (<ref>). This process can be expressed compactly by propagating the complete system following (<ref>) from an initial distribution 𝐱∼𝒩(𝐱̅, 𝐏̅) defined below
𝐱̅ = [ 𝐱̅_1; 𝐱_2 ], 𝐏̅ = [ Σ̅_11 0; 0 Σ_22 ].
Packing the posterior distribution 𝐱_1 ∼𝒩(𝐱̅_1, Σ̅_11) and the prior distribution 𝐱_2 ∼𝒩(𝐱_2, Σ_22) into the joint distribution 𝐱∼𝒩(𝐱̅, 𝐏̅) is termed as “covariance re-initialization". With the covariance re-initialization, the propagation of the next step can simply follow the standard ESIKF prediction step of the complete system (<ref>), which is detailed in Section <ref>.
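The covariance bookkeeping of this marginalization and re-initialization is summarized in the Python/NumPy sketch below, under the assumption that the state vector has already been permuted so that the sub-state 𝐱_1 occupies the leading block.

import numpy as np

def split_covariance(P, dim1):
    # Drop the cross-correlations and keep the marginal blocks Sigma_11 and Sigma_22.
    return P[:dim1, :dim1].copy(), P[dim1:, dim1:].copy()

def reinitialize_covariance(Sigma11_updated, Sigma22_prior):
    # x1 carries its posterior covariance, x2 keeps its prior; cross terms are zeroed.
    d1, d2 = Sigma11_updated.shape[0], Sigma22_prior.shape[0]
    P = np.zeros((d1 + d2, d1 + d2))
    P[:d1, :d1] = Sigma11_updated
    P[d1:, d1:] = Sigma22_prior
    return P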
§.§ Degeneration Evaluation
The ESIKF presented previously would update the global extrinsic of teammate UAVs along with the ego-state. However, the update is valid only when the LiDAR scan contains sufficient geometric features. In some extreme environments, LiDAR sensors may encounter degeneration where the point-cloud fails to provide sufficient constraints to determine its ego-pose, making it impossible to distinguish the global extrinsic from ego-motion given mutual observation measurements, which is a problem suffered by our previous work <cit.>. To address this problem, we propose to automatically detect LiDAR degeneration. If it occurs, the previously estimated global extrinsic is used with mutual observation measurements to provide constraints for determining the ego-pose. The switching between the two cases (i.e., updating the global extrinsic along with the ego-state, and using the currently estimated global extrinsic for the ego-state update) can be achieved automatically by leveraging the marginalization operation as follows.
When LiDAR degeneration occurs, we marginalize all global extrinsic out from the state vector by setting 𝒜 to null and ℬ to the full set of all teammate UAVs. Thus, sub-state 𝐱_1 only includes ego-state with (𝐱_1) = 18, and 𝐱_2 contains the global extrinsic transformation w.r.t. to all the teammates with (𝐱_2) = 6(N-1).
For the measurement model (<ref>), we rewrite it as
𝐲 = 𝐡(𝐱, 𝐯) =𝐡(𝐱_1, 𝐱_2, 𝐯) = 𝐡(𝐱_1, [𝐱_2, 𝐯]_𝐯_ext) = 𝐡(𝐱_1, 𝐯_ext),
where the marginalized sub-state 𝐱_2 is an exogenous random signal (i.e., it is independent of the state 𝐱_1) just like the measurement noise 𝐯, so it is grouped with 𝐯 to form the extended measurement noise 𝐯_ext. The distribution of the “measurement noise" 𝐱_2 is obtained by propagating its sub-system following (<ref>). The rest of the steps, including the update of 𝐱_1 and the subsequent prediction step, will be identical to those in Section <ref>.
Besides the marginalization above, the mutual observation noise 𝐯 = [𝐧_ao,ij, 𝐧_po,ij] (see Section <ref>) of UAV i would be adjusted to a smaller value to provide adequate constraints for ego-pose determination.
To achieve the aforementioned operations, a degeneration evaluation module is required. Inspired by <cit.>, we evaluate the degeneration situation of UAV i by implementing singular value decomposition (SVD) of the Jacobian matrix 𝐉_𝐓 of 𝐡_p,n(𝐱_i, 0) in (<ref>) w.r.t. the ego-pose ^G_i𝐓_b_i:
𝐉_𝐓 =
[ - 𝐮_n^T ^G_i𝐑_b_i⌊^b_i𝐩_n ⌋_∧ 𝐮_n^T ],
where the notation ⌊𝐚⌋_∧ represents the skew-symmetric matrix of vector 𝐚∈ℝ^3×1
that maps the cross-product operation.
By calculating the singular values of 𝐉_𝐓, finding the smallest one λ, and comparing it with a predefined degeneration threshold ϵ_d, we can obtain the evaluation result. If λ < ϵ_d, UAV i is regarded as encountering LiDAR degeneration, and the corresponding responses mentioned above will be activated for this round of updates.
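A minimal Python/NumPy sketch of this check is shown below: the per-point Jacobian rows w.r.t. the ego-pose are stacked and the smallest singular value is compared against ε_d; the default threshold used here is illustrative only.

import numpy as np

def skew(a):
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

def is_degenerated(R_gi_bi, points_body, normals_world, eps_d=1.0):
    # One 1x6 row [-u^T R [p]x, u^T] per matched point, following the Jacobian above.
    rows = [np.hstack((-u @ R_gi_bi @ skew(p), u))
            for p, u in zip(points_body, normals_world)]
    smallest = np.linalg.svd(np.vstack(rows), compute_uv=False)[-1]
    return smallest < eps_d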
It is worth mentioning that the proposed method can achieve automatic switchover between the two modes: when degeneration is detected, the global extrinsic transformations and mutual observation measurements are utilized to accurately determine the ego-state; when no degeneration occurs, the point-cloud measurements of LiDAR are used to refine the global extrinsic states.
§.§ Broadcast of State Estimation Results
After state estimation completes, the results including updated ego-pose ^G_i𝐓̅_b_i, velocity ^G_i𝐯̅_b_i, pose covariance 𝐏̅_i, and refined global extrinsic transformations ^G_i𝐓̅_G_j are shared with all teammate UAVs through the decentralized Ad-Hoc network. The ego-pose and velocity sent to teammates are utilized for their mutual state estimation following (<ref>). The ego-pose, pose covariance, and refined extrinsic transformations are sent to teammates to construct their mutual observation measurements (Section <ref>) for the next step estimation.
Remark 3: The broadcast of the estimation results will also cause the refined global extrinsic transformations to be shared with a new UAV joining the swarm in the middle of a mission. The shared extrinsic will trigger the factor graphs of the new UAV to be updated, by inserting the refined extrinsic transformations received from the network. Optimizing the factor graph will then obtain the extrinsic between the new UAV and existing swarms. On the other hand, the shared extrinsic transformations will not trigger any factor graph update of existing UAVs in the swarm, as this edge has already existed in the factor graph.
§ SIMULATION EVALUATION
In this section, we conducted simulation experiments to evaluate the performance of the Swarm-LIO2 framework.
§.§ Simulator Setup
In our simulation experiments, we utilize the MARSIM simulator<cit.>, a lightweight point-realistic simulator for LiDAR-based UAVs. As shown in Fig. <ref>, MARSIM supports a variety of common LiDAR models and we select Livox LiDAR sensors including Livox Avia and Livox Mid360 to maintain consistency with real-world experiment setup. It is worth mentioning that MARSIM is capable of simulating the mutual observation scenarios among multiple UAVs, which is essential for validating the method proposed in this paper.
To simulate the reflective tapes that each real UAV is equipped with, the reflectivity of the mutual observation points observed by each UAV is set to large, saturated values. In all simulation experiments, the simulator runs on a laptop with an i9-12900H CPU and an NVIDIA GeForce RTX 3080 Ti GPU, and the LiDAR scan rate is set to 10 Hz.
§.§ Initialization Efficiency and Accuracy Evaluation
A key step in the initialization of an aerial swarm is the global extrinsic calibration. In our previous work, Swarm-LIO <cit.>, identification and global extrinsic calibration are achieved solely through trajectory matching, which requires each UAV to fly a certain trajectory. Having each UAV perform this initialization trajectory in turn leads to long cumulative flight distances, especially when the swarm size is large.
In contrast, the decentralized pose graph optimization in Swarm-LIO2 requires only one UAV to fly a certain trajectory that can be observed by teammate UAVs, significantly reducing the initialization complexity.
We validate the initialization efficiency by comparing Swarm-LIO2 to Swarm-LIO <cit.> at swarm sizes varying from 0 to 40. At each swarm size, for Swarm-LIO2, only one UAV executes a figure-8 trajectory that can be observed by the remaining UAVs. For Swarm-LIO, each UAV needs to fly the figure-8 trajectory within the other UAVs' FoV.
Table <ref> shows the total flight distance for all UAVs in the initialization at different swarm sizes. As can be seen, as the swarm size increases, the total flight distance in <cit.> increases linearly, while that of Swarm-LIO2 increases very slowly and nearly remains unchanged regardless of the number of UAVs.
This indicates that the proposed method effectively mitigates the need for individual UAVs to fly extensive distances during initialization, compared to <cit.>. This contributes to significant energy savings and increased effective operational flight time for the swarm system.
We also evaluate the initialization accuracy of Swarm-LIO2 and Swarm-LIO <cit.> using their respective initialization trajectories (i.e., only one UAV flies a figure-8 trajectory for Swarm-LIO2 versus all UAVs flying figure-8 trajectories for Swarm-LIO).
By comparing the RMSE of the global extrinsic transformations obtained by Swarm-LIO2 and Swarm-LIO <cit.> at swarm sizes varying from 0 to 40, it can be observed from Table <ref> that the two methods have similar initialization accuracy, which means the proposed factor graph optimization barely degrades the initialization accuracy.
§.§ State Estimation and Global Extrinsic Accuracy Evaluation
Swarm-LIO2 can achieve robust, accurate ego-state and mutual state estimation and provide effective global extrinsic transformation.
This capability is indispensable for various swarm applications, such as multi-UAV formation flight, mutual collision avoidance, collaborative exploration, etc. As explained in Section <ref>, the mutual state estimation is robust to mutual observation loss. To evaluate such performance, the simulation experiments are conducted in a randomly generated 3D forest-like scenario of dimension 60 × 40 × 8 m^3 with a swarm composed of 5 UAVs (Fig. <ref>). In this evaluation, each UAV needs to perform ego-state estimation as well as mutual state estimation of the other four UAVs in the simulated forest. After the swarm initialization, the UAVs fly through the forest from one side to the other, causing frequent mutual observation losses between any two UAVs due to the dense obstacles. As shown in Fig. <ref> in which UAV1 is selected as the self-UAV, despite the frequent mutual observation losses caused by occlusions, the ego trajectory and teammate trajectories estimated by Swarm-LIO2 on UAV1 can maintain smoothness and continuity. We also transform the point cloud maps constructed by each UAV to the global frame of UAV1, using the estimated global extrinsic transformations. As can be seen, the merged point cloud map maintains a high level of consistency, which qualitatively showcases the excellent accuracy of the global extrinsic estimation.
For quantitative evaluation, we compute the error (RMSE) of all the estimated UAV trajectories by comparing them to the ground-truth offered by the simulator. Since each of the N UAV trajectories is estimated N times by itself and the rest of the teammates, we compute the RMSE of all the N^2 estimated trajectories and compare them with Swarm-LIO<cit.>. The distribution of all the N^2 RMSE, separated by position and rotation, are illustrated in Fig. <ref>. It can be observed that compared to Swarm-LIO, Swarm-LIO2 achieves a similar accuracy despite the introduced marginalization operations.
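A sketch of how such an evaluation can be computed is shown below, assuming the estimated and ground-truth trajectories have already been time-aligned and expressed in a common frame; the data layout and function names are hypothetical.

```python
import numpy as np

def position_rmse(est_xyz, gt_xyz):
    """RMSE between time-aligned estimated and ground-truth positions ((T, 3) arrays)."""
    err = est_xyz - gt_xyz
    return float(np.sqrt(np.mean(np.sum(err ** 2, axis=1))))

def rotation_rmse(est_R, gt_R):
    """RMSE of the geodesic rotation error in degrees ((T, 3, 3) rotation stacks)."""
    angles = []
    for Re, Rg in zip(est_R, gt_R):
        c = np.clip((np.trace(Re.T @ Rg) - 1.0) / 2.0, -1.0, 1.0)
        angles.append(np.degrees(np.arccos(c)))
    return float(np.sqrt(np.mean(np.square(angles))))

def swarm_position_rmses(est, gt):
    """All N^2 position RMSEs: est[i][j] is UAV j's trajectory as estimated by UAV i,
    gt[j] is UAV j's ground-truth trajectory (all time-aligned, common frame)."""
    return [position_rmse(est[i][j], gt[j]) for i in est for j in est[i]]
```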
Finally, for the global extrinsic estimation, Fig. <ref> shows the initialization and online refinement of the global extrinsic transformation. We select UAV1 and UAV2 for analysis and depict the error of the estimated global extrinsic ^G_1𝐑_G_2 and ^G_1𝐩_G_2, where the ground-truth is provided by the simulator.
It can be seen that the estimation error gradually converges during the online refinement, and the final error of the global extrinsic is less than 1^∘ (for rotation) and 0.2m (for translation).
With the accurate global extrinsic, we can merge the point-cloud maps produced by different UAVs, which is extremely useful for large-scale collaborative mapping. As shown in Fig. <ref>, all the point-cloud maps (points belonging to teammate UAVs are filtered out since they are dynamic) are transformed into UAV1's global frame using the estimated global extrinsic transformations.
The consistently aligned map shown in Fig. <ref> indicates the accurate global extrinsic estimation of Swarm-LIO2.
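A minimal sketch of this map-merging step, assuming each map is given as an array of 3D points and each estimated global extrinsic as a rotation-translation pair, is shown below; the data structures are illustrative.

```python
import numpy as np

def merge_maps(maps, extrinsics):
    """Transform each teammate's point-cloud map into UAV1's global frame and concatenate.

    maps:       dict {uav_id: (M_k, 3) points in that UAV's global frame G_k}
    extrinsics: dict {uav_id: (R, p)} with ^{G_1}R_{G_k} and ^{G_1}p_{G_k};
                UAV1's own map uses the identity transform.
    """
    merged = []
    for uav_id, pts in maps.items():
        R, p = extrinsics[uav_id]
        merged.append(pts @ R.T + p)   # applies R @ x + p to every point (row-vector convention)
    return np.vstack(merged)
```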
§.§ Scalability Analysis in Time Consumption and Communication Bandwidth
To validate that Swarm-LIO2 possesses high scalability and can maintain efficient computation even at a large swarm size, a comparative experiment is conducted in a sparse simulated 3D forest-like scenario.
We compare the time consumption of Swarm-LIO2 to Swarm-LIO<cit.> at different swarm sizes. The entire framework of each method can be partitioned into several modules, including point clustering, mutual state estimation, ESIKF-based state estimation, etc. We analyze the time consumption of each module of the two methods and the results are shown in Fig. <ref>.
As can be seen, the computation time of clustering and mutual state estimation in the two methods linearly increases as the swarm size increases. This is because these two sub-modules need to be performed for every teammate UAV in the swarm system.
Moreover, since Swarm-LIO2 employs Fast Euclidean Clustering (FEC)<cit.> for clustering, which is extremely faster compared to traditional Euclidean clustering provided by the PCL library used in Swarm-LIO, the overall time consumption of clustering in Swarm-LIO2 is lower than that in Swarm-LIO.
For the ESIKF-based state estimation module, its time complexity is, in theory, cubic in the state dimension. In Swarm-LIO <cit.>, the state contains the ego-state and the global extrinsic transformations of all teammates, leading to a time consumption that increases rapidly with the swarm size. By contrast, in Swarm-LIO2, the state only includes the ego-state as well as the global extrinsic of teammates that are observed by, or observe, the self-UAV, which often saturates at a relatively small number due to mutual occlusions and the LiDAR FoV limit (see Fig. <ref>).
As a result, as the swarm size increases, the time consumption of Swarm-LIO2 increases sub-linearly and at a rather low rate, even exhibiting a saturation trend when the swarm size reaches a certain size.
To sum up, Swarm-LIO2 is highly scalable compared to Swarm-LIO in terms of time consumption, reducing the total consumed time by 7.83 ms, 31.65 ms, and 133.09 ms at swarm sizes of 10, 25, and 40, respectively.
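The scaling argument can be illustrated with a small back-of-the-envelope sketch, using the state dimensions mentioned earlier (18 for the ego-state, 6 per retained global extrinsic); the cap of four simultaneously observed teammates is an arbitrary illustrative value, not a number from the experiments.

```python
def esikf_state_dim(swarm_size, observed_teammates=None):
    """State dimension per update: 18 for the ego-state plus 6 per teammate whose
    global extrinsic is kept in the state. Without marginalization every teammate
    is kept; with it, only currently observed/observing teammates are kept."""
    if observed_teammates is None:          # Swarm-LIO-style: all teammates kept
        kept = swarm_size - 1
    else:                                   # Swarm-LIO2-style: marginalized state
        kept = min(observed_teammates, swarm_size - 1)
    return 18 + 6 * kept

# The per-update filter cost grows roughly with dim^3, so bounding the number of
# retained teammates (here at most 4 in view) keeps the cost nearly flat as the swarm grows.
for n in (5, 10, 25, 40):
    d_full = esikf_state_dim(n)
    d_marg = esikf_state_dim(n, observed_teammates=4)
    print(n, round(d_full ** 3 / d_marg ** 3, 1))   # relative cost ratio (illustrative)
```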
In addition to computational time consumption, the communication overhead might be another bottleneck that prevents the swarm scale from growing.
Therefore, it is crucial to evaluate the communication bandwidth usage under different swarm scales. We count the average data transfer volume per second, which is the average transmitting bandwidth usage, of Swarm-LIO2 and the previous version Swarm-LIO<cit.>, at swarm size varying from 0 to 40. From the results shown in Fig. <ref>, it can be observed that the average bandwidth usages of both methods increase linearly as the swarm size grows, but still remains at a low level since all the information to be communicated is of low dimension. When the swarm size is 40, the bandwidth usage of Swarm-LIO2 is below 250KB/s. Compared to the bandwidth of the Intel Wi-Fi 6E AX211 (Gig+) [https://www.intel.cn/content/www/cn/zh/products/sku/204837/intel-wifi-6e-ax211-gig/specifications.html] adapter used in our real systems, which is 2.4Gbps (approximately 300MB/s), the bandwidth usage of Swarm-LIO2 is almost negligible, indicating that the bandwidth is not a bottleneck at all. Besides, the transmitting bandwidth usage of Swarm-LIO2 is slightly larger than that of Swarm-LIO because there is additional information (, global extrinsic transformation) to be exchanged in Swarm-LIO2.
§.§ Fly Through a Degenerated Corridor
In this section, we conduct a simulation experiment in which five UAVs equipped with Livox Mid360 LiDARs need to fly through a degenerated corridor. In this case, the measurements of a single LiDAR cannot provide sufficient constraints for pose determination, but Swarm-LIO2 can perform robust and stable state estimation thanks to the mutual observation measurements from teammates. We compare the localization accuracy of our method to Swarm-LIO and several state-of-the-art LiDAR-inertial odometry methods for single-UAV systems, and the results showcase the superior robustness of Swarm-LIO2 in degenerated scenes.
Due to the page limit, we put the detailed descriptions, illustrations, and qualitative and quantitative results in the Supplementary Material <cit.>.
§.§ Localization Accuracy with Communication Loss
The wireless communication is assumed to be perfect in all the previous simulation tests.
However, in reality, communication issues such as dropouts are inevitable, since various interference sources, e.g., electromagnetic interference and physical occlusions, impact communication stability. In case of communication loss, Swarm-LIO2 still holds the connection status for two more seconds (Section <ref>), during which the teammates' states are predicted via (<ref>) using the constant velocity model and the last updated extrinsic. Once a teammate's connection status changes to “disconnected”, its states are no longer estimated until the teammate is reconnected. Holding the connection status for two more seconds effectively reduces false alarms caused by temporary communication loss such as transient network congestion. Regardless of communication loss, Swarm-LIO2 can reliably estimate the ego-state based on the LiDAR and IMU measurements. In the case of complete communication loss, Swarm-LIO2 degrades to FAST-LIO2 <cit.> and estimates the ego-state only.
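A minimal sketch of this hold-and-predict behavior, with a constant-velocity model and a 2 s hold window as described above, might look as follows; the interface itself is illustrative.

```python
import numpy as np

def predict_teammate(pos, vel, last_update_t, now_t, hold_s=2.0):
    """Constant-velocity prediction of a teammate's position while its packets are missing.

    Returns None once the hold window expires, i.e., the teammate is marked
    disconnected and its state is no longer estimated until reconnection.
    """
    dt = now_t - last_update_t
    if dt > hold_s:
        return None                                  # declare disconnected
    return np.asarray(pos) + np.asarray(vel) * dt    # CV-model prediction
```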
To validate the robustness of Swarm-LIO2 to communication loss, we evaluate the state estimation accuracy on a swarm composed of five UAVs in the simulation environment shown in Fig. <ref>, under different simulated packet loss rates (PLRs). We evaluate the accuracy by averaging the RMSEs of the N^2 trajectories, which are shown in Table <ref>. As can be seen, as PLR increases, the localization accuracy of Swarm-LIO2 does not deteriorate obviously, which clearly illustrates the remarkable robustness of Swarm-LIO2 to communication dropouts.
§ REAL-WORLD APPLICATIONS
To comprehensively demonstrate the properties of the proposed swarm state estimation method and its capability to support different applications, we conduct various experiments in real-world environments.
§.§ Hardware Platform
The experiment platform is a compact and cost-effective quadrotor UAV that is equipped with 3D LiDAR and IMU sensors.
The quadrotor UAV has a 280 mm wheelbase and is equipped with a Livox Mid360 LiDAR, which generates point clouds at a rate of 200,000 points per second and has a 360^∘ × 59^∘ field of view. As for the computation unit, each UAV carries an onboard Intel NUC computer featuring an i7-1260P CPU, coupled with a flight controller that provides IMU measurements at over 200 Hz. In all the real-world experiments, the LiDAR scan rate is 30 Hz. Reflective tapes are attached to each UAV for easy detection.
The spatio-temporal extrinsic of the LiDAR and IMU are pre-calibrated with <cit.>.
The hardware platform of our swarm system is shown in Fig. <ref>.
§.§ Inter-UAV Collision Avoidance
This experiment emulates a dense air traffic scenario by flying five UAVs in interleaved directions (Fig. <ref>(A1,B1)). Two flight tasks are demonstrated: in the first one, five UAVs initially hovering at the five vertices of a pentagon need to fly to a target position on the opposite side of the pentagon. In the second one, five UAVs initially hovering on one side of a field need to reach the other side of the field, meanwhile interchanging their positions. In both tasks, Swarm-LIO2 serves as the infrastructure for swarm initialization and swarm state estimation, which provides accurate global extrinsic transformations and real-time mutual state for inter-UAV collision avoidance. The inter-UAV collision avoidance is achieved by a swarm planner modified from <cit.>. The planned trajectories are fed into the motion controller<cit.> for execution.
The composite snapshots illustrating the entire flight process are shown in Fig. <ref>(A1,B1), with the estimated trajectories of each UAV shown in Fig. <ref>(A2,A3,B2,B3).
It can be observed that the estimated trajectories highly match the actual flights in the composite snapshots, which qualitatively validate the accuracy of Swarm-LIO2. We also analyze the state estimation result quantitatively, which is demonstrated in Section <ref>.
§.§ Fly Through a Dense Forest
To validate the performance of Swarm-LIO2 in cases of mutual observation loss, we conduct a test in a dense forest environment using the five UAVs as shown in Fig. <ref>. Each UAV needs to fly through a dense forest and reach each UAV's target point which is 40m away from the start point. During the whole process, no collision with obstacles in the environment or with teammate UAVs is allowed.
The initialization, goal transformation, and trajectory planning are the same as those in Section <ref>.
Then all the UAVs start to fly through the forest from one side to the other, shown in Fig. <ref>(A).
During the flight, the dense trees lead to frequent mutual observation losses, while Swarm-LIO2 can still achieve robust and smooth mutual state estimation. The trajectories of the five UAVs and the point-cloud of the forest are depicted in Fig. <ref>(A). The red trajectory represents the self-estimated flight trajectory of UAV1, while the green, orange, yellow, and purple trajectories represent the other UAVs' mutual state estimation results estimated by UAV1 in its respective global frame. Some details of the estimated trajectories when the UAVs avoid obstacles are illustrated in Fig. <ref>(B1-B3).
Throughout the entire mission, Swarm-LIO2 provides accurate, real-time state estimation results (see the quantitative analysis in Section <ref>) for the planning and control modules to achieve collision-free flights.
§.§ Target Tracking with Dynamic Joining and Leaving
To validate the plug-and-play property of Swarm-LIO2 which supports dynamic teammates joining and leaving, we conducted a collaborative target-tracking experiment with four UAVs. To enable fast detection of the target, the person being tracked wears a high-reflectivity vest, so his position can be easily detected from the high-reflectivity points from each UAV's LiDAR measurements. All UAVs have the same pre-programmed task: detecting and tracking a target, characterized by its high reflectivity and certain size, in a collaborative manner with teammates (if any) to maximize the overall target visibility, meanwhile avoiding the static and dynamic obstacles in the environment. In this process, Swarm-LIO2 serves as an infrastructure for automatic teammate finding, identification, and mutual state estimation, while trajectory planning is achieved by a decentralized swarm tracker in our previous work <cit.>.
Before the mission starts, UAV1 and UAV2 are placed at the same area 𝒫_1 as shown in Fig. <ref>(g,h) where they can communicate well and are commanded to complete the swarm initialization by flying one UAV along a figure-8 trajectory. After the initialization, the two UAVs form a swarm system of size two. UAV3 and UAV4 are placed separately in different locations 𝒫_2 and 𝒫_3 respectively where they can't detect the target due to occlusion.
The mission starts when the target enters the area 𝒫_1, where UAV1 and UAV2 successfully detect the target and start to track it collaboratively. To maximize the target visibility, the two UAVs form a straight line with the target in the middle, as shown in Fig. <ref>(a). UAV3 and UAV4 are actively searching for the target but did not find one due to occlusions and FoV limit, hence they remain at their respective initial position.
Subsequently, the target moves to the area 𝒫_2, where it is detected by UAV3. Then UAV3 takes off and starts to track the target. Since UAV3 is not yet part of the swarm (it has neither been identified as a teammate by UAV1 and UAV2 nor the global extrinsic transformations are calibrated), it tracks the target in a solo manner by treating UAV1 and UAV2 as dynamic obstacles to avoid, as shown in Fig. <ref>(b). Similarly, UAV1 and UAV2 remain in their current formation, which is the optimal one for the tracking mission, while treating UAV3 as a dynamic obstacle to avoid. As a consequence, the swarm of UAV1 and UAV2 and the individual UAV3 both execute their pre-programmed task, although not in a collaborative manner due to the lack of a prior initialization.
As the tracking goes on, UAV3 flies a trajectory during which its identity, temporal offset, and global extrinsic w.r.t. either of UAV1 and UAV2 can be estimated by Swarm-LIO2 on the fly. With this online initialization, UAV3 joins the swarm, forming a swarm system of size three. The new swarm system then changes its formation into a triangle shape, with the target in the center, to maximize the tracking visibility (see Fig. <ref>(c)). This process takes place automatically without interrupting the tracking task. When the target approaches area 𝒫_3 where UAV4 is placed, the swarm size increases further to four and the UAVs form the shape of a square to maximize the tracking visibility (see Fig. <ref>(d, e)).
At last, UAV1 is killed intentionally to emulate a scenario in which one agent in the swarm experiences failures. Swarm-LIO2 on each remaining UAV can detect the teammate dropout, update its teammate list, and estimate the rest of teammate states all automatically, indicating that Swarm-LIO2 is robust to single-point-of-failure (Fig. <ref>(f)). Correspondingly, the planner quickly transforms the formation back into a triangle shape (Fig. <ref>(f)).
To sum up, in the entire target-tracking process, Swarm-LIO2 can conduct initialization on the fly, discover newly joined teammates or dropout teammates dynamically, and estimate the ego-state and mutual state in real-time, all take place automatically without interrupting the tracking task. This enables the swarm to adapt its formations to optimize task completion. Moreover, Swarm-LIO2 can provide consistent and accurate state estimation results throughout the entire mission (see quantitative results in Section <ref>), assuring excellent target-tracking performance. To validate that Swarm-LIO2 possesses applicability to various environments, we also experimented with an indoor setting and a low-light night setting (see the attached video at https://youtu.be/Q7cJ9iRhlrYhttps://youtu.be/Q7cJ9iRhlrY).
§.§ More Experiments
We conduct two more real-world experiments to highlight the superior robustness and broad applicability of Swarm-LIO2. In the first experiment, two UAVs equipped with different LiDARs fly in a degenerated scenario. With mutual observation constraints provided by Swarm-LIO2, the two UAVs can achieve centimeter-level localization accuracy. In the second experiment, the accurate mutual state estimation and global extrinsic calibration capability of Swarm-LIO2 enables three UAVs to transport a payload cooperatively from an outdoor scene into a building.
Due to the page limit, we put the detailed descriptions, illustrations, and quantitative results in the Supplementary Material <cit.>.
§.§ Time Consumption Analysis
In this section, we evaluate the average computational time per LiDAR scan (unit: ms) of the aforementioned real-world experiments (from Section <ref> to Supplementary <cit.>), tested on the onboard computer NUC equipped with an Intel i7-1260P CPU.
We compare the time consumption of Swarm-LIO2 with FAST-LIO2<cit.> (an efficient single-LiDAR LIO system), LiLi-OM<cit.> (an optimization-based single-LiDAR LIO system) and our previous work Swarm-LIO <cit.>.
The average computational time of different methods in the different experiments is shown in Table <ref>.
As can be seen, Swarm-LIO2 improves the computation efficiency significantly when compared to Swarm-LIO, due to the introduced state marginalization.
In the experiment shown in Supplementary III <cit.>, the time consumption of Swarm-LIO2 and Swarm-LIO are at a similar level because the three UAVs need to remain close to each other, leading to full mutual observation during the entire process. Therefore, the reduction in computation time caused by marginalization is not significant, and the slight reduction is primarily attributed to the more efficient point clustering algorithm FEC<cit.>.
However, in the other four experiments, since the mutual observation is frequently lost due to occlusions, the computational time of Swarm-LIO2 is obviously less than that of Swarm-LIO mainly due to the proposed marginalization operation. Compared to LiLi-OM, Swarm-LIO2 consumes much less time since it avoids time-consuming feature extraction and utilizes the efficient ESIKF framework. Compared to FAST-LIO2, despite Swarm-LIO2 incorporating many additional modules and handling more complex problems, it only incurs approximately 20% more computation time on average.
Finally, since the LiDAR scan rate is 30 Hz, the limit for real-time computation is about 33.33 ms per frame. In all the real-world experiments, the computational time of Swarm-LIO2 is far below this limit, showcasing the excellent real-time performance of Swarm-LIO2.
§.§ Quantitative Analysis of State Estimation
In this section, we quantitatively evaluate the state estimation consistency of Swarm-LIO2 in the aforementioned real-world experiments. Since in most real-world experiments no ground-truth of the UAVs' states can be obtained, for a swarm system containing N UAVs, we compute the standard deviation of all the N^2 estimated UAV trajectories in each experiment to quantitatively evaluate the state estimation consistency.
The computed standard deviations of rotation and translation estimated by Swarm-LIO2 and Swarm-LIO<cit.> are illustrated in Table <ref>. As can be seen, the standard deviation of position and rotation is at the centimeter level and the degree level, respectively, indicating the excellent consistency of the swarm state estimation (both ego and mutual) of Swarm-LIO2. From the comparison with Swarm-LIO, it is evident that the two methods have similar state estimation consistency, indicating that the marginalization operations adopted in Swarm-LIO2 nearly do not impact the performance.
§.§ Communication Bandwidth Analysis
We quantitatively evaluate the data transfer volume (TX and RX) per second of each UAV in the aforementioned real-world experiments. The results are shown in Table <ref>. Both systems have extremely low average bandwidth usage, which is under 35KB/s when the swarm system contains 5 UAVs.
In all the real-world experiments of Swarm-LIO2, the adopted wireless network adapter on each UAV is Intel Wi-Fi 6E AX211 (Gig+) with 2.4Gbps (approximately equals to 300MB/s) bandwidth, four orders of magnitude higher than the actual usage. Compared with Swarm-LIO, Swarm-LIO2 consumes a slightly larger bandwidth since it exchanges more information, including global extrinsic transformations and degeneration status.
We also calculated the packet loss rate (PLR) of the two methods. The average PLR is 11.08% for Swarm-LIO2 and 10.36% for Swarm-LIO, which are similar. The accurate state estimation shown previously, despite such packet loss, indicates the excellent robustness of our system.
§ CONCLUSION AND FUTURE WORK
In this paper, we proposed Swarm-LIO2, a decentralized, efficient state estimation framework based on LiDAR and IMU measurements for aerial swarm systems. A decentralized temporal calibration approach was utilized to calibrate the inter-UAV temporal offset.
Novel reflective tape-based UAV detection, trajectory matching, and factor graph optimization-based methods were proposed to perform efficient and fast teammate identification and global extrinsic calibration. A novel marginalization module was proposed to reduce the state dimension and further improve the swarm scalability, and a degeneration evaluation module was presented to ensure robust ego-state determination.
Furthermore, we introduced elaborate measurement modeling and temporal compensation of the mutual observation measurements, enhancing our state estimator's accuracy and consistency. By exchanging bandwidth-efficient information via an Ad-Hoc network, the mutual observation measurements are tightly coupled with IMU and point-cloud measurements under an ESIKF framework, fulfilling real-time, accurate ego-state and mutual state estimation. Using simulation benchmarks, we compared our LiDAR-inertial odometry with other state-of-the-art LIO methods, demonstrating excellent robustness to LiDAR-degenerated scenarios. In addition, the analysis of computational time and communication bandwidth usage at different swarm scales showcases the superior scalability of our method.
Besides, we integrated our method into a UAV swarm composed of at most five UAVs with fully autonomous state estimation, planning, and control modules.
Various simulated and real-world experiments were conducted, demonstrating that our method serves as an infrastructure for aerial swarm systems and can support a wide range of UAV swarm applications.
In the future, we will focus on extending Swarm-LIO2 to a more complete swarm SLAM system by incorporating loop closure modules and historical pose correction, to ensure low drift after a long time of running.
§ ACKNOWLEDGEMENT
The authors thank Ms. Yang Jiao, Mr. Wendi Dong, and Mr. Meng Li for the helpful discussion, and Prof. Ximin Lyu for the experiment site support. The authors acknowledge funding from CETC and DJI, and equipment support from Livox Technology.
§ SUPPLEMENTARY MATERIALS FOR SWARM-LIO2: DECENTRALIZED, EFFICIENT LIDAR-INERTIAL ODOMETRY FOR UAV SWARMS
§ FLY THROUGH A DEGENERATED CORRIDOR
In this section, we conduct a simulation experiment in which five UAVs equipped with Livox Mid360 LiDARs need to fly through a degenerated corridor (Fig. <ref>). In this case, the measurements of a single LiDAR can not provide sufficient constraints for pose determination, but Swarm-LIO2 can perform robust and stable state estimation thanks to the mutual observation measurements from teammates.
In the simulation, the UAVs fly cooperatively through the corridor one by one. When the first UAV, here is UAV2, flies into the corridor, other UAVs hover at the entrance (where sufficient structural features exist for their pose estimation) and provide mutual observation measurements for UAV2. As shown in Fig. <ref>(B1), when UAV2 detects LiDAR degeneration, it leverages mutual observation measurements from the rest UAVs to achieve robust state estimation. Then UAV2 flies through the corridor and hovers at the end of the corridor, offering mutual observation measurements for the rest UAVs to pass through the corridor (Fig. <ref>(B2-B4)).
As far as we know, apart from our previous work Swarm-LIO <cit.>, there is no other open-sourced 3D LiDAR-based state estimation method for UAV swarms; thus we compare the localization accuracy of our method to Swarm-LIO and some state-of-the-art LiDAR-inertial odometry methods for single-UAV systems, including FAST-LIO2 <cit.>, Point-LIO <cit.>, and Faster-LIO <cit.>, in which the ground-truth of the ego-state is provided by the simulator.
Taking UAV2 as an example, the point-cloud map, the self-estimated position trajectory, and the ground-truth trajectory are illustrated in Fig. <ref>, with quantitative results supplied in Fig. <ref>. It can be observed that in degenerated scenes, by fusing mutual observation measurements, the localization errors of Swarm-LIO2 and Swarm-LIO are much smaller than those of other single-agent LiDAR-inertial odometry methods. Moreover, in such a degenerated scenario, Swarm-LIO2 achieves slightly better self-localization robustness and accuracy than Swarm-LIO, which is mainly attributed to the careful measurement modeling (Section V-C1) and temporal compensation (Section V-C2) in the paper.
§ FLIGHT IN DEGENERATED SCENARIO
To validate the robustness of Swarm-LIO2 in LiDAR degenerated environments, we conduct a flight experiment for a swarm consisting of two UAVs. UAV1 carries a Livox Mid360 LiDAR, while UAV2 carries a Livox Avia LiDAR with a smaller FoV (especially in the horizontal direction) which is only 70.4^∘ × 77.2^∘, shown in Fig. <ref>(d). We instructed UAV2 to follow a pre-planned trajectory, shaped like “MARS" which is the name of our laboratory. At certain poses, the Avia LiDAR mounted on UAV2 would directly face a smooth plane, leading to LiDAR degeneration. During the entire flight, UAV1 is flying behind UAV2 as an observer to provide UAV2 passive mutual observation measurements.
We compare the self-localization result of our method with a representative single-agent LIO system, FAST-LIO2 <cit.>, as shown in Fig. <ref>. Since the LiDAR measurements cannot provide sufficient constraints for pose determination, the state estimation of FAST-LIO2 soon diverges, the odometry drifts significantly, and the point-cloud map becomes messy. For Swarm-LIO2, the passive observation measurements offered by UAV1 provide the necessary information for robust localization and a consistent map for the degenerated UAV2. The ground-truth provided by the motion capture system, the self-estimated trajectory of UAV2, and the trajectory of UAV2 estimated by UAV1 are depicted in Fig. <ref>. The average position error estimated by UAV2 itself is 0.043 m and that estimated by UAV1 is 0.059 m, both at the centimeter level.
§ COOPERATIVE PAYLOAD TRANSPORTATION
In this section, we implemented an application in which a swarm composed of three UAVs completes a payload transportation from an outdoor location to an indoor area, see Fig. <ref>. The focus of this application is to demonstrate the temporal offset estimation and global extrinsic calibration capability of Swarm-LIO2, rather than trajectory planning for the swarm. Therefore, in this experiment, the trajectory for UAV1 is planned in advance to ensure collision-free flight.
The trajectories of the other two UAVs are obtained by transforming the pre-planned trajectory online into their respective global frames with appropriate offsets, using the precise temporal offset and global extrinsic transformations provided online by Swarm-LIO2. The three trajectories are then tracked independently with the controller in <cit.>.
After loading the payload, the three UAVs fly through a low-light outdoor scenario shown in Fig. <ref>(a), and ultimately enter a building through a window, shown in Fig. <ref>(b). Throughout the entire flight, the three UAVs maintain a triangular formation, ensuring that each UAV contributes nearly equal pulling forces. It is the accurate state estimation, temporal offset estimation, and global extrinsic calibration that empower the UAVs to maintain the correct formation at any given moment and successfully complete the payload transportation mission without collisions. The point-cloud map of the whole experiment site, which is constructed in real time, and the transformed paths of the three UAVs are illustrated in Fig. <ref>(c). The poses of the three UAVs and the point cloud of the environment at the moment the UAVs fly through the window are illustrated in Fig. <ref>(d). It is worth noting that the entire swarm successfully flies through the narrow window without any collisions between the UAVs, the payload, and the surrounding environment.
| In recent years, multi-robot systems, especially aerial swarm systems, have exhibited great potential in many fields, such as collaborative autonomous exploration<cit.>, target tracking<cit.>, search and rescue<cit.>, etc. Thanks to their great team cooperation capability, swarm systems can complete various missions in complex scenarios, even in degenerated environments for a single robot.
For a single robot system, well-developed state estimation techniques provide accurate ego-state estimation <cit.>, serving as a critical precondition for a wide variety of autonomous tasks such as trajectory planning <cit.> and motion control <cit.>. For robotic swarm systems, state estimation plays an equally significant role <cit.>, where each robot needs to estimate the state of the self-UAV (i.e., ego-state estimation) as well as that of the other teammate UAVs (i.e., mutual state estimation). Accurate and robust estimation of ego and mutual states is crucial for robot swarms to collaborate on a task.
Over the past few decades, multiple sensors and devices have been adopted to achieve reliable state estimation for robotic swarm systems.
GPS and RTK-GPS are commonly used for self-localization in outdoor environments, as reported in previous studies <cit.>. For GPS-denied environments, motion capture systems <cit.> and anchor-based Ultra-WideBand (UWB) systems <cit.> have been utilized for state estimation in multi-robot systems. These methods <cit.> often rely on the stationary ground station, resulting in a centralized system that is prone to single-point-of-failure. In more recent research, cameras have become a popular choice in multi-robot systems due to their lightweight design, low cost, and rich color information. These camera-based systems are often complemented by an Inertial Measurement Unit (IMU) and an anchor-free UWB to provide more robust state estimation <cit.>.
However, cameras are vulnerable to inadequate illumination and lack direct depth measurements, leading to high computational complexity in computing 3D measurements.
Although the complementary anchor-free UWB can provide distance measurements, it is susceptible to multi-path effects and obstacle occlusion in the environment, which decreases the overall system accuracy.
In recent years, 3-D light detection and ranging (LiDAR) sensors have gained popularity in state estimation due to their ability to provide direct, accurate 3D measurements over a long range and various illumination conditions. While traditional mechanical spinning LiDARs are often expensive and heavy, recent advancements in LiDAR technology have introduced cost-effective and lightweight LiDARs that are suitable for deployment on mobile robots, particularly unmanned aerial vehicles (UAVs). These LiDARs have not only enabled the development of autonomous navigation systems using LiDARs for UAVs <cit.>, but opened up new possibilities for state estimation in swarm systems.
Leveraging the above LiDAR advantages, this paper aims to develop a fully decentralized, plug-and-play, computationally efficient, and bandwidth-friendly state estimation method for aerial swarm systems based on LiDAR-inertial measurements. Fully decentralized means no master agent exists in any module of the whole system from communication hardware to algorithm software, which avoids single-point-of-failure. Plug-and-play means that an agent can automatically join the swarm and easily collaborate with other teammates before or in the middle of a mission. The system must also be computationally efficient and bandwidth-efficient, since the limited payload capacity of UAVs imposes significant constraints on the computational resources and network bandwidth.
We decompose the task of aerial swarm state estimation into two key online modules, initialization and state estimation.
In the initialization module, each UAV needs to detect and identify possible new teammate UAVs, and calibrate temporal offsets and global extrinsic transformations with the found teammates.
To achieve simple but effective teammate detection, reflective tapes are attached to each UAV, making teammate UAVs easily detectable from LiDAR reflectivity measurements. This teammate detection is conducted in real-time at each LiDAR scan measurement, enabling the detection of new teammate UAVs even in the middle of a mission. For each detected new UAV, its identification and global extrinsic transformation are obtained through trajectory matching, while the global extrinsic with the rest of teammate UAVs found on the network are swiftly calibrated through a factor graph optimization. Moreover, by exchanging low-dimensional data via a decentralized Ad-Hoc network, teammate monitoring and temporal calibration can be fulfilled efficiently and in a fully decentralized manner.
In the state estimation module, each UAV in the swarm system performs real-time, robust, and precise ego-state estimation as well as mutual state estimation. Estimating the full state of all teammates in each UAV is computationally demanding. Thus, we propose to estimate on each UAV only the ego-state, meanwhile refining the global extrinsic transformations w.r.t. (with respect to) the teammates. The ego-state and global extrinsic transformations are estimated efficiently within an Error State Iterated Kalman Filter (ESIKF) framework, by tightly fusing LiDAR point-cloud measurements, IMU measurements, and observed teammate locations (, mutual observation measurements), which are enhanced by careful measurement modeling and temporal compensation. In each step of the state estimation, we marginalize the extrinsic states of all teammate UAVs not observed in the LiDAR. This state marginalization along with a degeneration evaluation method prevents the state dimension (hence computational complexity) from growing with the swarm size, effectively enhancing the scalability of our system to larger swarms.
This paper is extended from our previous work<cit.>, which proposed the general framework of swarm LiDAR-inertial odometry. Compared to the previous work <cit.>, this paper proposes five crucial extensions:
* Factor graph optimization for efficient teammate identification and global extrinsic calibration, which largely decreases the complexity and energy consumption of the swarm initialization. Specifically, the number of flights required in the initialization of a swarm with N UAVs is reduced from O(N) to O(1).
* A novel state marginalization strategy and a LiDAR degeneration evaluation method that alleviate the computational burden and enhance the swarm scalability. The marginalization reduces the growth rate of the state estimation complexity from cubic to sub-linear.
* Detailed measurement modeling and carefully designed temporal compensation of the mutual observation measurements, to compensate for the temporal mismatch due to asynchronous sensor measurements among different UAVs.
* Comprehensive simulation and real-world experiments verifying the effectiveness of Swarm-LIO2, e.g., support of large swarm scales (tested with 5 UAVs in the real world as shown in Fig. <ref> and 40 UAVs in simulation), robustness to degenerated scenes, and support for dynamic changes of the swarm size with online joining or dropping out of any teammate UAVs.
* An open source implementation of the proposed system, termed as Swarm-LIO2, including source codes of the algorithms and hardware designs of our aerial platforms. | null | null | null | null | null |
http://arxiv.org/abs/2409.17216v1 | 20240925175901 | Data-Centric AI Governance: Addressing the Limitations of Model-Focused Policies | [
"Ritwik Gupta",
"Leah Walker",
"Rodolfo Corona",
"Stephanie Fu",
"Suzanne Petryk",
"Janet Napolitano",
"Trevor Darrell",
"Andrew W. Reddie"
] | cs.CY | [
"cs.CY",
"cs.AI"
] |
Walker: Self-supervised Multiple Object Tracking by Walking on Temporal Appearance Graphs
Mattia Segu1,2 Luigi Piccinelli1 Siyuan Li1
Luc Van Gool1,3 Fisher Yu1 Bernt Schiele2
September 28, 2024
===========================================================================================
§ ABSTRACT
Current regulations on powerful AI capabilities are narrowly focused on “foundation” or “frontier” models. However, these terms are vague and inconsistently defined, leading to an unstable foundation for governance efforts. Critically, policy debates often fail to consider the data used with these models, despite the clear link between data and model performance. Even (relatively) “small” models that fall outside the typical definitions of foundation and frontier models can achieve equivalent outcomes when exposed to sufficiently specific datasets. In this work, we illustrate the importance of considering dataset size and content as essential factors in assessing the risks posed by models both today and in the future. More broadly, we emphasize the risk posed by over-regulating reactively and provide a path towards careful, quantitative evaluation of capabilities that can lead to a simplified regulatory environment.
§ THE SHORTCOMINGS OF TODAY’S AI GOVERNANCE
The past decade has seen a rapid burst of commercial AI products, such as Google Translate and OpenAI’s ChatGPT, delivering new capabilities into the hands of the public. As AI has made its way to wider audiences, it has continued its rapid pace of development, giving everyday users what were previously highly specialized computing tools and capabilities. This raises questions for governments, academics, and commercial labs about whether certain AI capabilities or behaviors should be deemed as too “risky” for public access <cit.>.
Today’s AI governance efforts have coalesced around the terms “frontier”, “foundation”, “dual-use”, and “general purpose” to describe the largest and most capable of these models. In policy papers and legislation, models described by these terms are subject to additional scrutiny and regulatory interest. These terms are usually synonymous with the most cutting-edge models of today including OpenAI’s ChatGPT <cit.>, Meta’s LLaMA <cit.>, and Google’s Gemini <cit.>. Although there is general agreement for the types of AI-accelerated risks that regulations aim to curtail, there is much less clarity and consensus in concrete definitions for such models. In an effort to define the characteristics of these large, capable models, a number of policy documents have focused on parameter counts and/or FLOPs, measures of model size and compute requirement <cit.>.
We argue that this approach is short-sighted for three reasons. First, there is no consistent definition of “frontier”, “foundation”, “dual-use”, and “general purpose” models with regard to FLOPs or parameter count. As discussed below in Section <ref>, this lack of definitional clarity has led to a regulatory and governance landscape with varying ceilings for what constitutes a covered capability. Second, as machine learning advances, models are becoming more efficient, requiring fewer parameters and FLOPs to achieve the same tasks. This means that the next generation of models could be more capable while falling below regulatory ceilings. Finally, the focus on the largest and most compute-intensive models ignores the fact that smaller models can be just as capable in niche, and potentially risky, areas as their larger counterparts. These factors culminate in inadvertent loopholes through which powerful capabilities can slip, rendering expensive regulatory efforts not only useless but potentially detracting from beneficial uses of AI technologies.
Analyzing AI purely as a function of models is an outdated view. The broader field of machine learning has recognized the role of data as a direct indicator of model performance <cit.>, suggesting that dataset quality and size should also be included as factors in conversations surrounding model capabilities.
In this paper, we first discuss the limitations of the current model-focused governance ecosystem. Then, we demonstrate the value of a data-focused approach to AI governance. In particular, we present experiments corroborating the role that dataset size plays in model capability. Finally, we propose legal and technical approaches to AI governance rooted in our understanding of the dataset-model capability relationship.
§ DEFINITIONAL CHALLENGES AND FLAWED LIMITS IN AI GOVERNANCE
Over time, the capabilities of digital systems have improved in tandem with hardware advancements <cit.>. This is also true of machine learning models; simple two-parameter logistic regressions have evolved to deep neural networks with trillions of parameters, enabled by the advent of graphics processing units (GPUs) and high-speed memory[This natural progression in size and complexity was formalized as a computing law by Gordon Moore and is colloquially known as Moore’s Law.] <cit.>. Yet, Moore’s Law has weakened significantly. We have observed periods of stagnation, and even reversal, in aggregate computing trends despite overall progress in terms of effectiveness of outcomes <cit.>. The trend of plateauing is also observed in machine learning, and policies aiming to regulate machine learning models solely as a function of continuous growth are flawed, as demonstrated in Section <ref>.
In addition, much of the conversation around AI regulation has centered itself around the prevention of behaviors that are deemed to be “harmful” or otherwise detrimental to society <cit.>. The mention of “harm” is too often unqualified and does not address the capabilities of existing technologies that may already be capable of much of the malicious behavior discussed in AI policy circles today. For example, AI for biological agent design is widely cited as a potential harm <cit.>, yet computational drug discovery has been the norm since the 1980s and has enabled the discovery of drugs such as ritonavir, a medication critical in treating both HIV and COVID-19 <cit.>. The conversation surrounding the use of AI to further societal harms must contextualize the additional marginal risk posed by these methods when compared to existing technologies such as search engines or statistical inference algorithms.
The AI ecosystem’s difficulty in defining and identifying harm extends into inconsistent efforts to identify “harmful” or “risky” models and regulate them. In the following sections, we demonstrate the shortcomings and inconsistencies of these model-focused AI governance efforts, while identifying key drivers of AI risk that are currently overlooked in modern AI policy.
§.§ An Unstable Definition Foundation
The use of the terms “foundation”, “frontier”, “dual-use”, and “general purpose” to describe machine learning models has arisen in the past few years in an effort to isolate classes of models seen as posing the greatest risk of harm to public safety. In 2021, Stanford University researchers introduced the concept of “foundation models” in “On the Opportunities and Risks of Foundation Models” <cit.>. The paper uses the term to describe machine learning models trained using self-supervised learning methods on large sets of data to the point that they demonstrate emergent behaviors during inference.
Self-supervised learning
A model-training technique using unlabeled data rather than relying on external human-provided labels. Oftentimes, labels are generated from the data itself or are inherent in the training process.
Inference
The stage of machine learning in which a model makes predictions on new inputs based on its knowledge up to that point. Model weights are not updated at inference time.
The term “foundation model” has spread swiftly throughout the AI research community to a point of saturation where any model trained on a subjectively large set of data can be termed “foundational.” More recently, the terms “frontier model,” introduced in “Frontier AI Regulation: Managing Emerging Risks to Public Safety” <cit.> and “dual-use model,” found in the “Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” <cit.> and the EU AI Act <cit.>, have arisen as similar descriptions of large, cutting-edge models with an increased potential for harm. The cross-cutting conclusion of the literature has been that these types of models can pose serious risks to the general public and should be governed as such.
However, despite launching regulatory processes aimed at similar risk-curtailing outcomes, reports, legislation, and other forms of governance documents lack agreement on the definitions used to circumscribe powerful AI capabilities. In Table <ref>, we highlight various impactful papers and policies that have shaped international AI governance. In particular, we highlight how the influential works that first introduced various terms and thresholds disagree with their actualization in policy proposals.
[2]Despite using the words “dual-use”, the definition provided in the document is more closely aligned with accepted definitions of “general purpose.”
Terminology such as “foundation” and “frontier” are terms of art with non-static and contentious definitions, suggesting that utility-based terminology such as “general purpose” may serve as a better regulatory term instead. Furthermore, a leading approach is to bound “risky” AI models in terms of the amount of computation required to train them. As we demonstrate in Section <ref>, these thresholds do not appropriately bound “risky” AI models, which is a driving goal for regulatory efforts. Additionally, the documents that discuss training on “large” amounts of data do not define how many data points meet the bar, leaving leeway for bound parties to argue exemptions.[Historically, the computing paradigm of “big data” suffered from similar criticisms with no concrete amount or volume of data being defined for the purpose of strict regulation.]
§.§ Capability and Model Size are not Strictly Correlated
Today’s AI governance efforts regularly seek to define frontier models by their size or the amount of computation required to train them. As reflected in some of the governance documents analyzed above, a common strategy is to set a regulatory threshold on the number of parameters included in a model. The rationale behind this approach is a set of experiments that demonstrate that models with larger numbers of parameters, with all other factors held constant, suddenly perform drastically better on downstream tasks they are not explicitly trained for <cit.>. This phenomenon was termed “emergence” and drove fears that sufficiently large models, by default, can perform well on tasks that pose risks to public safety.
Parameter
A variable within a model that is learned from the training data. Parameters define how the model makes predictions by influencing the model's internal structure and decision-making process.
Discussions prioritizing model size as a viable threshold fixate on a superficial, easy-to-obtain quantity that is ultimately a red herring. In reality, model capacity and generalizability represent characteristics that are innately difficult to quantify and measure. Not only are current generalization benchmarks lacking in accurate definitions for model capabilities <cit.>, but it is exceedingly common for smaller, more task-focused models to perform better than large, broad-purpose models on specific downstream tasks, as demonstrated below.
Downstream tasks
Applications that take model outputs as input, re-purposing model knowledge for a new problem that it was not explicitly trained for. The model parameters may optionally be updated via further training using additional data from these tasks.
We use the task of image segmentation as an example where smaller models can outperform their larger counterparts. Examples of image segmentation include both civil and national security applications such as building damage assessment or object targeting <cit.>. Specifically, we examine RefCOCO <cit.>, a common image segmentation dataset used to train vision-language models (VLMs), and two models which attain near-state-of-the-art performance on it, PaliGemma <cit.> and UniLSeg <cit.>. PaliGemma is a large VLM released openly by Google consisting of 3.0×10^9 parameters <cit.>. On the other hand, UniLSeg, released by Tsinghua University, ByteDance, and the University of Hong Kong, consists of only 1.7×10^8 parameters—an order of magnitude smaller than PaliGemma. Yet, UniLSeg achieves a mean intersection-over-union of 81.7 versus PaliGemma’s 73.4 on RefCOCO, which is a massive gain of ∼11.3% in performance.[Model performance numbers are obtained from their respective papers and https://paperswithcode.com/sota/referring-expression-segmentation-on-refcocoPapers With Code. Parameter counts are derived from the respective papers.] Figure <ref> additionally demonstrates the performance of two more near-state-of-the-art models, UNINEXT <cit.> and HIPIE <cit.>, on RefCOCO for completeness.
Image segmentation
A task in which an image is split into regions which each represent a specific object type or concept.
Mean intersection-over-union
A measure of how accurately and precisely, on average, an image segmentation model performs. This ranges from zero to one.
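A minimal sketch of how this metric can be computed for binary segmentation masks is given below; the toy masks and their overlap are purely illustrative and are not drawn from RefCOCO or any of the models discussed above.

    import numpy as np

    def iou(pred: np.ndarray, target: np.ndarray) -> float:
        # Intersection-over-union of two boolean segmentation masks.
        pred, target = pred.astype(bool), target.astype(bool)
        union = np.logical_or(pred, target).sum()
        inter = np.logical_and(pred, target).sum()
        return float(inter) / float(union) if union else 1.0

    def mean_iou(pairs) -> float:
        # Average IoU over (prediction, ground-truth) mask pairs.
        return float(np.mean([iou(p, t) for p, t in pairs]))

    # Toy 4x4 masks: prediction and ground truth overlap on 2 of 6 foreground pixels.
    pred = np.zeros((4, 4), dtype=bool)
    pred[0, :] = True                  # predicted region: top row
    target = np.zeros((4, 4), dtype=bool)
    target[0:2, 2:] = True             # true region: top-right 2x2 block
    print(mean_iou([(pred, target)]))  # 2 / 6, approximately 0.33

Benchmark scores such as those quoted for RefCOCO are this quantity averaged over the evaluation set.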
To further underline this point, we visualize the accuracy of top open-source language models on the Massive Multitask Language Understanding benchmark[MMLU consists of questions in the “subjects in the humanities, social sciences, hard sciences, and other areas that are important for some people to learn.”] <cit.> as a function of model parameter count in Figure <ref>. Low parameter count does not imply incapability; parameter counts alone are therefore an insufficient quantity for defining capability frontiers. More parameters are helpful insofar as they can fit an appropriately larger amount of data—the two concepts must be bundled to properly circumscribe AI capabilities.
§.§ A Misplaced Focus on FLOPs
FLOPs
Floating point operations. The cumulative number of floating point operations (e.g., 3.5 x 7.1) used during model training. Not to be confused with floating point operations per second (FLOPS, with a capital S).
Definitions of foundation and frontier models (see Table <ref>) include regulatory thresholds defined by cumulative training FLOPs. These numbers have no basis in outcomes or technical reality, as we demonstrate in the following section. 10^26 FLOPs appeared as an arbitrary FLOPs threshold in the October 2023 Biden Administration Executive Order.
Some of the largest models in existence today are sufficient to employ in harmful activities <cit.>, yet all fail to meet American FLOPs thresholds (see Table <ref>), raising questions about the threshold's usefulness. These same models are covered under the EU’s proposed threshold of 10^25 FLOPs for AI models. However, a fractured environment in which a model regulated in France might not be subject to the same regulations in the United States will lead to confusion.
These thresholds further exacerbate the perception that frontier capabilities can only arise from large models trained with a large amount of computation on larger datasets. As we further demonstrate in this section, even smaller models trained with fewer resources on smaller datasets can set a capability frontier. In fact, research incentives necessitate the creation of methods that reduce computational needs for model training—a trend that is contrary to regulatory assumptions.
Optimizations reverse trends. One way to visualize the futility of FLOPs thresholds is via recent works such as those on efficient sparse training <cit.> (Figure <ref>) or other architectural improvements <cit.>. They demonstrate that model performance can, in some cases, be decoupled from computational cost—models can train faster and more accurately with fewer parameters and FLOPs. Further research demonstrates decoupling in the opposite direction, i.e., efficient training can occur in compute-constrained environments. Models distributed across multiple machines can be trained with a fraction of parameters while equaling performance at the cost of increased FLOPs <cit.>. As such, policies solely relying on FLOP ceilings to bound “frontier” models are relying on simplified computing proxies that may not correlate to desired outcomes of controlling the spread of “risky” models.
Public disclosure of metrics such as FLOPs is beneficial; however, most well-known commercial AI models do not publicly disclose the amount of FLOPs utilized in the course of training their models. Open-source models, by definition, have exact FLOPs counts available. Below, we provide estimates of FLOPs for a variety of large vision and language models, both commercial and open-source. For proprietary models, these estimates are based on assessments from third parties rather than concrete disclosures from the respective AI companies.
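To illustrate how such third-party estimates are commonly produced, the sketch below applies the widely used rule of thumb of roughly six floating point operations per parameter per training token for dense transformers; the model configuration is hypothetical and the thresholds are the two discussed above.

    def approx_training_flops(n_parameters: float, n_tokens: float) -> float:
        # Common rule of thumb for dense transformers: ~6 FLOPs per parameter
        # per training token (forward plus backward pass).
        return 6.0 * n_parameters * n_tokens

    # Hypothetical configuration: 70 billion parameters, 2 trillion training tokens.
    flops = approx_training_flops(70e9, 2e12)
    print(f"{flops:.2e} training FLOPs")   # ~8.40e+23
    print(flops >= 1e25, flops >= 1e26)    # False, False: below both thresholds

Under this approximation, even a model at this scale falls one to two orders of magnitude below the proposed thresholds, despite such models already being usable in the harmful activities cited above.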
Efficient methods develop rapidly. AI research progresses rapidly and the development of efficient methods is an entire subfield with deep financial incentives. The amount of FLOPs needed for a given model architecture to reach a target performance threshold generally tends to drop significantly over a short period of time as the machine learning community identifies software and hardware optimizations for widely-used models.
Transformer
A widely-used model architecture across vision, language, and other modalities, capable of learning from large datasets and achieving high performance on downstream tasks.
To illustrate this point concretely, we consider various vision transformers[DeiT, PVTv2, CaiT, CoAtNet, XCiT, Swin, MViTv1, MViTv2. Numbers are gathered from the MViTv2 paper and are on models using a comparable amount of computation.] trained on the ImageNet-1K classification benchmark <cit.>. In less than a year, the ML research community increased the achieved top-1 accuracy on the benchmark from 81.8% to 84.4% while reducing the required FLOPs by 42% from 17.6 to 10.2 GFLOPs (see Figure <ref>). This trend holds true for large language models as well <cit.>.
§ DATA IS MISSING FROM THE CONVERSATION
Machine learning capabilities are not singularly determined by their model architecture. Rather, machine learning capabilities are defined by both the model and the data provided. We define “data” as any information a model is exposed to, whether it is during training or deployment. This paper aims to center data in AI governance conversations. We suggest that models alone are not harmful; rather, the unique combination of models exposed to specific datasets (whether during training or inference) and subsequently being used for specific purposes may pose a risk to public safety <cit.>.
Traditionally, training data (both pre-training and fine-tuning) was the only source of information that a model would have access to before making a prediction. However, models can now incorporate new, unseen data during inference through frameworks such as prompting and Retrieval-Augmented Generation (RAG) <cit.>. Therefore, both the training and deployment data are relevant when considering how a model incorporates information in its outputs.
§.§ Big Data to Usable Information
The rapid rise of AI since approximately 2010 can largely be attributed to (1) advancements in computational hardware in accordance with Moore’s Law, and (2) a focus on large quantities of data. Models are useless without data, and the availability of “foundational” datasets, such as ImageNet <cit.> and Common Crawl,[https://commoncrawl.org/the-data/ https://commoncrawl.org/the-data/] brought modern machine learning capabilities to bear. Today, AI datasets are often orders of magnitude larger, created by scraping content across the internet.
Dataset size is a key component in “scaling laws,” or predictions of performance within a family of models as a function of variables in a training recipe. Research in this area finds strong relationships between model performance and amount of training data, amount of computation, and model parameters <cit.>. Additionally, both <cit.> and <cit.> find that model and optimal dataset size scale at equal proportions as training compute increases.
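To make the functional form of such predictions concrete, the following sketch evaluates a Chinchilla-style parametric loss curve, L(N, D) = E + A/N^α + B/D^β, as a function of parameter count N and training tokens D; the constants are approximately the values reported in that line of work and are included here only for illustration, not as authoritative figures.

    def parametric_loss(n_params: float, n_tokens: float,
                        E: float = 1.69, A: float = 406.4, B: float = 410.7,
                        alpha: float = 0.34, beta: float = 0.28) -> float:
        # L(N, D) = E + A / N^alpha + B / D^beta; constants are roughly the
        # fitted values from the Chinchilla scaling-law study (illustrative only).
        return E + A / n_params**alpha + B / n_tokens**beta

    # Doubling data at fixed size versus doubling size at fixed data.
    print(parametric_loss(7e9, 1.4e12))    # baseline
    print(parametric_loss(7e9, 2.8e12))    # more data, same parameters
    print(parametric_loss(14e9, 1.4e12))   # more parameters, same data

The key point for governance is that both arguments of the curve matter: the same predicted loss can be reached by many different combinations of model size and dataset size.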
However, even an optimal training recipe with an appropriate amount of data, parameters, and compute does not necessarily produce a useful model. The dataset content is a crucial factor. A model “trained on the internet” can unsurprisingly exhibit the same bias <cit.> and toxicity <cit.> present in the data and also fall short in other areas: it may fail at logical reasoning <cit.>, algebraic computation, or following a user's instructions, to name a few examples. In a limiting argument, a multi-trillion parameter model trained only on the collected works of Shakespeare may never be able to reason about chemical weapon design.
To address this, models are fine-tuned on higher-quality, curated data. Popular techniques that rely on high-quality data include instruction tuning (e.g., reinforcement learning from human feedback, or RLHF <cit.>), training models to use tools or act as agents <cit.>, or supervised fine-tuning for a specific task, such as generating images in a particular artistic style.
There is evidence that with the right data and training regime, models in the millions or single digit billions of parameters can perform comparably to, if not better than, counterparts orders of magnitude larger in many domains <cit.>. In fact, once a model is sufficiently large, focusing on improving the quality and utilization of data can yield greater gains in performance for a task than simply increasing the size of the model. For example, the Retrieval Augmented Fine-Tuning <cit.> framework has been shown to improve the question-answering performance of a 7B parameter Llama2 language model over that of GPT-3.5, which otherwise significantly outperforms it out-of-the-box.
§ DATA-CENTRISM OPENS NEW ANALYTIC FRONTIERS
Modern machine learning methods are useful beyond traditional data querying and correlation tools such as search engines in part due to their ability to retrieve, compile, and organize data even when given unspecific queries. Below we outline two distinct features that are uniquely enabled by the combination of ML models and data: (1) retrieval, where a model outputs information retrieved directly from its data, and (2) derivation, where a model compiles or synthesizes items from provided data to generate new information. These features enable new ways to interact with complex data that would otherwise be difficult to manage, offering potential benefits as well as risks, which we explore further below.
§.§ Retrieval
As our ability to collect and maintain digital information has soared over the last few decades, retrieving the right results for a certain query has become a core technological focus. Billions of dollars have been spent towards developing efficient data representations for search engines <cit.> and databases <cit.>, and towards creating the algorithms to find and retrieve these results. Now, AI models trained on large amounts of data have become both capable encoders and retrievers of data (in addition to generators, as we describe in the section on derivation). This becomes a problem when a model has been exposed to specific data points that would be considered sensitive if directly retrieved, such as credit card numbers or classified information. The retrieval itself could occur either through (1) a model memorizing and then reproducing points in training data, or (2) retrieving from a large amount of data provided at test time, such as a company's internal database. Below, we describe these two cases in more detail.
Retrieval
A task requiring models to accurately locate and return a requested piece of information.
Retrieval from training data. Datasets contaminated with outliers have historically relied on dataset volume to dilute outlier effects. This leads to the misconception that a small quantity of “harmful” data points can be negated by massive amounts of otherwise commonplace data. Unfortunately, this intuition does not translate to modern machine learning methods. Large models are known to memorize some parts of training data and reproduce them if queried correctly <cit.>. Therefore, large models can retrieve, and therefore utilize, harmful data even if it is present in a negligible quantity.
However, memorization does not occur across data equally: prior work shows that “average” training samples are less likely to be memorized, whereas outliers and duplicated data points are more likely to be memorized <cit.>. As certain types of data, such as child sexual abuse material, are outliers on the Internet <cit.>, memorization of such data poses an inherent risk in the downstream usage of affected models, especially combined with the powerful retrieval abilities of current models. <cit.> is another example of work where ChatGPT was used to retrieve training data which comprised personally identifiable information of dozens of individuals.
Retrieval from previously unseen data. An AI system may also be exposed to entire new domains of data during inference that were not present during training. Models can utilize new, unseen data through prompting or integration with external databases. Models' ability to effectively interpret new domains without prior training marks a significant shift in how we store and use information. Instead of creating expensive, specialized systems to process data like financial documents or hospital records, modern general-purpose models can understand and work with novel data formats they have never encountered before while requiring minimal engineering effort.
Many of today’s large models are being specifically designed to respond flexibly to new tasks and prompt formats. Concretely, in-context learning <cit.> (ICL) allows users to provide example input-output pairs of a task to a large model which can equip it to solve novel instances of that task. ICL can, for example, pose a risk to society in the automation of attack prompt generation, achieved by instructing LLMs to mimic high-quality human-crafted prompts in large volumes <cit.>. Further, modern AI systems may be used to efficiently sift through large amounts of data at inference time—even if they have not seen it before—using frameworks such as Retrieval Augmented Generation (RAG) <cit.>. In RAG, given a user query, an answer can be generated by efficiently searching a database for relevant concepts and making sense of this new information to return a relevant response <cit.>. As these systems can now return sensitive examples not seen before by dynamically augmenting their knowledge or understanding new tasks from test-time examples, they can therefore be used in the furtherance of actions that pose risks to society by drastically lowering the boundary to both finding and exploiting risk-posing information <cit.>.
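A minimal sketch of the retrieval-augmented pattern described above is given below; the embedding and generation functions are hypothetical placeholders standing in for whatever encoder and language model a deployed system would actually use, and the document store is a toy list.

    import numpy as np

    def embed(text: str) -> np.ndarray:
        # Hypothetical placeholder for a text-embedding model (unit-norm vector).
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        v = rng.normal(size=384)
        return v / np.linalg.norm(v)

    def generate(prompt: str) -> str:
        # Hypothetical placeholder for a language model call.
        return f"[model output conditioned on {len(prompt)} prompt characters]"

    def rag_answer(query: str, documents: list, k: int = 3) -> str:
        # Retrieve the k documents most similar to the query (cosine similarity
        # of unit-norm embeddings) and prepend them to the generation prompt.
        doc_vecs = np.stack([embed(d) for d in documents])
        scores = doc_vecs @ embed(query)
        top_k = [documents[i] for i in np.argsort(scores)[::-1][:k]]
        context = "\n".join(top_k)
        return generate(f"Context:\n{context}\n\nQuestion: {query}\nAnswer:")

    print(rag_answer("What materials make up Saturn's rings?",
                     ["Saturn's rings are mostly water ice and rock.",
                      "The Moon has no rings.",
                      "Jupiter's rings are faint and dusty."]))

The governance-relevant observation is that the documents passed in at inference time, not only the training corpus, determine what information the system can surface.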
§.§ Derivation
As AI capabilities increase, a growing concern is the generation of original or derivative information that is more revealing than the data provided to the model initially. For example, if a system is given two entry-level textbooks, one in physics and one in chemistry, and combines independent concepts from each to produce instructions for building a toy rocket, we would call the process of arriving at those instructions “derivation.”
This feature is especially present in modern machine learning methods when compared to technologies such as databases due to their ability to synthesize unrelated pieces of information on the fly. While retrieved content is often straightforward to recognize and check—i.e., it may be quickly obvious that a generated phone number is real, and possible to check if a particular image was contained within training or deployment-time data—derived content is more nuanced and difficult to measure, and thus may present a greater concern.
Derivation
When a model compiles disparate pieces of data to infer a novel piece of information not explicitly contained in any of its components.
Under this category, multiple pieces of otherwise mundane information could be compiled to form information that is now sensitive. For instance, a language model trained for code generation could be provided a description of a vulnerability and be used to generate code for exploiting it. Models have already begun to present synthesis capabilities in different arenas, such as for code generation of programming languages with low data availability <cit.> and the generation of Mathematics Olympiad-level geometric proofs as part of larger pipelines <cit.>.
The maximal extent to which current models are capable of derivation is not yet clear as methodologies for inducing such capabilities are constantly evolving. For example, although modern language models have shown nascent indicators of capability to generate novel research ideas in fields such as natural language processing, the ideas they generate lack diversity and may not be tractable <cit.>. Modern image generation models struggle to synthesize images precisely adhering to descriptions of unique combinations of objects and their attributes previously unseen in training data <cit.>. Our intent in this section is not to establish a measure for models’ derivation capability but rather to bring attention to derivation as a unique capability offered by modern ML models.
§ ASSUMPTIONS AND LIMITATIONS
Given the rapid pace of AI development, we acknowledge the limits of our core analytic assumptions, grounded in the current state-of-art in the field, that drive the analysis and recommendations in this work. If these building blocks are outpaced by future developments, then this work should be revisited.
Assumption 1: Powerful models are unable to reason without memorizing information.
Large models can perform well both by learning generalizable semantics over their training data and through the rote memorization of data or concepts. Currently, there exists no class of powerful machine learning models able to “reason” about the world without having memorized any data during training. Put another way, there are no reasoning agents that are derived in a manner completely detached from data. One can argue that such a model, should it exist, would fit the definition of “artificial general intelligence,” as it could generalize to any new set of data without inherent data priors.
Assumption 2: Dataset distillation methods are still over the horizon.
The field’s understanding of the number of data points needed for a model to achieve proficiency on specific tasks is still evolving. This area of research is termed “dataset distillation” and aims to reduce the number of data points necessary to achieve target metrics <cit.>. Further, it remains unclear what exactly constitutes a “data point,” especially with modern methods like transformers, which rely on tokens, the number of which varies with different tokenization methods <cit.>. We aim to establish a rigorous definition of “data point” in future work, as well as an analysis of how many data points define emergent capability.
In a limiting argument, should data distillation methods improve to the point where models can learn generalizable knowledge without any data at all, this work would need to be revisited.
§ AVENUES FOR DATA-FORWARD REGULATION
Given our analysis above, the inclusion of data in nascent AI governance conversations can simplify the regulatory overhead by enabling the use of existing legal frameworks and the creation and execution of novel, data-backed evaluation schemes. Specifically, there are numerous policies and laws surrounding the appropriate use of data in contexts that are deemed to be of risk to the public. Instead of reinventing these policies using a new set of definitions that are model-specific, expanding and modifying them to account for the use of data by powerful models might offer a simpler path towards effective evaluation frameworks in areas where definitions alone are vague, leading to simpler regulations.
§.§ Applying Existing Data-Focused Legal and Regulatory Approaches
Significant work has been and continues to be done to mitigate malicious model outputs or behaviors. Thus far, model creators have relied on identifying malicious outputs or behaviors through red teaming and safety training <cit.>.
However, some classes of outputs or behaviors that are deemed risky could more easily be stemmed by careful curation of datasets. Unique information such as the relationship between a person and their social security number, or specific instances of child sexual abuse material, is extremely unlikely to be generated if that data is never provided to a model.
There exists a range of legal and regulatory frameworks that cover many categories of model outputs that are of greatest concern, including personal identifiable information, child sexual abuse material, and classified content. Data-centrism prevents models from acquiring the capacity for harmful behaviors prior to the expenditure of computation. Since existing regulations can be applied, AI governance can be achieved without the need for new regulatory frameworks.
§.§ Technological Levers for Data-Forward Regulation
Although research has shown that certain model capabilities emerge once sufficient model size and compute are attained <cit.>, establishing regulatory thresholds is ill-defined given just these two metrics. As discussed in Section <ref>, models provided with the right data can perform comparably to, if not better than, larger and more compute intensive alternatives. Further, a model must first be paired with sensitive information for it to make use of it. That is, the model does not exist in a vacuum, and a data-forward approach that prioritizes data content and quality filtration over model size and computation could yield greater benefits in mitigating risks posed by the use of models. Here, we briefly outline examples of existing techniques and argue for the development of new methods.
Existing data filtration.
Modern web-scale datasets are extremely large, numbering in the billions to trillions of data points. As such, human review of every data point is not possible from either a labor or monetary perspective. However, the volume of data does not permit the abdication of responsibility or duty to curate datasets responsibly. In response, methods have been proposed to partially or fully automate the filtration process <cit.>. Content can be filtered based on fixed patterns such as blacklisted source URLs or key words, however, these methods can be rigid and insensitive to the nuance of usage context. Large vision and language models such as CLIP <cit.> and Meta’s Llama Guard <cit.> have been used to classify whether data points are risky under human-defined criteria and can be more sensitive to context than blacklist-based methods. However, these methods are far from perfect—offering an important avenue for future research.
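As an illustration of model-based filtration, the sketch below scores images against human-defined categories with the openly available CLIP model through the Hugging Face transformers library; the label list and cutoff are placeholder assumptions rather than a vetted filtration policy.

    from PIL import Image
    import torch
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    # Placeholder criteria; a real pipeline would use reviewed category definitions.
    labels = ["benign everyday photograph", "graphic violent content", "explicit content"]

    def flag_image(path: str, threshold: float = 0.5) -> bool:
        # Return True if too much probability mass falls on disallowed categories.
        image = Image.open(path)
        inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
        with torch.no_grad():
            probs = model(**inputs).logits_per_image.softmax(dim=-1)[0]
        risky = probs[1:].sum().item()   # mass on the non-benign labels
        return risky > threshold

As noted above, such classifiers are more context-sensitive than keyword or URL blacklists but remain imperfect, so thresholds and categories require ongoing human review.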
Quantifying risk for workloads.
In addition to data filtration schemes, a rigorous evaluation framework for powerful AI models that is inclusive of both models and data is needed. Many approaches are feasible, and we detail an evaluation framework under development that attempts to solidify this discussion into a quantifiable benchmark.
For example, imagine asking a model a question in a setting where accuracy of the answer matters, say “what materials make up Saturn’s rings?” Short, broken answers such as “rock, water” would be regarded as unreliable or incorrect as opposed to an answer that demonstrates mastery of grammar and facts such as “The rings of Saturn are primarily composed of countless small particles of ice and rock. These particles range in size from tiny grains of dust to larger chunks that can be several meters across.”
For a specific type of output, there is likely a minimum size threshold for a model to be capable of learning the syntax of that output domain <cit.>. The initial stage of model training is focused on acquiring fluency—object detection models learn what the shape and proportion of a valid detection looks like, and language models learn the underlying structure and grammar of the languages over which they operate. In this stage, models are parameter-bound—the largest gains in fluency are likely to come from making models bigger. However, once a model has passed this hypothesized stage to learn the syntax “well enough,” we posit that the model is now data-bound and improvements to performance, or correctness, are more likely to come from improvements to the content and utilization of data rather than just from arbitrary scaling <cit.>.
This inherent relationship between fluency and correctness can be used as a powerful tool to regulate AI capabilities in a data-parameter inclusive fashion. For any arbitrary task, the performance of a model on that task can be plotted on a fluency-correctness curve. Once all workloads are plotted, the resulting risk profile can be adjudicated and a resulting judgment—whether a reduction in the parameter count of the model or a specific pruning of the training dataset is necessary—can be made by the model developers.
Fluency and correctness are somewhat dependent
Fluency and correctness cannot be plotted as two independent axes. Since semantics are somewhat dependent on syntax for many languages, these concepts can sometimes be used interchangeably.
Ultimately, such an evaluation framework can aid in the development of a regulatory system through which the government and model developers can safely, privately, and precisely iterate on removing the ability of models to aid in risky tasks prior to model release.
§.§ Incentivizing Data Governance Tools and Practices
Just as existing policies and regulations advocate for the standardization of model documentation, such as model or system cards <cit.>, data-centrism motivates the standardization of dataset documentation. Comprehensive approaches for doing so have been proposed already, such as Datasheets for Datasets <cit.> or Data Cards <cit.>. These documentation formalisms currently detail dataset properties regarding content, structure, preprocessing, distribution, and intended or potential use cases. Given the common practice of aggregating datasets from multiple sources, mechanisms for documenting and tracking the provenance of dataset contents, such as Data Provenance Cards <cit.>, would greatly ease verification of information available to a model. Further, standardized ontologies <cit.> that categorize and rank “risky information” can be applied to each dataset in a provenance card, which can then be used as a first approximation of potential retrieval and derivation capabilities.
Red teaming <cit.> has emerged as the de facto standard for evaluating whether models intended for release pose a risk to public safety. However, red teaming is, as of yet, not standardized. Further, with the rapid increase in the number of models that need to be assessed, there exists no mechanism through which the potential of models to perform specific tasks can be estimated a priori. The development of a technical framework for measuring the dynamics of the performance of a model family for a given task as a function of both model scale and quality of training data, particularly one that can identify inflection points at which a model’s performance becomes bound by its data rather than its size, would be an important tool for more precisely identifying when models could feasibly be used in the furtherance of behaviors that harm society.
§ CONCLUSION: EVOLVING AI GOVERNANCE ALONGSIDE AI TECHNOLOGY
Despite rapid growth in both model and dataset sizes in recent years, AI policies have hinged on thresholds, definitional concepts, and qualifiers that limit their medium-to-long-term viability. For a technology that will be with us for the foreseeable future, we can, and should, approach governance in a more deliberate manner, with a clear understanding of what enables these capabilities to be powerful in the first place.
Similar to how an arbitrarily large engine, no matter how specifically quantified, would be useless without defining the kind of fuel used with it, the AI policy landscape mistakenly focuses on a small set of model-based thresholds, particularly FLOP and parameter counts. Neither fully defines how powerful a machine learning model may be without an understanding of the data that accompanies it. Furthermore, the lack of definitional clarity around what constitutes a “frontier”, “foundation”, “dual-use”, or “general purpose” model complicates governance efforts. More generally, these two trends in governance further propagate the outdated idea that the largest, most compute-intensive models are those which drive AI risk. As we reach a point where smaller models, when paired with large, foundational datasets or small, high-quality datasets, can perform as well as larger models, this narrow approach creates loopholes and unfairly penalizes otherwise beneficial technologies.
Centering data offers a more durable approach to AI governance, particularly as trends in quantifiable measures of model capability are difficult to predict. A focus on data also provides an opportunity to better research, define, and respond to benefits and risks posed by AI, a debate that remains nebulous in both policy and technical circles. Centering data also provides avenues for existing regulations surrounding sensitive types of data to apply while also clearing the way for new evaluation methods to quantify the use of data and models together. Expanding model-based regulations to focus additionally on their paired data builds a stronger foundation that is less prone to collapse.
While a pivot in the governance landscape may be daunting, a focus on data provides the opportunities and incentives for government, academic researchers, civil society, and the private sector to develop new tools and approaches that lead to meaningful policies. This paper is the first of the Frontier Data Initiative, which seeks to focus governance efforts on the combination of data and models. Future papers in this series will focus on novel technical approaches to benchmarking dataset capabilities, auditing data, and conducting data-forward red teaming. These technical papers will be joined by policy papers centering data-forward AI governance opportunities.
§ ACKNOWLEDGEMENTS
We are thankful to many friends, colleagues, and collaborators for the numerous discussions, critiques, and edits to this paper including Yutong Bai, David Chan, Lisa Dunlap, Michelle Li, Sanjeev Raja, Courtney Rankin, Anand Siththaranjan, and Sanjay Subramanian. Authors, as part of their affiliation with UC Berkeley, were supported in part by the National Science Foundation, U.S. Department of Defense, Founders Pledge, Ford Foundation, and/or the Berkeley Artificial Intelligence Research (BAIR) industrial alliance program.
| null | null | null | null | null | null |
http://arxiv.org/abs/2409.17811v1 | 20240926130736 | Anomalous Order in $R_3\text{Ni}_{30}\text{B}_{10}$ $(R = \text{La}, \text{Ce})$ | [
"Maximilien F. Debbas",
"Takehito Suzuki",
"Joseph G. Checkelsky"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall"
] |
Department of Nuclear Science and Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
Department of Physics, Toho University, Funabashi, Japan
Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
§ ABSTRACT
We report the synthesis of single crystals of the R3Ni30B10 (R = La, Ce) system which realizes the tetragonal P4/nmm (No. 129) space group. We performed single crystal, transmission X-ray diffraction measurements to determine the crystal structure. Additionally, we characterized the samples through magnetization, resistivity, and heat capacity measurements. The R= Ce system exhibits mixed-valence states and we observe maxima in the heat capacity at T_1^* = 6 K and T_2^* = 2 K (both absent for R= La) with no corresponding features in the resistivity or magnetization. We discuss the potential roles of multipolar and short-ranged/partial order in connection to the T_1^* anomalous order.
Anomalous Order in R3Ni30B10 (R = La, Ce)
Joseph G. Checkelsky
September 28, 2024
=========================================
§ INTRODUCTION
Rare earth systems are known to realize a multitude of different phases arising from their localized f-electron states provided by lanthanide or actinide elements in the crystal. An important subset of these materials is that able to realize long-range multipolar order across a lattice of localized f-electron states. The most well-known of these systems are the praseodymium-based quadrupolar order materials which take advantage of the integer-quantized J=4 ground state of the Pr3+ ion. Kramers' theorem does not apply to this angular momentum state which allows for a crystal electric field (CEF) to split the free-space J=4 ground state into potentially nonmagnetic multiplets realizing a leading-order quadrupole moment. Notable examples of praseodymium systems realizing long-range quadrupolar order include PrIr2Zn20 <cit.>, PrRh2Zn20 <cit.>, PrV2Al20 <cit.>, PrTi2Al20 <cit.>, and PrPb3 <cit.>. Some cerium-based systems are also known to host quadrupolar order. As the Ce3+ ion ground state realizes the half-integer J=5/2 angular momentum, Kramers' theorem requires that all CEF split energy levels be magnetic. Notable materials such as CeB6 <cit.> and Ce3Pd20Ge6 <cit.> will then realize multiple phases with distinct quadrupolar and magnetic ordering temperatures. Multipolar interactions (sometimes involving higher-order multipoles) can yield rich phase diagrams in systems such as Ce_1-xLa_xB_6 <cit.>, YbRu2Ge2 <cit.>, URu2Si2 <cit.>, U0.75Np0.25O2 <cit.>, and NpO2 <cit.>.
Some rare earth compounds may also exhibit mixed-valence behavior and move the system away from the lattice picture of localized, f-electron moments; such mixed-valence behavior is especially prevalent in cerium- and ytterbium-based systems <cit.>. In mixed-valence cerium systems, the cerium f-state realizes a valence between that of nonmagnetic Ce4+ (4f^0) and magnetic Ce3+ (4f^1) such that the f-electrons are partially delocalized <cit.>. Some mixed-valence cerium systems are known to realize long-ranged magnetic order despite the partially delocalized f-states, viz. CeRuSn <cit.> and Ce3Rh4Sn7 <cit.>. In other mixed-valence compounds such as CeNiSi2, fluctuations in the partially delocalized f-states can lead to short-ranged correlations at low temperature which manifest themselves as broad maxima in the heat capacity corresponding to low losses of entropy <cit.>. Short-ranged quadrupolar fluctuations are also known to yield similar behavior in the doped Y_1-xPr_xIr_2Zn_20 system <cit.>.
New material systems in this class provide an opportunity to not only study the properties of f-electron states in unconventional settings, but also provide potential insight into their design. Herein, we report a new single crystal system Ce3Ni30B10 hosting mixed-valence cerium. Through combined thermodynamic and transport studies, we find evidence for anomalous order manifesting a small entropy response in heat capacity but not other observables. We discuss possible mechanisms for this and how tuning this system may provide an opportunity to study connections between the disparate phenomena of metallic systems containing f-electrons.
§ GROWTH METHOD
Single crystals of the R3Ni30B10 system (R = La, Ce) were synthesized through a nickel-boron flux which takes advantage of the 1018oC eutectic point occurring at a nickel to boron ratio of 11:9. The samples were prepared by employing a R:Ni:B ratio of 4.1:11:9 loaded into an alumina crucible sealed in a quartz tube under high vacuum. The materials used were high-purity, lump cerium and lanthanum ingots (AMES), 99.9999% purity, -4 mesh boron powder (Thermo Scientific), and 99.996% purity, -120 mesh nickel powder (Thermo Scientific). The cerium and lanthanum ingots were, respectively, clipped into chunks and filed into shavings under argon in a glovebox.
The elements for the R= Ce growth were melted together by first heating to 1100oC over 12 hours and holding at that temperature for four days. After this, the material was cooled to 850oC over a week and then finally cooled to room temperature over 12 hours. The R= La materials were initially heated to 1150oC over 12 hours and then held at that temperature for two days after which they were cooled to 900oC over a week and then finally cooled to room temperature over another 12 hours.
Both growths resulted in a large number of small (<1 mm length) crystals and a few large (>1 mm length) crystals embedded in a matrix of shiny, grey material in the alumina crucible. The crystals were mechanically separated out of this matrix. Figure <ref>(a) shows a La3Ni30B10 crystal and Fig. <ref>(b) shows a Ce3Ni30B10 crystal.
§ CRYSTAL STRUCTURE
The R3Ni30B10 (R = La, Ce) system realizes a structure in the tetragonal P4/nmm (No. 129) space group to our knowledge not previously reported within the R-Ni-B phase diagram. Transmission geometry X-ray diffraction was performed on single crystals at 100 K with Mo-K_α radiation; a full-matrix, least-squares fit on F^2 was performed using SHELXL to determine and refine the structure. Crystallographic data is provided in appendix <ref> for both the R = La and R = Ce systems.
The refined crystal structure includes eight nickel Wyckoff positions; two of these positions exhibit a partial occupancy of approximately 40 % nickel. Were all the sites fully occupied, the system would be isostructural to the Nd3Ni29Si4B10 system <cit.>. The stoichiometric structure with full site occupancy would likely be difficult to synthesize using the method described here as adding nickel to the growth would prevent the use of the 1018oC nickel-boron eutectic point.
Figure <ref>(c) shows the R3Ni30B10 unit cell. The system adopts a structure of lanthanide sites each surrounded by a coordination polyhedron of nickel sites, with a network of boron interspersed between them. Each lanthanide atom coordinates with 20 nickel sites to form a distorted crystal field (shown in the inset of Fig. <ref>(c)). Were it undistorted, this coordination polyhedron would realize the point group D_6d; the distortion lowers the symmetry group to D_2 (see appendix <ref> for additional details). Note that the coordination polyhedra locally preserve inversion symmetry around each lanthanide site even with the distortion. Additionally, the P4/nmm space group is globally centrosymmetric.
§ TRANSPORT, THERMODYNAMIC, AND SPECTROSCOPIC MEASUREMENTS
Herein, we present results derived from resistivity, magnetization, heat capacity, and X-ray photoelectron spectroscopy (XPS) measurements of three Ce3Ni30B10 crystals (C1, C2, and C3) which were all grown in the same synthesis batch. Magnetization and heat capacity measurements on the La3Ni30B10 system were also performed on a single crystal (L1) and serve as a reference to which the cerium f-electron physics may be compared.
§.§ Resistivity
Electrical transport measurements were performed by a conventional five-probe method in a Quantum Design Physical Property Measurement System (PPMS). Ce3Ni30B10 crystal C2 was sanded down to a thickness of 57 μm with a commercial lapping jig and affixed to a sapphire substrate with GE varnish. Gold wire contacts were affixed to the crystal using silver paint.
Figure <ref>(a) shows the zero-field longitudinal resistivity of Ce3Ni30B10 as a function of temperature below 300 K, and the inset shows the resistivity below 50 K. The resistivity exhibits a broad minimum at around 20 K as well as an inflection point around 175 K. Due to this inflection point, the resistivity is not well fit by the Bloch-Grüneisen model at temperatures above the broad minimum, though the overall response is metallic.
Figure <ref>(b) shows Hall resistivity data as a function of applied magnetic field at various temperatures and Fig. <ref>(c) shows the associated Hall coefficient. This measurement indicates a primarily electron-like response at high temperature and a hole-like response at low temperature. The crossover temperature between these two regimes occurs near 150 K. Using a single band approximation, we find carrier densities of n_e = 2.0 × 10^22 cm^-3 at 300 K, and n_h = 7.1 × 10^21 cm^-3 at 1.8 K.
§.§ Magnetization
Magnetic susceptibility χ and magnetization M were measured using SQUID magnetometry in a Quantum Design Magnetic Property Measurement System (MPMS3). Single crystals were affixed to a quartz rod using GE varnish and DC magnetization measurements were performed using Vibrating Sample Magnetometer (VSM) mode.
Figure <ref>(a) shows the magnetic susceptibility for both Ce3Ni30B10 and La3Ni30B10 measured in an applied magnetic field of 0.5 T. Both systems lack any sharp features; they exhibit paramagnetic behavior down to base temperature. This indicates that neither the cerium nor the nickel realizes any prominent magnetic ordering in either compound. Magnetic torque (data not shown here) measured as a function of both temperature (1.8 K - 300 K) and field (up to 14 T) also does not show any signatures of a magnetic transition.
Neither system exhibits simple Curie-Weiss behavior of the magnetic susceptibility over a broad temperature range. The inverse susceptibilities of Ce3Ni30B10 and La3Ni30B10 exhibit broad maxima around 50 K and 150 K respectively (see appendix <ref> for additional details). At low temperatures, the magnetic susceptibilities can be fit to a simple Curie-Weiss model with a constant offset. Fitting the Ce3Ni30B10 susceptibility below 30 K yields a Curie-Weiss temperature of θ_CW = -2.27 K and an effective moment of 0.201 μ_B per formula unit. Fitting the La3Ni30B10 susceptibility below 50 K yields θ_CW = -0.733 K and an effective moment of 0.395 μ_B per formula unit.
Figure <ref>(b) shows the Ce3Ni30B10 magnetization (in μ_B per formula unit) plotted as a function of applied magnetic field. The magnetization appears to increase almost linearly with a slight bend at low fields and does not appear to saturate up to 7 T of applied field. This along with the low temperature susceptibility behavior suggests that the system possesses a small amount of localized moment (Brillouin function magnetization) from the Ce/Ni sites or potential magnetic impurities in addition to a relatively large Pauli paramagnetic response (linear magnetization) from the conduction states.
§.§ XPS
X-ray photoelectron spectroscopy (XPS) was performed using a commercial Thermo Scientific Nexsa system with an aluminum Kα source. Ce3Ni30B10 crystal C3 was affixed to a piece of silicon wafer using silver paint, and the surface milled with an argon ion beam prior to the measurement.
Figure <ref>(c) shows an XPS spectrum taken in the Ce 3d region for Ce3Ni30B10. The peaks corresponding to the 3d_3/2 and 3d_5/2 multiplets are clearly resolved as are some of the peaks corresponding to splitting of the multiplets due to the f-shell filling. The final states 3d^94f^0 and 3d^94f^1 are clearly resolved for the 3d_3/2 multiplet, and the 3d^94f^1 final state is well resolved for the 3d_5/2 multiplet. The 3d_3/2 3d^94f^2 final state and the 3d_5/2 3d^94f^0 final state are expected to be weak and may account for the excess counts present around 895 eV. The shoulder peaks to the right of the 3d^94f^1 final states of each multiplet are likely due to oxides as is the case in CeNi4B <cit.>.
§.§ Heat Capacity
Heat capacity was measured using a Quantum Design PPMS with the heat capacity module enabled. Single crystals were affixed to the heater platform using Apiezon N grease. A background heat capacity measurement was taken of the grease prior to mounting the crystal such that it could be subtracted from the total heat capacity after the crystal was mounted.
Figure <ref> shows the zero-field heat capacity (C) divided by temperature (T) for Ce_3Ni_30B_10 while the inset shows the 4f electron contribution to the heat capacity (C_4f) as well as the calculated 4f entropy (S_4f) below 8 K. C_4f was estimated by subtracting the heat capacity of La_3Ni_30B_10 from that of Ce_3Ni_30B_10. S_4f was computed by interpolating C_4f from base T to zero using a C ∼ T^α power law fit of the data between 2.5 and 5.5 K. The heat capacity exhibits a maximum at T_1^* = 6 K; at T_1^*, the excess heat capacity associated with the 4f state for crystal C2 is approximately 170 mJ/molCe-K and the excess entropy is approximately 95 mJ/molCe-K = 0.016 R log2 (R= 8314.5 mJ/mol-K).
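A sketch of how C_4f can be extrapolated to T = 0 with a power law and integrated to give S_4f is shown below; the input points are synthetic placeholders standing in for the measured La-subtracted heat capacity, and for brevity the fitted power law is used over the whole temperature range rather than being spliced onto measured data above base temperature as in the analysis described in the text.

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.integrate import trapezoid

    # Placeholder (T, C_4f) points in K and mJ/(mol_Ce K); the real input would be
    # the measured Ce3Ni30B10 heat capacity minus the La3Ni30B10 reference.
    T_data = np.linspace(2.5, 5.5, 13)
    C_data = 12.0 * T_data**1.5

    def power_law(T, a, alpha):
        return a * T**alpha

    (a, alpha), _ = curve_fit(power_law, T_data, C_data, p0=(1.0, 1.0))

    # Extrapolate C_4f to T = 0 with the fitted power law, then integrate C_4f / T
    # up to T_1^* = 6 K to estimate the entropy released at the anomaly.
    T_grid = np.linspace(1e-3, 6.0, 2000)
    S_4f = trapezoid(power_law(T_grid, a, alpha) / T_grid, T_grid)
    print("alpha = %.2f, S_4f(6 K) = %.0f mJ/(mol_Ce K)" % (alpha, S_4f))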
We also note that an additional smaller peak may be present at T_2^* = 2 K and is observed systematically in the crystals studied here. This is an important subject for future low temperature investigation. For both peaks the precise height and prominence over the background heat capacity varied, suggesting a potential interplay with disorder. Appendix <ref> discusses heat capacity measurements for four additional crystals of the same synthesis batch. The calculated entropy at T_1^* for these crystals ranged from about 95 to 220 mJ/molCe-K and were associated with power law exponents (from the C∼ T^α fit between 2.5 K and 5.5 K) ranging from 0.7 to 2.5.
Fitting the low temperature heat capacity of La3Ni30B10 yields a Sommerfeld coefficient of γ_0=15.0 mJ/molLa-K^2 and a Debye coefficient of β = 0.197 mJ/molLa-K^4. The heat capacity of the R = Ce compound exhibits a Sommerfeld coefficient enhanced by a factor of approximately two from that of the R = La compound. More details on this fit are included in appendix <ref>.
Figure <ref> shows the Ce3Ni30B10 heat capacity feature plotted as a function of temperature for a range of applied magnetic fields. As field is increased, the peak at T_1^* = 6 K shifts towards lower temperatures and is broadened. The inset of Fig. <ref> shows the phase space for Ce3Ni30B10, indicating a continuous field suppressed boundary between two regions, denoted I and II. The boundary T^*(μ_0 H) was fit by a second order polynomial to yield the zero-field slope d T^*/dH|_H=0 = -0.0383 K/T. This slope may be used in conjunction with the Ehrenfest relation:
Δ( ∂M/∂T ) = - (dT^*/dH) Δ( C/T )
to determine the expected discontinuity in magnetization were the feature at T_1^* ferromagnetic (Δ(X) indicates the discontinuous change in the quantity X). Crystal C1 realizes a jump of Δ(C/T) = 69.6 mJ/molCe-K^2 (between T_1^* and T = 6.4 K), which would correspond to Δ( ∂M/∂T ) = 2.66 emu/molCe-K. A discontinuity in the slope of M(T) of this size was not observed at T_1^*; see appendix <ref> for more details on this calculation.
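A quick numerical check of this estimate (our own bookkeeping, taking 1 emu as equivalent to 1 mJ/T) is:

    # Zero-field slope of the T^*(H) boundary and the heat capacity jump at T_1^*,
    # as quoted above.
    dTstar_dH = -0.0383          # K / T
    delta_C_over_T = 69.6        # mJ / (mol_Ce K^2)

    # Ehrenfest relation: Delta(dM/dT) = -(dT*/dH) * Delta(C/T)
    delta_dM_dT = -dTstar_dH * delta_C_over_T    # emu / (mol_Ce K)
    print(round(delta_dM_dT, 2))                 # ~2.67

This reproduces the ~2.66 emu/molCe-K quoted above, with the small difference attributable to rounding of the fitted slope.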
§ DISCUSSION
The Ce3Ni30B10 system exhibits a maximum in the heat capacity at T_1^* = 6 K without clear corresponding features in the resistivity or magnetization. Electrically, Ce3Ni30B10 behaves as a poor metal. Magnetically, the system behaves paramagnetically with a small effective moment and does not appear to order.
The longitudinal resistivity exhibits a small residual resistivity ratio (RRR) of approximately 1.95 and a broad saturation with a shallow minimum at low temperatures. Similar behavior is seen in other Ce-Ni-B compounds: in CeNi4B a low temperature upturn is attributed to Kondo scattering from the fluctuating cerium valence <cit.>, while in Ce3Ni25.75Ru3.16Al4.1B10 a small RRR is attributed to the presence of disorder scattering <cit.>. In Ce3Ni30B10, the partial occupancy of the nickel sites suggests that disorder may play a role in the behavior; the low temperature minimum may be connected to this.
The XPS spectrum, specifically the peak associated with the cerium 3d_3/2 multiplet 3d^94f^0 final state, indicates that Ce3Ni30B10 is a mixed-valence compound with cerium valence between Ce^4+ (nonmagnetic 4f^0) and Ce^3+ (magnetic 4f^1). The low-temperature Curie-Weiss fit of Ce3Ni30B10 results in a moment of 0.201 μ_B per formula unit which would correspond to 0.0670 μ_B per cerium atom were the nickel nonmagnetic (this value may then be treated as an upper bound for the effective moment per cerium site). The magnetic field dependence of the magnetization seen in Fig. <ref>(b) is also characteristic of similar mixed-valence compounds such as CeNi4B <cit.>. In this scenario, the small effective moment per cerium site induces a slight curve to the magnetization which may be understood as a weak Brillouin function superimposed on a relatively stronger linear Pauli magnetization from the delocalized conduction states (or possible contributions from impurities).
The anomalous heat capacity features in Ce3Ni30B10 are absent in the La3Ni30B10 heat capacity implying that they arise from the f-electrons in cerium. The heat capacity maximum at T_1^* is associated with a small f-electron entropy per cerium site and does not correspond to any other features in either resistivity or magnetization. Short-ranged coherence associated with fluctuations of the f-states may explain the small amount of entropy associated with the T_1^* feature. The mixed-valence CeNiSi2 system exhibits a maximum in heat capacity at ∼ 3 K arising from the onset of short-ranged coherence of dipolar spin fluctuations; it is associated with a feature in the magnetic susceptibility, but no corresponding feature in the resistivity <cit.>. A similar feature is reported in the doped Y_1-xPr_xIr_2Zn_20 system as PrIr2Zn20 is a known antiferroquadrupolar (AFQ) material <cit.>, and the doped compound realizes a dilute system of 4f quadrupoles <cit.>. For a doping of x=0.44, the AFQ order is completely eliminated, and a weak maximum appears in the heat capacity with no corresponding sharp features in the resistivity (the feature is attributed to short-ranged correlations between the dilute quadrupoles) <cit.>.
Assuming full Ce3+ valence, every Kramers' doublet in the crystal field split energy level structure realizes a nonzero O_2^0 quadrupole moment (see appendix <ref>) suggesting that short-ranged quadrupolar coherence is possible in the mixed-valence system. This would explain the lack of features in magnetic susceptibility and resistivity as well as the small entropy realized at T_1^*. The short-ranged coherence maxima seen in CeNiSi2 and Y1-xPrxIr2Zn20 (at x=0.44 doping) are, however, quite broad in contrast to the sharpness of the feature seen in Ce3Ni30B10. This sharpness is suggestive of a true phase transition associated with the onset of long ranged order.
By the Ehrenfest relation, ferromagnetic ordering associated with the heat capacity at T_1^* is unlikely due to the absence of any discontinuity in the slope of the magnetization with respect to temperature. Mixed-valence compounds such as CeRuSn <cit.> and Ce3Rh4Sn7 <cit.> which do order antiferromagnetically are associated with reduced amounts of entropy at the transition, but also show distinct features in magnetic susceptibility. A minority phase that orders antiferromagnetically may then be possible for a small volume fraction of the crystal, though no clear features in magnetic susceptibility (or in magnetic torque, not shown here) are observed at T_1^*. Future neutron scattering experiments will be important to investigate any minority-phase magnetic dipole order. Scattering experiments would also be useful to probe for any structural transition at T_1^*, although no feature in electrical transport of the same prominence as that in heat capacity is observed.
Long-ranged ordering of the active O_2^0 electric quadrupole moments at T_1^* is another possibility as such ordering generally does not correspond to any feature in magnetic susceptibility (although it is associated with a smooth downturn in the resistivity as observed in PrTi2Al20 (T_Q = 2.0 K) <cit.>, perhaps here accounted for by a minority phase). In this scenario, below T_1^* the system still realizes Curie-Weiss paramagnetic behavior associated with a small effective moment as every Kramers' doublet realizes an active magnetic dipole moment, which may have a reduced size due to the mixed-valence nature of the 4f-state and the high anisotropy of the effective g-tensor in the local D_6d symmetric crystal field.
Each Kramers' doublet also realizes some kind of nonzero magnetic octupolar moment (shown in appendix <ref>). It is thereby possible for octupolar order to be realized at T_1^* (potentially as a minority phase). Octupolar ordering would break time reversal symmetry below T_1^* and break the degeneracy of the Kramers' doublets, realizing a Van Vleck paramagnetic susceptibility and weakened Curie Weiss paramagnetism for the entire crystal. We also note that anomalously small entropies are known to occur in hidden order materials such as URu2Si2 <cit.> and NpO2 <cit.> wherein higher order multipolar interactions are considered.
§ CONCLUSIONS
The presented data indicates that the newly discovered mixed-valence Ce3Ni30B10 system exhibits low-temperature, anomalous ordering. Notably, this system shows a peak in heat capacity at T_1^* = 6 K, which corresponds to a small loss of excess entropy per cerium site. This peak, however, does not correlate with any observable features in the magnetic susceptibility or resistivity measurements.
To determine the exact type of ordering at T_1^*, future research steps could include resonant ultrasound spectroscopy (RUS) and muon spin relaxation (μSR). The elastic strain ϵ = 1/√(3)( 2 ϵ_zz - ϵ_xx - ϵ_yy) associated with the transverse elastic constant 1/2 (C_11 - C_12) couples directly to the quadrupolar O_2^0 moment <cit.>. A characteristic softening of this mode at T_1^* would then provide evidence for the hypothesis of quadrupolar ordering. μSR experiments would test whether the order parameter breaks time reversal symmetry by searching for a spontaneous precession signal below T_1^*. If the system is found to break time reversal symmetry, Mössbauer spectroscopy could be used to put an upper bound on the local moment and probe any antiferromagnetic dipole ordering <cit.>. Resonant elastic X-ray (REX) scattering experiments could furthermore provide evidence for the realization of long-ranged order rather than short-ranged coherence below T_1^*. Neutron scattering could be performed to probe any minority dipolar ordering.
As the anomalous ordering likely depends on the amount of nickel vacancy in the structure, future work on this system could include a nickel doping study based on a different synthesis method. Studying the behavior of this system under pressure could also shed light onto potential proximate phases. The Ce3Ni30B10 system is a candidate to support a low temperature strongly correlated phase ripe for further exploration.
We acknowledge C. John for support with XPS measurements and P. Müller for support with single crystal X-ray diffraction and analysis. This work was funded, in part, by the Gordon and Betty Moore Foundation EPiQS Initiative, Grant No. GBMF9070 to J.G.C (instrumentation development) and the Army Research Office, Grant No. W911NF-16-1-0034 (material characterization).
§ X-RAY DIFFRACTION
Single crystal, transmission geometry X-ray diffraction was performed on both Ce3Ni30B10 and La3Ni30B10 at 100 K. Figure <ref> shows the diffraction patterns obtained for the (001), (010), and (100) planes for Ce3Ni30B10 (Fig. <ref>a-c) and La3Ni30B10 (Fig. <ref>d-f). Note the clear fourfold symmetry of the diffraction patterns for the (001) plane.
Table <ref> shows crystallographic data and structural refinement parameters obtained using SHELXL. Table <ref> shows the Wyckoff positions, atomic coordinates, and site occupancies obtained in the refinement for Ce3Ni30B10 while Table <ref> shows this data for La3Ni30B10. Note the partial occupancy for the Ni(7) and Ni(8) sites in both crystal systems.
§ LANTHANIDE COORDINATION POLYHEDRON
The R3Ni30B10 (R = La, Ce) structure may be understood as a structure of coordination polyhedra around each lanthanide site whereat each lanthanide atom coordinates with 20 nickel atoms. The unit cell (shown in Fig. <ref>(a)) is made up of two identical A-layers of coordination polyhedra in the ab-plane with a different B-layer in between. The lanthanide atoms in the A-layer (shown in Fig. <ref>(b)) form a square lattice with a 5.63 Å spacing between lanthanide sites, whereas the lanthanide atoms in the B-layer form a square lattice (rotated by 45o relative to the A-layers) with a 7.98 Å spacing between lanthanide sites.
The coordination polyhedra are slightly distorted in the crystal structure such that their symmetry is lowered. Were the polyhedra undistorted, they would have a sixfold rotational symmetry axis and realize the point group D_6d (Fig. <ref>(c) shows a coordination polyhedron with this axis perpendicular to the page). The distortion serves to stretch the polyhedron slightly along a direction perpendicular to the main rotational symmetry axis such that the symmetry is reduced from sixfold to twofold and the overall symmetry group is reduced to D_2.
The quadrupolar moment of the cerium 4f electron may be analyzed for this crystal field to check that the hypothesized quadrupolar coherence is possible. As the distortion in the coordination polyhedron is very slight, this analysis is performed for the undistorted polyhedron described by the point group D_6d. We also restrict the analysis to the Ce3+ ion, which contains a single 4f electron per site.
Due to strong spin-orbit coupling in rare earth systems, the single 4f-electron couples to the orbital angular momentum to yield a sixfold degenerate ground state multiplet ^2 F_5/2 and an eightfold degenerate excited state multiplet ^2 F_7/2. Due to the large energy splitting between these multiplets, only the ground state manifold needs to be considered. The ^2 F_5/2 manifold is described by J=5/2 angular momentum and must be analyzed using the crystal double group of the coordination polyhedron due to the odd parity of ^2 F_5/2 under a 2π rotation <cit.>.
As the D_6d point group is a direct product group of D_6 and inversion (i.e., D_6d = D_6 × i), the CEF splitting may be analyzed using the double group D_6', as the inversion operator commutes with the Hamiltonian and the CEF splitting cannot change the parity of the free-space eigenstates <cit.>. The sixfold degenerate ^2 F_5/2 multiplet breaks up into the three doublet irreducible representations of D_6', viz. Γ_7 ⊕Γ_8 ⊕Γ_9.
The CEF potential for a crystal field respecting D_6 symmetry is given by Eq. (<ref>):
V_D_6 = B_0^2 C_0^2 + B_0^4 C_0^4 + B_0^6 C_0^6
C_q^k (θ,ϕ) = √(4π/(2k+1)) Y_k^q (θ,ϕ)
where B_q^k are the crystal field parameters, and C_q^k (θ,ϕ) are tensor operators (related to the spherical harmonics Y_k^q (θ,ϕ) via Eq. (<ref>)) <cit.>. Treating V_D_6 as a perturbation and diagonalizing it in the |J=5/2,m_J⟩ basis yields the following degenerate pairs of wavefunctions:
Γ_7 : |ψ_±1/2⟩ = |5/2,± 1/2⟩
Γ_8 : |ψ_±3/2⟩ = |5/2,± 3/2⟩
Γ_9 : |ψ_±5/2⟩ = |5/2,± 5/2⟩
each corresponding to a Kramers' doublet in the CEF split energy level structure.
The magnetic dipole operators 𝐉_x, 𝐉_y, and 𝐉_z may be projected onto each of these Kramers' doublet subspaces and diagonalized to yield the magnetic dipole moments along x̂, ŷ, and ẑ for each doublet. The same may be done for the quadrupole operators (described by Stevens operators <cit.>) to find that the only nonzero quadrupole moment is that of 𝐎_2^0 = 1/2( 3 𝐉_z^2 - |𝐉|^2 ). Table <ref> displays these moments for each of the irreducible representations of D_6' describing the CEF energy level structure.
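As a brief worked illustration (a minimal check using the definition of 𝐎_2^0 given above together with the pure |5/2, m_J⟩ doublets; any additional Stevens-factor normalisation applied in the table would simply rescale these numbers uniformly):
⟨ 5/2, m_J | 𝐎_2^0 | 5/2, m_J ⟩ = 1/2 ( 3 m_J^2 - 35/4 ) = -4 for Γ_7 (m_J = ± 1/2), -1 for Γ_8 (m_J = ± 3/2), +5 for Γ_9 (m_J = ± 5/2),
confirming that every doublet carries a nonzero 𝐎_2^0 quadrupole moment.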
The magnetic octupole operators may also be projected onto each Kramers' doublet. Eqs. (<ref>)-(<ref>) give the form of these octupole operators <cit.>:
𝐓_xyz = √(15)/6 ( 𝐉_x 𝐉_y 𝐉_z )
𝐓_x^α = 1/2( 2 𝐉_x^3 - 𝐉_x 𝐉_y^2 - 𝐉_z^2 𝐉_x)
𝐓_y^α = 1/2( 2 𝐉_y^3 - 𝐉_y 𝐉_z^2 - 𝐉_x^2 𝐉_y)
𝐓_z^α = 1/2( 2 𝐉_z^3 - 𝐉_z 𝐉_x^2 - 𝐉_y^2 𝐉_z)
𝐓_x^β = √(15)/6 ( 𝐉_x 𝐉_y^2 - 𝐉_z^2 𝐉_x)
𝐓_y^β = √(15)/6 ( 𝐉_y 𝐉_z^2 - 𝐉_x^2 𝐉_y)
𝐓_z^β = √(15)/6 ( 𝐉_z 𝐉_x^2 - 𝐉_y^2 𝐉_z)
where the overline denotes the sum over all permutations of the operators beneath it, e.g. the overlined product 𝐉_x 𝐉_y stands for 𝐉_x 𝐉_y + 𝐉_y 𝐉_x. Table <ref> displays these moments for each of the irreducible representations of D_6' describing the CEF energy level structure.
§ CURIE-WEISS FIT OF MAGNETIC SUSCEPTIBILITY
The magnetic susceptibility of R3Ni30B10 (R = La, Ce) was fit by a Curie-Weiss model with a constant offset via the following equation:
χ = N_A μ_eff^2 / (3 k_B (T - θ_CW)) + χ_0
where χ is the magnetic susceptibility per mole, θ_CW is the Curie-Weiss temperature, N_A is Avogadro's number, χ_0 is the constant Pauli susceptibility, and μ_eff is the effective moment. As both systems behave paramagnetically down to base temperature, the temperature window for the fit was chosen where χ_0 was found to be approximately constant and the inverse susceptibility (χ-χ_0)^-1 was linear as a function of temperature.
Figure <ref> shows the inverse magnetic susceptibilities for both Ce3Ni30B10 and La3Ni30B10. Eq. (<ref>) was used to fit the inverse susceptibility for Ce3Ni30B10 between 1.8 - 30 K and the inverse susceptibility for La3Ni30B10 between 1.8 - 50 K. All the susceptibility data in Fig. <ref> is plotted per mole of formula unit of R3Ni30B10 such that the effective moments of both compounds may be compared. The Curie-Weiss fit parameters for both systems are presented in Table <ref>. We note these are upper bounds considering potential contributions from impurities.
§ CE3NI30B10 HEAT CAPACITY VARIATIONS
The features seen in the heat capacity of Ce3Ni30B10 at T_1^* = 6 K and T_2^* = 2 K exhibit some variation in strength between samples. Figure <ref> shows heat capacity data for crystal C2 as well as four other crystals of Ce3Ni30B10 (crystals C4-C7) from the same synthesis batch.
The strength of the T_1^* and T_2^* features varies between samples of this system. The 4f electron entropy S_4f^* associated with the T_1^* feature was computed for each of the Ce3Ni30B10 crystals C1, C2, C4-C7 and is displayed in Table <ref> along with the 4f electron contribution to the heat capacity C_4f^* at T_1^*. The entropy was computed by first subtracting the heat capacity of La3Ni30B10 to obtain the heat capacity associated with the 4f states of cerium and then interpolating the background-subtracted heat capacity from 2.5 K to zero via a power law fit (C ∼ T^α) of the data between 2.5 K and 5.5 K. This 4f state heat capacity was then divided by temperature and integrated to yield the entropy associated with the 4f state.
Were the feature at T_1^* ferromagnetic, the expected discontinuity in magnetization could be calculated using the Ehrenfest relation. Using the zero-field phase boundary slope d T^*/dH|_H=0 = -0.0383 K/T, the expected value of Δ( ∂ M/∂ T) was computed for all the crystals of Ce3Ni30B10 listed in table <ref> (Δ(C/T) was computed between T_1^* and T = 6.4 K). Figure <ref> shows the low temperature magnetization of Ce3Ni30B10 crystal C2 with no discernible discontinuous change in slope occurring at T_1^*.
§ DEBYE FIT OF HEAT CAPACITY
The heat capacity of La3Ni30B10 was fit at low temperature between 1.8 K and 6.7 K. The low temperature heat capacity is modeled as the sum of a linear electronic Sommerfeld term and a cubic Debye term such that:
C = γ_0 T + β T^3
where γ_0 is the constant Sommerfeld coefficient, and β is the Debye coefficient. The fit for the heat capacity of La3Ni30B10 is shown in Fig. <ref> and results in γ_0=15.0 mJ/molLa-K2 and β = 0.197 mJ/molLa-K4. The inset of Fig. <ref> shows C/T plotted as a function of T^2, which shows behavior that is close to linear but deviates slightly below 3 K. Under the simplifying assumption that all atoms in the unit cell contribute equally to the acoustic phonon mode, this value of β corresponds to a Debye temperature of θ_D = 521 K via the Debye model result:
β = 12 π^4 k_B / (5 θ_D^3)
where k_B is the Boltzmann constant. To use Eq. (<ref>), the value of β (in units of mJ/molLa-K4) was first divided by 14.33 to express it as a quantity per total moles of atoms in the crystal. The experimentally determined Debye temperature for La3Ni30B10 is somewhat higher than values reported for similar compounds viz. CeNi4Al : θ_D = 315 K <cit.>, CeNiSi2 : θ_D = 451 K <cit.>, CeNiGe2 : θ_D = 351 K <cit.>, and LaNi12B6 : θ_D = 304 K <cit.>.
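As a quick numerical cross-check of the quoted Debye temperature (a back-of-the-envelope evaluation assuming the per-mole-of-atoms form of the Debye coefficient, i.e. with R = N_A k_B in place of k_B in Eq. (<ref>)):
β_atom = (0.197 mJ/molLa-K4)/14.33 ≈ 1.37 × 10^-5 J/mol-K4,
θ_D = ( 12 π^4 R / (5 β_atom) )^{1/3} = ( 12 π^4 × 8.314 / (5 × 1.37 × 10^-5) )^{1/3} K ≈ 521 K,
consistent with the value reported above.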
| Rare earth systems are known to realize a multitude of different phases arising from their localized f-electron states provided by lanthanide or actinide elements in the crystal. An important subset of these materials is that able to realize long-range multipolar order across a lattice of localized f-electron states. The most well-known of these systems are the praseodymium-based quadrupolar order materials which take advantage of the integer-quantized J=4 ground state of the Pr3+ ion. Kramers' theorem does not apply to this angular momentum state which allows for a crystal electric field (CEF) to split the free-space J=4 ground state into potentially nonmagnetic multiplets realizing a leading-order quadrupole moment. Notable examples of praseodymium systems realizing long-range quadrupolar order include PrIr2Zn20 <cit.>, PrRh2Zn20 <cit.>, PrV2Al20 <cit.>, PrTi2Al20 <cit.>, and PrPb3 <cit.>. Some cerium-based systems are also known to host quadrupolar order. As the Ce3+ ion ground state realizes the half-integer J=5/2 angular momentum, Kramers' theorem requires that all CEF split energy levels be magnetic. Notable materials such as CeB6 <cit.> and Ce3Pd20Ge6 <cit.> will then realize multiple phases with distinct quadrupolar and magnetic ordering temperatures. Multipolar interactions (sometimes involving higher-order multipoles) can yield rich phase diagrams in systems such as Ce_1-xLa_xB_6 <cit.>, YbRu2Ge2 <cit.>, URu2Si2 <cit.>, U0.75Np0.25O2 <cit.>, and NpO2 <cit.>.
Some rare earth compounds may also exhibit mixed-valence behavior and move the system away from the lattice picture of localized f-electron moments; such mixed-valence behavior is especially prevalent in cerium and ytterbium based systems <cit.>. In mixed-valence cerium systems, the cerium f-state realizes a valence between that of nonmagnetic Ce4+ (4f^0) and magnetic Ce3+ (4f^1) such that the f-electrons are partially delocalized <cit.>. Some mixed-valence cerium systems are known to realize long-ranged magnetic order despite the partially delocalized f-states, viz. CeRuSn <cit.> and Ce3Rh4Sn7 <cit.>. In other mixed-valence compounds such as CeNiSi2, fluctuations in the partially delocalized f-states can lead to short-ranged correlations at low temperature which manifest themselves as broad maxima in the heat capacity corresponding to low losses of entropy <cit.>. Short-ranged quadrupolar fluctuations are also known to yield similar behavior in the doped Y_1-xPr_xIr_2Zn_20 system <cit.>.
New material systems in this class provide an opportunity to not only study the properties of f-electron states in unconventional settings, but also provide potential insight into their design. Herein, we report a new single crystal system Ce3Ni30B10 hosting mixed-valence cerium. Through combined thermodynamic and transport studies, we find evidence for anomalous order manifesting a small entropy response in heat capacity but not other observables. We discuss possible mechanisms for this and how tuning this system may provide an opportunity to study connections between the disparate phenomena of metallic systems containing f-electrons. | null | null | null | The Ce3Ni30B10 system exhibits a maximum in the heat capacity at T_1^* = 6 K without clear corresponding features in the resistivity or magnetization. Electrically, Ce3Ni30B10 behaves as a poor metal. Magnetically, the system behaves paramagnetically with a small effective moment and does not appear to order.
The longitudinal resistivity exhibits a small residual resistivity ratio (RRR) of approximately 1.95 and a broad saturation with a shallow minimum at low temperatures. Similar behavior is seen in other Ce-Ni-B compounds: in CeNi4B a low temperature upturn is attributed to Kondo scattering from the fluctuating cerium valence <cit.>, while in Ce3Ni25.75Ru3.16Al4.1B10 a small RRR is attributed to the presence of disorder scattering <cit.>. In Ce3Ni30B10, the partial occupancy of the nickel sites suggests that disorder may play a role in the behavior; the low temperature minimum may be connected to this.
The XPS spectrum, specifically the peak associated with the cerium 3d_3/2 multiplet 3d^94f^0 final state, indicates that Ce3Ni30B10 is a mixed-valence compound with cerium valence between Ce^4+ (nonmagnetic 4f^0) and Ce^3+ (magnetic 4f^1). The low-temperature Curie-Weiss fit of Ce3Ni30B10 results in a moment of 0.201 μ_B per formula unit which would correspond to 0.0670 μ_B per cerium atom were the nickel nonmagnetic (this value may then be treated as an upper bound for the effective moment per cerium site). The magnetic field dependence of the magnetization seen in Fig. <ref>(b) is also characteristic of similar mixed-valence compounds such as CeNi4B <cit.>. In this scenario, the small effective moment per cerium site induces a slight curve to the magnetization which may be understood as a weak Brillouin function superimposed on a relatively stronger linear Pauli magnetization from the delocalized conduction states (or possible contributions from impurities).
The anomalous heat capacity features in Ce3Ni30B10 are absent in the La3Ni30B10 heat capacity implying that they arise from the f-electrons in cerium. The heat capacity maximum at T_1^* is associated with a small f-electron entropy per cerium site and does not correspond to any other features in either resistivity or magnetization. Short-ranged coherence associated with fluctuations of the f-states may explain the small amount of entropy associated with the T_1^* feature. The mixed-valence CeNiSi2 system exhibits a maximum in heat capacity at ∼ 3 K arising from the onset of short-ranged coherence of dipolar spin fluctuations; it is associated with a feature in the magnetic susceptibility, but no corresponding feature in the resistivity <cit.>. A similar feature is reported in the doped Y_1-xPr_xIr_2Zn_20 system as PrIr2Zn20 is a known antiferroquadrupolar (AFQ) material <cit.>, and the doped compound realizes a dilute system of 4f quadrupoles <cit.>. For a doping of x=0.44, the AFQ order is completely eliminated, and a weak maximum appears in the heat capacity with no corresponding sharp features in the resistivity (the feature is attributed to short-ranged correlations between the dilute quadrupoles) <cit.>.
Assuming full Ce3+ valence, every Kramers' doublet in the crystal field split energy level structure realizes a nonzero O_2^0 quadrupole moment (see appendix <ref>) suggesting that short-ranged quadrupolar coherence is possible in the mixed-valence system. This would explain the lack of features in magnetic susceptibility and resistivity as well as the small entropy realized at T_1^*. The short-ranged coherence maxima seen in CeNiSi2 and Y1-xPrxIr2Zn20 (at x=0.44 doping) are, however, quite broad in contrast to the sharpness of the feature seen in Ce3Ni30B10. This sharpness is suggestive of a true phase transition associated with the onset of long ranged order.
By the Ehrenfest relation, ferromagnetic ordering associated with the heat capacity at T_1^* is unlikely due to the absence of any discontinuity in the slope of the magnetization with respect to temperature. Mixed-valence compounds such as CeRuSn <cit.> and Ce3Rh4Sn7 <cit.> which do order antiferromagnetically are associated with reduced amounts of entropy at the transition, but also show distinct features in magnetic susceptibility. A minority phase that orders antiferromagnetically may then be possible for a small volume fraction of the crystal, though no clear features in magnetic susceptibility (or in magnetic torque, not shown here) are observed at T_1^*. Future neutron scattering experiments will be important to investigate any minority-phase magnetic dipole order. Scattering experiments would also be useful to probe for any structural transition at T_1^*, although no feature in electrical transport of the same prominence as that in heat capacity is observed.
Long-ranged ordering of the active O_2^0 electric quadrupole moments at T_1^* is another possibility as such ordering generally does not correspond to any feature in magnetic susceptibility (although it is associated with a smooth downturn in the resistivity as observed in PrTi2Al20 (T_Q = 2.0 K) <cit.>, perhaps here accounted for by a minority phase). In this scenario, below T_1^* the system still realizes Curie-Weiss paramagnetic behavior associated with a small effective moment as every Kramers' doublet realizes an active magnetic dipole moment, which may have a reduced size due to the mixed-valence nature of the 4f-state and the high anisotropy of the effective g-tensor in the local D_6d symmetric crystal field.
Each Kramers' doublet also realizes some kind of nonzero magnetic octupolar moment (shown in appendix <ref>). It is thereby possible for octupolar order to be realized at T_1^* (potentially as a minority phase). Octupolar ordering would break time-reversal symmetry below T_1^* and lift the degeneracy of the Kramers' doublets, realizing a Van Vleck paramagnetic susceptibility and weakened Curie-Weiss paramagnetism for the entire crystal. We also note that anomalously small entropies are known to occur in hidden order materials such as URu2Si2 <cit.> and NpO2 <cit.> wherein higher order multipolar interactions are considered.
http://arxiv.org/abs/2409.17392v1 | 20240925220959 | Trading through Earnings Seasons using Self-Supervised Contrastive Representation Learning | [
"Zhengxin Joseph Ye",
"Bjoern Schuller"
] | cs.LG | [
"cs.LG",
"q-fin.TR"
] |
Trading through Earnings Seasons using Self-Supervised Contrastive Representation Learning
Zhengxin Joseph Ye
[email protected]
GLAM, Department of Computing, Imperial College London, UK
Björn W. Schuller
[email protected]
§ ABSTRACT
Earnings release is a key economic event in the financial markets and crucial for predicting stock movements. Earnings data gives a glimpse into how a company is doing financially and can hint at where its stock might go next. However, the irregularity of its release cycle makes it a challenge to incorporate this data in a medium-frequency algorithmic trading model and the usefulness of this data fades fast after it is released, making it tough for models to stay accurate over time. Addressing this challenge, we introduce the Contrastive Earnings Transformer (CET) model, a self-supervised learning approach rooted in Contrastive Predictive Coding (CPC), aiming to optimise the utilisation of earnings data. To ascertain its effectiveness, we conduct a comparative study of CET against benchmark models across diverse sectors. Our research delves deep into the intricacies of stock data, evaluating how various models, and notably CET, handle the rapidly changing relevance of earnings data over time and over different sectors. The research outcomes shed light on CET's distinct advantage in extrapolating the inherent value of earnings data over time. Its foundation on CPC allows for a nuanced understanding, facilitating consistent stock predictions even as the earnings data ages. This finding about CET presents a fresh approach to better use earnings data in algorithmic trading for predicting stock price trends.
Self-Supervised Learning, Contrastive Predictive Coding, Transformer, Algorithmic Trading, Earnings Data
§ INTRODUCTION
The advent of high-performance computing and the explosion of digital data in the late 1990s and early 2000s spurred the growth of machine learning within the specialised field of algorithmic trading, whether in supervised learning <cit.><cit.><cit.>, unsupervised learning <cit.>, or reinforcement learning <cit.><cit.><cit.>.
We observe that research efforts in fusing machine learning with algorithmic trading often entail work on the modelling side <cit.><cit.> and the data side <cit.><cit.>, with the research objectives often framed as a machine learning problem rather than a finance problem. These are logical and legitimate research angles because financial time series data do carry embedded patterns available to be mined, and auxiliary data such as news pieces and Twitter feeds offer added features and dimensions. However, from a practical standpoint, it is crucial to recognise that the stock market is primarily event driven and largely stimulated by the periodic injection of financial and economic data of various kinds at the macro level, such as U.S. nonfarm payrolls, the Employment Situation Summary, and CPI data <cit.><cit.>, as well as data specific to individual companies, such as quarterly earnings releases <cit.>. It is
the interpretation of such data by portfolio managers and traders that tips the balance of the underlying supply and demand of stocks, which in turn propels their prices in particular directions. We believe it is imperative that a trading algorithm captures data release events and understands the jumps and changes in dynamics that follow <cit.>.
Post Earnings Announcement Drift (PEAD) as a stock market anomaly has been well studied by renowned economists from Ball and Brown <cit.> to Fama and French <cit.> and many others. It is a phenomenon in which the stock price continues to drift up for firms that are perceived to have reported good financial results for the preceding quarter and drift down for firms whose results have turned out worse than the market had anticipated. <cit.>
investigated and illustrated the true predictability of PEAD based on a machine learning approach using a large dataset on earnings and financial metrics. Crucially, they highlighted the speed at which the market reacts to such information, causing actionable trading signals to vanish rapidly. This led us to reconsider the prevailing practice in the literature: is relying primarily on stock price and volume time series, even when additionally augmented by other continual features such as Twitter feeds and technical indicators <cit.>, adequate to grasp the abrupt shifts in market dynamics during periods of heightened volatility, particularly when influenced by specific data types?
We argue that by relying on such continual data, we might miss out on capturing valuable predictive patterns during crucial market events. Therefore, to ensure a more holistic and responsive trading strategy, there is a compelling case for incorporating diverse datasets and employing advanced machine learning techniques to decipher the full spectrum of market behaviours. In this paper, we present an exploratory study into algorithmic trading through earnings seasons combining regular intra-day stock minutely-frequency data with irregular heterogeneous earnings data which drives the PEAD. We devise a novel algorithmic trading model, Contrastive Earnings Transformer (CET), which fuses data of various characteristics and granularity together through representations with the help of a Transformer <cit.> and Contrastive Predictive Coding (CPC) <cit.>.
Traditionally, Long Short-Term Memory (LSTM) and its variants featured heavily in algorithmic trading <cit.><cit.> due to their natural fit with time series data, and this is particularly true when a single time series data is involved <cit.>. However, as level of data complexity went up, LSTM's performance was found to be unsatisfactory and could not represent the complex features of sequential data efficiently, particularly if long interval multivariate time series with high non-linearity were involved <cit.>. The emergence and proliferation of the Transformer model due to its groundbreaking architecture and remarkable performance across various domains quickly turned it into a more superior choice in place of LSTM in financial trading research <cit.><cit.>. In this research, we employ a Transformer as an integral component of our model design and rely on its self-attention mechanism to facilitate robust feature extraction and foster contextual understanding of earnings data dynamics with raw price movements by attending simultaneously to different parts of the regular stock data time series and irregular earnings data, enabling it to model complex dependencies effectively.
It is widely known that machine learning models that rely on the backpropagation technique and optimization algorithms such as stochastic gradient descent (SGD) can suffer from random weight initialization which might not be optimal and can impact the initial behavior and convergence of the model during training <cit.>. The Transformer model is not immune from it. In search of a remedy, our initial research found that previous works <cit.><cit.><cit.> had shown that, although the existence of intricate and infrequent movement patterns posed challenges for creating effective recognition models, integrating unsupervised learning into traditional pattern recognition systems led to promising outcomes, especially for feature extraction. Facing similar challenges with the diverse economic and price data and their inconsistent granularity and intensity, the proposed model adopts self-supervised pre-training to enable the model to learn powerful time-aware representations from the unlabeled price and earnings data.
A catalogue of mainstream techniques has been developed and found successful for self-supervised pre-training, such as autoencoders and their variants <cit.><cit.>, Masked Language Modelling (MLM), and contrastive learning, among a few others. MLM, which is used in BERT <cit.>, is perhaps the most famous and has been used to obtain state-of-the-art results on a wide array of natural language processing (NLP) tasks. However, the initial assessment in this study determined that Contrastive Predictive Coding (CPC) <cit.>, a special type of contrastive learning model designed for sequential data, offers several advantages over MLM when working with time series data. Given the vast number of company stocks listed on major U.S. stock exchanges, our model needed to generalise well to unseen price sequences to be practically useful. MLM works better when the data is discrete (such as text data within a finite vocabulary), and its focus on predicting missing parts in masked positions may not explicitly encourage the model to capture broader temporal price patterns, potentially making it less robust in handling such time series prediction tasks. In contrast, CPC incorporates the notion of temporal context and learns the underlying patterns and structures by focusing on learning a useful representation of the data that maximises the similarity between the context vector and the future vector. This feature may enhance our model's ability to generalise and extrapolate its predictions to unseen stock time series and earnings data. Also, by focusing on predicting elements multiple time steps into the future, the model is encouraged to capture the underlying structure of the data that is less affected by noise or irrelevant variations. We have found this unique feature particularly important on an earnings release day when trading is typically most volatile.
In our exploration of algorithmic trading, we have taken a new approach to utilising earnings data by integrating CPC techniques. The task of melding the irregular release patterns of earnings data with high-frequency stock data posed significant challenges. The proposed solution is to turn to self-supervised learning. After a series of considered tests, we found that CPC emerged as a promising model to address our needs. The novel insights from this research include: (1) The CET model, through a series of experiments, showcases its potential in understanding earnings data, even when considering the variations across sectors or the diminishing relevance of the data over time. It offers a pathway for others to consider when dealing with irregular and vital data sources. (2) Our findings highlight the importance of earnings data in predicting stock movements. Even though its predictive potency may decrease as time progresses, its immediate impact on stock prices remains crucial. (3) One noteworthy observation was the adaptability of the CET model. As the influence of earnings data lessens over consecutive days, CET demonstrates an ability to adjust and refine its predictions, keeping pace with the evolving data. These discoveries enrich our understanding of algorithmic trading and pave the way for deeper investigations into the role of self-supervised models in financial decision-making.
§ RELATED WORK
§.§ Earnings Data and their impacts
Ever since <cit.> discovered Post Earnings Announcement Drift as a stock market anomaly in the 60s, quarterly financial earnings data have been playing an important role in various areas of financial analysis and forecast. Early machine learning-based studies by
<cit.> demonstrated the possibility of achieving excessive risk-adjusted returns when forecasting 12-month stock returns with a simple artificial neural network (ANN) using 61 financial ratios for 2352 Canadian stocks. Conducting fundamental and technical analysis together, both <cit.> and <cit.> jointly utilised financial data and technical signals in trading company shares and equity index and reported satisfactory returns. <cit.> analysed 29 330 different earnings call scripts between 2014 and 2017 using four different machine learning algorithms, and managed to achieve a low classification error rate and beat the S&P500 benchmark in simulated trading. Similar research on earnings call transcripts by <cit.> using a graph neural network reported reliable and accurate stock movement predictions and more importantly confirmed the overweighting of certain market-acknowledged variables such as Earnings-per-share (EPS). Both research studies demonstrated the predictive capabilities of auxiliary earnings information. Despite the success in both cases, we observe that their supervised learning's predictions were measured over days after earnings release. There was limited investigation into the immediate impact on intra-day price volatilities as market participants actively absorb and respond to the latest data that just came out.
§.§ Key Model Components
One of the core tasks of algorithmic trading lies in financial time series prediction, which has enjoyed great success with the help of various deep learning technologies. As seen in the study by <cit.>, while LSTM remains a key component in constructing time series forecasting models such as in <cit.>, recent trends show that advanced architectures like Transformers <cit.>, GANs, GNNs, and DQNs are increasingly being utilized for price forecasting tasks, oftentimes showing exceptional capabilities in accuracy and convergence efficiency <cit.>. This shift underscores how the financial industry is capitalizing on the latest developments in deep learning technologies.
With representation learning proving to be an exceptionally valuable approach across many fields and applications, the Transformer model has emerged as a groundbreaking framework for this objective, with great success initially in natural language processing, as seen in <cit.><cit.> which demonstrated that Transformer layers were able to surpass the performance of traditional NLP models. On the time series data front, <cit.> developed a Multi-Transformer model which merged several multi-head attention mechanisms to produce the final output and demonstrated that merging Multi-Transformer layers with other models led to more accurate stock volatility forecasting models. <cit.> presented a transformer encoder-based multivariate time series representation learning framework with a focus on unsupervised pre-training, which proved very successful at extracting vector representations of unlabelled data. This particular result inspired us to introduce a pre-training phase to our model, except that we have specifically chosen self-supervised pre-training whose key idea is to employ a `pretext' task where the model is asked to predict some part of the data given the rest, creating a `supervised signal' in the process which benefits downstream tasks. In fact, powerful Transformer-based models with self-supervised learning capabilities have been developed, such as GPT <cit.> and BERT <cit.>.
One very effective family of self-supervised representation learning is contrastive learning, based on which a lot of successful models have been developed, such as SimCLR <cit.> for visual representations, TimeCLR <cit.> for time series, and CPC <cit.> whose learnt representations have been found to deliver strong performance in numerous applications. For example, <cit.> utilise CPC on raw speech signals from a large unlabelled corpus to improve speech recognition performance on smaller labelled datasets. <cit.> used the CPC's InfoNCE loss to calculate the loss inside the module by dividing the deep neural network into a set of gradient isolation modules. <cit.> introduced a copula-based contrastive predictive coding (Co-CPC) method in order to address the issue of inadequate generalisation caused by uncertainty in data and models. Co-CPC considers the dependencies between a stock class, sector, and related macroeconomic variables, and learns stock representations from a micro perspective in a self-supervised manner. This allows for the mapping of stock characteristics to a generalised embedding space. <cit.> proposed the TS-TCC model which would treat the data with two different augmentations as a pair of positive samples, then map the augmented data to the latent space through a convolutional layer, and additionally learn the feature from latent space through the Transformer module.
§ MODELS AND METHODS
Self-supervised representation learning based on Contrastive Predictive Coding (CPC) plays a central role in allowing the CET model to digest important earnings data while reading in historic price and volume data series on a per-minute basis. Intuitively, a CPC approach involves three steps: first, condensing complex, high-dimensional data into a simpler, smaller latent space, making it easier to predict outcomes; next, using an autoregressive model within this simplified latent space to forecast into the future; and lastly, training the model end-to-end using Noise-Contrastive Estimation (NCE) in the loss function. Fundamentally, CPC is designed to capture and learn the essential information shared across various segments of a complex signal while filtering out less important details and noise. In modeling time series and high-dimensional data, methods that predict only the immediate next step utilize the signal's local continuity. However, for predictions further into the future, a model needs to identify broader, more global structures, as the shared information decreases with time. This approach focuses on 'slow features' that are relevant over extended time periods <cit.>, which are typically more insightful for understanding the data's underlying patterns, and especially so for the extended impact of an event on time series data, which is exactly what we are modelling for earnings releases. This is achieved by employing a "pretext task" which typically involves predicting some part of the input data from other parts, helping the model learn useful representations of the data. Specifically, the pretext task involves predicting future segments of a signal based on past segments. This helps the model capture and understand the underlying structure and features of the data, which can then be used for various downstream tasks such as classification.
With those model principles in mind, the network structure of the CET model as well as the optimization goal of the pretext task are depicted in figure <ref>. In the initial phase, the process involves a pre-training stage comprising two separate data encoding activities. To learn the intra-day price context c_t^s a non-linear encoder g_enc is first used to map the input sequence of observed price and volume data x_t to a sequence of latent representations z_t=g_enc(x_t). While <cit.> studied the choice of various convolutional neural networks including a 1D CNN and ResNet in their initial research, a Multilayer Perceptron (MLP) with one hidden layer and a linear activation at the output layer is deemed sufficient for the task of price data embedding. A Transformer g_ar, which is the choice of the auto-regressive model, then summarizes all z_t in the latent space to produce a stock context representation c_t^s=g_ar(z_⩽ t). The earnings data representation c_t^e is generated through a regular Autoencoder which is combined with c_t^s. This whole process generates effective data representations, aggregating time series data and periodic data into a single context vector representation c_t. These representations are then employed in a preliminary training activity, i.e. the pretext task, in the framework of CPC.
Once all of the network weights have been learned in the unsupervised learning pretext task, the model will be further optimized through a fine-tuning phase which involves predicting the price level in the next minute on the learnt representation. Actual price levels are used in this step since the entire network's weights will be adjusted in a supervised-learning style training routine. This step will complete the self-supervised learning phase and the weights from g_enc, g_ar, and h_enc will then be frozen for the actual stock trading experiments. To do that, another MLP with one hidden layer is used whose neuron weights will be optimized to predict stock price gain/loss in percentage terms at t+1 with respect to the price at t.
§.§ Encoding of Earnings Data
We propose to encode the diverse and heterogeneous set of earnings data into a compact vector representation using an autoencoder <cit.>, reducing the dimensionality of the feature space but, most importantly, allowing the earnings data to be aligned and connected with the price and volume data encoded through the Transformer. The autoencoder consists of an encoder network E(x) and a decoder network D(c). The encoder network takes as input the group of pre-processed earnings features x_e∈ℝ ^D, with D being the earnings data dimension, and maps them to a d-dimensional context vector c_t^e. The decoder network then reconstructs the original input from the d-dimensional latent space representation using a reverse network structure, calculating the reconstructed output x' = D(E(x)). The autoencoder minimises the reconstruction error between x_e and x^' through the mean squared error loss (1/N)∑_{i=1}^{N} ‖ x_{e;i} - x_i' ‖^2, learning a compressed representation of the input earnings c_t^e and capturing the most salient features of the data while discarding noise and less important details.
It is noteworthy to mention that we have assessed the suitability of using a Variational Autoencoder (VAE) <cit.>. We determined that capturing underlying probabilistic structures or generating new samples was of lesser importance. Instead a regular autoencoder has been chosen, due to its simpler architecture and more straightforward training process when compared to VAEs. This choice is driven by the priority for high-fidelity reconstruction of input data. Through the direct minimization of reconstruction loss, the autoencoder aims to accurately replicate the original input, a key factor in our decision to use this model.
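A minimal sketch of such an earnings autoencoder is given below (PyTorch; the hidden width, the latent dimension d and the batch size are illustrative assumptions rather than the exact settings used in our experiments):

import torch
import torch.nn as nn

class EarningsAutoencoder(nn.Module):
    """Compresses the D pre-processed earnings metrics into a d-dimensional
    context vector c_t^e and reconstructs the input for the MSE objective."""
    def __init__(self, in_dim: int = 38, latent_dim: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, latent_dim),      # bottleneck -> c_t^e
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, in_dim),          # reconstruction x'
        )

    def forward(self, x_e: torch.Tensor):
        c_e = self.encoder(x_e)
        return c_e, self.decoder(c_e)

ae = EarningsAutoencoder()
x_e = torch.randn(32, 38)                   # a batch of earnings feature vectors
c_e, x_rec = ae(x_e)
recon_loss = nn.functional.mse_loss(x_rec, x_e)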
§.§ Encoding of Price and Volume data
Our stock data at every minute includes the closing stock price and average trading volume within the minute time frame. In particular, each training sample of price and volume data is X ∈ℝ ^ω×υ, a time series of length ω with υ feature vectors, x_t∈𝕏^υ: X ∈ℝ ^ω×υ=[x_1, x_2, …, x_ω]. The original feature vectors x_t are first standardised for each dimension and then fed through a Discrete Wavelet Transformation (DWT) to perform noise mitigation. We choose a threshold λ of 0.7 as seen in <cit.>, which is used to filter away the part of the normalised time series whose frequency coefficients are higher than the threshold. The smoothed time series coming out of the DWT module is next encoded by a non-linear network g_enc as z_t=𝐰_encx_t+b_enc, where 𝐰_enc∈ℝ ^ d ×υ, b_enc∈ℝ ^d are the learnable parameters of the network, and z_t∈ℝ^d, t=0, …, ω are the embedded vectors of dimension d serving as inputs to the Transformer. z_t is sensitive to the ordering of the sequence. Consequently, in order to make the Transformer aware of the sequential nature of the time series, positional encodings z_pos∈ℝ ^ωυ are added, yielding the final inputs to the Transformer, z_t^' = z_t + z_pos.
A Transformer encoder structure with multi-head self-attention is used. Through three trainable matrices (linear layers), three vectors are generated (query, key, and value) for each z_t. To compute attention, the model first compares the query vector of the current z_t with the key vectors of z_{i ≠ t} at all the other times in the training sample, measuring how similar they are to each other. Next, the model calculates attention weights, which determine the importance of each z_{i ≠ t} in relation to the current z_t. These weights are like scores that say which data are most relevant or important for understanding the current data point. Finally, the value vectors of all the z_t are multiplied by their corresponding attention weights and combined together, creating a new representation c_t^s of the data, taking into account its context and the importance of other z_{i ≠ t} in understanding it. This structure and process is explained in figure <ref>.
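For concreteness, the price/volume branch can be sketched as follows (PyTorch; the embedding size, the number of heads and layers, and the sinusoidal form of the positional encoding are illustrative assumptions, and the input window is assumed to have been standardised and DWT-denoised beforehand):

import math
import torch
import torch.nn as nn

class PriceEncoder(nn.Module):
    """g_enc followed by positional encoding and the Transformer g_ar:
    maps a (price, volume) window X in R^(omega x upsilon) to c_t^s."""
    def __init__(self, n_features: int = 2, d_model: int = 64,
                 n_heads: int = 4, n_layers: int = 2, window: int = 390):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)        # z_t = W_enc x_t + b_enc
        pos = torch.arange(window).float().unsqueeze(1)
        div = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
        pe = torch.zeros(window, d_model)
        pe[:, 0::2] = torch.sin(pos * div)                  # z_pos (sinusoidal, assumed)
        pe[:, 1::2] = torch.cos(pos * div)
        self.register_buffer("pe", pe)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, n_layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, omega, n_features), already standardised and denoised
        z = self.embed(x) + self.pe[: x.size(1)]            # z'_t = z_t + z_pos
        h = self.transformer(z)                             # multi-head self-attention
        return h[:, -1]                                     # c_t^s at the last time step

enc = PriceEncoder()
c_s = enc(torch.randn(8, 390, 2))                           # (batch, d_model)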
§.§ Representation Learning of Combined Data Through CPC
The presence of earnings data on the day they are released creates a unique context for the intra-day stock price movements, a context that CPC is perfectly placed to capture, for it not only learns to understand the context of each data point in relation to the surrounding data points (price-level context), but it also captures the dynamics imposed by the current set of earnings data as opposed to earnings data of previous quarters (earnings-level context). This temporal context modelling is a key feature of CPC as opposed to other forms of contrastive learning. It is about implicitly capturing essential information shared across sections of data series and is achieved by maximising what is termed as the Mutual Information (MI) between the context vector c_t at present and a prediction vector x_t+k.
At the high level, the problem can be formulated like this: at time t, a set of samples X=[ x_1 ... x_N] is given which contains one positive sample x_t+k and N-1 negative samples. The vector x_t+k is designated as the positive sample because the goal is to maximize the implicit mutual information between c_t and x_t+k and minimize the mutual information between c_t and the rest of the (negative) samples. Here p(x_t+k|c_t) represents the conditional probability of achieving a x_t+k given c_t, and k is the number of time steps into the future at which a prediction is to be carried out. Tackling this problem with CPC is about optimizing the InfoNCE loss which is defined in equation <ref>.
L_InfoNCE(k) = -𝔼 [ log ( f_k(x_t+k, c_t) / ∑_{x_j ∈ X} f_k(x_j, c_t) ) ]
Where f_k is a scoring function.
The loss in equation <ref> is the categorical cross-entropy of classifying the positive samples correctly. To capture the Mutual Information between context vector c_t and the positive sample x_t+k, the optimal conditional probability is p(x = t+k|X,c), which is defined in <ref>, with [x = t+k] being the indicator that sample x_t+k is the positive sample.
p(x=t+k|X, c_t)
= p(x_t+k|c_t) ∏_{i=1,…,N; i≠ t+k} p(x_i) / ∑_{j=1}^{N} [ p(x_j|c_t) ∏_{i=1,…,N; i≠ j} p(x_i) ]
= ( p(x_t+k|c_t)/p(x_t+k) ) / ∑_{j=1}^{N} ( p(x_j|c_t)/p(x_j) )
Where p(x_i|c_t)/p(x_i) is the density ratio. It needs to be emphasized here that the goal of CPC is to avoid direct supervised learning for high-dimensional data, i.e. the model does not predict future observations x_t+k directly with a generative model with probability p(x_t+k). Instead, it models a density ratio which preserves the mutual information between c_t and x_t+k.
Equations <ref> and <ref> together suggest that the optimal value of the scoring function is proportional to the density ratio, i.e. f_k ( x_t+k, c_t ) ∝p(x_t+k|c_t)/p(x_t+k). Optimizing the InfoNCE loss will result in f_k ( x_t+k, c_t ) estimating the density ratio. Consequently, rather than directly modelling the probability of future stock price observations which has been established to be rather difficult and expensive, CPC is chosen to do a good job of modelling a density function which preserves the mutual information between x_t+k and c_t.
Lastly, the CET model follows the recommendation in <cit.> and takes the choice of modelling f_k using the log-bilinear (LBL) model <cit.>. Specifically the LBL model combines context vector c_t at time t with z_t+k at time t+k (the latent representation of future stock price and volume data x_t+k) and a set of trainable weights W_k which belongs to a linear layer in the network implementation.
f_k(x_t+k, c_t)=exp ( z_t+k^TW_kc_t )
With T representing transposition.
This modelling arrangement effectively turns f_k / ∑_{x_j∈ X} f_k into a softmax function, and the final form of the InfoNCE loss function employed by the CET model is presented in equation <ref>.
L_InfoNCE(k)
= -𝔼 [ log ( exp( (z_t+k^pos)^T W_k c_t ) / ( exp( (z_t+k^pos)^T W_k c_t ) + ∑_{j=1}^{N-1} exp( (z_j^neg)^T W_k c_t ) ) ) ]
To wrap up this section, there are two important design choices that need to be emphasized at this stage. First, predictions are being performed for multiple future time steps, allowing the model to capture the broader structure of the minutely-frequency data. This approach goes beyond relying solely on a single step, which typically only considers the immediate smoothness of the signal <cit.>. This results in representations that allow the model to infer a more global structure between temporally separated parts of the time-series signal, or 'slow features' as discussed at the start of this section. Second, there is a deliberate method in the selection of negative samples for the model. These samples are randomly drawn from a wide range of price data but with a specific condition: they must originate from at least five days before any earnings announcements. This precaution is taken to mitigate the risk of information leakage, which is a concern in financial markets and can lead to unfair practices like front running and insider trading. By carefully choosing negative samples that have minimal connection to the immediate post-event period, the aim is to avoid bias or data contamination, ensuring a more robust and reliable modeling process.
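A compact sketch of this pretext objective is shown below (PyTorch; the number of prediction steps K, the dimensions, the concatenation c_t = [c_t^s ; c_t^e] and the batch construction are illustrative assumptions, and the negative latents are assumed to have been pre-sampled from days at least five days before any earnings announcement, as described above):

import torch
import torch.nn as nn
import torch.nn.functional as F

class CPCHead(nn.Module):
    """Log-bilinear scoring f_k(x_{t+k}, c_t) = exp(z_{t+k}^T W_k c_t)
    with one linear map W_k per prediction step k."""
    def __init__(self, context_dim: int, latent_dim: int, n_steps: int = 4):
        super().__init__()
        self.W = nn.ModuleList(
            [nn.Linear(context_dim, latent_dim, bias=False) for _ in range(n_steps)]
        )

    def info_nce(self, c_t, z_pos, z_neg):
        """c_t:   (B, context_dim)          combined price + earnings context
        z_pos: (B, K, latent_dim)        latents of the true future minutes z_{t+k}
        z_neg: (B, K, N-1, latent_dim)   latents drawn from pre-announcement days"""
        loss = 0.0
        for k, W_k in enumerate(self.W):
            pred = W_k(c_t)                                       # W_k c_t
            pos = (z_pos[:, k] * pred).sum(-1, keepdim=True)      # positive logit
            neg = torch.einsum("bnd,bd->bn", z_neg[:, k], pred)   # negative logits
            logits = torch.cat([pos, neg], dim=1)                 # positive sample is class 0
            labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
            loss = loss + F.cross_entropy(logits, labels)         # categorical CE = InfoNCE
        return loss / len(self.W)

head = CPCHead(context_dim=80, latent_dim=64, n_steps=4)
c_t = torch.randn(8, 80)            # c_t = [c_t^s ; c_t^e]
z_pos = torch.randn(8, 4, 64)
z_neg = torch.randn(8, 4, 15, 64)
pretext_loss = head.info_nce(c_t, z_pos, z_neg)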
§.§ Stock movement prediction post pre-training
Both the Transformer and Autoencoder encoders as well as the LBL model are concurrently trained employing the InfoNCE loss function. This cooperative training facilitates the prediction of the next token representation in the price sequence, conditioned upon the context c_t, by identifying concurrent patterns in price fluctuations. All of these model elements globally undergo a synchronised and comprehensive optimisation of their network weights in order to minimise the InfoNCE loss, contributing to the improvement of model performance:
𝐰_lbl^*, 𝐰_ar^*, 𝐰_enc^*, 𝐰_ae^*
= arg min_{𝐰_lbl, 𝐰_ar, 𝐰_enc, 𝐰_ae} L_InfoNCE(𝐰_lbl, 𝐰_ar, 𝐰_enc, 𝐰_ae),
where 𝐰_lbl^*, 𝐰_ar^*, 𝐰_enc^*, 𝐰_ae^* represent the optimised weights for the LBL linear layer, the Transformer component g_ar, the non-linear encoding component g_enc implemented as an MLP with one hidden layer and linear activation, as well as the three-layered (including the bottleneck layer) autoencoder h_ae for the earnings data.
In order for the model to predict price movement direction following the pre-training, the c_t layer is connected to a new output layer that captures the three movements: up, down and hold. 𝐰_ar^*, 𝐰_enc^* and 𝐰_ae^* obtained from the last step are all fixed, and the weights of the output layer are learnt from scratch using cross-entropy loss with a softmax activation function and an Adam optimizer <cit.> with a learning rate of 2e-4 as seen in <ref>. Note the LBL layer is not required in the price prediction operations as its presence was needed entirely to optimize the InfoNCE loss during unsupervised pre-training.
L = -∑_i y_i log(p_i), where p_i = e^{c_i} / ∑_{k=1}^{N} e^{c_k}
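A sketch of this movement classifier, reusing the enc and ae modules from the sketches above, is given below (the frozen-weight handling and the integer encoding of the up/down/hold classes are assumptions of this illustration):

import torch
import torch.nn as nn

# Freeze the pre-trained components; only the new output layer is trained.
for module in (enc, ae):
    for p in module.parameters():
        p.requires_grad = False

clf = nn.Linear(64 + 16, 3)                    # c_t -> {up, down, hold}
optimizer = torch.optim.Adam(clf.parameters(), lr=2e-4)
criterion = nn.CrossEntropyLoss()              # softmax cross-entropy of the equation above

def train_step(x_price, x_earn, y):
    with torch.no_grad():
        c_s = enc(x_price)                     # frozen stock context c_t^s
        c_e, _ = ae(x_earn)                    # frozen earnings context c_t^e
    logits = clf(torch.cat([c_s, c_e], dim=1))
    loss = criterion(logits, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()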
§ EXPERIMENTS, RESULTS AND ANALYSIS
The main objective of this research is to assess the effectiveness of CPC-based self-supervised representation learning in capturing the impact of earnings shocks on intra-day stock price movements for intra-day algorithmic trading and gain insights into the underlying mechanisms. To accomplish these goals, we have designed a series of progressive experiments that provide increasingly insightful observations and analysis into these aspects. In all these experiments, we compare the performance obtained by CPC against the following list of relevant/state-of-the-art unsupervised approaches, each performing their respective pretext tasks:
∙ Autoencoder (AE) Autoencoders are respectively applied to earnings data and stock price data generating c_t^e and c_t^s.
∙ Transformer with Masked Language Modelling (MLM) MLM is a self-supervised learning technique used in the groundbreaking NLP model, BERT <cit.>. When applying MLM to the Transformer, we randomly mask sections of the stock price data sequence and predict the masked sections from the unmasked ones. To make MLM work with numerical sequences, we substitute the original softmax cross entropy loss (designed for word token prediction) with a more appropriate Mean Squared Error (MSE) regression loss.
∙ SimCLR SimCLR <cit.> is another state-of-the-art self-supervised contrastive learning model. Although it was originally conceived for image data, it has now been adapted to handle the price data sequences with the Transformer. To generate positive samples, variations are infused into each input data sequence by adding random Gaussian noise, which emulates the data augmentation techniques used in the original image data-based design. The subsequent operations, embedding, similarity calculation, and loss optimization, remain unchanged.
For a vertical comparison, three supervised baselines have also been created as benchmarks. These baselines are constructed by incrementally removing key elements from the CET model design:
∙ SupRep Direct supervised learning using the context vector c_t. This represents the same network structure used in CET with no CPC-based pre-training and hence without the InfoNCE loss.
∙ SupRaw Direct supervised learning with minimum representation learning (and hence noted `raw'). In this model, price and volume data feed directly to the Transformer as CET. Financial metrics from earnings report are not encoded and are concatenated to the Transformer output directly.
∙ SupRaw2 Direct supervised learning with the Transformer encoder on price and volume data only and no financial metrics.
§.§ Data Setup
We have selected a diverse range of data from <cit.>, as outlined in table <ref> that are considered to be most indicative of a company's financial soundness. This dataset is acquired from Bloomberg and subjected to pre-processing procedures to address missing and outlier data. Additionally, the data is adjusted for dividends and stock splits, converted to ratio format when necessary, and normalised using the Z-Score method. In addition, we also include each quarter’s Earnings Surprise (reported EPS minus market estimated EPS) and the change between current quarter's Earnings Surprise and that of the previous quarter. In total, we have selected 38 financial metrics that collectively reflect a company's earnings quality for each quarter.
The minutely-frequency price and volume data are programmatically downloaded through the public API of the Investors Exchange (IEX) <cit.> which is a national US stock exchange that also supplies historically traded OHLCV (Open, High, Low, Closed price and Volume) data that are publicly accessible for benchmarking purposes. There are 390 minute data points on each trading day and linear interpolation is employed to address rare missing data problems. To ensure equal representation and optimal price liquidity, we choose companies from the S&P500 index and divide them into the nine industrial sectors categorised by Bloomberg: Basic Materials, Communications, Consumer Cyclical, Consumer Non-Cyclical, Energy, Financial, Industrial, Technology, and Utilities, and select 10 companies from each sector to form our company universe as seen in table <ref>. We retrieve minutely-frequency data for the five business days after each company's quarterly financial reporting, or from the announcement date if it occurs before the market opens. Data from the days prior to financial results announcements are also required and used as negative samples as described in section <ref> but are not part of the training/testing data set. We have intentionally selected a window of 10 action-packed financial quarters, starting from the fourth earnings quarter in 2020 (around October) through the first quarter of 2023 (around January). Within this timeframe, the US stock markets initially witnessed extended periods of upward momentum, primarily driven by the widespread availability of Covid-related funds to retail investors, immediately followed by prolonged periods of decline, attributed to concerns over inflation and the ongoing war in Ukraine.
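A sketch of the earnings-feature preparation described above is given below (pandas; all column names are hypothetical placeholders rather than the actual Bloomberg field names):

import pandas as pd

def prepare_earnings_features(df: pd.DataFrame) -> pd.DataFrame:
    """One row per (ticker, quarter) of reported metrics: adds the two
    surprise features and z-scores every earnings column."""
    df = df.sort_values(["ticker", "quarter"]).copy()
    # Earnings Surprise = reported EPS minus market-estimated EPS
    df["eps_surprise"] = df["eps_reported"] - df["eps_estimate"]
    # Change of the surprise relative to the previous quarter
    df["eps_surprise_chg"] = df.groupby("ticker")["eps_surprise"].diff()
    cols = [c for c in df.columns if c not in ("ticker", "quarter")]
    df[cols] = (df[cols] - df[cols].mean()) / df[cols].std()   # Z-Score normalisation
    return df

def fill_missing_minutes(prices: pd.Series) -> pd.Series:
    """Linear interpolation for the rare missing bars in a 390-minute trading day."""
    return prices.interpolate(method="linear", limit_direction="both")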
§.§ A Look At Pre-training
The initial phase of the entire learning process involves CPC-based self-supervised pre-training, where the weights obtained are later used as a feature extractor in algorithmic trading activities. To evaluate how effectively these representations have been learned, model performances are analyzed using a classifier network tasked with predicting stock price movements. This analysis focuses specifically on the data from the day immediately following the release of earnings reports (day 1), a strategic choice made to ensure the model performances are measured under the most pronounced effects of these financial events. As a result of this methodology, the dataset used includes 900 days of financial earnings data, complemented by intra-day minute-by-minute price and volume information. This extensive dataset serves as the foundation for pre-training the four self-supervised models.
After the unsupervised pre-training phase is completed, the network is augmented with an additional fully connected layer that serves as a task-specific component. This layer utilizes softmax cross-entropy loss for its computations. Next, to refine the pre-trained models, three identical but separate experiments are conducted using 20%, 50%, and 80% of the entire dataset, selected at random for fine-tuning. Additionally, another 20% of the dataset, chosen randomly, is reserved for testing purposes. In each experiment the benchmark supervised learning models also consume the same sets of training and testing data.
There are two important aspects to highlight here regarding the experiment setup. Firstly, for the self-supervised models, both the pre-trained segments of the network and the newly incorporated output layer undergo fine-tuning. Secondly, the model employs an Adam optimizer with a learning rate of 2e-3 during the pre-training stage, which is then reduced to 2e-4 during the fine-tuning phase. This reduction in the learning rate is strategic; an overly high learning rate during fine-tuning might cause the model to neglect the general features it learned during the pre-training stage.
Table <ref> presents the classification results from the experiments with each cell showing the mean classification rate and standard deviation. Here a few observations are made:
First and foremost, CET emerges as the leading self-supervised learning model, beating its benchmark peers in this category. The less favorable performance of MLM underlines the inherent challenges of applying context-based pre-training methods to numerical data, which lacks the same kind of contextual relationships as language data. Meanwhile, SimCLR's performance, being close to CET, illustrates the robust potential of contrastive pre-training from another perspective.
Second, in the first scenario where label information is limited due to only 20% of data being used for fine-tuning/direct supervised learning, pre-trained models expectedly take the upper hand against supervised models. This is a good example that showcases their strength in leveraging the unlabeled data for pre-learning useful representations, allowing models to quickly converge to an equilibrium state with comparatively smaller amount of data. The benefit of pre-training, although diminishes with increasing training data, is evident.
Third, while supervised models initially lag behind their pre-trained counterparts when training data is scarce, they are able to narrow or even close the performance gap with the introduction of more training data. However, it is observed that the supervised model performances appear to diminish as the dataset size becomes quite large, which we consider a classic case of overfitting. The same is in fact also observed with the fine-tuning of self-supervised models.
Fourth, it can also be observed that the SupRaw model surpasses its fellow supervised learning models, including SupRep. This observation is intriguing, and it could potentially be attributed to the additional complexity in SupRep's network structure, originally intended for unsupervised pre-training within the CET model framework. Its excessive sophistication appears to have undermined its performance compared to a more simplified network when used in a direct supervised learning setting.
Fifth, on the other hand, the substantial drop in the performance of SupRaw2 (where earnings data is excluded) underlines the vital role of this data in the classification task, a challenge that pre-trained models might handle better due to their ability to extract useful information from diverse and comprehensive datasets.
Lastly, the high standard deviation in performance among all supervised learning models points towards their relative instability. In contrast, models that undergo unsupervised pre-training exhibit consistency and stability, reinforcing the value of pre-training in achieving reliable performances.
§.§ Task-specific Training
The value of fine-tuning a pre-trained model on a downstream task is most apparent when there's additional labelled data that is specific to the downstream task. This allows the model to adjust its parameters to better fit the specific task. To evaluate the value of this feature, we take the self-supervised models that were pre-trained and fine-tuned in the previous section, and respectively train them to work with data from companies in each of the nine industrial sectors defined in section <ref>.
For the CET model specifically, the CPC pre-training step is performed first. Next, the model is fine-tuned with 60% of all company data, after which the learned weights 𝐰_ar^*, 𝐰_enc^*, 𝐰_ae^* from section <ref> are frozen and used in the classifier network. Sector-specific company data from half of the remaining labeled data is used to train the classifier network using cross-entropy loss. In essence, for the self-supervised models the whole dataset is split into three portions for fine-tuning / sector-specific supervised training / sector-specific testing in a 60:20:20 ratio, with the supervised training and testing data being sector-specific. For the benchmark supervised models, the first 60% of data from all companies, together with sector-specific company data in half of the remaining data, is used for direct supervised training. Sector-specific data in the final 20% of the whole dataset is reserved for testing.
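A minimal sketch of this staging, assuming the PyTorch-style wrapper from the earlier example (variable names and the 128-dimensional context vector are illustrative, not the authors' code):

# 60:20:20 split: fine-tuning / sector-specific supervised training / sector-specific testing.
# After fine-tuning, the learned weights are frozen and only the sector classifier is trained.
for p in model.encoder.parameters():
    p.requires_grad = False                      # freeze the pre-trained/fine-tuned weights

sector_head = nn.Linear(128, 2)                  # classifier trained on one sector's data only
sector_optim = torch.optim.Adam(sector_head.parameters(), lr=2e-4)
sector_loss = nn.CrossEntropyLoss()

def sector_train_step(batch_x, batch_y):
    with torch.no_grad():                        # frozen feature extractor
        context = model.encoder(batch_x)
    sector_optim.zero_grad()
    loss = sector_loss(sector_head(context), batch_y)
    loss.backward()
    sector_optim.step()
    return loss.item()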
As seen in the result table <ref>, the CET model emerges as a consistent top performer across most sectors. Its success rates are generally on the higher end, indicating its robustness and ability to generalize. The relatively small deviations in its success rates across sectors further emphasize its adaptability. In essence, CET's performance, as predicted, underscores its superior generalization capabilities. This appears particularly true when working with such diverse set of stock-related data irrespective of sector-specific nuances, especially when compared to other models that tend to show a wider range of outcomes depending on the industry in focus. For instance, MLM, designed on the principles of NLP's BERT model, reflects varied success rates, indicating its potential struggles with sector-specific nuances. SimCLR, which is adapted from image data-based design to the current context, portrays results close to CET but varies more distinctly between sectors.
Among the supervised models, SupRep and SupRaw once again present intriguing performances similar to what was seen in the last experiment <ref>. SupRep, which employs direct learning from the context vector, excels in specific sectors but appears to lack consistency in its performance. SupRaw, with its minimal representation learning, showcases results that, in certain sectors, even surpass SupRep. This could be indicative of the potency of raw data in making accurate predictions. Models leveraging earnings data tend to fare better, aligned with the expectation that earnings data provide crucial insights into a company's financial health and prospects, often serving as an indicator of future stock performance. As such, these models are equipped with richer context, leading to more accurate predictions. In contrast, SupRaw2 which lacks the earnings data inputs, predictably has a lower performance than its counterparts.
§.§ Preserving Predictive Power of Earnings Data
The previous experiments consider stock movements only on the first day after the release of earnings data (or the same day if the release was made before market open), denoted as D_1. While the benefits of having earnings data are most profound at this point, this daily trading is highly volatile and the data's predictive power rapidly declines after Day 1 <cit.>. Consequently, a further test is arranged in this section to extend this notion of locality to one week (i.e., five business days) and evaluate how well the CET model pre-trained with Day 1 data preserves the predictive power of earnings data. Specifically, after pre-training on D_1, the less volatile near-term performances are evaluated over the next four business days, D_2 - D_5. On each day, models receive 60% of the stock and earnings data for training, while the remaining 40% is employed as the test set.
As seen in table <ref> and figure <ref>, the CET model shows consistency throughout the four-day period. Beginning at 58.52% on Day 2, it drops slightly to 58.02% by Day 4 but then stabilises and even marginally improves to 58.04% by Day 5. This subtle upward trend between Day 4 and Day 5 could be due to different stock data characteristics on these days and appears unique to CET, underscoring its robustness and its ability to navigate the complexities of the stock data landscape. It is this kind of nuanced learning that makes CET stand out, as it appears to course-correct its understanding from the previous day's downturn.
In contrast, the Autoencoder (AE) and Transformer with MLM models exhibit stable performances over the period but do not manage to eclipse CET's success rates. Both models are particularly consistent, with MLM seeing a slight increase on Day 3, followed by a minor decrease over the next two days, suggesting a degree of sensitivity to temporal patterns in the data.
SimCLR presents a noticeable continuous decline from Day 2 to Day 5. Starting at 57.15% on Day 2, it slides down to 56.77% by Day 5. This progressive decline might hint at SimCLR's challenges in adjusting to the diminishing relevance of earnings data over subsequent days, a trait that diverges from its general self-supervised counterparts.
Among the supervised models, SupRep distinctly portrays a sharp diminishing trend from 56.50% on Day 2 to 53.72% by Day 5. This trajectory reiterates the notion of supervised models being more reliant on fresh earnings data: as its immediate impact diminishes, so does their prediction capability. The model may also suffer from the over-complexity of its network. SupRaw reveals a similar declining trend, although its starting point on Day 2 is higher than that of SupRep, something that we also observed in the previous experiment. Meanwhile, SupRaw2, contrary to SupRep, remains almost consistent, with a tiny dip between Day 2 and Day 4, but exhibits a recovery on Day 5.
This experiment highlighted two key findings. First, the ability of self-supervised models, particularly CET, to recalibrate and learn from evolving patterns in financial datasets is remarkable. Their performance trajectory indicated not just learning but a nuanced understanding of changing data dynamics. Second, while supervised models can be powerful predictors, their rigidity in the face of rapidly altering feature significance, like earnings data, can be a limitation. As model performances vary over time, it's evident how critical fresh earnings data is to predictions. As this data ages, models' strengths and weaknesses become apparent, highlighting differences in their design and learning methods.
§.§ Ablation Analysis
One of the key model features of CPC is its ability to make use of positive samples from multiple time steps (latent steps) into the future. It is important to evaluate the impact of changing the latent step size on the result of unsupervised pre-training, which is what this study attempts to explore. The natural first step we have taken here is to examine the InfoNCE losses. Meanwhile, due to the predictive nature of CPC, we also assess the similarity between the predicted context vectors and the corresponding positive sample vectors. This comparison aims to evaluate how closely the model's predictions align with the actual future data and is measured using the cosine similarity score of two vectors 𝐀 and 𝐁, (𝐀·𝐁)/(‖𝐀‖‖𝐁‖). Both metrics are assessed across varying latent step sizes, ranging from 1 to 20. This assessment follows the experimental procedures outlined in section <ref> `A Look at Pre-Training'. For each step size, the study records the equilibrium loss rate and calculates the average similarity score.
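For clarity, the similarity diagnostic can be computed as in the following sketch (ours, purely illustrative; the array names are assumptions):

import numpy as np

def cosine_similarity(a, b):
    # cos(A, B) = (A . B) / (||A|| ||B||)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def average_similarity(predicted, positives):
    # predicted / positives: (num_windows, dim) arrays holding the k-step-ahead
    # predictions and the corresponding true ("positive") latent vectors.
    return float(np.mean([cosine_similarity(p, q) for p, q in zip(predicted, positives)]))

# The same pairs of vectors also enter the InfoNCE loss, where each positive pair is
# contrasted against negatives drawn from other windows in the batch.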
The results of both evaluations are presented in figure <ref> which offers a wide-ranged view of the CPC model's performance from the angle of traditional study of loss and predictability. The result on InfoNCE loss demonstrates that the model is highly effective in short-term predictions but struggles with longer prediction horizons. This is corroborated by the decreasing cosine similarity for longer prediction steps highlighting the increasing divergence between the model's predictions on the context vector and the benchmark positive vector.
With CPC's key characteristic being its use of contrastive training, it is understood that achieving the lowest InfoNCE loss or the highest cosine similarity score does not necessarily translate to the most effective or practical model performance. It is possible that a model predicting just one future step might score well on these metrics but fail to recognise the longer-term patterns critical in financial time series data. Moving beyond initial assessments focused solely on unsupervised pre-training, this experiment next explores the model after fine-tuning. This examination follows the same experimental setup detailed in section <ref>, while using 50% of all data to fine-tune the CET model. Again, using the same evaluation approach as in that section, the success rate of predicting stock movements is measured for each of the latent step sizes. The outcome of this experiment is detailed in figure <ref>, which shows that, while the prediction success rate fluctuates considerably, it initially rises as data 'further into the future' is used and stays at an elevated level until after a step size of 7. This observation supports the earlier theory that the main benefit of this kind of training is to create strong features that help with future predictions, regardless of how the InfoNCE loss is measured on its own.
The two sets of results seen in figure <ref> and <ref> complement each other and highlight CPC's core principle: balancing immediate precision, indicated by lower loss or higher similarity in short-term predictions, against the ability to identify complex, long-term trends. A model excelling in short-term forecasting might score well in certain metrics, yet it could lack the robustness and broader applicability that our model aims for. Our CET model is designed to predict over extended periods. This approach of forecasting further ahead can enable the model to grasp more abstract, significant aspects of the data, which are vital for its adaptability and wider use.
§ CONCLUSION
This research, centred on the predictive potential of earnings data, offers crucial insights into the current capabilities of stock prediction models. Earnings data, usually a key signal for financial analysts, has a strong impact right after it's released. But our findings show that its ability to predict quickly decreases, highlighting how fast-changing and transient this information can be.
The introduction of the CPC and Transformer-powered Contrastive Earnings Transformer (CET) model marks a major step forward in this area. CET, unlike its contemporaries, showcases a consistent performance across varied industrial sectors and, crucially, manages to uphold the predictive relevance of earnings data over a more extended period. While other models, especially the supervised ones, struggled as the impact of earnings data diminished over time, CET remained effective. It not only maintained a steady prediction rate, but also demonstrated an ability to recalibrate based on evolving stock data patterns. In the ever-changing world of stock markets, where the release of earnings data is a significant event, the ability to adapt to shifting data trends in the days after the release becomes especially crucial.
In contrast, traditional supervised models, while showing promise in specific instances, struggled to maintain their efficacy as the earnings data aged. Their declining performance over days highlighted the potential limitations they might have in real-world trading scenarios. As we explore the intricate dynamics of stock predictions, the CET model distinguishes itself with its adaptability and expertise in leveraging earnings data, offering a promising avenue for trading through earnings seasons.
That being said, although it did not have much of an impact on the results of this study, noise in stock data can always have a major impact on machine learning models. The impact on this research would have been more pronounced if test data had not been intentionally sourced from a period with significant market volatility. One focus of future studies in this area should be exploring how well the CET model behaves under various market conditions and how it is influenced by different hyperparameters. Separately, the effectiveness of learning often depends on the selection of negative pairs, particularly if the dataset has underlying similarities across different sequences or time windows. This issue requires sophisticated negative sampling strategies to maintain stable learning efficacy over time, which is in itself a major future research direction. Lastly, as seen in section <ref>, high network complexity can hinder future up-scaling of the model, and this is an area requiring further optimization. Nevertheless, as the complexities of the financial domain expand, we hope the CET model will emerge as a promising benchmark for future advancements in stock market predictions.
§ APPENDIX
Table <ref> provides the stock symbols of all the companies selected for testing the CET model and the chosen benchmarks.
Table <ref> lists all the key hyperparameters of the CET model.
| The advent of high performance computing and the explosion of digital data in the late 1990s and early 2000s spurred the growth and popularity of machine learning into the specialised field of algorithmic trading, whether in supervised learning <cit.><cit.><cit.>, unsupervised learning <cit.>, or reinforcement learning <cit.><cit.><cit.>.
We observe that research efforts in fusing machine learning with algorithmic trading often entail work on the modelling side <cit.><cit.> and the data side <cit.><cit.>, with the research objectives often framed much more as a machine learning problem rather than a finance problem. Those are logical and legitimate research angles because financial time series data do carry embedded patterns available to be mined; auxiliary data such as news pieces and twitter feeds etc indeed offers added features and dimensions. However, from a practical standpoint, it is crucial to recognise that the stock market is primarily event driven and largely stimulated by the periodic injection of financial and economical data of various kinds at the macro level, such as U.S. nonfarm payrolls, Employment Situation Summary, and CPI data etc, <cit.><cit.>, as well as any data specific to individual companies, such as quarterly earnings releases <cit.>. It is
the interpretation of such data by portfolio managers/traders that tips the balance of the underlying supply and demand of stocks which propel their prices to move toward certain directions. We believe it is imperative that a trading algorithm captures data release events and understands the jumps and changes in dynamics that follow <cit.>.
Post Earnings Announcement Drift (PEAD) as a stock market anomaly has been well studied by renowned economists from Ball and Brown <cit.> to Fama and French <cit.> and many others. It is a phenomenon when stock price continues to drift up for firms that are perceived to have reported good financial results for the preceding quarter and drift down for firms whose results have turned out worse than the market had anticipated. <cit.>
investigated and illustrated the true predictability of PEAD based on a machine learning approach using a large dataset on earnings and financial metrics. Crucially, they highlighted the speed at which the market reacts to such information, causing the rapid vanishing of actionable trading signals. This led us to reconsider the prevailing practice in the literature: is relying primarily on stock price and volume time series, even when additionally augmented by other continual features such as twitter feeds and technical indicators <cit.>, adequate enough to grasp the abrupt shifts in market dynamics during periods of heightened volatility, particularly when influenced by specific data types?
We argue that by relying on such continual data, we might miss out on capturing valuable predictive patterns during crucial market events. Therefore, to ensure a more holistic and responsive trading strategy, there is a compelling case for incorporating diverse datasets and employing advanced machine learning techniques to decipher the full spectrum of market behaviours. In this paper, we present an exploratory study into algorithmic trading through earnings seasons combining regular intra-day stock minutely-frequency data with irregular heterogeneous earnings data which drives the PEAD. We devise a novel algorithmic trading model, Contrastive Earnings Transformer (CET), which fuses data of various characteristics and granularity together through representations with the help of a Transformer <cit.> and Contrastive Predictive Coding (CPC) <cit.>.
Traditionally, Long Short-Term Memory (LSTM) and its variants featured heavily in algorithmic trading <cit.><cit.> due to their natural fit with time series data, and this is particularly true when a single time series data is involved <cit.>. However, as level of data complexity went up, LSTM's performance was found to be unsatisfactory and could not represent the complex features of sequential data efficiently, particularly if long interval multivariate time series with high non-linearity were involved <cit.>. The emergence and proliferation of the Transformer model due to its groundbreaking architecture and remarkable performance across various domains quickly turned it into a more superior choice in place of LSTM in financial trading research <cit.><cit.>. In this research, we employ a Transformer as an integral component of our model design and rely on its self-attention mechanism to facilitate robust feature extraction and foster contextual understanding of earnings data dynamics with raw price movements by attending simultaneously to different parts of the regular stock data time series and irregular earnings data, enabling it to model complex dependencies effectively.
It is widely known that machine learning models that rely on the backpropagation technique and optimization algorithms such as stochastic gradient descent (SGD) can suffer from random weight initialization which might not be optimal and can impact the initial behavior and convergence of the model during training <cit.>. The Transformer model is not immune from it. In search of a remedy, our initial research found that previous works <cit.><cit.><cit.> had shown that, although the existence of intricate and infrequent movement patterns posed challenges for creating effective recognition models, integrating unsupervised learning into traditional pattern recognition systems led to promising outcomes, especially for feature extraction. Facing similar challenges with the diverse economic and price data and their inconsistent granularity and intensity, the proposed model adopts self-supervised pre-training to enable the model to learn powerful time-aware representations from the unlabeled price and earnings data.
A catalogue of mainstream techniques has been developed and found successful for self-supervised pre-training, such as autoencoders and its variants <cit.><cit.>, Masked Language Modelling (MLM), and contrastive learning, among a few others. MLM, which is used in BERT <cit.>, is perhaps the most famous and has been used to obtain state-of-the-art results on a wide array of natural language processing (NLP) tasks. However, initial assessment of this study determines that Contrastive Predictive Coding (CPC) <cit.>, a special type of contrastive learning model designed for sequential data, offers several advantages over MLM when working with time series data. Given the vast number of company stocks listed on major U.S. stock exchanges, our model needed to generalise well to unseen price sequences to be practically useful. MLM works better when the data is discrete (such as text data within a finite vocabulary), and its focus on predicting missing parts in masked positions may not explicitly encourage the model to capture broader temporal price patterns, potentially making it less robust in handling such time series prediction tasks. In contrast, CPC incorporates the notion of temporal context and learns the underlying patterns and structures by focusing on learning a useful representation of the data that maximises the similarity between the context vector and the future vector. This feature may enhance our model's ability to generalise and extrapolate its predictions to unseen stock time series and earnings data. Also, by focusing on predicting elements multiple time steps into the future, the model is encouraged to capture the underlying structure of the data that is less affected by noise or irrelevant variations. We have found this unique feature particularly important on a earnings release day when trading is typically most volatile.
In our exploration of algorithmic trading, we have taken a new approach to utilising earnings data by integrating CPC techniques. The task of melding the irregular release patterns of earnings data with high-frequency stock data posed significant challenges. The proposed solution is to turn to self-supervised learning. After a series of considered tests, we found that CPC emerged as a promising model to address our needs. The novel insights from this research include: (1) The CET model, through a series of experiments, showcases its potential in understanding earnings data, even when considering the variations across sectors or the diminishing relevance of the data over time. It offers a pathway for others to consider when dealing with irregular and vital data sources. (2) Our findings highlight the importance of earnings data in predicting stock movements. Even though its predictive potency may decrease as time progresses, its immediate impact on stock prices remains crucial. (3) One noteworthy observation was the adaptability of the CET model. As the influence of earnings data lessens over consecutive days, CET demonstrates an ability to adjust and refine its predictions, keeping pace with the evolving data. These discoveries enrich our understanding of algorithmic trading and pave the way for deeper investigations into the role of self-supervised models in financial decision-making. | §.§ Earnings Data and their impacts
Ever since <cit.> discovered Post Earnings Announcement Drift as a stock market anomaly in the 60s, quarterly financial earnings data have been playing an important role in various areas of financial analysis and forecast. Early machine learning-based studies by
<cit.> demonstrated the possibility of achieving excessive risk-adjusted returns when forecasting 12-month stock returns with a simple artificial neural network (ANN) using 61 financial ratios for 2352 Canadian stocks. Conducting fundamental and technical analysis together, both <cit.> and <cit.> jointly utilised financial data and technical signals in trading company shares and equity index and reported satisfactory returns. <cit.> analysed 29 330 different earnings call scripts between 2014 and 2017 using four different machine learning algorithms, and managed to achieve a low classification error rate and beat the S&P500 benchmark in simulated trading. Similar research on earnings call transcripts by <cit.> using a graph neural network reported reliable and accurate stock movement predictions and more importantly confirmed the overweighting of certain market-acknowledged variables such as Earnings-per-share (EPS). Both research studies demonstrated the predictive capabilities of auxiliary earnings information. Despite the success in both cases, we observe that their supervised learning's predictions were measured over days after earnings release. There was limited investigation into the immediate impact on intra-day price volatilities as market participants actively absorb and respond to the latest data that just came out.
§.§ Key Model Components
One of the core tasks of algorithmic trading lies in financial time series prediction, which has enjoyed great success with the help of various deep learning technologies. As seen in the study by <cit.>, while LSTM remains a key component in constructing time series forecasting models such as in <cit.>, recent trends show that advanced architectures like Transformers <cit.>, GANs, GNNs, and DQNs are increasingly being utilized for price forecasting tasks, oftentimes showing exceptional capabilities in accuracy and convergence efficiency <cit.>. This shift underscores how the financial industry is capitalizing on the latest developments in deep learning technologies.
With representation learning being proven to be an exceptionally valuable approach across many fields and applications, the Transformer model has emerged as a groundbreaking framework for this objective, with great success initially in natural language processing, as seen in <cit.><cit.> which demonstrated that Transformer layers were able to overcome the performance of traditional NLP models. On the time series data front, <cit.> developed a Multi-Transformer model which merged several multi-head attention mechanisms to produce the final output and demonstrated that merging Multi-Transformer layers with other models led to more accurate stock volatility forecasting models. <cit.> presented a transformer encoder-based multivariate time series representation learning framework with a focus on unsupervised pre-training, which was proven very successful extracting vector representations of unlabelled data. This particular result inspired us to introduce a pre-training phase to our model, except that we have specifically chosen self-supervised pre-training whose key idea is to employ a `pretext' task where the model is asked to predict some part of the data given the rest, creating some sort of `supervised signal' in the process which benefits downstream tasks. In fact, powerful Transformer-based models with self-supervised learning capabilities have been developed, such as GPT <cit.> and BERT <cit.>.
One very effective family of self-supervised representation learning is contrastive learning, based on which a lot of successful models have been developed, such as SimCLR <cit.> for visual representations, TimeCLR <cit.> for time series, and CPC <cit.> whose learnt representations have been found to deliver strong performance in numerous applications. For example, <cit.> utilise CPC on raw speech signals from a large unlabelled corpus to improve speech recognition performance on smaller labelled datasets. <cit.> used the CPC's InfoNCE loss to calculate the loss inside the module by dividing the deep neural network into a set of gradient isolation modules. <cit.> introduced a copula-based contrastive predictive coding (Co-CPC) method in order to address the issue of inadequate generalisation caused by uncertainty in data and models. Co-CPC considers the dependencies between a stock class, sector, and related macroeconomic variables, and learns stock representations from a micro perspective in a self-supervised manner. This allows for the mapping of stock characteristics to a generalised embedding space. <cit.> proposed the TS-TCC model which would treat the data with two different augmentations as a pair of positive samples, then map the augmented data to the latent space through a convolutional layer, and additionally learn the feature from latent space through the Transformer module. | null | null | null | This research, centred on the predictive potential of earnings data, offers crucial insights into the current capabilities of stock prediction models. Earnings data, usually a key signal for financial analysts, has a strong impact right after it's released. But our findings show that its ability to predict quickly decreases, highlighting how fast-changing and transient this information can be.
The introduction of the CPC and Transformer-powered Contrastive Earnings Transformer (CET) model marks a major step forward in this area. CET, unlike its contemporaries, showcases a consistent performance across varied industrial sectors and, crucially, manages to uphold the predictive relevance of earnings data over a more extended period. While other models, especially the supervised ones, struggled as the impact of earnings data diminished over time, CET remained effective. It not only maintained a steady prediction rate, but also demonstrated an ability to recalibrate based on evolving stock data patterns. In the ever-changing world of stock markets, where the release of earnings data is a significant event, the ability to adapt to shifting data trends in the days after the release becomes especially crucial.
In contrast, traditional supervised models, while showing promise in specific instances, struggled to maintain their efficacy as the earnings data aged. Their declining performance over days highlighted the potential limitations they might have in real-world trading scenarios. As we explore the intricate dynamics of stock predictions, the CET model distinguishes itself with its adaptability and expertise in leveraging earnings data, offering a promising avenue for trading through earnings seasons.
That being said, although it did not have much of an impact on the results of this study, noise in stock data can always have a major impact on machine learning models. The impact on this research would have been more pronounced if test data had not been intentionally sourced from a period with significant market volatility. One focus of future studies in this area should be exploring how well the CET model behaves under various market conditions and how it is influenced by different hyperparameters. Separately, the effectiveness of learning often depends on the selection of negative pairs, particularly if the dataset has underlying similarities across different sequences or time windows. This issue requires sophisticated negative sampling strategies to maintain stable learning efficacy over time, which is in itself a major future research direction. Lastly, as seen in section <ref>, high network complexity can hinder future up-scaling of the model, and this is an area requiring further optimization. Nevertheless, as the complexities of the financial domain expand, we hope the CET model will emerge as a promising benchmark for future advancements in stock market predictions.
http://arxiv.org/abs/2409.17902v1 | 20240926145020 | Designing Short-Stage CDC-XPUFs: Balancing Reliability, Cost, and Security in IoT Devices | [
"Gaoxiang Li",
"Yu Zhuang"
] | cs.CR | [
"cs.CR",
"cs.LG"
] |
Designing Short-Stage CDC-XPUFs: Balancing Reliability, Cost, and Security in IoT Devices
1st Gaoxiang Li
Department of Computer Science
Texas Tech University
Lubbock, TX 79409, USA
email address or ORCID
2nd Yu Zhuang
Department of Computer Science
Texas Tech University
City, Country
email address or ORCID
September 28, 2024
==============================================================================================================================================================================================================================================
§ ABSTRACT
The rapid expansion of Internet of Things (IoT) devices demands robust and resource-efficient security solutions. Physically Unclonable Functions (PUFs), which generate unique cryptographic keys from inherent hardware variations, offer a promising approach. However, traditional PUFs like Arbiter PUFs (APUFs) and XOR Arbiter PUFs (XOR-PUFs) are susceptible to machine learning (ML) and reliability-based attacks. In this study, we investigate Component-Differentially Challenged XOR-PUFs (CDC-XPUFs), a less explored variant, to address these vulnerabilities. We propose an optimized CDC-XPUF design that incorporates a pre-selection strategy to enhance reliability and introduces a novel lightweight architecture to reduce hardware overhead. Rigorous testing demonstrates that our design significantly lowers resource consumption, maintains strong resistance to ML attacks, and improves reliability, effectively mitigating reliability-based attacks. These results highlight the potential of CDC-XPUFs as a secure and efficient candidate for widespread deployment in resource-constrained IoT systems.
IoT security; XOR-PUF; CDC-XPUF; machine learning modeling attack
§ INTRODUCTION
The rapid expansion of Internet of Things (IoT) devices has made them integral to various industries and daily life. As these devices become ubiquitous, ensuring the security of communications within these diverse networks is important. Traditional cryptographic protocols, while robust, are often too resource-intensive for the constrained computational and power capabilities typical of IoT devices. This limitation creates a pressing need for lightweight, yet secure, authentication mechanisms tailored to the IoT landscape.
Physically Unclonable Functions (PUFs) <cit.> have emerged as a promising lightweight alternative for cryptography in such settings. By leveraging inherent manufacturing variations in integrated circuits, PUFs generate unique and physically unclonable responses to input challenges, making them ideal for device identification and authentication in IoT systems <cit.>. Furthermore, PUFs are generally classified into two categories: weak PUFs and strong PUFs <cit.>. Weak PUFs have a limited Challenge-Response Pair (CRP) space, making them suitable for cryptographic key generation. In contrast, strong PUFs like Arbiter PUFs (APUFs) and XOR Arbiter PUFs (XOR-PUFs) feature extensive CRP spaces ideal for challenge-response authentication protocols.
Despite their potential, APUFs are susceptible to machine learning (ML) attacks due to their linear response behavior. XOR-PUFs were introduced to enhance ML resistance by adding non-linearity through XOR gates. However, achieving robust security with XOR-PUFs often requires increasing the XOR size, which leads to greater complexity and higher power consumption, undesirable attributes for resource-constrained IoT devices <cit.>.
Reliability is another critical concern for both APUFs and XOR-PUFs. Because of their sensitivity to environmental changes and operational variability, these devices can produce inconsistent responses, undermining practical usage and opening avenues for reliability-based attacks <cit.>. In such attacks, adversaries exploit the variability in responses to the same challenge under different conditions to model and predict PUF behavior. Even though some advanced PUF architectures can resist conventional ML attacks, they often remain susceptible to reliability-based attacks. While certain protocols have been developed to prevent these attacks by limiting repeated CRP queries <cit.>, implementing such measures at a system-wide level is often costly and complex. This situation underscores the need for a more fundamentally secure approach that addresses reliability at the hardware level rather than relying on external protections.
These challenges underscore a gap in the current PUF design: the need for architectures that simultaneously address security vulnerabilities and the rigorous resource constraints of IoT devices. The ideal solution would enhance resistance to both ML and reliability-based attacks without suffering high hardware or energy costs.
In this study, we revisit traditional XOR-PUF designs and propose an enhanced framework that overcomes these limitations by integrating two key innovations:
* Implementation of a Pre-Selection Strategy: We introduce a novel pre-selection mechanism that improves the reliability of PUF responses by filtering and utilizing only the most stable and consistent CRPs. By doing so, we significantly enhance the device's robustness in diverse and challenging environments and mitigate the effectiveness of reliability-based attacks at the hardware level.
* Development of a Lightweight CDC-XPUF Design: We propose a lightweight design based on Component-Differentially Challenged XOR-PUFs (CDC-XPUFs) <cit.>, a less explored variant that assigns unique challenges to each component APUF. Our design strategically reduces the number of stages within each APUF component while increasing the number of components, effectively reducing hardware overhead and power consumption. Importantly, this configuration maintains high resistance to ML attacks due to increased non-linearity and preserves the extensive CRP space necessary for effective authentication protocols.
Our research demonstrates that, with these enhancements, traditional XOR-PUF architectures have considerable untapped potential and need not be discarded in favor of entirely new designs. By refining and augmenting existing frameworks, we achieve a balance between security and efficiency that aligns with the demands of resource-constrained IoT devices.
The contributions of this paper are as follows:
* We address the vulnerability to reliability-based attacks by introducing a pre-selection strategy that filters out unstable CRPs, resulting in near-perfect reliability without significant hardware modifications.
* We design a lightweight CDC-XPUF architecture that reduces hardware complexity and challenge transmission overhead, making it practical for IoT applications while maintaining a high level of security against ML attacks.
* We provide comprehensive experimental validations demonstrating the effectiveness of our approach in enhancing reliability, security, and efficiency. Our results show that the enhanced CDC-XPUF achieves a balance of performance metrics suitable for real-world deployment.
The remainder of this paper is organized as follows: Section 2 provides background information on PUFs and discusses related work. Sections 3 and 4 introduce the pre-selection strategy and its impact on reliability and security. Section 5 presents the design of the lightweight CDC-XPUF and details the integration of our proposed strategies. Section 6 describes the experimental setup and evaluates the performance of our design. Finally, Section 7 concludes the paper and discusses future research directions.
§ BACKGROUND INFORMATION ON PUFS
To provide a foundation for the technical discussions that follow, this section describes the mechanisms of Arbiter PUFs, XOR-PUFs, CDC-XPUFs, and the types of attacks they face, including conventional machine learning (ML) modeling attacks and reliability-based attacks.
§.§ The Arbiter PUFs
Figure <ref> showcases a simple representation of an Arbiter PUF. An n-bit Arbiter PUF consists of n stages, each containing two multiplexers (MUXs). Upon receiving a rising signal, the signal enters the Arbiter PUF at stage one and splits into two paths. These signals traverse through gates at each stage, with their propagation paths being determined by the challenge bit supplied to the multiplexers. Ultimately, the two signals reach the D flip-flop, which acts as an arbiter. The D flip-flop ascertains whether the signal on the upper path or the lower path arrived first, returning 1 if the upper path signal arrives first, otherwise, it returns 0.
The Arbiter PUFs satisfy the additive delay model <cit.>, which stipulates that the time it takes for each of the two signals to arrive at the arbiter is the summation of the delays incurred at all stages of the PUF. The response r of an arbiter is defined by:
r = Sgn(v(n) + ∑_i=1^n w(i)ϕ(i)) ,
where ϕ's are transformed challenge <cit.> given by:
ϕ(i) = (2c_i-1)(2c_i+1-1)⋯(2c_n-1),
with c_i being the challenge bit at stage i, v and w's being parameters quantifying gate delays at different stages, and Sgn(·) the sign function. This linear classification problem, represented by (<ref>), makes Arbiter PUFs vulnerable to machine learning attacks <cit.>.
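The additive delay model above translates directly into a small behavioural simulation (a sketch for illustration only, not an FPGA implementation; the Gaussian weight initialisation and the 64-stage length are assumptions):

import numpy as np

def transform_challenge(c):
    # phi(i) = (2c_i - 1)(2c_{i+1} - 1) ... (2c_n - 1), plus a constant 1 for the bias term v.
    parity = np.cumprod((2 * c - 1)[::-1])[::-1]
    return np.append(parity, 1.0)

def apuf_response(c, w):
    # w: n stage-delay parameters followed by the bias; response 1 iff the weighted sum is positive.
    return int(np.dot(w, transform_challenge(c)) > 0)

rng = np.random.default_rng(0)
n = 64
w = rng.normal(size=n + 1)                 # one simulated APUF instance
challenge = rng.integers(0, 2, size=n)
print(apuf_response(challenge, w))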
§.§ The XOR-PUFs
Arbiter PUFs exhibit weak resistance against ML modeling attacks, prompting the development of XOR Arbiter PUFs, which combine multiple Arbiter PUFs through a non-linear XOR gate to yield the final response. This design is illustrated in Figure <ref> as an n-bit 3-XOR-PUF. A k-XOR-PUF comprises k component Arbiter PUFs, referred to as streams or sub-challenges, with the responses of all component Arbiter PUFs combined via an XOR gate to produce a single-bit response. Notably, all component Arbiter PUFs in an XOR-PUF are fed identical challenge bits.
The response of an n-stage k-XOR arbiter PUF can be expressed as:
r = ⊕_j = 1 … k r_j,
where r_j is the internal output of the j^th component arbiter PUF. The XOR operation increases the non-linearity of the relationship between the response r and the transformed challenges ϕ's.
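Reusing the apuf_response helper and the simulated instance from the sketch above, the XOR combination can be modelled as follows (illustrative only):

def xor_puf_response(c, component_weights):
    # Every component Arbiter PUF receives the identical challenge c;
    # the single-bit outputs are combined by XOR.
    r = 0
    for w in component_weights:
        r ^= apuf_response(c, w)
    return r

# Example: a simulated 3-XOR-PUF built from three independent component instances.
components = [rng.normal(size=n + 1) for _ in range(3)]
print(xor_puf_response(challenge, components))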
Studies have shown that XOR-PUFs achieve superior modeling attack resistance compared to Arbiter PUFs, particularly when integrated with a lockdown scheme for mutual authentication to eliminate open-access interfaces <cit.>. Nonetheless, the extension of the number of components escalates both the cost and power consumption of a PUF, a critical consideration for resource-constrained IoT devices. Moreover, increasing the number of streams can detrimentally impact the reliability of PUFs, heightening their susceptibility to reliability-based side-channel attacks <cit.>.
§.§ The Component-Differentially Challenged XOR PUFs (CDC-XPUFs)
CDC-XPUFs are an advanced variant of XOR-PUFs that enhance security by providing different challenge bits to each component Arbiter PUF. Unlike traditional XOR-PUFs, where all components receive the same challenge, CDC-XPUFs assign unique challenges to each component, increasing complexity and resistance to ML attacks. While CDC-XPUFs have been mentioned in some literature <cit.>, comprehensive investigations into their capabilities and limitations are still lacking.
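In simulation terms, the only change with respect to the XOR-PUF sketch above is that each component receives its own challenge (again a behavioural model for illustration, not the hardware design):

def cdc_xpuf_response(challenges, component_weights):
    # challenges: one independent n-bit challenge per component Arbiter PUF.
    r = 0
    for c, w in zip(challenges, component_weights):
        r ^= apuf_response(c, w)
    return r

# Example: a 3-component CDC-XPUF, so 3 * n challenge bits must be supplied per response.
per_component_challenges = [rng.integers(0, 2, size=n) for _ in range(3)]
print(cdc_xpuf_response(per_component_challenges, components))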
The adoption of CDC-XPUFs in security systems, especially for IoT, encounters critical challenges that prevent their widespread deployment:
* Increased Challenge Transmission: Each component in a CDC-XPUF receives a unique set of challenge bits, significantly increasing the total number of bits that must be transmitted to and processed by the PUF. This leads to higher power and computational demands, which conflicts with the low-power requirements typical of many IoT applications.
* Vulnerability to Reliability-Based Attacks: Despite their advanced design, CDC-XPUFs are more susceptible to reliability-based ML attacks than conventional XOR-PUFs <cit.>. These attacks exploit variability in PUF responses, analyzing them to uncover exploitable patterns <cit.>, thus posing a severe security risk.
§.§ Machine Learning Modeling Attacks
§.§.§ Conventional ML modeling attack
Machine learning has become a primary tool for analyzing the security of PUFs, enabling attackers to create predictive models with high accuracy <cit.>. Attackers can obtain data to build these models by eavesdropping on communications between the PUF and a server, gaining access to a large set of uniformly random challenges and their corresponding responses. This type of attack, known as a conventional CRP attack, relies solely on the PUF's CRPs and has been extensively studied.
Various ML attack methods have been proposed, including Support Vector Machines (SVM), logistic regression (LR) with resilient backpropagation (RProp), and neural networks (NNs) <cit.>. Currently, the most effective approach for PUF modeling attacks utilizes a neural network architecture with the hyperbolic tangent (tanh) as the hidden layer activation function and the sigmoid function as the output activation function <cit.>.
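As a concrete illustration, a minimal attack model of this kind could look as follows (the layer widths, learning rate, and 64-stage challenge length are our assumptions, not values taken from the cited works):

import torch
import torch.nn as nn

# Input: transformed challenge features (64 parity features plus the constant bias feature).
attack_model = nn.Sequential(
    nn.Linear(65, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1), nn.Sigmoid(),
)
loss_fn = nn.BCELoss()
opt = torch.optim.Adam(attack_model.parameters(), lr=1e-3)

def attack_train_step(phi, r):
    # phi: (batch, 65) transformed challenges; r: (batch,) observed responses in {0, 1}.
    opt.zero_grad()
    loss = loss_fn(attack_model(phi).squeeze(1), r.float())
    loss.backward()
    opt.step()
    return loss.item()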
§.§.§ Reliability-based ML modeling attack
In contrast to conventional ML attacks, reliability-based ML modeling attacks require attackers to access the reliability information of the target PUF by repeatedly applying the same challenges to obtain multiple responses. This approach significantly reduces the security level of PUFs and does not rely on side-channel information such as power consumption or timing measurements, which would necessitate physical access to the PUF and complex embedding devices.
Real-world PUFs inherently exhibit some degree of unreliability due to environmental variations and electronic noise, exacerbating their susceptibility to reliability-based attacks—even when the PUF is specifically engineered to withstand conventional ML assaults. This practicality makes reliability-based attacks a serious concern. Therefore, a PUF design that only prevents conventional modeling attacks is insufficient to guarantee security; resistance to reliability-based modeling attacks is also a crucial metric for evaluating PUF designs.
There have been several proposed reliability-based ML attack methods for breaking PUF:
CMA-ES Attack: Becker <cit.> introduced an attack modeling the weights of APUF components using the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). While effective against simpler PUF designs, it is slow and cannot break more complex architectures like the iPUF. Gradient-Based Logistic Regression Method: Tobisch et al. <cit.> proposed an attack that simultaneously models the weights of all APUF components and their reliability information using a gradient-based logistic regression method, with constraints to avoid converging to identical components. However, this approach requires a fully differentiable model of the target PUF, limiting its applicability. Multiclass Side-Channel Attack (MSA): Liu et al. <cit.> introduced an attack employing a neural network with outputs constructed through "feature crossing." The number of output bits is determined by the product of the possible values of each output feature category, including the PUF response, a reliability measurement, and a power consumption measurement. While the method can work without power consumption data, no experimental study was conducted using only response and reliability information. Multi-Label Multi-Side-Channel Attack (MLMSA): Gao et al. <cit.> proposed a neural network attack with different groups of output bits representing various types of information—a one-bit group for the PUF response and additional groups for different side-channel information types. Auxiliary Learning Side-Channel Attack (ALScA): The ALScA method <cit.>, similar to MLMSA, uses response data, a reliability measurement, and a power measurement. However, ALScA employs different sub-networks for each group of output bits.
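Common to these attacks is a pre-processing step that turns repeated measurements into a per-challenge reliability value; a simple sketch of such a step (our illustration of the general idea, not the exact statistic used in any one of the cited works) is:

import numpy as np

def reliability_information(repeated_responses):
    # repeated_responses: (num_challenges, T) array of T evaluations of the same challenges.
    # Challenges whose responses flip between evaluations reveal a small |Delta D| and are
    # exactly what reliability-based attacks exploit.
    T = repeated_responses.shape[1]
    ones = repeated_responses.sum(axis=1)
    return np.abs(T / 2.0 - ones)      # small value -> unstable CRP, large value -> stable CRP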
§ A PRE-SELECTION STRATEGY TO OVERCOME RELIABILITY-BASED ATTACKS
§.§ PUF Reliability
Despite significant advancements aimed at enhancing ML resistance, current PUF implementations such as XOR-PUFs with large XOR sizes and CDC-XPUFs remain vulnerable to reliability-based attacks. These attacks pose a critical threat as they exploit the inherent variability in the physical properties of PUFs, leading to inconsistent outputs that can undermine security.
Although certain protocols have been developed to prevent reliability-based attacks by limiting repeated CRP queries<cit.>, implementing these protocols at a system-wide level is often costly and complex. This has prompted a need for a more fundamentally secure approach that addresses reliability at the hardware level rather than relying on external protections.
To overcome these vulnerabilities, we propose a new design strategy for PUFs that enhances reliability directly through hardware modifications. This approach begins by investigating the core of unreliability issues—specifically, the variations in delay models under different operational conditions.
§.§ Delay Difference and PUF Reliability
In Arbiter PUFs, the core determinant of the binary response is the delay difference (Δ D) between two competing signal paths. These paths are modulated by the binary `challenge' applied to the PUF. The delay difference Δ D is defined as the difference between the delay encountered by the signal on the upper path (d_upper) and the lower path (d_lower), formally expressed as:
Δ D = d_upper - d_lower
The binary response (r) of the PUF is determined by the sign of Δ D. In the absence of electronic noise and other perturbations, the response is given by:
r = Sign(Δ D)
Where:
* r = 1 if Δ D < 0 (i.e., the signal on the lower path is faster).
* r = 0 otherwise (i.e., the signal on the upper path is faster).
In practical scenarios, electronic noise and environmental variations introduce a noise term (d_noise), which affects the delay measurements. The response with noise can then be modeled as:
r = Sign(Δ D ± d_noise)
Where d_noise represents the variability introduced by noise, and the `±' sign indicates that noise can either increase or decrease the measured delay difference.
A critical observation is that the reliability of the PUF response heavily relies on the magnitude of Δ D. If Δ D is substantial, it is highly unlikely that the noise term d_noise will be sufficient to alter the sign of Δ D, thus preserving the integrity of the response. Conversely, when Δ D is minimal, even slight noise can result in erroneous responses, rendering such CRPs unreliable.
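This effect is easy to reproduce numerically; the sketch below (with an arbitrary noise level chosen purely for illustration) estimates how often Gaussian noise flips the response for different delay differences:

import numpy as np

rng = np.random.default_rng(1)

def flip_rate(delta_d, sigma_noise=0.05, trials=100000):
    # Fraction of trials in which d_noise changes the sign of Delta D, i.e. flips the response.
    noisy = delta_d + rng.normal(scale=sigma_noise, size=trials)
    return float(np.mean(np.sign(noisy) != np.sign(delta_d)))

for delta_d in (0.005, 0.02, 0.10):   # illustrative delay differences (arbitrary units)
    print(delta_d, flip_rate(delta_d))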
One might consider employing a majority voting technique to diminish the effect of noise and increase response reliability. However, recent research <cit.> has demonstrated that PUFs enhanced with majority voting can still be compromised by reliability-based attacks. Majority voting cannot fundamentally resolve unreliability issues when Δ D is close to zero. In such scenarios, even minor noise can flip the response, and aggregating inherently unstable responses yields marginal reliability enhancement. Consequently, PUFs enhanced with majority voting can still be vulnerable to reliability-based attacks that exploit residual variability in responses near the decision boundary. This vulnerability underscores the need for a more robust solution that addresses reliability at the hardware level.
§.§ Pre-selection strategy to improve reliability
Building on the observation that the reliability of PUF responses is significantly influenced by the magnitude of Δ D, large values of Δ D ensure that the noise term d_noise is unlikely to alter the outcome of Sign(Δ D), leading to stable and reliable responses. This critical observation motivates the implementation of a pre-selection strategy wherein CRPs are evaluated and selected based on the robustness of their delay differences. Only those CRPs where |Δ D| exceeds a predefined threshold are utilized, significantly enhancing the PUF's reliability by mitigating noise-induced response fluctuations.
To operationalize the pre-selection strategy, we introduce additional delay elements into the traditional PUF architecture to selectively manipulate the delay paths based on the observed Δ D values. The following figure <ref> illustrates this implementation:
To enhance the reliability of responses, we implement a pre-selection strategy that involves evaluating three potential responses for each challenge:
* Response1: Sign(d_upper - d_lower)
* Response2: Sign(d_upper + D_Delay module - d_lower)
* Response3: Sign(d_upper - (d_lower + D_Delay module))
CRPs are selected for use only if all three responses are consistent, i.e., they form "000" or "111". This condition ensures that the absolute difference |d_upper - d_lower| is greater than the delay introduced by the delay module, indicating a robust delay difference that is likely to be resilient to noise impacts. This setup ensures that only the CRPs with significant delay differences, and thus higher reliability, are selected for security applications.
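Behaviourally, the selection rule can be sketched as follows (a simplified software model of the circuit; d_delay stands for the delay added by the module, and the numeric values are placeholders):

def preselect(d_upper, d_lower, d_delay):
    # The three arbiter decisions of the scheme above; a CRP is kept only when all
    # three agree ("000" or "111"), which requires |d_upper - d_lower| > d_delay.
    r1 = d_upper - d_lower > 0
    r2 = d_upper + d_delay - d_lower > 0
    r3 = d_upper - (d_lower + d_delay) > 0
    keep = (r1 == r2 == r3)
    return keep, int(r1)

# Example: a CRP whose delay difference is smaller than d_delay is rejected.
print(preselect(1.03, 1.00, 0.05))   # (False, 1) -> unreliable, discarded
print(preselect(1.10, 1.00, 0.05))   # (True, 1)  -> reliable, kept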
§ PRE-SELECTION STRATEGY EXPERIMENTAL VALIDATION AND DISCUSSION
This section validates the effectiveness of the pre-selection strategy introduced to mitigate reliability-based attacks in PUF designs. Our experiments utilize two types of FPGA boards—Artix-7 A7-35T and NEXYS A7-50T—to demonstrate the robustness of the proposed strategy across different hardware implementations.
§.§ Experimental Setup
A total of 100 million CRPs were generated for each PUF instance using a pseudorandom number generator, defined by the equation:
C_n+1 = (a × C_n + g) mod m,
where C represents the sequence of generated numbers, a is a multiplier, g a constant adder, and m is 2^K, with K being the number of stages. CRPs were selectively filtered to include only those producing uniform "000" or "111" responses, ensuring significant delay differences (Δ D) and enhanced reliability.
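A software sketch of this generator follows (the seed, multiplier, and adder below are placeholders, since the paper does not specify them):

def lcg_challenges(seed, a, g, k, count):
    # C_{n+1} = (a * C_n + g) mod 2^k; each state is unpacked into a k-bit challenge.
    m = 1 << k
    c, challenges = seed, []
    for _ in range(count):
        c = (a * c + g) % m
        challenges.append([(c >> i) & 1 for i in range(k)])
    return challenges

# Example with placeholder constants for a 64-stage PUF.
print(lcg_challenges(seed=12345, a=6364136223846793005, g=1442695040888963407, k=64, count=2))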
Two configurations were tested to explore their impact on PUF performance:
1. Modules with two or four NOT gates.
2. Modules with one, two, or three AND gates.
These configurations introduced controlled delays in the signal paths, allowing the examination of their effects on PUF performance under varied conditions.
The PUF architectures were designed using VHDL within the Xilinx Vivado 15.4 HL Design Edition software suite, and implemented on FPGAs using Tool Command Language scripts for precise placement. Communication between the FPGA and the test environment was managed via the Xilinx Software Development Kit, using AXI GPIO interfaces for challenge submissions and response collections, and AXI UART for data transmission at a rate of 230,400 bits per second, facilitated by Tera Term.
Reliability tests were conducted 10,000 times per PUF instance, maintaining the operating voltage at 2.0W and ambient temperature around the Xilinx Artix®-7 FPGA at 26.0^∘C, with a thermal margin of 59.0^∘C (12.3W).
§.§ Evaluation Metrics for PUF Performance
Uniqueness:
Introduced by Hori et al. <cit.>, uniqueness measures the ability of PUFs to produce distinct responses across devices for the same challenges. It is quantified as:
HU_k = 4/(N_r × N^2) ∑_i=1^N_r∑_j,m=1, j≠m^N (b_j,i⊕ b_m,i),
where N is the number of chips, N_r is the response length, and b_j,i, b_m,i are the i-th response bits from the j-th and m-th PUF instances, respectively.
Reliability (BER):
The Bit Error Rate (BER) quantifies the consistency of PUF outputs under identical conditions and is defined as:
BER = 1 - R = (N - ∑_i=1^N (b_i == b_ref))/N,
where a lower BER indicates higher reliability.
Randomness:
Randomness evaluates the balance of '0's and '1's in PUF responses to ensure unpredictability:
p = 1/N_r∑_i=1^N_r b_i,
H = -log_2max(p, 1 - p),
where N_r is the total number of responses, and b_i is a response bit.
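For completeness, the three metrics can be computed from recorded response bits as in the following sketch (responses are assumed to be 0/1 integer arrays; this is our illustration, not the evaluation scripts used in the experiments):

import numpy as np

def bit_error_rate(repeated, reference):
    # repeated: (N, L) repeated evaluations; reference: (L,) reference response.
    return float(np.mean(repeated != reference))

def randomness(bits):
    p = float(np.mean(bits))
    return float(-np.log2(max(p, 1.0 - p)))

def uniqueness(responses_per_chip):
    # responses_per_chip: (N, N_r) responses of N chips to the same N_r challenges.
    N, N_r = responses_per_chip.shape
    total = 0
    for j in range(N):
        for m in range(N):
            if j != m:
                total += int(np.sum(responses_per_chip[j] ^ responses_per_chip[m]))
    return 4.0 * total / (N_r * N * N)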
§.§ Experimental Results
The experimental results are summarized in Table <ref>, demonstrating that the pre-selection strategy significantly enhances the reliability of the PUFs. This is evident from the substantial reduction in the BER and the maintenance of high levels of uniqueness and randomness in the responses. These improvements are indicative of the strategy's effectiveness in enhancing the operational reliability of PUFs under varied conditions.
To quantify the effectiveness of our pre-selection strategy, we define the "CRP Selection Rate" (R) as the percentage of CRPs that result in uniform responses of "000" or "111", reflecting the robustness of the response against noise and variations. This rate for APUF is calculated as follows:
R = (Number of CRPs meeting the criteria/Total CRPs generated) × 100%
This metric directly measures the efficacy of the pre-selection strategy in filtering out unreliable CRPs, ensuring that only the most stable and reliable responses are utilized for security applications.
Going a step further, configurations utilizing two delay gates emerged as particularly effective, achieving an optimal balance between securing reliability and maintaining an acceptable selection rate. Specifically, two delay gates ensured enhanced reliability of the PUF responses, with approximately 5% of CRPs being retained as reliable. This rate is considered acceptable given that the PUFs tested are capable of providing an exponentially large CRP space, ensuring a sufficient number of CRPs remain usable for robust authentication and security applications.
Conversely, while one delay gate did increase reliability compared to no delay modification, the improvements were not substantial enough, and the CRPs selected with one delay gate are still not sufficiently reliable. The configuration with four delay gates, although potentially offering the highest level of reliability, resulted in an overly restrictive selection rate of merely 0.06% to 0.2% of CRPs. Such a low rate severely limits the practical usability of the PUF, as it drastically reduces the number of available CRPs to a point that may not support diverse operational requirements, especially in scenarios demanding high-frequency authentication.
§.§ Discussion on Applicability to XOR-PUFs
For XOR-PUFs, which combine the outputs from multiple Arbiter PUF components via XOR operations, the CRP Selection Rate (R_XOR-PUF) is determined by the intersection of reliable CRPs across all involved components. This approach is necessary because the XOR-PUF's final response is reliable only if all component PUFs simultaneously provide stable outputs ("000" or "111"). The selection rate for the XOR-PUF is thus given by the proportion of CRPs that are commonly reliable across all components:
R_XOR-PUF = Number of CRPs reliable in all components/Total CRPs generated
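In code, the XOR-PUF selection rate is the fraction of challenge indices whose per-component pre-selection masks are simultaneously true. The sketch below is illustrative only; the 5% per-component rate is taken from the APUF discussion above, and the independence assumption is ours.

```python
import numpy as np

def xor_puf_selection_rate(component_masks):
    """component_masks: list of boolean arrays over the same challenges,
    one per APUF component; True marks a CRP that passed pre-selection."""
    jointly_reliable = np.logical_and.reduce(component_masks)
    return 100.0 * jointly_reliable.mean()

# If each of k components independently retained ~5% of CRPs, the joint
# rate shrinks roughly like 100 * 0.05**k, which is why larger XOR sizes
# leave almost no usable CRPs.
rng = np.random.default_rng(3)
masks = [rng.random(1_000_000) < 0.05 for _ in range(3)]
print(xor_puf_selection_rate(masks))   # about 100 * 0.05**3 = 0.0125% in expectation
```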
The experimental results presented in Table <ref> demonstrate that while the pre-selection strategy significantly enhances the reliability of APUFs and XOR-PUFs with smaller XOR sizes, the strategy becomes less effective as the XOR size increases. Due to the inherent design in XOR-PUFs, the variance in component reliability makes it difficult to apply a uniform pre-selection strategy effectively; a CRP that is reliable for one component may not be reliable for another, reducing the overall utility of the strategy. Notably, the selection rate drastically reduces for XOR-PUFs with more than two XOR gates, indicating that as XOR size increases to improve ML resistance, the applicability of the pre-selection strategy diminishes.
This presents a critical concern: PUF architectures with smaller XOR sizes, although amenable to reliability enhancements through pre-selection, are vulnerable to ML attacks. Conversely, larger XOR sizes, while resistant to ML attacks, yield a selection rate too low to be practical under the pre-selection strategy. This observation underscores the need for a PUF design that combines robust ML attack resistance with the capability to implement pre-selection reliability-enhancing strategies.
§.§ Pre-Selection Strategy for CDC-XPUF
Given the limitations of traditional XOR-PUFs, particularly their vulnerability to ML attacks when configured with smaller XOR sizes, we explored the potential of the CDC-XPUF architecture. CDC-XPUFs present a robust architecture that supports differential challenges across multiple components, enhancing resistance to ML attacks.
The architectural complexity of CDC-XPUFs allows for more flexible management of reliability across different components. This adaptability makes them ideal for integrating our pre-selection strategy, which is designed to maintain high rates of reliable CRPs without sacrificing resistance to ML attacks.
Table <ref> presents the results from our experimental validation, showing the performance metrics of CDC-XPUFs under different delay module configurations. These results demonstrate the efficacy of the pre-selection strategy, showing substantial improvements in BER while maintaining acceptable uniqueness and randomness across various configurations of CDC-XPUFs. With two NOT gates applied in the delay module, the CDC-XPUFs performed with perfect reliability.
§.§ Mitigating Reliability-Based Attacks with CDC-XPUF
The pre-selection strategy implemented in the CDC-XPUF design ensures the utilization of only robust and unique CRPs across different components. This strategy significantly reduces the risk of attackers exploiting the inherent variability and unreliability typically associated with PUF responses.
Reliability-based modeling attacks exploit the variability in PUF responses by repeatedly applying the same challenges to observe fluctuating outputs. Our strategy counters this vulnerability by selecting CRPs that demonstrate inherently stable and near-perfect reliability, effectively negating the possibility of reliability-based attacks.
Since only CRPs with consistent responses are used, the fundamental requirement for reliability-based attacks—response variability—is removed. Given the near-perfect reliability achieved, further testing specifically for resistance to reliability-based attacks may not be necessary.
By rigorously filtering out any CRPs susceptible to noise and environmental variations, the CDC-XPUF architecture ensures that the PUF responses are highly consistent. This approach not only enhances the overall reliability of the device but also drastically reduces the attack surface that could potentially be exploited by adversaries focusing on reliability weaknesses. Thus, the CDC-XPUF offers a significant advancement in PUF technology by addressing one of the most challenging vulnerabilities in traditional PUF designs.
§ DESIGN OF LIGHTWEIGHT CDC-XPUFS
The deployment of CDC-XPUFs in IoT devices faces significant challenges due to increased challenge transmission requirements and hardware demands. These issues limit the practical utility of CDC-XPUFs for resource-constrained IoT applications.
Our pre-selection strategy effectively addresses the vulnerability to reliability-based attacks by filtering out unstable CRPs, significantly enhancing the reliability of PUF responses. This approach not only reduces susceptibility to such attacks but also eliminates potential reliability side-channel patterns.
Building on this foundation, we focus on reducing the increased challenge transmission and hardware overhead associated with CDC-XPUFs, while maintaining strong resistance to machine learning modeling attacks. To achieve this, we examine the factors that impact the ML attack resistance of APUF-based designs.
§.§ Factors Impacting ML Modeling Attack Resistance of APUF-Based PUFs
Two primary factors influence the resistance of APUF-based PUFs to ML modeling attacks:
* Number of Stages within Each Arbiter PUF Component: Inside the arbiter PUF component, the response r is determined by the additive delay model <cit.>, where the total delay is the sum of delays across all stages. This model (<ref>) represents a linear classification problem, making it inherently susceptible to linear ML attacks (a minimal simulation of this model is sketched after this list). Increasing the number of stages can marginally enhance resistance by expanding the feature space, but the linear nature of the model remains a vulnerability.
* Number of Arbiter PUF Components (XOR Size): Combining multiple Arbiter PUF components through XOR operations increases the non-linearity between the response r and the transformed challenges ϕs. This added complexity significantly complicates ML models attempting to predict the PUF responses. Empirical studies have shown that increasing the number of components has a more pronounced effect on enhancing ML attack resistance compared to increasing the number of stages <cit.>.
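To make the first point concrete, the following sketch simulates a single APUF through the standard parity-feature form of the additive delay model, in which the response is the sign of a linear function of transformed challenge bits; the random weight vector is a toy stand-in for physical stage delays, not measured data.

```python
import numpy as np

def parity_features(challenges):
    """Map 0/1 challenges of shape (num_crps, n_stages) to the transformed
    vectors phi of the additive delay model: phi_i is the product of
    (1 - 2*c_j) over stages j >= i, plus a constant bias feature."""
    signs = 1 - 2 * np.asarray(challenges, dtype=np.int8)   # {0,1} -> {+1,-1}
    phi = np.cumprod(signs[:, ::-1], axis=1)[:, ::-1]       # suffix products
    return np.hstack([phi, np.ones((phi.shape[0], 1))])

def apuf_response(challenges, delay_weights):
    """Response = sign of the total delay difference, a linear function of phi."""
    return (parity_features(challenges) @ delay_weights > 0).astype(np.uint8)

rng = np.random.default_rng(4)
delay_weights = rng.normal(size=64 + 1)      # toy stand-in for stage delays
challenges = rng.integers(0, 2, size=(5, 64))
print(apuf_response(challenges, delay_weights))
```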
§.§ Proposed Lightweight CDC-XPUF Design
Guided by these insights, we propose a lightweight CDC-XPUF design that balances high attack resistance with reduced hardware overhead. Our approach involves increasing the number of components while decreasing the number of stages within each component. This strategy achieves enhanced ML attack resistance with lower resource consumption, making it ideal for resource-constrained IoT environments.
Key features of the proposed design include:
* Reduced Stage Count per Component: By decreasing the number of stages in each Arbiter PUF component, we reduce the hardware required for each component, leading to overall lower resource usage and power consumption.
* Increased Number of Components: Adding more Arbiter PUF components increases the non-linearity and complexity of the overall PUF response, significantly enhancing resistance to ML attacks.
* Distinct Challenges per Component: Unlike traditional XOR-PUFs, CDC-XPUFs receive unique challenge inputs for each component. This dramatically enlarges the potential CRP space without necessitating longer challenge bits for each component.
For example, a 16-stage CDC-XPUF with 4 components can support a CRP space of (2^16)^4 = 2^64, vastly greater than the 2^16 CRP space of a similarly configured XOR-PUF. This distinction allows CDC-XPUFs to benefit from reduced stage counts without sacrificing the expansive CRP space needed for effective identification and authentication protocols.
By carefully balancing the number of stages and components, our lightweight CDC-XPUF design achieves enhanced ML attack resistance, reduced hardware overhead, and efficient challenge transmission.
§ INTEGRATION AND EVALUATION OF THE NEW LIGHTWEIGHT CDC-XPUF DESIGN
§.§ Strategic Integration of Design Innovations
Building upon the individual strengths of the pre-selection strategy and the architectural innovations of shorter-stage CDC-XPUFs, we propose a unified design that incorporates both elements to address the dual challenges of high hardware overhead and vulnerability to ML modeling attacks.
In theory, by applying the pre-selection strategy within the shorter-stage CDC-XPUF framework, we enhance the PUF's resistance to noise and environmental variability while maintaining a large and secure CRP space. This method ensures that despite the reduced number of stages, the security and reliability of the PUF responses are not compromised. Furthermore, increasing the number of XORed APUF components within each PUF instance enriches the non-linearity, thereby preserving resistance against advanced ML attacks.
§.§ Experimental Evaluation
§.§.§ Evaluation Setup
To validate the efficacy of the integrated CDC-XPUF design, we conducted a series of experiments using the same FPGA board configuration as in the pre-selection experiments.
In the CRP generation process, we first implement the arbiter PUF components and the delay module (two NOT gates) of the CDC-XPUFs on the FPGA board and apply the pre-selection strategy to build a reliable CRP sub-dataset for each component. After collecting the CRPs of each component, we combine these sub-datasets into one integral dataset per lightweight CDC-XPUF. These integral datasets are used to evaluate the PUF's performance across various metrics, including reliability, randomness, uniqueness, hardware cost, and resistance to ML modeling attacks.
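The exact combination procedure is not spelled out here, so the sketch below assumes one simple possibility: equal-length reliable sub-datasets are zipped component-wise, the per-component challenges are concatenated, and the (single-bit) component responses are XORed to form the CDC-XPUF response.

```python
import numpy as np

def combine_component_crps(component_crps):
    """component_crps: list of (challenges, responses) pairs, one per APUF
    component, each with challenges of shape (K, n_stages) and 0/1 responses
    of shape (K,). Returns CDC-XPUF challenges of shape
    (K, n_components * n_stages) and XOR-combined responses of shape (K,)."""
    challenges = np.hstack([c for c, _ in component_crps])
    responses = np.bitwise_xor.reduce(
        np.vstack([r for _, r in component_crps]), axis=0)
    return challenges, responses

# Toy example: 3 components, 4 reliable CRPs each, 8-stage challenges.
rng = np.random.default_rng(5)
subs = [(rng.integers(0, 2, size=(4, 8)), rng.integers(0, 2, size=4))
        for _ in range(3)]
chal, resp = combine_component_crps(subs)
print(chal.shape, resp)   # (4, 24) and a length-4 response vector
```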
Resistance to ML modeling attacks is evaluated by training attack models on a set of CRPs and then predicting the responses to fresh challenges. In every attack, the number of CRPs used for training starts at a minimal level and gradually increases until a threshold (specified in the "Training Size" column of the corresponding table) is reached that either yields a 90% attack success rate across all twenty PUF instances or results in failure upon reaching 100 million CRPs. Note that only attacks with a testing accuracy exceeding 90% are considered successful.
§.§.§ Evaluation Tools for Modeling Attack Resistance:
When evaluating the modeling attack resistance of PUFs, it is crucial that the evaluation tools possess robust attacking capabilities to make meaningful claims about the security of the PUFs under consideration. In the context of PUF modeling attacks, two methodologies have gained prominence for evaluating resistance, namely the Neural Network (NN) method <cit.> and the Logistic Regression (LR)-based method <cit.>. Our recent research <cit.> indicates that the adapted LR-based method outperforms the NN method in attacking traditional CDC-XPUFs with extensive stage lengths. However, in this study, we retain both methods to ensure a comprehensive evaluation of the lightweight CDC-XPUFs.
To facilitate comprehension and reproducibility, we list the parameters for the NN attack method in Table <ref> and for the LR-based attack method in Table <ref>.
Although we apply both evaluation methods, the experimental results section reports only the minimum number of CRPs required by either method to break a given PUF.
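As a self-contained toy version of the escalating-training-size protocol described above, the sketch below runs a logistic-regression attack on parity-transformed challenges against a software-simulated APUF. The sizes and caps are chosen only so the example runs quickly and do not reproduce the attack configurations reported in the tables.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def phi(c):  # parity-transformed challenges plus a bias feature
    s = np.cumprod(1 - 2 * c[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([s, np.ones((c.shape[0], 1))])

rng = np.random.default_rng(6)
n_stages = 64
w = rng.normal(size=n_stages + 1)                 # software-simulated APUF
test_c = rng.integers(0, 2, size=(10_000, n_stages))
test_r = (phi(test_c) @ w > 0).astype(int)

train_size = 200
while train_size <= 200_000:                      # toy cap instead of 100M CRPs
    c = rng.integers(0, 2, size=(train_size, n_stages))
    r = (phi(c) @ w > 0).astype(int)
    acc = LogisticRegression(max_iter=2000).fit(phi(c), r).score(phi(test_c), test_r)
    print(f"{train_size:>7} training CRPs -> test accuracy {acc:.3f}")
    if acc > 0.90:                                # the 90% success criterion
        break
    train_size *= 10
```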
§.§.§ Evaluating Hardware Cost of Lightweight CDC-XPUFs
When evaluating the hardware cost of PUF designs, it is essential to have a metric that provides a comprehensive and standardized measure of circuit complexity. Gate Equivalents (GE) is a widely accepted metric in the industry for this purpose. It quantifies the complexity of a digital circuit in terms of the equivalent number of basic gates, such as 2-input NAND gates, that would be needed to implement the same functionality. This makes GE a technology-independent metric that allows for meaningful comparisons across different fabrication processes.
One of the primary advantages of using Gate Equivalents as a metric is that it standardizes the measurement, making it universally applicable for comparing different designs and technologies. Moreover, it offers a more holistic view of complexity than counting specific components like multiplexers and arbiters, as GE accounts for all the logic gates required in the implementation. Furthermore, as the physical size and performance of components like MUXs and Arbiters can vary with fabrication technology, using GE allows comparisons that are not biased by these variations. Additionally, having insight into the number of gate equivalents can offer a better understanding of the trade-offs between design complexity, power consumption, and performance.
In this study, we adopt Gate Equivalents as the primary metric for evaluating the hardware cost of various PUF designs, including CDC-XPUF. This choice aims to present a more objective and comprehensive analysis of the hardware requirements and trade-offs involved in implementing these PUF designs.
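A back-of-the-envelope GE comparison can be scripted as follows; the per-element GE costs are placeholders we assume purely for illustration, not the values behind the paper's tables.

```python
def estimate_ge(n_components, n_stages,
                ge_per_mux=2.5, ge_per_arbiter=4.0, ge_per_xor=2.0):
    """Rough gate-equivalent count for an APUF-based (CDC-)XOR PUF:
    two MUXes per stage per component, one arbiter per component, and
    (n_components - 1) two-input XOR gates to combine the outputs.
    The per-element GE costs are illustrative placeholders only."""
    muxes = 2 * n_stages * n_components
    arbiters = n_components
    xors = max(n_components - 1, 0)
    return muxes * ge_per_mux + arbiters * ge_per_arbiter + xors * ge_per_xor

# A conventional 64-stage, 4-component design vs. a lightweight 8-stage,
# 10-component CDC-XPUF under the same (assumed) per-element costs.
print(estimate_ge(4, 64), estimate_ge(10, 8))
```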
§.§ Results and Analysis
The experimental validation of the lightweight CDC-XPUF design offers compelling evidence of its efficacy. The integration of a pre-selection strategy with the novel architectural design of shorter-stage CDC-XPUFs provides substantial improvements in several key performance metrics.
The unified dataset, created by combining CRP sub-datasets from each component, facilitated a detailed evaluation of the PUF's performance across various metrics—reliability, randomness, uniqueness, hardware cost, and resistance to ML modeling attacks.
Reliability was excellent: the Bit Error Rate was consistently below 10^-8. Uniqueness values averaged between 65% and 75%, meeting the criteria for effective security implementations. Given the enormous range of potential challenges, such as (2^8)^10 = 2^80 for the 8-bit CDC-10-XPUF, this percentage still corresponds to a substantial number of unique CRPs, making CDC-XPUFs viable for security applications. Randomness metrics fell within the range of 54%-59%; this is likely because the delays of the two selector chains are not ideally matched, and equalizing them is almost impracticable since the FPGA architecture is fixed. Given the exponentially large available CRP space, however, the randomness of the lightweight CDC-XPUF is adequate for practical applications.
The results highlighted in Tables <ref> and <ref> (for designs equipped with two delay gates for CRP pre-selection) demonstrate robust resistance to ML attacks, particularly in configurations with higher component counts and reduced stages. Notably, configurations with 10 components and 8 stages resisted all modeling attacks attempted with up to 200 million CRPs, with no successful breach at the 90% attack success criterion.
One major disadvantage of CDC-XPUFs is the additional overhead required for transmitting a larger number of bits. Table <ref> compares the number of GEs, multiplexers (MUXs), and arbiters required, along with the number of transmission bits needed. The integration of shorter stages effectively reduces the number of transmission bits without compromising CRP space or security, aligning with the demands for low-power, high-efficiency IoT devices. GE counts are significantly lower in these configurations, simplifying circuit complexity and reducing fabrication costs.
The experimental results support the hypothesis that combining shorter stages with the pre-selection strategy addresses the challenges of increased transmission requirements and susceptibility to attacks, meeting the stringent demands of modern IoT security frameworks.
§ CONCLUSION
Before this work, the prevailing trend in PUF research was to develop entirely new PUF architectures to overcome the limitations observed in traditional designs such as APUFs and XOR-PUFs. Much of the literature regarded these designs as inherently flawed, particularly because of their vulnerability to machine learning attacks and reliability-based attacks. This led to a significant shift towards exploring alternative PUF configurations that might offer better security characteristics.
However, our study revisits the potential of XOR-PUF designs, demonstrating that they can indeed perform effectively and have considerable untapped potential when enhanced with appropriate strategies. By integrating the Component-Differentially Challenged approach and a pre-selection strategy into the XOR-PUF framework, we have shown that it is possible to significantly improve both the reliability and the ML attack resistance of these devices. This not only preserves the extensive CRP space that XOR-PUFs are valued for but also reduces their susceptibility to attacks that have undermined their utility in the past.
Our findings suggest that rather than abandoning the foundational principles of existing PUF designs, there is substantial merit in enhancing these systems with innovative modifications. The CDC approach, combined with a reliability-enhancing pre-selection strategy, revitalizes the XOR-PUF design, making it a viable option for secure and efficient implementation in resource-constrained environments like IoT devices. This work underscores the importance of adaptive innovation in cybersecurity technologies, suggesting that the evolution of existing solutions could be as valuable as the invention of new ones. By refining and augmenting established designs, we can achieve significant advancements in security technology, aligning with the evolving needs of modern digital infrastructures.
In conclusion, our research provides a path for revitalizing traditional XOR-PUF architectures, demonstrating their practical utility and potential for innovation. The enhanced CDC-XPUF model not only meets the rigorous demands of IoT security but also offers a blueprint for future research to continue exploring and improving upon existing PUF architectures. This approach fosters a more sustainable path for PUF development, emphasizing incremental innovation and the adaptation of proven technologies to meet new challenges.
§ ACKNOWLEDGMENT
http://arxiv.org/abs/2409.18024v1 | 20240926163210 | Report on the Workshop on Simulations for Information Access (Sim4IA 2024) at SIGIR 2024
Authors: Timo Breuer, Christin Katharina Kreutz, Norbert Fuhr, Krisztian Balog, Philipp Schaer, Nolwenn Bernard, Ingo Frommholz, Marcel Gohsen, Kaixin Ji, Gareth J. F. Jones, Jüri Keller, Jiqun Liu, Martin Mladenov, Gabriella Pasi, Johanne Trippas, Xi Wang, Saber Zerhoudi, ChengXiang Zhai
Primary category: cs.IR | Categories: cs.IR
[email protected]]Timo BreuerTH KölnCologne, Germany
[email protected]]Christin Katharina KreutzTH MittelhessenGießen, Germany
[email protected]]Norbert FuhrUniversity of Duisburg-EssenDuisburg, Germany
[email protected]]Krisztian BalogUniversity of StavangerStavanger, Norway
[email protected]]Philipp SchaerTH KölnCologne, Germany
Nolwenn Bernard, Ingo Frommholz, Marcel Gohsen, Kaixin Ji, Gareth J. F. Jones, Jüri Keller, Jiqun Liu, Martin Mladenov, Gabriella Pasi, Johanne Trippas, Xi Wang, Saber Zerhoudi, ChengXiang Zhai
Report on the Workshop on Simulations for Information Access (Sim4IA 2024) at SIGIR 2024
[
September 28, 2024
========================================================================================
§ ABSTRACT
This paper is a report of the Workshop on Simulations for Information Access (Sim4IA 2024) at SIGIR 2024. The
workshop had two keynotes, a panel discussion, nine lightning talks, and two breakout sessions. Key takeaways were user simulation's importance in academia and industry, the possible bridging of online and offline evaluation, and the issues of organizing a companion shared task around user simulations for information access.
We report on how we organized the workshop, provide a brief overview of what happened at the workshop, and summarize the main topics and findings of the workshop and future work.
§ INTRODUCTION
The common approach and general understanding of evaluating information access systems (like search engines, recommender systems, or conversational agents) is closely coupled to the Cranfield paradigm, the dominating evaluation method, especially in information retrieval (IR). This has proven to be able to deal with the inherent complexity in information access contexts. The Cranfield studies can be understood to use a special form of simulation to mimic the search process by making implicit and explicit assumptions about the information system and its users. This helps to reduce the complexity of the search process and allows us to effectively compare different IR systems. Despite its long history and roots within the community, Cranfield has not been without criticism <cit.> and the underlying assumptions are often described as (over-)simplifications leading to potentially unrealistic search evaluations that deviate from users' actual interaction experience and search task performance <cit.>.
Other evaluation methods, including interactive/session-based retrieval settings and controlled user experiments <cit.>, living labs <cit.>, or (user) simulation studies <cit.>, have been proposed and discussed in the community; these have also been used in shared tasks at TREC, NTCIR, or CLEF (e.g., iCLEF <cit.>, OpenSearch <cit.>, and LiLaS <cit.>). However, no shared tasks at TREC/CLEF have primarily focused on user simulations.
Recently, in the TREC Interactive Knowledge Assistance Track (iKAT) <cit.>, some submissions included simulated user feedback in their interactive information access systems, while the lab did not employ such an evaluation strategy.
Simulations can also contribute to a better understanding of users. Formalizing a user model for simulation delivers explicit hypotheses on user behavior, which can produce insights into the validity of assumptions about users <cit.>.
Other recent examples of a re-started interest in the topic of (user) simulation were the Sim4IR workshop that was held at SIGIR 2021 <cit.>, the SIMIIR 2.0 framework[<https://github.com/padre-lab-eu/simiir-2>] <cit.>, tutorials <cit.>, and a recurring theme of how generative models can be used for simulation <cit.>.
At ECIR or SIGIR, a reasonable number of relevant papers on user simulations were accepted, and even a study on simulating user queries won the best paper award at ECIR 2022 <cit.>. Additionally, the introduction of generative AI methods opened up new possibilities for integrating LLMs to simulate users.
Therefore, to understand how and whether the evaluation of information access technology can truly benefit from simulating user interactions, we organized the first
workshop on Simulations for Information Access (Sim4IA 2024), held in conjunction with SIGIR 2024. Its aim was to serve as a forum to bring together researchers and experts.
Additionally, this workshop's goal was to provide a much-needed forum for the community to discuss the emerging challenges when applying (user) simulations to evaluate information access systems in simulation-based shared tasks.
This paper is a report of the Sim4IA[<https://sim4ia.org/sigir2024/>] <cit.> workshop at SIGIR 2024. The workshop had two keynotes, a panel discussion, nine lightning talks, and two breakout sessions.
We report on how we organized the workshop, provide a brief overview of what happened at the workshop, and summarise the main topics and findings of the workshop as well as future work.
§ WORKSHOP OVERVIEW
Sim4IA was a full-day workshop at SIGIR 2024, held in Washington, D.C., on 18 July 2024. The workshop attracted 25 participants and was run in a very interactive setting. Instead of a typical “mini-conference”, we decided to focus on short but thought-provoking lightning talks from the participants, two keynotes, and a panel discussion (see Table <ref>). Participants could later join two breakout discussion groups to deepen the previous discussions and further outline future research topics and methods.
To enable interaction with a broader set of participants, we offered limited hybrid participation via Zoom and Slack in addition to onsite attendance.
§ KEYNOTES
Our keynote speakers, Gabriella Pasi (University of Milano Bicocca) and Martin Mladenov (Google), both delivered their keynotes before taking questions from the audience. They represented perspectives from academia and industry.
Gabriella Pasi's keynote addressed the issue of personalizing information access by leveraging the user's experience, preferences and expertise. In particular, personalized search has been a core research focus for many years to offer users a search experience that can improve accessibility to content that is retrieved in response to their queries. This task involves two primary sub-tasks: modeling users and their context, and leveraging user models to constrain the search process towards producing a personalized outcome. Personalization can be interpreted as a simulation process, where the system relies on “knowledge” about a user to select content that is possibly useful and accessible to the specific user. Seen through this lens, effective and correct user modeling is paramount to an effective user simulation. In this perspective, the talk raised some key questions about the two above aspects.
Martin Mladenov's keynote focused on the application of user simulation as an engineering tool. Results at Google indicate that calibrated user simulations show promise to replace (at least partially) A/B tests as the core driver of the recommender system development cycle. The keynote emphasized that before this promise can be fulfilled, and user simulation could become a standard part of the recommender system development toolkit, a number of open questions need to be understood. These questions revolve around applicability, credibility, and reliability. The talk outlined potential approaches towards answering these questions in terms of developing diagnostics for individual simulation models as well as theoretical guarantees for the general simulation-driven development process. The talk introduced RecSim NG as a tool for developing solutions.
§ PANEL DISCUSSION
Besides the keynotes and lightning talks, the workshop featured a panel discussion with four invited panelists, including the two keynote speakers, Gabriella Pasi and Martin Mladenov, Johanne Trippas (RMIT University), and ChengXiang Zhai (University of Illinois at Urbana-Champaign). The panelists shared their experiences, opinions, and stances on simulations. They were moderated by Norbert Fuhr, who organized the discussion around the following three questions (see the left side of <ref>).
§.§ What is the purpose of simulating users?
Overall, the panelists mainly agreed that it is quite challenging to pinpoint a single purpose of user simulations, as the use cases and benefits for real users are manifold. Among many terrific ideas as to why user simulations can be useful, the following aspects were highlighted by the panelists. Personalization can be understood as a simulation process. In this sense, simulations can help users to better access useful (and personalized) information. Besides a deeper understanding of the user, simulations also help enable better understanding of the system and make the engineering process more transparent. They are particularly useful for evaluating an interactive system and making evaluations reproducible. Usually, the user's knowledge state changes as the session progresses and after the experiment is finished. In this regard, simulating users allows better understanding at different stages of the search process with explicit modeling of stage transitions. Enabling interpretability is another important merit factor, as simulations are grounded on a testable user model. Set against the prevailing agreement on the usefulness of user simulations, panelists also highlighted their limitations. If the underlying user model is incorrect, the simulations would not make much sense. Furthermore, in some cases, user simulations can imply an abstraction that goes too far to allow any generalizable conclusions.
§.§ For what kinds of experiments and evaluations have you used user simulation?
From a more personal point of view, the panelists shared their experiences with applying user simulations in the experimentation and evaluation process. First and foremost, user simulations enable offline experimentation without involving real users in the early development cycles. Doing so allows testing a system without conducting a sometimes risky and expensive A/B test. Sharing personal experiences and anecdotes, one panelist reported that simulators are sometimes more accurate than A/B tests, as they better align with metrics from the production systems, leaving opportunities for interesting research questions about why this is the case. Often, the simulators are based on real user logs, although there are limitations on how far they can be used for insightful estimates. It can be quite challenging to make reliable estimates for an out-of-distribution problem setting, where logs are obtained from a possibly different population or environment to estimate a new system feature, for example.
More user and context data is particularly helpful for reliable user models. In this regard, the academic search setting offers a profound basis for obtaining this kind of data from publications, as outlined by one of the panelists. For instance, a user's knowledge state can be modeled based on the cited works of a publication, helping to generate data where it is usually unavailable. Another panelist emphasized the usefulness of simulators in evaluating interface alternatives. For instance, two or more interfaces can be compared for a known-item search with regard to how much effort is required to reach the known item in the session. Even though simulations can be imperfect, they often allow a reliable evaluation of which systems are better than a baseline or at least how they differ.
Last but not least, one panelist also shared experiences with user simulations as useful tools for robustness tests of production systems. Even simple user models are often enough to run security checks or test the trustworthiness of a platform by spoiling the user base with fake users. As part of the discussions with the audience, the panelists also discussed the idea of having a guiding system that helps users during a search session by predicting the next interaction steps with a reasonable user model. Everyone agreed that such a system would be particularly helpful in the context of a conversational system.
§.§ How realistic are our user simulations?
One skeptical panelist argued that relying on (unrealistic) user simulations can be misleading and that they have to be critically analyzed. As an example, the panelist referred to the field of aerospace engineering, where turbulence is an ongoing subject of simulation. Even though air travel is considered safe when people have buckled up, injuries or even deaths occur because people simply do not use their belts. If the simulation does not cover these cases, relying on simplistic models can be harmful in the extreme case.
Nevertheless, it was also argued that simulations are an important tool for better risk estimations, as they mainly help reduce entropy and uncertainty. Still, it was also pointed out that modeling the entire user might be too complex. A user model always implies a certain kind of abstraction, and not every aspect of the user behavior has to be covered by the model as long as it satisfies the requirements of the experimental setup and is sufficient to answer the underlying research question. For instance, sometimes, it is sufficient to have rather simple user simulators that are good enough to distinguish between two systems for which the effectiveness is known a priori.
In this regard, simulated user interactions must be analyzed carefully, especially the generalizability of the conclusions that can be drawn from them. Very often, user models focus on particular aspects of user behavior, which are usually related to specific tasks. For better generalizability and a more comprehensive approach to user modeling,
our community probably needs help from other disciplines, e.g., psychology, as argued by one panelist, and must better characterize and represent users' bounded rationality, interaction intents, and judgment strategies in search sessions. All panelists agreed that such cross-disciplinary approaches toward user simulations can be fostered by collaborations between industry, academia, and researchers from different disciplines. Most notably, academic researchers have a strong interest in obtaining data from real-world experiments, whereas participants with an industrial background mentioned that many models from academia help design experiments and products. Participants from academia and industry expressed a strong interest and willingness to take these kinds of collaborations forward to advance the fidelity of user simulations.
§ LIGHTNING TALKS
A total of nine lightning talks were given, spread over two designated sessions. Three of these were given remotely via Zoom. The time frame for each lightning talk was 5 minutes. Table <ref> summarizes all nine talks and shows the wide range of topics covered. Six of the presentations were re-submissions of previously presented work: <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>.
The rest of the talks were original content.
§ SUMMARY OF BREAKOUT GROUPS
§.§ Group discussion on shared tasks with user simulators
In this breakout group, we discussed the idea of having a shared task based on user simulators. We envision the general idea of conducting a shared task to which participants submit user simulators instead of systems for the sake of having better insights into the validity of user simulators. Generally, we envision the shared task to be based on a train/validation/test data split of user logs, where participants can instantiate their simulators with training samples and have their fidelity evaluated after submitting them for evaluation, which is based on how well the simulated interactions align with the real ones of the test data.
More general topics that were covered during the breakout group discussions included measuring how well the simulated users fit reality, what kind of data to use (logs, new or existing test collection resources, etc.), what kind of sustainable data artifacts would emerge from such a shared task, the need for annotators and how to spend the annotation budget, and what type of information access systems to use.
The following other ideas and aspects emerged from the discussions. In the context of the anticipated first calibrate, then predict setting, participants could be provided with user logs and scores of one calibration measure. The final evaluations are then conducted with the help of unknown or hidden measures. This setup would align with the idea of having counterfactual elements in the evaluations, where the final evaluation is conducted in a different setting, with a possibly different underlying user model. Likewise, the counterfactual element could be a different type of user interface. For instance, the training logs could be obtained from a search result interface with pagination to simulate and evaluate user interaction with a result page based on an infinite scrolling design.
Other suggestions from the audience highlighted the C/W/L framework <cit.> and the corresponding evaluation toolkit cwl_eval <cit.> regarding existing evaluation methods. Similarly, an evaluation scenario could be based on the Tester approach <cit.>, where the relative system performance is known and the simulators are evaluated by how well they can reproduce the correct system ranking.
Possible sub-tasks could be aligned with different kinds of simulated user behavior. For example, one task could focus on content-based simulations, i.e., where simulated interactions depend on the contents of interface elements and modalities like snippet text, while the other task could focus on behavior-based simulations at a more abstract level that evaluates interaction sequences from a more general perspective.
Considering the challenge of simulating each user simulation step exactly, another shared task design could be based on providing interaction sequences to participants. Their user simulators would then be used to predict the very next interaction step. This design drastically cuts down complexity but, at the same time, would provide an interesting analysis of what we are currently able to achieve with regard to the prediction of next user interactions.
In general, it would be quite interesting to have a domain-specific focus for such a shared task. For example, the e-commerce setting or the legal and health domain could introduce an interesting novel direction beyond the still somewhat abstract evaluations of earlier work based on news corpora.
One of the most pressing and overarching questions was how to transfer the user simulation setting into a more modern environment beyond the typical list-based retrieval scenario. Considering the pace of recent advancements in the context of conversational systems and agents, these kinds of technologies offer an excellent basis for a shared task about user simulators in a modern state-of-the-art setting.
§.§ Group discussion on simulating users
In this breakout group (see right side of <ref>), we examined the question of which user archetypes or personas we should model. We did so by thinking about factors of users and/or tasks that would be relevant when trying to represent a user. We considered user-centric factors as those that are independent of a task and do not change when facing a different task.
As user-centric factors we mentioned age, income, cultural background, the learning type of a user (e.g., visual), disability, language knowledge and fluency, working memory, level of technology knowledge and cognitive background.
Contrasting this, we defined task-centric factors as those that are dependent on the task and change if another task is considered for the same simulated user. We noted the vocabulary used in the task, a task's complexity, the search strategy a user is employing, and a user's knowledge of and interest in the task.
Furthermore, there are more factors influencing user behavior. A user has a repertoire of behaviors they can compose and solve problems with.
When representing users as information seekers, different contexts play a role as user-centric factors. Local context partially depends on the task and the problem solving process. Global contexts can be cultural, organisational or societal <cit.>.
Interaction effects appear in each level of user modeling. Interaction should be described in a formal framework, as we may not know the processes that create the interaction effects. When modeling interaction effects mathematically, should the base spaces (e.g. tasks) be discrete or continuous?
The question arose if we could construct a general template model of tasks from which specific tasks can be described.
One participant proposed to use reinforcement learning from user profiles while another wondered how we would avoid combinatorial explosion when we consider tasks, user model and context at the same time.
We concluded that to simulate means to constrain and that a user model is not, and cannot be, reality; every model tries to approximate reality as closely as possible. A user is a composite.
User models in simulation should be multi-dimensional, for example in terms of Hofstede's cultural dimensions <cit.>, or (not necessarily mutually exclusive) multi-layered. In the multi-layered representation, each layer is a stack of different levels of skills or knowledge. When given a task, only a subset of these layers is leveraged and within each layer specific skills are selected (see Figure <ref>).
As a second smaller topic this breakout group shared some thoughts on evaluation. We composed a set of questions to which answers could help mitigate the uncertainty we are currently facing: What does it mean to evaluate the quality of a user profile? Is user profile evaluation the same as simulator evaluation? One perspective could be to consider them distinct and regard the simulation as a process, e.g., a personalization process. When do we have to consider a user profile's development over time and when could it also be enough to only focus on a static snapshot from a profile? Would it be easier to evaluate the quality of a user profile representing a group or a single user?
In terms of a quantification of the quality, we talked about the possible use of Fréchet distance. An evaluation metric could be composed from the ability to predict the next user action based on a current state. An extrinsic evaluation of user profiles might be necessary, e.g., in a search task, since how to best use the profile is unclear.
§ SUMMARY AND OUTLOOK
The workshop concluded with many inspiring ideas and directions for follow-ups and revealed challenges ahead. Our keynote speakers made it clear that user simulations are highly important for both industry and academia. Likewise, user simulations help better personalize content for users but also allow system evaluations without involving real users in online experiments.
During the panel discussion, user simulations were discussed from different points of view. The panelists highlighted their merits and potentials but also considered limitations of simulated users. Notably, they agreed that user simulations can bridge the gap between offline and online experiments toward a more user-centric evaluation.
Furthermore, the lightning talks gave valuable insights about ongoing work, including research ideas, resources, and (preliminary) results that involve user simulations for various kinds of information access systems and environments.
Our breakout groups mainly targeted the topics of organizing a shared task with user simulations (cf. <ref>) and defining reasonable user archetypes or personas (cf. <ref>). While we did not succeed with our ambitious goal of having a final shared task definition at the very end, our breakout discussions revealed that there is generally a strong interest in running a shared task that builds upon user simulations.
In this regard, we were able to identify major challenges like the overall question of how we can evaluate the validity of simulated users within a shared task setting or what kinds of user archetypes are worth to be considered in this context.
Most notably, there is a strong interest in running such a task, but many challenges lie ahead. We conclude that there is a need for additional community work, and we envision a follow-up event to this workshop for a more focused and in-depth discussion with experienced shared task organizers as well as interested participants.
§ ACKNOWLEDGMENTS
This workshop has been partially funded by the project PLan_CV. Within the funding programme FH-Personal, the project PLan_CV (reference number 03FHP109) is funded by the German Federal Ministry of Education and Research (BMBF) and Joint Science Conference (GWK). DAAD gave a travel grant under funding number 57706671.
§ AUTHORS AND AFFILIATIONS
Workshop organizers:
* Timo Breuer; TH Köln, Cologne, Germany; [email protected]
* Christin Katharina Kreutz; TH Mittelhessen, Gießen, Germany; [email protected]
* Norbert Fuhr; University of Duisburg-Essen, Duisburg, Germany; [email protected]
* Krisztian Balog; University of Stavanger, Stavanger, Norway; [email protected]
* Philipp Schaer; TH Köln, Cologne, Germany; [email protected]
Other authors:
* Nolwenn Bernard; University of Stavanger, Stavanger, Norway; [email protected]
* Ingo Frommholz; University of Wolverhampton, Wolverhampton, UK; [email protected]
* Marcel Gohsen; Bauhaus-Universität Weimar, Weimar, Germany; [email protected]
* Kaixin Ji; RMIT University, Melbourne, Australia; [email protected]
* Gareth J. F. Jones; Dublin City University, Dublin, Ireland; [email protected]
* Jüri Keller; TH Köln, Cologne, Germany; [email protected]
* Jiqun Liu; The University of Oklahoma, Norman, OK, USA; [email protected]
* Martin Mladenov; Google; [email protected]
* Gabriella Pasi; University of Milano Bicocca, Milano, Italy; [email protected]
* Johanne Trippas; RMIT University, Melbourne, Australia; [email protected]
* Xi Wang; University of Sheffield, Sheffield, UK; [email protected]
* Saber Zerhoudi; University of Passau, Passau, Germany; [email protected]
* ChengXiang Zhai; University of Illinois at Urbana-Champaign, Champaign, IL, USA; [email protected]
http://arxiv.org/abs/2409.17218v1 | 20240925180000 | Reflected entropy in random tensor networks III: triway cuts
Authors: Chris Akers, Thomas Faulkner, Simon Lin, Pratik Rath
Primary category: hep-th | Categories: hep-th, gr-qc, math-ph, math.MP, math.PR, quant-ph
Disk2Planet: A Robust and Automated Machine Learning Tool for Parameter Inference in Disk-Planet Systems
September 28, 2024
==========================================================================================================
§ INTRODUCTION
There is compelling evidence that quantum gravity should be thought of as an emergent phenomenon with the underlying geometry being described by the entanglement structure of a dual/holographic wave-function. The Ryu-Takayanagi (RT) formula <cit.> expresses this idea, connecting areas of minimal surfaces and von Neumann entropies: a measure of bipartite entanglement in pure states. Minimal surfaces can be equivalently described by bit-threads <cit.>, maximal divergence free locally bounded flows between two boundary regions. Furthermore, these bit-threads give a vivid picture of (pure state) bipartite entanglement in the boundary wavefunction, with a thread corresponding to a distillable EPR pair.
Random tensor network states <cit.> model this behavior with a graph G = (V,E) playing the role of the underlying geometry and boundary vertices ∂⊂ V containing the dual Hilbert space. Edges e ∈ E contain maximally entangled states with large bond dimensions χ(e) and vertices are described by randomly chosen tensors that are contracted with the edge states. The RT formula arises as a minimal cut through the graph with edges weighted by w(e) ∝lnχ(e). The cut divides the vertices V into two disjoint sets, each containing the corresponding set of boundary vertices whose von Neumann entropy we wish to compute. Bit-threads correspond to dual maximal flows between the two sets of boundary vertices with flow capacities set by w(e). This correspondence is a version of the max-flow min-cut theorem that can be proven using strong duality theorems from the theory of linear/convex programs.
Bit-threads however tend to give a misleading picture of multipartite entanglement. For example, a bipartite dominance conjecture was formulated for three party holographic states based on the existence of such bit-thread configurations <cit.>. However, the conjecture of Ref. <cit.> contradicts other measures of tripartite entanglement beyond von Neumann entropies, most notably for this work, that of reflected entropy. Ref. <cit.> argued using the reflected entropy that holographic states have large amounts of tripartite entanglement and thus, disproved a version of the bipartite dominance conjecture.
Generally, minimal areas are only a limited probe of the underlying geometry, and one might expect other geometric objects – such as surfaces of various co-dimensions – to play an important role in a putative correspondence between geometry and quantum information. For example, computational complexity is believed to be associated to co-dimension 0 or 1 regions in spacetime <cit.>. There are now also hints that a class of tripartite entanglement should be associated to spatial co-dimension 2 objects <cit.>. In this paper, we find further evidence for the latter by proving a correspondence between reflected entropy and a minimal triway cut. Triway cuts generalize the bipartite cuts described above, and are likely the closest graph analog of a co-dimension 2 object that is defined for any graph.[The cuts themselves are co-dimension 1, however the three cuts meet at some locus that might be considered co-dimension 2.]
Triway cuts are integer optimization programs that cannot be dualized to a bit-thread description.[What we mean here is that the (Lagrange) dual flow programs do not have the same optimal value, i.e., there is a duality gap. However, it is possible to find a “dual” if one considers more exotic optimization problems. See sec:disc for more discussion on this topic.]
Relaxing the integer constraint gives a linear program that underestimates the cut. The ratio between these values, the output of the integer program over that of the linear program, is generally a difficult quantity to compute, and is called the integrality gap.
In fact, computing the integrality gap is an NP-complete problem <cit.>.
We now introduce our main result in more detail.
The reflected entropy S_R <cit.> of a state ρ_AB is defined as the entropy of AA^⋆ in the canonical purification | ρ_AB^1/2> ∈ℋ_AA^⋆ BB^⋆.
The Rényi generalization of S_R is a simple one parameter family of quantum information measures:
S^(n)_R(A:B) = -1/n-1ln Tr ρ_AA^⋆^n , ρ_AA^⋆ = Tr_BB^⋆| ρ_AB^1/2> < ρ_AB^1/2|
We will prove:
For integer n > 1, the reflected entropy of a random tensor network state at large bond dimension, with a unique entanglement wedge for AB:C and a unique triway cut for A:B:C (with tensions specified below), satisfies:
lim_χ→∞ S_R^(n)(A:B) /lnχ = 1/n-1𝒜_𝐭(A:B:C) - n/n-1𝒜(AB:C)
where 𝒜_𝐭(A:B:C) is the area of a multiway cut with tensions 𝐭≡(t_A:B,t_B:C,t_C:A) = (2(n-1),n,n)
[In this paper we mostly consider triway cuts defined with this tension. For this reason, we will often abbreviate the triway cuts simply as 𝒜(A:B:C) when it is unambiguous.]
and 𝒜(AB:C) is the area of the minimal cut (with tension 1) for AB:C.
Averaging is taken with respect to the Haar measure over unitary matrices that are applied to vertex states in the graph. See Definition <ref> for the precise construction. The triway cut is defined in the same way as a cut: we split the vertices into three disjoint subsets containing respectively boundary vertices A,B,C. The area is then the sum over the edges e which intersect two of the three regions, weighted by the respective tensions and w(e). See Definition <ref> and fig:network-multi-cut.
The Markov gap is defined as <cit.>:
h(A:B) = S_R(A:B) - I(A:B) ≥ 0
where I(A:B) is the mutual information. The Markov gap vanishes iff the three party state has a particular structure: a classical superposition of states with only bipartite entanglement between the three different parties <cit.>. The Markov gap thus detects a certain class of non-trivial tri-partite entanglement.[h vanishes on GHZ states, so it does not detect all kinds of tripartite entanglement <cit.>. A refined version based on the entanglement of purification does better <cit.>. This is generally harder to compute but the results of this paper help compute it for a class of RTN states <cit.>.]
In particular, we have the following lower bound:
Under the uniqueness assumption for n=2 in Theorem <ref>,
the (normalized) Markov gap (MG) of a random tensor network state at large bond dimension is lower bounded by:
MG ≡lim_χ→∞ h(A:B)/lnχ≥ 2 𝒜_𝐬(A:B:C) - 𝒜(A:BC) - 𝒜(B:AC) - 𝒜(C:AB)
where 𝒜_𝐬 is the standard minimal triway cut with equal tensions 𝐬=(1,1,1).
This follows from ∂_n S_R^(n)≤ 0 and Theorem <ref> applied at n=2.
We will show that the right-hand side of eq:MG-bound is determined by the integrality gap of the integer program <cit.>:
min_ρ∑_eρ(e) w(e)
∀ L ∈𝒫_A,B∪𝒫_A,C∪𝒫_B,C : ∑_e ∈ Lρ(e) ≥ 1
where ρ(e) ∈ℤ_≥ 0 and 𝒫_x,y refers to all paths through the edges of the network connecting vertices x and y. The integrality gap IG is the ratio between the two optimal values of the program before and after relaxing the integer constraint on ρ, a standard concept in the theory of integer programming <cit.>. The original program computes 𝒜(A:B:C) with equal tensions, while the relaxed program allows the domain walls to split into pairs and form three minimal cuts with tensions 1/2 (see fig:relax). The relaxed program is dual to the multicommodity flow problem <cit.> on three parties for which efficient algorithms exist. In this sense, we can interpret the integrality gap as an obstruction to obtaining a bit threads picture. Explicitly our bound relates the two “gaps”:
MG ≥ (IG-1) × ( 𝒜(A:BC) + 𝒜(B:AC) + 𝒜(C:AB))
Roughly speaking IG-1 ≥ 0 measures how computationally hard the integer program is, and so this gives an intriguing link between the Markov gap and complexity.[Note that this is logically different from the computational complexity of the state often discussed in AdS/CFT.]
While an obvious and natural continuation in n away from the integers exists for the triway cut problem, we have not yet rigorously established that this still computes the n-Rényi reflected entropy. We have no reason to believe otherwise[Any lingering doubt might be used to challenge the conclusion <cit.> that the bipartite dominance conjecture <cit.> is false. However, independent of the EW duality, using the Markov gap bound above we have now rigorously proven the bipartite dominance conjecture false for any random tensor network with IG≠ 1 for the program program-intro.], and so it is worth noting that the limit n → 1 reproduces the entanglement wedge cross-section:
lim_n → 1 ( 1/n-1𝒜(A:B:C) - n/n-1𝒜(AB:C) ) = 2 EW(A:B)
where EW(A:B) is simply the minimal cut dividing A:B on the sub-graph defined by the “entanglement wedge” of the cut AB:C (see fig:EW).
The calculation of reflected entropy in holography involves analytically continuing a two-parameter (usually called m,n) replica trick computation. It was shown in Ref. <cit.> that the continuation of the naive saddles proposed in Ref. <cit.> suffers from an order of limits issue, an analog of which exists for RTNs as well <cit.>. By incorporating new saddles, rigorously performing an analytic continuation in m and proposing an analytic continuation in n via the triway cut problem, we have resolved this issue in general RTNs. This motivates a similar prescription with the inclusion of new saddles even in AdS/CFT.
A summary of this paper is as follows. In sec:sum, we state our main theorems
pertaining to the computation of
Rényi reflected entropies for the state |ρ_AB^m/2⟩ with m ≥ 2 an even integer.
In sec:prelim we give some required background, including results on standard network flows, minimal cuts, and the mathematics of permutations and set partitions that make appearances in various statistical mechanics models that we consider. sec:main proves that the optimal solution to the reflected entropy statistical mechanics model can be found in a series of coarser models, finally ending in the multiway cut problem. In sec:cont we continue the m parameter away from even integers to m → 1, where we make contact with the reflected entropy. In this step, we use the method of moments in conjunction with a weak form of measure concentration for random tensor network states. In sec:disc, we discuss various aspects of our work such as bit threads, the relation to entanglement of purification as well as generalization to hypergraphs. Several lengthy proofs are relegated to Appendices.
We end this introduction with a list of common notations used in this article:
Glossary of symbols and notations.

Term | Description | First defined in
G=(V,E) | graph | Def. <ref>
w(e) | weight of an edge e∈ E | after Def. <ref>
E_G[V'] or E[V'] | set of edges that have some vertex in V' | before Def. <ref>
V_G[E'] or V[E'] | set of vertices that lie in E' | before Def. <ref>
𝒫_A,B | set of paths from A to B | before Def. <ref>
𝒫_A,B | set of edge-disjoint paths from A to B | before Def. <ref>
r_A or r_A:B | cut region containing A or dividing A:B | before eq:mu_def
μ(r)⊂ E | cut surface of a region r ⊂ V | eq:mu_def
𝒜(A:B) | minimal cut for A and B | Def. <ref>
𝒜(A:B:C) | minimal triway cut among A, B and C | Def. <ref>
1_s(x) | indicator function of the set s | after cpfeas
S_N | symmetric group of order N | before eq:Cayley
P_N | partitions of a set of order N | before eq:q(g)
B_N | set of Boolean strings of length N | before eq:bk(q)
P(g) | coarse graining P: S_mn→ P_mn | before eq:q(g)
q_g_0(p) | coarse graining q_g_0: P_mn→ P_#(g_0) | eq:q_g0
q_X(p) | coarse graining q_X: P_mn→ P_2n | eq:q(g) and eq:q_g0
s(q) | coarse graining s: P_2n→ s(P_2n) ≃ B_2n | before eq:S_metric
b^k(q) | coarse graining b^k: P_2n→ (B_2n)^k ≃ℤ_2 | before d1s
u_k | largest partition with a singlet at location k | after eq:bk(q)
#(x) | cycle (x∈ S_N) or block (x∈ P_N) counting function | eq:Cayley and eq:P_metric
#_1(q) | number of singlets in q∈ P_N | before eq:S_metric
a ∧ b | meet of a and b | beginning of sec:ppb
a ∨ b | join of a and b | beginning of sec:ppb
d(g_1,g_2) (g_1,g_2 ∈ S_N) | Cayley distance in S_N | eq:Cayley
d(q_1,q_2) (q_1,q_2 ∈ P_N) | distance on semimodular lattice P_N | eq:P_metric and eq:P_metric_2
d_1(s_1,s_2) (s_i=s(p_i):p_i∈ P_N) | singlet distance on P_N | eq:S_metric
d(b_1,b_2) (b_1,b_2 ∈ B_N) | Hamming distance on B_N | eq:B_metric and d1s
d_ρ(x,y) (x,y∈ V) | graph distance induced by ρ:E→ℝ | eq:d_graph
Throughout this article, we will define various distance functions on different sets. Most of these distances will be termed universally by d(·,·), with an understanding that we use different definitions based on the set in context, as shown in Table <ref>.
For a function ρ(e) on the set of edges ρ:E→ℝ we will sometimes write ρ(E') ≡∑_e∈ E'ρ(e) for a subset E'⊂ E.
Note: An alternate proposal for the entanglement wedge cross section (EW) was made in Refs. <cit.> relating it to the entanglement of purification (E_P). In Ref. <cit.>, we used the results obtained in this paper to prove the E_P=EW conjecture for specific RTNs.
Also note that triway cuts in holography have also been discussed in Refs. <cit.> as a candidate holographic dual to a different quantity, the multi-entropy. We will comment more on the relation to this work later in the paper.
§ SUMMARY OF MAIN THEOREM
We summarize our main mathematical results, setting up some of our notation at the same time. Our main results utilize the replica trick used to compute reflected entropy in random tensor networks <cit.>, a computation we formulate using graph theory.
See also <cit.> for a related replica computation.
Consider an undirected graph G = (V,E) where edges correspond to unordered pairs of vertices e = { v_1,v_2} for e ∈ E and v_1,2∈ V. We mark special vertices ∂⊂ V as boundary vertices
and the graph describes a pure state in the Hilbert space:
|ψ> ∈⊗_v ∈∂ℋ^v , ℋ^v = ⊗_e ∈ E(v)ℋ_χ(e)^(v),
where E(v):={{x,y}∈ E: x=v or y=v} are the subset of edges that contain v and χ(e) is the bond dimension of the edge.
For a given vertex v not in the boundary, i.e. v ∈ V \∂, we pick random tensors T(v) ∈ℋ^v
according to the Haar measure. Then the states in question are defined via:
| ψ> ∝(⊗_v ∈ V \∂< T(v) | ) (⊗_e ∈ E| Ψ_e > )
where |Ψ_e⟩∈ℋ_χ(e)^(v_1)⊗ℋ_χ(e)^(v_2)
is a maximally entangled state between the vertex Hilbert spaces of {v_1,v_2 } = e.
For convenience, on the graph we will use the rescaled weighting
w(e) = lnχ(e)/lnχ,
which we hold fixed as we send χ→∞.
We will care about graphs with boundary ∂ = A ⊔ B ⊔ C, and we are interested in using the replica trick to compute the averaged (m,n)-Rényi reflected entropy
S_R^(m,n)(A:B) = - 1/n-1ln Tr (ρ^(m)_AA^⋆)^n , ρ^(m)_AA^⋆ = Tr_BB^⋆| ρ_AB^m/2> < ρ_AB^m/2|,
where ρ_AB = Tr_C | ψ> < ψ| and where m ∈ 2 ℤ_≥ 1.
The n-th moment Tr(ρ^(m)_AA^⋆)^n contains nm copies of |ψ⟩⟨ψ|, and the overline denotes that we compute the quantity in BBn averaged over the choice of T(v) from the Haar ensemble.
This calculation reduces to a statistical mechanics model with vertex-valued group elements g(v) ∈ S_mn <cit.>:
Tr (ρ^(m)_AA^⋆)^n = Z_m,n/(Z_m,1)^n,
Z_m,n =∑_{g(v)}exp(- ∑_e = {x,y}∈ E d(g(x),g(y)) lnχ(e) ) ,
where ∑_{g(v)} denotes a sum over all configurations of g(v).
We have denoted by d(g_1,g_2) the Cayley distance in S_mn:
d(g_1,g_2) = mn - #(g_1 g_2^-1),
where #(·) is the cycle counting function.
Note that we do not sum over permutations at boundary vertices; instead their permutations are fixed by the patterns of contractions that compute the moments in BBn.
We fix g(v) = g_A for v ∈ A, g(v) = g_B for v ∈ B, and g(v) = 𝕀 for v ∈ C where 𝕀 is the identity group element.[The identity element in a group G is colloquially termed e∈ G. This unfortunately clashes with our notation for edges. Hence in this article we will use 𝕀 to denote the identity element in S_N as well as the finest element of set partitions P_N. This will hopefully not cause confusion to our readers.]
The group elements g_A and g_B can be read off from BBn. They are related by a conjugation and contain n cycles of length m. The Cayley distances between these elements are d(g_A,g_B) = 2(n-1), d(𝕀,g_A)=d(𝕀,g_B)=n(m-1).
Exactly evaluating statmech for general graphs is difficult, but in the large χ limit we can often obtain a very good approximation by evaluating statmech at a saddle point, i.e., finding the configuration g̃(v) that maximizes the exponential and dropping all other terms. Namely, we have
Z_m,n≈exp(- ∑_e = {x,y}∈ E d(g̃(x),g̃(y)) lnχ(e) ) ,
for g̃(v) now the configuration minimizing the “free energy”
∑_e = {x,y}∈ E d(g̃(x),g̃(y)) lnχ(e) .
The saddle point approximation is valid away from phase transitions and thus, in such generic situations, the problem of computing S_R^(m,n)(A:B) reduces to the problem of finding the optimal configuration g̃(v).
It is this problem that we focus on in this paper.
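As a concrete illustration of this optimization problem, the following Python sketch brute-forces the minimal free energy on a toy graph with two bulk vertices. The tuple representation of permutations, the choice of boundary elements and the toy graph weights are assumptions made only for this example (the exact boundary elements g_A, g_B used in the paper are listed in Appendix <ref>); the illustrative elements do reproduce the quoted distances d(g_A,g_B)=2(n-1) and d(𝕀,g_A)=n(m-1) for m=n=2.

```python
from itertools import permutations, product

def cycle_count(p):
    # number of cycles of a permutation given as a tuple: p[i] is the image of i
    seen, cycles = set(), 0
    for i in range(len(p)):
        if i not in seen:
            cycles += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = p[j]
    return cycles

def compose(p, q):
    # (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

def cayley(p, q):
    # Cayley distance d(p, q) = N - #(p q^{-1})
    return len(p) - cycle_count(compose(p, inverse(q)))

# Toy example with m = n = 2, i.e. elements of S_4 acting on {0, 1, 2, 3}.
# Illustrative boundary elements with n cycles of length m; they reproduce
# d(gA, gB) = 2(n-1) = 2 and d(id, gA) = n(m-1) = 2, but are not necessarily
# the exact elements listed in the appendix.
idn = (0, 1, 2, 3)
gA = (1, 0, 3, 2)   # (01)(23)
gB = (3, 2, 1, 0)   # (03)(12)

# Toy graph: boundary vertices A, B, C and two bulk vertices x, y.
edges = {('A', 'x'): 1.0, ('B', 'x'): 1.0, ('x', 'y'): 1.0, ('y', 'C'): 2.0}
boundary = {'A': gA, 'B': gB, 'C': idn}

def free_energy(bulk_assignment):
    g = dict(boundary, **bulk_assignment)
    return sum(w * cayley(g[u], g[v]) for (u, v), w in edges.items())

S4 = list(permutations(range(4)))
best = min(free_energy({'x': gx, 'y': gy}) for gx, gy in product(S4, S4))
print(best)   # minimal free energy of the toy model
```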
A particularly important group element that will arise in the optimal configuration g(v) for statmech is uniquely defined as the element X ∈ S_mn with the largest Cayley distance to the identity, d(X,𝕀), that lies on the joint Cayley geodesics defined by: d(g_A,X) + d(X,𝕀) = d( g_A,𝕀) and d(g_B,X) + d(X,𝕀) = d(g_B,𝕀). X contains 2n cycles of length m/2, roughly speaking, corresponding to the intersection of the cycles in g_A and g_B.
The specific form of g_A,g_B and X can be found in Appendix <ref>.
In order to find the optimal configuration g(v), with minimal free energy, we will relate this statistical mechanics model to a sequence of increasingly coarse grained models where we throw out some information contained in g(v).
See fig:coarse-grain.
As a result the free energy can only decrease (as we prove rigorously), but then – crucially – we will prove that the minimal free energy in the most coarse-grained model upper bounds the free energy of the original model!
This loop of inequalities allows us to use the tractable, most coarse-grained model to find the minimal free energy.
This approach was inspired by the solution to the single tensor case studied in Appendix B of Ref. <cit.> where a similar sequence of steps was followed. Below we first describe the various coarse-grained variables, then we define the various stat mech/optimization problems we are interested in. An important ingredient is the theory of set partitions which form a lattice, in the sense of a partially ordered set, with binary operations ∨ and ∧. Boolean algebras will also make an appearance. We review the necessary background material in sec:ppb.
Given a permutation g ∈ S_N we can associate a set partition P(g) ∈ P_N by mapping the cycles to subsets of ℤ_N, simply forgetting how elements within each cycle are permuted.
In our case N= nm. We then further coarse-grain this set partition further by blocking P(g) into partitions of the 2n blocks in P(X). This reduction effectively removes dependence on m. More specifically, for each g ∈ S_mn we associate an element in P_2n via q_X(g) : S_mn→ P_2n defined as:
q_X(g) ≡ (P(g) ∨ P(X)) / ∼ , where x ∼ y iff {x, y }∨ P(X) = P(X)
where ∨ is the least upper bound operation on the lattice (defined in sec:ppb) and the quotient is applied element wise to each element within the partition. We introduce a distance on set partitions[See sec:ppb for some properties of this metric. See Ref. <cit.> for some discussions of this metric. ]:
d(q_1,q_2) = #(q_1) + #(q_2) - 2 #(q_1 ∨ q_2)
where #(·) now counts the number of subsets in the partition. Note that #(g)=#(P(g)). This distance will replace the Cayley distance function in statmech. For the problem in P_2n, the boundary elements will be fixed to partitions: q_A = q_X(g_A) and q_B = q_X(g_B) and q_C = 𝕀≡ q_X(𝕀). Again we list these specific partitions in Appendix <ref>. The partition distances are d(q_A,q_B) = 2(n-1) and d(q_A,B,𝕀) = n.
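The partition operations used here are straightforward to implement with a union–find structure. The following sketch is illustrative only: partitions of ℤ_N are represented as lists of blocks (a representation chosen for this snippet), the join is computed by merging overlapping blocks, and the distance is the formula just given. The sample partitions below are illustrative pairings that reproduce the quoted distances for n=2; the precise boundary partitions are listed in Appendix <ref>.

```python
def join(p, q):
    # least upper bound p v q of two partitions of range(N), each given as a list of blocks
    N = sum(len(block) for block in p)
    parent = list(range(N))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    for block in list(p) + list(q):
        block = sorted(block)
        for a, b in zip(block, block[1:]):
            union(a, b)

    blocks = {}
    for i in range(N):
        blocks.setdefault(find(i), []).append(i)
    return list(blocks.values())

def partition_distance(p, q):
    # d(p, q) = #(p) + #(q) - 2 #(p v q)
    return len(p) + len(q) - 2 * len(join(p, q))

# Illustrative example with 2n = 4 elements: qA and qB are pairings and the
# identity partition consists of singlets.  These choices reproduce the quoted
# distances d(qA, qB) = 2(n-1) = 2 and d(qA, id) = n = 2 for n = 2.
qA = [[0, 1], [2, 3]]
qB = [[1, 2], [3, 0]]
idn = [[0], [1], [2], [3]]
print(partition_distance(qA, qB), partition_distance(qA, idn))   # 2 2
```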
At the next level, we introduce Boolean variables b = {b^k}_k=1^2n∈ (ℤ_2)^⊗ 2n≡ B_2n that detect the presence of a singlet (subsets with size 1) in q ∈ P_2n.
Define:
b^k(q) = 2-#(u_k ∨ q),
where u_k, for k ∈{ 1,2, … 2n }, is the partition with a singlet at k plus a block of size 2n-1 containing the rest of the elements.
Thus, b^k = 0 implies there is a singlet at location k; otherwise b^k =1.
The distance between two Boolean strings b_1,2 is:
d(b_1,b_2) = ∑_k =1^2n | b_1^k - b_2^k |.
In this case, the boundary conditions are b_AB =11 … 1 and b_C = 00 … 0.
The set of walks/paths in a graph between v_1 and v_2 will be denoted 𝒫_v_1,v_2, and consist of a sequence of edges L ⊂ E that join at common vertices
and start (end) at v_1 (v_2).
The set of edge disjoint paths will be denoted 𝒫_v_1,v_2 – these are paths with no repeated edges (vertices may be repeated).
The vertices v_1 and v_2 may be replaced by sets of vertices with paths starting/ending on any vertex in the respective set.
For a graph G={V,E} and a subset of edges E'⊂ E, denote V_G[E']={v∈ V: {x,v}∈ E' or {v,y}∈ E'} as the set of vertices that lie in E'.
Similarly, E_G[V']={{x,y}∈ E: x∈ V' or y∈ V'} are the set of edges that have some
vertex in V' ⊂ V.
The subscript G in E_G[·] and V_G[·] shall often be omitted when the referencing graph is unambiguous.
Lastly, for a function on edges ρ : E →ℝ we define ρ(E') = ∑_e ∈ E'ρ(e) for a subset E' ⊂ E.
Define the various optimization problems based on the refinements g → q → b:
R = min_ g R(g) , R(g) ≡∑_e = {x,y}∈ E w(e) d(g(x),g(y))
where g : V → S_mn and such that g(A) = g_A , g(B) = g_B and g(C) = 𝕀.
Q = min_q Q(q) , Q(q) = ∑_e = {x,y}∈ E w(e) d(q(x),q(y))
where q : V → P_2n and such that q(A) = q_A and q(B) = q_B and q(C) = 𝕀.
B = min_b B(b)
B(b) = min_r∑_e ∈ E w(e) r(e)
subject to ∀ L ∈𝒫_A,B : ∑_e ∈ L r(e) ∈ 2(n-δ_b_L,b_AB) + 2 ℤ_≥ 0
and ∀ e ∈ E : r(e) ≥⌈1/2 d(b(x),b(y)) ⌉
where
r : E →ℤ_≥ 0 and b : V → B_2n such that
b(A) = b(B) = b_AB≡ 11 … 1 and b(C) = 00 … 0, where b_L≡∧_v∈ V[L]b(v) denotes the piecewise and operation on Boolean algebra along the path L
[That is for L = {{v_0,v_1},{v_1,v_2},⋯,{v_i-1,v_i}}: b_L = b(v_1)∧ b(v_2)∧⋯∧ b(v_i) and (b_1∧ b_2)^k ≡ b_1^k ∧ b_2^k. Recall that b^k denotes the k-th element of the bit string b.].
I = min_ρ, σ∑_e ∈ E w(e) (σ(e) + ρ(e))
subject to ∀ L ∈𝒫_A,B : ∑_e ∈ L (σ(e) + ρ(e)) ∈ 2(n - δ_ρ(L), 0) + 2 ℤ_≥ 0
and ∀ L ∈𝒫_AB,C : ∑_e ∈ Lρ(e) ≥ n
where ρ(L)≡∑_e∈ Lρ(e) and ρ,σ: E →ℤ_≥ 0.
Our main result is to prove that the optimal solution to these programs is given by a particular minimal and multiway cut program that we now define.
A cut (or cut set) r for two vertices (or sets of vertices), A:B is defined as a subset r⊂ V such that A⊂ r and B⊂ r^c.[We will sometimes denote a cut that divides boundary vertices A:B as r_A:B, or simply as r_A when B=A^c.]
A cut surface μ(r) for a given subset r ⊂ V comprises the subset of edges that lie on the “boundary” of r↔ r^c, in the sense that for all e ∈μ(r) the pair { x, y } = e satisfies x ∈ r and y ∈ r^c or vice versa. In other words:
μ(r) = E[r]∩ E[r^c].
The minimal cut for A:B partitions two sets of boundary vertices ∂ = A ⊔ B and minimizes:
𝒜(A:B) = min_r𝒜(r)
𝒜(r) = ∑_e∈μ(r)w(e) ≡ w(μ(r))
over all subsets r ⊂ V, such that the boundary vertices A ⊂ r and B ⊂ r^c.
Fix some tensions 𝐭={t_A:B,t_B:C,t_C:A} with t_A:B, t_B:C,t_C:A > 0 and boundary vertices ∂ = A ⊔ B ⊔ C. The triway cut problem minimizes:
𝒜_𝐭(A:B:C) = min_(α,β,γ)𝒜_𝐭(α:β:γ)
𝒜_𝐭(α:β:γ) = t_A:B∑_e∈μ(α:β)w(e)+t_B:C∑_e∈μ(β:γ)w(e)+t_C:A∑_e∈μ(γ:α)w(e)
= t_A:B w(μ(α:β)) + t_B:C w(μ(β:γ)) + t_C:A w(μ(γ:α))
over all subsets (α,β, γ) such that
α⊔β⊔γ=V with A⊂α, B⊂β, C⊂γ
and where μ(r_1:r_2) = E[r_1]∩ E[r_2].
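For small graphs both definitions can be evaluated by direct enumeration. The sketch below is a brute-force illustration rather than an efficient algorithm: it enumerates all assignments of bulk vertices to the two (respectively three) regions and returns the minimal weighted cut. The toy graph, vertex names and weights are assumptions made only for the example.

```python
from itertools import product

def min_cut(edges, A, B, vertices):
    # minimal cut A(A:B): minimize the weight of edges between r and r^c over
    # all r containing the boundary set A and excluding the boundary set B
    bulk = [v for v in vertices if v not in A | B]
    best = float('inf')
    for choice in product([0, 1], repeat=len(bulk)):
        side = {v: 0 for v in A}
        side.update({v: 1 for v in B})
        side.update(dict(zip(bulk, choice)))
        area = sum(w for (u, v), w in edges.items() if side[u] != side[v])
        best = min(best, area)
    return best

def triway_cut(edges, A, B, C, vertices, tensions):
    # minimal triway cut A_t(A:B:C) with tensions = (t_AB, t_BC, t_CA)
    t_AB, t_BC, t_CA = tensions
    bulk = [v for v in vertices if v not in A | B | C]
    best = float('inf')
    for choice in product([0, 1, 2], repeat=len(bulk)):
        side = {v: 0 for v in A}
        side.update({v: 1 for v in B})
        side.update({v: 2 for v in C})
        side.update(dict(zip(bulk, choice)))
        area = 0.0
        for (u, v), w in edges.items():
            pair = {side[u], side[v]}
            if pair == {0, 1}:
                area += t_AB * w
            elif pair == {1, 2}:
                area += t_BC * w
            elif pair == {0, 2}:
                area += t_CA * w
        best = min(best, area)
    return best

# Toy graph with a single bulk vertex x attached to the three boundary vertices.
vertices = {'A', 'B', 'C', 'x'}
edges = {('A', 'x'): 1.0, ('B', 'x'): 1.0, ('C', 'x'): 1.5}
n = 2
print(min_cut(edges, {'A', 'B'}, {'C'}, vertices))                            # A(AB:C)
print(triway_cut(edges, {'A'}, {'B'}, {'C'}, vertices, (2 * (n - 1), n, n)))  # A_t(A:B:C)
```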
The triway cut is a special case of a multiway cut and we will often refer to it as such. The equal tension case is the standard multiway cut problem, which arises here when n=2 (see e.g., Ref. <cit.>). In the context of AdS/CFT, multiway cuts have been studied previously in <cit.>.
We note that minimal cuts must always lie inside some optimal multiway cut region:
Given an optimal solution to a minimal cut problem for C: AB, r_C ⊂ V then there exists
an optimal solution (α,β,γ) to the triway cut problem for (A,B,C) with t_A:C = t_B:C and such that r_C ⊂γ.
Consider any optimal triway cut (α',β',γ'). Construct α = α' ∩ (r_C)^c, β = β' ∩ (r_C)^c and γ =γ' ∪ r_C (see fig:lemma3). We note:
∑_e ∈μ(α:β) w(e) ≤∑_e ∈μ(α':β') w(e)
since μ(α:β)⊇μ(α':β') by construction and w(e)≥0. Also,
∑_e∈μ(α:γ)w(e) + ∑_e∈μ(β:γ)w(e) = w(μ(γ))
≤ w(μ( γ' )) + w(μ(r_C)) - w( μ(γ' ∩ r_C)) ≤ w(μ( γ' ))
where the first inequality follows from submodularity of the cut weight, and in the last inequality we used the fact that γ' ∩ r_C is a cut for C:AB and that r_C is a minimal such cut.
Putting these inequalities together we have:
𝒜(α,β,γ) ≤𝒜(α',β',γ')
implying equality and that (α,β,γ) is a minimal triway cut satisfying the properties stated in the Lemma.
The main theorem of this paper establishes:
The minimum of each of the programs defined above are determined by an optimal solution to the multiway cut problem
with 𝐭=(t_A:B,t_B:C,t_A:C) = (2(n-1),n,n):
R - 𝒜(AB:C) n(m-2) = Q = B = I = 𝒜_𝐭(A:B:C)
Consider some minimal cut r_C for C:AB.
There is an optimal solution to the triway cut problem with r_C ⊂γ by Lemma <ref>.
Set
g(x) =
g_A, x∈α
g_B, x∈β
𝕀, x∈ r_C
X, x∈γ∖ r_C
We then estimate:
R ≤𝒜(AB:C) d(𝕀, X) + 𝒜_𝐭(A:B:C)
by direct computation and since d(X,g_A) = d(X,g_B) = n and d(g_A,g_B) = 2(n-1) give the correct tensions.
After this we prove a chain of inequalities in the other way, R ≥𝒜(AB:C) d(𝕀, X) + Q (Lemma <ref> and Lemma <ref>) , and Q ≥ B (Lemma <ref> and Corollary <ref>), and B ≥ I (Lemma <ref>)
and finally I ≥𝒜_𝐭(A:B:C) (Theorem <ref>).
Together with rless this implies equality through the chain.
Theorem <ref> only asserts that the optimal value of the program R equals that of the triway cut. The optimal configuration need not be the same as the one constructed above, and there may be other degenerate solutions that achieve the minimum; indeed one generally expects a huge degeneracy when a phase transition happens. Such phase transitions in tensor networks are usually signaled by a degenerate minimal surface. The following theorem states that the optimal solution constructed above is unique when the system is far away from such transitions.
Let g:V→ S_mn be an optimal solution to the reflected entropy permutation group optimization problem R on graph G with a unique solution (α,β,γ) to the triway cut problem for (A,B,C) and a unique solution r_C to the minimal cut problem C:AB. Then g is unique and of the form as given in eq:unique_g.
We prove this theorem in sec:uniqueness after we establish the necessary Lemmas that lead to Theorem <ref>.
§ PRELIMINARIES
§.§ Min cuts and Max flows
We discuss a version of max-flow min-cut that is convenient here.
Consider the undirected max flow problem written as a linear program and in terms of edge disjoint paths:
max_c ∑_L ∈𝒫_A,B c(L)
subject to: ∀ e ∈ E ∑_L ∈𝒫_A,B c(L) 1_L(e) ≤ w(e)
where 1_L(e) = 1 if e ∈ L and 0 otherwise; and where c : 𝒫_A,B→ℝ_≥ 0 and we maximize over all such maps. Any c(L) that satisfies the constraint above is called feasible while any feasible c that achieves the maximum is called optimal.
Let us derive the dual program. For each constraint we introduce
a function ρ: E →ℝ_≥0 and minimize over ρ(e):
= max_cmin_ρ( ∑_L ∈𝒫_A,B c(L) + ∑_e ∈ Eρ(e) ( w(e) - ∑_L ∈𝒫_A,B c(L) 1_L(e) ) )
The minimum is -∞ with ρ→∞ for some e if c is not feasible. If c is feasible the minimum is ρ(e) =0. Thus, this gives back the original problem after maximizing over c. We can now exchange the maximum and the minimum since the function is linear in c at fixed ρ (and hence concave)
and linear in ρ at fixed c (and hence convex), so we can use the von Neumann's minimax theorem:
= min_ρmax_c(∑_e ∈ Eρ(e) w(e) + ∑_L ∈𝒫_A,B c(L) (1 -∑_e ∈ Lρ(e) ) )
Maximizing over c gives + ∞ unless ∑_e ∈ Lρ(e) ≥ 1 for all L, in which case we find c(L) = 0.
satisfying this later constraint.
In summary, the dual program is defined in terms of a map ρ : E →ℝ_≥ 0:
min_ρ ∑_e ∈ E ρ(e) w(e)
subject to: ∀ L ∈𝒫_A,B ∑_e ∈ Lρ(e) ≥ 1
As we just derived, this problem satisfies strong duality.[Strong duality means that the optimal values of the original and dual program are equal.]
We can also impose the further constraints that all L ∈𝒫_A,B (not necessarily edge disjoint) also satisfy ∑_e ∈ Lρ(e) ≥ 1 without changing
the minimum, since it does not change the set of feasible ρ.
It is possible to show that this last problem is a min-cut problem.
A proof of this fact can be found in standard references in linear programming, e.g., Ref. <cit.>.
We will take a slightly different approach here.
We consider the integer version of tosat where we further impose ρ∈ℤ_≥ 0. This does indeed give the equivalent optimal value as rhomin, although this is not obvious and there could well have been an integrality gap.
We will now demonstrate it is equivalent to the min-cut problem, or that the integrality gap vanishes for this problem.
To construct the cut we consider an optimal ρ and define:
r_A = { x : d_ρ(x,A) = 0 }
where d_ρ(x,y) is the graph distance induced by ρ(e), which is the minimal distance between x and y as measured by the edge function ρ(e). See Appendix <ref>.
It is clear that
ρ(e) ≥1_μ(r_A)(e)
by integrality. Then for any other feasible ρ':
∑_e ∈ E ρ'(e) w(e) ≥∑_e ∈ E ρ(e) w(e)
≥ w(μ(r_A))
Also any path from A to B must pass μ(r_A) so that ρ'(e) = 1_μ(r_A)(e) is feasible and we
have the opposite inequality.
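The construction of the cut region r_A from an optimal dual variable can be made concrete with a shortest-path computation. The following sketch is illustrative: it assumes an optimal ρ is already given, computes the induced graph distance d_ρ with Dijkstra's algorithm, and reads off r_A = {x : d_ρ(x,A)=0}. The toy graph and the particular ρ are assumptions of the example.

```python
import heapq

def rho_distance(edges, rho, sources):
    # Dijkstra distances d_rho(x, sources) with nonnegative edge lengths rho(e)
    adj = {}
    for (u, v) in edges:
        adj.setdefault(u, []).append((v, rho[(u, v)]))
        adj.setdefault(v, []).append((u, rho[(u, v)]))
    dist = {v: float('inf') for v in adj}
    heap = []
    for s in sources:
        dist[s] = 0.0
        heapq.heappush(heap, (0.0, s))
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, length in adj[u]:
            if d + length < dist[v]:
                dist[v] = d + length
                heapq.heappush(heap, (d + length, v))
    return dist

# Toy example in which the optimal dual variable rho is 1 on the unique
# minimal-cut edge and 0 elsewhere (an assumption made for illustration).
edges = {('A', 'x'): 2.0, ('x', 'y'): 1.0, ('y', 'B'): 2.0}
rho = {('A', 'x'): 0, ('x', 'y'): 1, ('y', 'B'): 0}
dist = rho_distance(edges, rho, sources={'A'})
r_A = {v for v, d in dist.items() if d == 0}
print(r_A)   # {'A', 'x'}: the minimal cut region for A:B
```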
We now give a quick derivation of the RT formula for RTNs. We pick the cut C:AB, although this discussion is general. We compute the m-th Rényi entropy for ρ_AB at large χ which involves finding the minimum of the free energy:
min_ g∑_e = {x,y}∈ E w(e) d(g(x),g(y))
where g : V → S_m and g(AB) = τ_m = (12 … m) and g(C) = 𝕀. Use an optimal solution to the flow problem c(L), inserting cpfeas into the objective function of mingg:
∑_e = {x,y}∈ E w(e) d(g(x),g(y)) ≥∑_L ∈𝒫_AB:C c(L) ∑_e = {x,y}∈ L d(g(x),g(y))
≥ d(τ_m, 𝕀) ∑_L ∈𝒫_AB:C c(L)
= (m-1) 𝒜(AB:C)
where in the first inequality we used the feasibility condition, and in the second we used the triangle inequality for the Cayley metric repeatedly along the path.
The opposite inequality follows by considering g(x) = 𝕀 for x ∈ r_AB:C^c and g(x) = τ_m for x ∈ r_AB:C. Thus, Rényi entropies are all computed by
the same minimal area cut. This is the well known result that the entanglement spectrum of random tensor networks is flat, and determined by minimal cuts.
We conclude this subsection by giving a proof that the integrality gap of program-intro is determined by the ratio between 𝒜(A:B:C) and 1/2(𝒜(A:BC)+𝒜(B:AC)+𝒜(C:AB)). First we show that the value of the following integer program
min_ρ ∑_e∈ Eρ(e)w(e)
subject to: ∀ L ∈𝒫_A,B∪𝒫_A,C∪𝒫_B,C: ∑_e∈ Lρ(e)≥ 1
is given by the minimal triway cut.
We restrict our discussion here to the case where every connected component of G is connected to at least one of A, B, or C, since adding any disconnected components to G does not change the area of an optimal triway cut.
Consider an optimal ρ and define r_A = {x∈ V: d_ρ(x,A)=0} and similarly define r_B and r_C. (r_A,r_B,r_C) must be disjoint, otherwise we violate the path constraint. Furthermore r_A∪ r_B ∪ r_C = V: if not, define r_D≡ V\ r_A \ r_B \ r_C. r_D must be connected to at least one of r_A, r_B or r_C, since otherwise r_D would be totally disconnected from the boundary. Without loss of generality suppose it is r_A. We have
ρ(e) ≥( 1_μ(r_A:r_B) + 1_μ(r_A:r_C) + 1_μ(r_B:r_C) + 1_μ(r_A:r_D) + 1_μ(r_B:r_D) + 1_μ(r_C:r_D)) (e)
≥( 1_μ(r'_A:r_B) + 1_μ(r_A':r_C) + 1_μ(r_B:r_C)) (e) ≡ρ'(e)
where the first inequality follows from integrality. In the second line we defined r_A'=r_A∪ r_D and this inequality is strict for e∈μ(r_A:r_D). ρ'(e) is clearly feasible since any path from A to B must pass through μ(r_A':r_B) and similarly for B:C and C:A. Since μ(r_A:r_D)≠∅, we have ∑_e ρ (e) w(e) > ∑_e ρ' (e) w(e), which is a contradiction since we have assumed ρ to be optimal.
Therefore r_D=∅ and r'_A=r_A. Since ρ is optimal we must have
min_ρ∑_e∈ Eρ(e)w(e) = ∑_e∈ Eρ'(e)w(e) = 𝒜(r_A:r_B:r_C) ≥𝒜(A:B:C)
On the other hand from any minimal triway cut (α,β,γ) we can construct ρ^''(e)=( 1_μ(α:β) + 1_μ(α:γ) + 1_μ(β:γ)) (e) which is clearly feasible. Hence 𝒜(A:B:C) = ∑_eρ^''(e)w(e) ≥ min_ρ∑_eρ(e)w(e), and the value of the program is equal to the minimal triway cut.
If one relaxes the integer constraint of ρ, then it allows us to construct a new solution from any optimal ρ:
ρ̃(e) = 1/2( 1_μ(r_A:r_A^c) + 1_μ(r_B:r_B^c) + 1_μ(r_C:r_C^c))(e)
ρ̃(e) violates the integer constraint but is still feasible.
Using a similar argument as above one can show that in this scenario we get
min_ρ∑_e∈ Eρ(e)w(e) = ∑_e∈ Eρ̃(e)w(e) = 1/2(𝒜(r_A:r_A^c)+𝒜(r_B:r_B^c)+𝒜(r_C:r_C^c))
≥1/2( 𝒜(A:BC) + 𝒜(B:AC) + 𝒜(C:AB) )
and the optimal value of the program is given by a sum of minimal cut areas. We have determined the optimal value of the program program-intro before and after the linear relaxation, which agrees with the right-hand side of eq:MG-bound.
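Assuming the brute-force min_cut and triway_cut helpers sketched after the cut definitions above, the integrality gap of program-intro can be checked directly on small examples. The star graph below is an illustrative choice: for it the integer program gives 2 while the relaxation gives 3/2, so IG = 4/3, saturating the standard 2(1-1/k) isolating-cut bound at k = 3.

```python
# Equal-tension triway cut = value of the integer program; half the sum of the
# three single-party minimal cuts = value of its linear relaxation.
vertices = {'A', 'B', 'C', 'x'}
edges = {('A', 'x'): 1.0, ('B', 'x'): 1.0, ('C', 'x'): 1.0}

integer_value = triway_cut(edges, {'A'}, {'B'}, {'C'}, vertices, (1, 1, 1))
relaxed_value = 0.5 * (min_cut(edges, {'A'}, {'B', 'C'}, vertices)
                       + min_cut(edges, {'B'}, {'A', 'C'}, vertices)
                       + min_cut(edges, {'C'}, {'A', 'B'}, vertices))
print(integer_value, relaxed_value, integer_value / relaxed_value)   # 2.0 1.5 1.333...
```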
§.§ Permutations, Partitions and Boolean variables
In sec:sum we introduced a coarse graining procedure which takes permutations to partitions. We further coarse grain our partitions by blocking them into partitions of the blocks in P(X). We can apply this procedure given any fixed permutation g_0, instead blocking with P(g_0). We discuss this more general procedure here and prove some important bounds.
The collection of set partitions P_N admits a natural lattice structure.
There is a partial order within P_N: Given two elements p,q∈ P_N we say p≥ q if every subset of p can be expressed as the union of some subsets in q, or in other words, q is a “finer” version of p formed by further dividing blocks in p.
The finest (smallest) element in P_N is the identity partition 𝕀≡{{1},…,{N}}, and the coarsest (largest) is {ℤ_N}.
The join of p and q, denoted by p∨ q, is defined to be the least upper bound of p and q, i.e. p ∨ q is the smallest element x such that x≥ p and x≥ q. Conversely, the meet of p and q, denoted by p∧ q, is defined to be the greatest lower bound of p and q, i.e. p ∧ q is the largest element y such that y≤ p and y≤ q.
In the lattice of P_N the meet is simply the set of non-empty pairwise intersections of p and q; whereas the join can be thought of as the partition that arises from the connected orbits generated by p and q.
For further information on the lattice of set partitions we refer the reader to Appendix A of Ref. <cit.>.
Given some elements g,g_0 ∈ S_N we define:
q_g_0(g) ≡ (P(g) ∨ P(g_0)) / ∼ ∈ P_#(g_0)
where /∼ is the set quotient operation defined element wise on each block in the partition and using the equivalence x ∼ y if x,y are in the same block of P(g_0).
Recall the distance measure on the set of partitions P_N:
d(p,p') ≡#(p) + #(p') - 2 #(p ∨ p')
We verify some properties.
The distance on set partitions d(p,p') is a metric: it is positive, symmetric, vanishes iff p=p' and satisfies the triangle inequality. Additionally:
(a) There is the estimate:
d(p_1,p_2) ≥ | #(p_1) - #(p_2) |
(b) For all partitions r then:
d(p_1, p_2) ≥ d( p_1 ∨ r, p_2 ∨ r)
(c) It bounds the Cayley distance by the associated set partitions P(g), or coarse grained versions q_g_0(g):
#(g) - #(g ∨ g') ≥#(q_g_0(g)) - #(q_g_0(g) ∨ q_g_0(g'))
and
d(g,g') ≥ d( P(g), P(g')) ≥ d ( q_g_0(g), q_g_0(g'))
Firstly the distance is positive since #(p) - #(p ∨ p') ≥ 0 and similarly for p ↔ p'.
Equality implies that #(p') = #(p) = #(p ∨ p') which is only possible if p' ≤ p and also p' ≥ p which implies equality of the partitions.
We have the triangle inequality:
d(p,p') + d(p',p”) ≥ d(p,p”)
which follows from semimodularity
[Semimodularity here means #(p_1∧ p_2)+#(p_1∨ p_2)≥#(p_1) + #(p_2).]:
#(p')-#(p ∨ p') - #(p' ∨ p”) + #(p ∨ p”)
≥#( (p' ∨ p) ∧ (p' ∨ p”) ) -#(p ∨ p') - #(p' ∨ p”)
+ #(p ∨ p”∨ p') ≥ 0
where the first inequality follows simply from p' ≤ (p' ∨ p) ∧ (p' ∨ p”)
and p' ∨ p”≤ p ∨ p”∨ p'.
(a) Using #(p_1 ∨ p_2) ≤#(p_1) or #(p_1 ∨ p_2) ≤#(p_2) we derive d(p_1,p_2) ≥ | #(p_1) - #(p_2) |.
(b) For any r ∈ P_N:
d(p_1, p_2) - d( p_1 ∨ r, p_2 ∨ r)
= #(p_1)+#(p_2)-#(p_1∨ r)-#(p_2∨ r) + 2#(p_1∨ p_2∨ r) -2#(p_1 ∨ p_2)
≥ #(p_1) + #(p_2) - #( (p_1 ∨ p_2) ∧ (p_1 ∨ r) ) - #( (p_1 ∨ p_2) ∧ (p_2 ∨ r) ) ≥ 0
where we used semi-modularity in the first inequality[Specifically: -#( p_1 ∨ p_2) - #(p_1 ∨ r) + #(p_1 ∨ p_2 ∨ r) ≥ - #( (p_1 ∨ p_2) ∧ (p_1 ∨ r))
and 1 ↔ 2. ] and we used p_i ≤ (p_1 ∨ p_2) ∧ (p_i ∨ r) in the second inequality.
(c) In general we start with some g ∈ S_N. Then we will remove fine-grained topological information (roughly speaking, we lose information on the genus expansion) by
moving to partitions. We can do this using the general bound:
d(g, g') = - d(𝕀,g) + d(𝕀, g') + 2(#(g') - #(g' ∨ g) ) + 2 G_g' (g)
≥#(g) + #(g') - 2 #( g ∨ g')
= d(P(g), P(g'))
where #(g'∨ g)≡#(P(g')∨ P(g)) is the number of orbits generated by the joint action of g and g', and G_g'(g) is the genus of the admissible surface associated to g based over g'. The inequality then follows from the non-negativity of the genus. See theorem 7 of Ref. <cit.>.
Consider semimodularity applied to p = P(g) ∨ P(g_0) and p' = P(g) ∨ P(g') giving:
#(P(g))+ #(p ∨ p') ≥#(p ∧ p')+ #(p ∨ p') ≥#(p) + #(p') = #(q_g_0(g)) + #(g ∨ g')
where the first inequality follows since P(g) ≤ p
and P(g) ≤ p'
implying the same of the meet p ∧ p'.
Note that
#(p ∨ p') = #(q_g_0(g) ∨ q_g_0(g')),#(p) = #(q_g_0(g)) and #(p') = #(g ∨ g').
Thus, we derived the bound:
#(g) - #(g ∨ g') ≥#(q_g_0(g)) - #(q_g_0(g) ∨ q_g_0(g'))
which implies:
d( P(g), P(g')) ≥ d ( q_g_0(g), q_g_0(g'))
Consider q=q_X(g)∈ P_2n for some g∈ S_mn.
We can further classify q by the singlets in q.
A singlet is a block of size 1 so there are 2n possible singlets.
We define #_1(q) as the number of singlets in a given partition.
We define s(q) as the (unique) largest partition with singlets in the same location as the singlets of q. That is s(q) ≥ q and #(s(q)) = #_1(q) + 1 - δ_#_1(q) , 2n.
Also #_1(q) = #_1(s(q)).
We define the singlet distance on the set of s as:
d_1(s_1, s_2) = #_1( s_1) + #_1( s_2) - 2 #_1(s_1 ∨ s_2) ≥ d(s_1, s_2)
where the latter inequality fails to be saturated only if one of s_1 or s_2 is 𝕀 (but not both).
We also have d_1(s_1,s_2) ≤ d(s_1,s_2) + 1 with equality iff one and only one of s_1,s_2 is 𝕀.
We can bound the difference in q by s:
d(q_1,q_2) ≥⌈ d_1(s_1, s_2) /2 ⌉≥ d_1(s_1,s_2)/2
where s_i := s(q_i).
If q_1 and q_2 do not contain any singlet, then s_1=s_2=ℤ_2n and d_1(s_1,s_2)=0, so the estimate holds.
Now let C_12 be the common singlets of q_1 and q_2, C_1 be the singlets in q_1 that do not overlap with the singlets in q_2 and similarly for C_2.
Note that d(q_1,q_2)≥ d(q_1∨ r, q_2∨ r) from addr. We take r=s_1∨ s_2.
We can express r as the unique element that is fully connected in K=ℤ_2n\ C_1\ C_2\ C_12 and singlets elsewhere.
See the corresponding schematic (graphics omitted).
We can compute
d(q_1∨ r,q_2 ∨ r) = #(q_1 ∨ r) + #(q_2 ∨ r) - 2#(q_1∨ q_2∨ r)
=|C_1|+|C_2|+#(q_1∨ r)|_K∪ C_2+#(q_2∨ r)|_K∪ C_1-2#(q_1∨ q_2∨ r)|_K∪ C_1∪ C_2
where we have used #(q)|_C to indicate the number of cycles of q when restricted to a subset of elements C⊂ℤ_2n.
Note that the common singlets |C_12| cancels out in the calculation.
To proceed further, we define C'_1⊂ C_1 to be the elements in C_1 that are not connected to K via q_2∨ r and similarly for C_2'.
Then we have
#(q_1∨ r)|_K∪ C_2 = 1 + #(q_1∨ r)|_C_2'
#(q_2∨ r)|_K∪ C_1 = 1 + #(q_2∨ r)|_C_1'
#(q_1∨ q_2 ∨ r)|_K∪ C_1∪ C_2 = 1 + #(q_1∨ r)|_C_2' + #(q_2∨ r)|_C_1'
which readily follows from the structure of q_i∨ r; see the corresponding schematic (graphics omitted).
(In the example given here C_1'=∅)
Hence,
d(q_1∨ r, q_2 ∨ r) = |C_1| + |C_2| -#(q_1∨ r)|_C_2' - #(q_2∨ r)|_C_1'
Also note that (q_2 ∨ r)|_C_1' has no singlets
so that #( q_2 ∨ r)|_C_1'≤⌊ |C_1'|/2 ⌋≤⌊ |C_1|/2 ⌋, since this number is maximized by forming the largest possible number of doublet blocks, or doublets together with a single triplet block when |C_1'| is odd, in C_1' (and similarly for 1 ↔ 2).
This gives the estimate:
d( q_1 , q_2 ) ≥⌈ |C_1|/2 ⌉ + ⌈ |C_2|/2 ⌉≥⌈ (|C_1| + |C_2|)/2 ⌉
Now |C_1| + |C_2| = d_1(s_1, s_2) (see the proof of the Lemma following immediately after).
Note that the set of s, s(P_2n)⊂ P_2n, forms a lattice (albeit not a sub-lattice of P_2n [Since s(P_2n) is not closed under ∨.]), under the new meet and join defined by s_1 ∧_B s_2 ≡ s_1∧ s_2 and s_1 ∨_B s_2 ≡ s( s_1 ∨ s_2).
Define the units u_k∈ s(P_2n) to be the partition with one singlet at location k. That is, u_k = {{k},ℤ_2n\{k}}.
It then follows that every element in s(P_2n) (other than ℤ_2n) can be expressed as the meet of a string of u_k's, i.e. s=u_k_1∧⋯∧ u_k_i.
The lattice (s(P_2n),∨_B,∧_B) is isomorphic to Boolean algebra B_2n of bit-strings where singlets in s are 0's, and non-singlets are 1's.
More specifically, identify s (and thus q) using a binary variable b^k for k = 1, … 2n.
That is:
b^k(q) = 2 - #( s(q) ∨ u_k) = 2 - #(q ∨ u_k),
so that b^k(q)=0 if q has a singlet at the k-th element and b^k(q)=1 otherwise.
The bit-string b={b^k}_k=1^2n then forms a lattice where ∨ is the pair-wise or operation and ∧ is the pair-wise and operation.
Then the singlet distance on P_2n is simply the Hamming distance on bit-strings:
d(b_1, b_2) ≡∑_k=1^2n | b^k_1 - b^k_2 | = d_1(s_1,s_2)
where b^k_1,2 = b^k(s_1,2).
Consider the k-th element in a partition of ℤ_2n.
Then s_1∨ s_2 has a singlet at position k iff s_1 and s_2 both have a singlet at position k.
Let S_i be the set of singlets in s_i, then we have
d_1(s_1,s_2) = #_1(s_1) + #_1(s_2) - 2#_1(s_1∨ s_2)
= |S_1| + |S_2| - 2|S_1∩ S_2| = |C_1| + |C_2|
where C_1 are the elements in S_1 that are not contained in S_2 and likewise for C_2.
By definition C_1∩ C_2=∅ and
|C_1|+|C_2| = |C_1∪ C_2| = ∑^2n_k=1|b^k_1-b^k_2|
since |b^k_1-b^k_2|=1 signals k∈ C_1∪ C_2 and |b^k_1-b^k_2|=0 otherwise.
d(q_1,q_2) ≥⌈ d(b_1,b_2)/2⌉≥ d(b_1,b_2)/2
where b^k_1,2=b^k(s_1,2).
Substitute d1s into Lemma <ref>.
d(b_1,b_2) clearly satisfies all the properties of a metric.
We will also sometimes work with ⌈ d(b_1, b_2) /2 ⌉ which also satisfies the properties of a metric, since ⌈ a ⌉ + ⌈ b ⌉≥⌈ a + b ⌉.
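These coarse grainings are simple to test numerically. Assuming the partition helpers (join, partition_distance) from the earlier sketch, the following illustrative snippet computes b^k(q) = 2 - #(u_k ∨ q) and spot-checks the bound d(q_1,q_2) ≥ ⌈ d(b_1,b_2)/2 ⌉ on random partitions; the random sampling and the choice N = 6 are assumptions of the test, not part of the argument.

```python
import math
import random

def random_partition(N):
    # assign each element a random label and collect the resulting blocks
    labels = [random.randrange(N) for _ in range(N)]
    blocks = {}
    for i, lab in enumerate(labels):
        blocks.setdefault(lab, []).append(i)
    return list(blocks.values())

def b_vector(q, N):
    # b^k(q) = 2 - #(u_k v q): equals 0 iff q has a singlet at position k
    out = []
    for k in range(N):
        u_k = [[k], [i for i in range(N) if i != k]]
        out.append(2 - len(join(u_k, q)))
    return out

def hamming(b1, b2):
    return sum(abs(x - y) for x, y in zip(b1, b2))

N = 6
random.seed(1)
for _ in range(200):
    q1, q2 = random_partition(N), random_partition(N)
    lhs = partition_distance(q1, q2)
    rhs = math.ceil(hamming(b_vector(q1, N), b_vector(q2, N)) / 2)
    assert lhs >= rhs
print("bound d(q1,q2) >= ceil(d(b1,b2)/2) verified on random samples")
```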
We need one final result, which generalizes the triangle inequality and was already proven in Ref. <cit.>.
Consider two partitions t_A, t_B ∈ P_N with t_A ∧ t_B = 𝕀, and consider the bipartite graph G formed from #(t_A) black vertices and #(t_B) white vertices, joined
by an edge for each pair of blocks in t_A and t_B that intersect. If G is a cycle graph then:
d(t_A, q) + d(q, t_B) ≥ d(t_A,t_B) + 2 (1-δ_#_1(q), 0 )
for any q ∈ P_N.
An example graph is shown in fig:modular_pair. Blocks in t_A can be found by combining the edges that intersect a fixed black vertex.
From this figure it is clear that t_A and t_B are made of doublets. Because it has a single cycle t_A and t_B fail to be a modular pair[A modular pair saturates the semimodularity condition.] by 1:
d(t_A,𝕀) + d(t_B,𝕀) - d(t_A,t_B) = 2(#(t_A ∨ t_B) + #(t_A ∧ t_B)- #(t_A) - #(t_B) ) = 2
since #(t_A ∨ t_B) = 1.
A modular pair (t_A',t_B) is associated to a tree graph <cit.>, and so any unit u_k breaks a single edge of the cycle, giving t_A' = t_A ∧ u_k,
and produces a tree with:
d(t_A',𝕀) + d(t_B,𝕀) - d(t_A',t_B) = 2(#(t_A' ∨ t_B) + #(t_A' ∧ t_B)- #(t_A') - #(t_B) ) = 0
where we used the fact that t_A' ∧ t_B = u_k ∧𝕀 = 𝕀.
Thus if q = q ∧ u_k then:
d(t_A, q) + d(t_B,q) = d(t_A', q) + d(t_B,q) ≥ d(t_A',t_B) = d(t_A',𝕀) + d(t_B,𝕀)
= d(t_A,𝕀) + d(t_B,𝕀) = d(t_A,t_B) + 2
Since this is true for any u_k it is true if #_1(q) ≠ 0 as required.
We will apply this Lemma to t_A = q_A and t_B = q_B where these partitions satisfy the required properties, as can be seen by their explicit form in Appendix <ref>.
While the above example only slightly generalizes the case of interest, it points towards the basic structure that makes our results work. In particular, note
that #_1(𝕀) ≠ 0 and indeed the bound for q=𝕀, boundsat, is saturated in this case. So the bounds we derive (here for partitions) are tight, which is important for the possibility
of forming a collapsing chain in Theorem <ref>.
It is also important that the pair (t_A,t_B) does not form a tree (they are not a modular pair) since otherwise we would not get minimal triway cuts – we would simply get minimal cuts.
§ PROOF OF THE MAIN THEOREM
In this section, we establish our proof for the main results (Theorem <ref> and Theorem <ref>) of the paper. First, in sec:ptop, we show that the permutation group optimization problem R (Definition <ref>) can be coarse grained into a set partition optimization problem Q (Definition <ref>). Next, in sec:ptoint we bound Q by a mixed Boolean-integer program B (Definition <ref>), and then an integer program I (Definition <ref>). With these results at hand, we relate the value of I to multiway cuts in sec:inttomulti, thus proving the collapsing chain R-𝒜(AB:C)≥ Q ≥ B ≥ I ≥𝒜(A:B:C) as required by Theorem <ref>. We prove the uniqueness of the solution in sec:uniqueness.
Note that this section only establishes our results on even integer m>2. We will deal with the problem of analytically continuing m→ 1 in sec:cont.
§.§ From permutations to partitions
We seek:
R = min_g R(g) , R(g) = ∑_ e={ x, y}∈ Ew(e) d(g(x),g(y))
Consider a minimal cut r_AB⊂ V associated to the division AB:C. Define C' = {x∈ r^c_AB:{x,y}∈μ(r_AB)} to be the vertices in r^c_AB that border r_AB.
Define a new amputated graph G' = (V',E') = (r_AB∪ C' ,E_G [r_AB]).
We can estimate R using a new model, written in terms of the coarse grained q_X(g) with respect to element X, on this new graph:
We have the following estimate:
R(g) ≥𝒜(AB:C) d(X,𝕀) + Q'(q)
where q : V' → P_2n defined via q(v) ≡ q_X(g(v)) and where:
Q'(q) = ∑_e = {x,y}∈ E' w(e) d( q(x), q(y))
Minimizing over g gives R ≥𝒜(r_AB) d(X,𝕀) + Q' with Q'≡min_q Q'(q), where the boundary conditions on q for this later model is such that vertices in A(B) have a fixed permutation q = q_A(q_B) and the permutations on C' are fixed as q=𝕀.
To pass from the program Q' (on the amputated graph G') to the program Q (on the original graph G), as required in Definition <ref> and Theorem <ref>, we simply need
to show that Q'≥ Q. This is proven in Lemma <ref> after the proof of Lemma <ref>.
Given the minimal cut r_AB we can consider an optimal solution to the linear program c(L) in program:cP for edge-disjoint paths:
𝒜(r_AB) = ∑_L ∈𝒫_AB,C c(L)
where for all e ∈ E then w(e) ≥∑_L c(L) 1_L(e).
Consider a single such path L∈𝒫_AB,C.
Such a path must pass through the minimal cut μ(r_AB) at least one time.
Denote the vertex ν to be the first vertex along L that connects r_AB to C'.
That is for the first edge e in L ∩μ(r_AB) we have e = {ν', ν} where
ν' ∈ C'. Since C' lies on the minimal cut, this is guaranteed to exist, see fig:proof-partition.
We split up L=L_AB⊔ L_C into two connecting paths at the common vertex ν, where L_AB∈P_AB,ν and L_C∈P_ν,C.
We estimate the contribution of L to be:
∑_e = {x,y}∈ L d (g(x),g(y))
=∑_e ={x,y}∈ L_C d(g(x),g(y))+ ∑_ e ={x,y}∈ L_AB d(g(x),g(y))
≥ d(g(ν), 𝕀) + ∑_e ={x,y}∈ L_AB d(P(g(x)),P(g(y)))
To arrive at great2 we have used gPq for the part of the path that intersects r_AB and then repeated uses of the (Cayley distance) triangle inequality in the complement.
We can re-arrange the sum:
∑_e ={x,y}∈ L_AB d(P(g(x)),P(g(y)))
= #(g_A,B) - 2#(g_A,B∨ g(x_2)) + #(g(ν))
+ ∑_α=2^|L_AB| 2( #(g(x_α)) - #(g(x_α) ∨ g(x_α+1)) )
where x_α is the sequence of vertices that connects the path L. With x_1 ∈ AB and x_|L_AB|+1 = ν.
Note that if there are no edges in L_AB then the sum ssum does not contribute.
Now use the bound gqbound along with the equality #(g_A,B) = #(q_X(g_A,B)) and #(g ∨ g_A,B) = #( q_X(g) ∨ q_X(g_A,B)), we turn all g's in the sum of ssum
into q_X(g)'s plus the remainder #(g(ν)) - #(q_X(g(ν))). That is:
∑_e={x,y}∈ L_AB d(P(g(x)),P(g(y)))
≥#(g(ν)) - #(q(ν)) +
∑_e={x,y}∈ L_AB d(q(x),q(y))
(recall that q_X(g(x)) ≡ q(x)).
Hence we have the following estimate:
∑_e={x,y}∈ L d(g(x),g(y)) ≥ d(𝕀,X) + d(𝕀,q(ν))+∑_e={x,y}∈ L_AB d(q(x),q(y))
where we have used the identity
d(𝕀,g(ν)) + #(g(ν)) - #(q(ν)) = nm - #(q(ν)) = d(𝕀,X) + d(𝕀,q(ν))
Now write:
R(g) = ∑_e ={x,y}(w(e) - ∑_ L c(L) 1_L(e) ) d(g(x),g(y))
+ ∑_L c(L) ∑_e ={x,y}∈ L d(g(x),g(y))
≥∑_e ={x,y}⊂ r_AB(w(e) - ∑_ L c(L) 1_L(e) ) d(q(x),q(y))
+ ∑_L c(L) ( d(𝕀,X) + d(𝕀,q(ν))+∑_e={x,y}∈ L_AB d(q(x),q(y)))
where we applied great3 to all paths weighted by c(L) in the last term of firstlast, along with gPq in the first term of firstlast on the edges that are entirely inside r_AB and finally dropping all other edges in r_AB^c (note that the bracketed term in the first part of firstlast is positive
by feasibility of c(L)). Using:
∑_L c(L) d(𝕀,X) = 𝒜(AB:C) d(𝕀,X)
we arrive at lemff.
We will make use of the saturation conditions for the inequality (<ref>) in our proofs for uniqueness of the solution. It is useful to record them here for reference later. For edges e in r_AB^c the saturation of firstlast requires:
∑_e={x,y}∈ r_AB^c(w(e)-∑_L c(L)1_L(e))d(g(x),g(y)) = 0
We also have, from great2,
∑_e={x,y}∈ L_Cd(g(x),g(y)) = d(g(v), 𝕀)
where L_C∈𝒫_v,C is a subpath of L∈𝒫_AB,C with c(L)>0. L_C connects a vertex v immediately inside r_AB to C.
For the edges e in r_AB, saturation of firstlast requires ∑_e∈ r_AB(w(e)-∑_L c(L)1_L(e))d(g(x),g(y))
= ∑_e∈ r_AB(w(e)-∑_L c(L)1_L(e))d(q(x),q(y)).
Since d(g(x),g(y))≥ d(q(x),q(y))≥ 0 by gPq, this condition holds locally, i.e.,
d(g(x),g(y))=d(q(x),q(y))
for all edges e={x,y} where w(e)-∑_L c(L)1_L(e)>0. If the minimal cut r_AB is unique, then it holds for all e strictly inside r_AB.
Consider the program
Q=min_q Q(q),
on the original graph G,
where we minimize over all q:V→ P_2n with q(A)=q_A, q(B)=q_B and q(C) = 𝕀. Then we have
R ≥𝒜(AB:C)d(X,𝕀) + Q
Consider the program Q'=min_q Q'(q), as defined in Lemma <ref> and Remark <ref>. Let q' be an optimal solution to this problem.
We construct from q' a feasible solution q:V→ P_2n to the Q problem by
q(v) =
q'(v), v∈ V'
𝕀, v∈ V \ V'
It is clear that q satisfies all the boundary conditions and Q(q)=Q'(q'), since setting q(v)=𝕀 on the new vertices does not increase the free energy.
Minimizing over all q:V→ P_2n we have Q' ≥ Q.
Then from Remark <ref> we have R≥𝒜(AB:C)d(X,𝕀) + Q' ≥𝒜(AB:C)d(X,𝕀) + Q.
§.§ From partitions to integer program
In this section we move from partitions q, to Boolean variables b, to an integer program.
For all paths L ∈𝒫_A,B (not necessarily edge disjoint) we have the estimate:
∑_e ={x,y}∈ L d(q(x),q(y)) ≥ 2(n - δ_b_L,b_AB ) , ∑_e = {x,y}∈ L d(q(x),q(y)) ∈ 2 ℤ
where b_AB = 11⋯ 1∈ B_2n and
b_L = ⋀_x ∈ V[L] b(x),
and where b:V → B_2n is defined via b(v)^k = b^k(q(v)).
If b_L = b_AB then we must have b(x)=b_AB=11⋯ 1 for the entire path, so that #_1(q(x)) = 0 for all x∈ L (since singlets are preserved through the meet operation). We use
the triangle inequality repeatedly to show:
∑_e ={x,y}∈ L d(q(x),q(y)) ≥ d(q_A,q_B) = 2(n-1)
Conversely, if b_L ≠ b_AB then there must be some x∈ V along the path such that #_1(q(x)) ≠ 0.
We apply the triangle inequality about this point:
∑_e ={x,y}∈ L d(q(x),q(y)) ≥ d(q_A, q(x)) + d(q(x), q_B) ≥ 2n
where we used the improved triangle inequality derived in Lemma <ref>, and which applies when q(x) has a singlet somewhere.
More generally the deficit in the triangle inequality is always an even integer:
d(q_1,q_2) + d(q_2,q_3) - d(q_1,q_3) = 2( #(q_1 ∨ q_3) + #(q_2) - #(q_1 ∨ q_2) -#(q_2 ∨ q_3) )
Thus the difference in the left and right-hand side of the inequality in difflr must
be an even integer.
We have the estimate in terms of an integer program:
Given some fixed b: V→ B_2n, define the integer program:
B(b) ≡ min_r∑_e ∈ E w(e) r(e)
subject to ∀ L ∈𝒫_A,B : ∑_e ∈ L r(e) ∈ 2(n-δ_b_L,b_AB) + 2 ℤ_≥ 0
and ∀ e ={x,y}∈ E : r(e) ≥⌈1/2 d(b(x),b(y)) ⌉
where r ∈ℤ_≥ 0.
Then for any q: V→ P_2n satisfying the boundary condition q(A)=q_A, q(B)=q_B and q(C)=𝕀, we have the bound Q(q) ≥ B(b^k(q)).
We simply consider r(e) = d(q(x),q(y)) for e ={x,y}.
We use the bounds in Lemma <ref> and Corollary <ref> to check feasibility for the B(b) problem.
Consider the program:
B≡min_b B(b),
where we now minimize over all b:V→ B_2n with b(A)=b(B)=b_AB=11…1 and b(C)=00…0.
Then we have Q≥ B.
Consider an optimal solution q to program Q. Then from Lemma <ref>, b^k(q) is feasible for the B problem and B(b^k(q)) ≤ Q(q). Minimizing over all b then gives Q≥ B.
We will now introduce an edge variable ρ(e) = ⌈1/2 d(b(x),b(y)) ⌉ for e = {x,y}. We note that we can write:
δ_b_L,b_AB = δ_ρ(L), 0,
where we recall the shorthand notation ρ(L)≡∑_e∈ Lρ(e).
Thus we have the new integer program:
For integer n ≥ 2, consider the integer non-linear program:
I = min_ρ, σ∑_e ∈ E w(e) (σ(e) + ρ(e))
subject to ∀ L ∈𝒫_A,B : ∑_e ∈ L( σ(e) + ρ(e)) ∈ 2(n - δ_ρ(L), 0) + 2 ℤ_≥ 0
and ∀ L ∈𝒫_AB,C : ∑_e ∈ Lρ(e) ≥ n
for ρ, σ∈ℤ_≥ 0.
Then min_q Q(q) ≥min_b B(b) ≥ I.
Again we just have to check feasibility with ρ(e) = ⌈1/2 d(b(x),b(y)) ⌉
and σ(e) = r(e) - ρ(e) ≥ 0, which again follows from Lemma <ref>, Lemma <ref> and sPrho. We also need
repeated use of the triangle inequality for the metric ⌈1/2 d(b_1,b_2) ⌉:
∑_e ∈ Lρ(e) ≥⌈1/2 d(b_AB,b_C ) ⌉ = n
for all L ∈𝒫_AB,C.
In passing from the Boolean program B to the integer program I, we have split the integer variable r(e) into two variables ρ(e) and σ(e).
In particular, the region associated to ρ=0 plays a special role in the program, as the constraint on a path is weaker if it stays entirely within the said region. It turns out that this region corresponds to the “squeezed entanglement wedge” (backreacted EW accommodating the non-zero tension t_A:B) in the solution to the reflected entropy optimization problem.
The σ variable can be set to be non-zero only inside the ρ=0 wedge, which sources a total of 2(n-1) cuts separating A and B. We will see that these cut surfaces collapse to a single domain wall corresponding to the (squeezed) EW cross-section for an optimal solution.
§.§ From integer program to multiway cuts
The main theorem we would like to prove here is that the program nlip is equivalent to a multiway cut problem:
The minimum of the non-linear integer program nlip is achieved by an optimal solution to the multiway cut problem for A:B:C
with t_A:B = 2(n-1) and t_A:C = t_B:C = n.
We do this in two steps. In the first step we prove the existence of a subgraph G' = (V',E') with A,B ⊂ V' such that either (i) A and B are disconnected
on V' and there is an optimal solution to nlip where ρ(e) = σ(e) = 0 for all e ⊂ V', or (ii) A and B are connected and there is an optimal solution (ρ,σ)
to nlip where ρ(e) = 0 for all e ⊂ V' and σ can be described as a set of 2(n-1) cuts separating A and B.
These solutions then seed a multiboundary/intersecting cut problem in the remaining graph made from the remaining edges: E^c = E\ E' and vertices V^c ≡ (V \ V') ∪ (AB)'
where (AB)' = {x∈ V':x∈ V_G[μ(V')]} are the vertices in G' that lie on the cut surface μ(V').
Both cases (i) and (ii) will be treated together, using a vertex valued variable k(x) instead of an edge variable. In the second step we solve this multiboundary cut problem.
More explicitly, the first step is the following Lemma:
There exists a subgraph G' = (V',E') of G
where A,B ⊂ V' such that there is an optimal solution (ρ,σ) to nlip where:
* There exists a function k: V' →ℤ_≥ 0 with k(x) ≤ 2(n-1) such that:
for all edges e ∈ E' we have:
σ(e) = |k(x) - k(y) | , ρ(e) =0
where k(x) = 0 for x∈ A and k(x) = 2(n-1) for x∈ B.
* For the remaining edges we consider the ℓ-intersecting cut problem with ℓ=2(n-1) (defined immediately below)
on the complementary reduced subgraph G^c = (V^c, E^c)
, with boundary vertices Γ_k = { x ∈ (AB)' : k(x) = k } and C, where the optimal minimum M of this latter problem bounds:
I ≥ M + w( μ( V')) + ∑_e ={x,y}∈ E' w(e) |k(x) - k(y) |
See fig:G'-Gc for an exemplary configuration.
We prove this in sec:struct.
Recall the notation, used in IgeqM, f(E') = ∑_e ∈ E' f(e) for some function f on the edges and some subset E' ⊂ E.
Given an integer ℓ≥ 1 and weighted graph G = (V,E) with ℓ+2 sets of boundary vertices {Γ_k; k = 0 …ℓ} and C, define the following k-intersecting cut problem:
M ≡ min_ϱ M(ϱ) M(ϱ) = ∑_e ∈ E w(e) ϱ(e)
subject to ∀ k,k' = 0, …ℓ ∀ L ∈𝒫_Γ_k ,Γ_k' : ϱ(L) ∈ |k-k'| + ℤ_≥ 0
and ∀ k = 0, …ℓ ∀ L ∈𝒫_Γ_k,C : ϱ(L) ∈ℓ/2 + ℤ_≥ 0
where ϱ(e) ∈ℤ_≥ 0/2.
We solve this problem recursively in sec:intcut. The result is:
The optimal value of the ℓ-intersecting cut problem is:
M = 1/2∑_k=0^ℓ-1( w(μ(α_k)) + w(μ(β_k)) )
where α_k is a minimal cut for (Γ_0 ∪…Γ_k) : (Γ_k+1∪…Γ_ℓ∪ C) and
β_k is a minimal cut for (Γ_k+1∪…Γ_ℓ):(Γ_0 ∪…Γ_k ∪ C ) and α_k∩β_k=∅.
See fig:Gc-sol for an example. Given the above two Lemmas we now prove Theorem <ref>.
Use the function k(x) in Lemma <ref> to define bulk regions:
γ_k ={ x ∈ V': k(x) ≤ k }
for k = 0, … ,2(n-1)-1 and
such that:
∑_e ={x,y}∈ E' w(e) |k(x) - k(y) | = ∑_k=0^2(n-1)-1∑_ e ∈μ_G'( γ_k) w(e)
where μ_G(·) denotes the edge cut function on some graph G={V,E}, that is μ_G(r) ≡ E_G [r] ∩ E_G [V\ r]
for r ⊂ V.
The region γ_k contains A and shares vertices with α_k constructed from Lemma <ref> on the subgraph G^c.
Similarly V' \γ_k contains B and shares vertices with β_k.
Thus we can combine them so that A_k ≡γ_k ∪α_k forms a cut for A: BC on the original graph G and
B_k ≡ (V' \γ_k) ∪β_k forms a cut for B: AC. These cuts are non-intersecting but share edges in E' that are μ_G(γ_k). In particular:
μ_G(A_k) = μ_G'(γ_k) ∪μ_G^c(α_k)
and similarly for B_k.
We give an explicit proof of Aqproof for completeness. Recall that (AB)' = V' ∩ V^c and E = E' ⊔ E^c. We note that (AB)' ∩α_k = (AB)' ∩γ_k by the boundary conditions and this implies that A_k ∩ V' = γ_k and A_k ∩ V^c = α_k further implying
that E_G'[A_k] = E_G' [γ_k] since all {x,y}∈ E' satisfy {x,y }⊂ V'. Similarly E_G^c[A_k] = E_G^c [α_k].
Also note that V \ A_k = (V' \γ_k) ∪ (V^c \α_k). Putting this together:
μ_G(A_k) = E_G [A_k] ∩ E_G [V\ A_k]
= ( E_G' [A_k] ∩ E_G' [V \ A_k] ) ∪( E_G^c [A_k] ∩ E_G^c [V \ A_k] )
= ( E_G' [γ_k] ∩ E_G' [V' \γ_k] ) ∪( E_G^c [α_k] ∩ E_G^c [ V^c \α_k] )
as required. A similar argument applies to μ_G(B_k).
Thus, we can write the estimate from Lemma <ref> and using wqq as:
I ≥1/2∑_k=0^2(n-1)-1( w(μ_G( A_k)) + w(μ_G( B_k)) )
+ w( μ_G( V'))
≥min_k( (n-1) ( w(μ_G( A_k )) + w(μ_G( B_k))) + w( μ_G( V')) )
The right-hand side can be written as a sum of two triway cuts, with tensions as given in the statement of this theorem. That is:
I ≥min_k( (n-1)/n𝒜( A_k : B_k : (V \ (A_k ∪ B_k))) +1/n𝒜(γ_k : (V' \γ_k) : (V \ V') ) )
≥𝒜(A:B:C)
Equality follows from the collapsing chain mentioned in Theorem <ref> that we have also now established.
The proofs we give for Lemma <ref> and Lemma <ref> are quite lengthy,
so we present these in Appendix <ref> and Appendix <ref> respectively.
§.§ Uniqueness theorem
In this subsection we give a proof for Theorem <ref>, i.e., the solution to the permutation group optimization problem R (Definition <ref>) is the unique solution given by the triway cut if both the triway cut and the entanglement wedge of the graph are unique.
Though this result may seem trivial, it lays the foundation for performing the analytic continuation m→ 1, the main result in sec:cont.
We only consider the case where every connected component of the graph G is path connected to at least one boundary region. The reason is obvious: if such a disconnected region exists then one can always set the permutations on such a region to be any fixed element of S_mn and we have a degeneracy of |S_mn|. In any case, such disconnected parts of the graph can be factored out of the computation of reflected entropy. We can thus deal with these trivially.
Let g:V→ S_mn be an optimal solution to the permutation group program R. If the minimal cut r_AB for AB is unique, then g(v)=𝕀 for all vertices v∈ r_AB^c.
We know from Theorem <ref> that R = 𝒜(AB:C) d(𝕀,X) + Q'.
We also have, from Lemma <ref> that
R(g) ≥𝒜(AB:C) d(𝕀,X) + Q'(q_X(g))
If g is optimal, this inequality must be saturated: otherwise R(g) > 𝒜(AB:C) d(𝕀,X) + Q'(q_X(g)) ≥ R, contradicting the optimality of g.
Consider an optimal solution c:𝒫_AB:C→ℝ_≥ 0 to the dual max-flow problem program:cP.
The saturation condition of eq:Rg-ineq demands that (see the remark before Lemma <ref>)
∑_e={x,y}∈ L_C d(g(x),g(y)) = d(g(v),𝕀)
where L_C is the subpath of L∈𝒫_AB:C such that c(L)>0 and L_C connects C to the vertex v immediately inside r_AB (see fig:proof-partition).
In other words, it states that the elements along the path L_C must be on the group geodesic of 𝕀 to g(v).
Another saturation condition demands that
∑_e={x,y}⊂ r_AB^c( w(e)-∑_L∈𝒫_AB,C c(L)1_L(e) )d(g(x),g(y))=0
where we sum over all the edges outside the entanglement wedge r_AB.
Since both terms in the sum are non-negative, for any e={x,y}⊂ r_AB^c it must be that either
w(e)-∑_L∈𝒫_AB,C c(L)1_L(e)=0, or d(g(x),g(y))=0
Now pick any vertex v∈ r_AB^c. We claim that there must exist a path L'∈𝒫_C,v such that d(g(x),g(y))=0 for all adjacent vertices x,y along the path. If this is true, the triangle inequalities along L' then require d(g(v),𝕀)=0 and thus g(v)=𝕀. This holds true for any v∈ r^c_AB so we are done.
Suppose by contradiction that such a path does not exist. Define the region r_C to be the set of vertices that can be reached by some path with vanishing Cayley distance from C. It is clear that g(x)=𝕀 for all x∈ r_C by a similar argument as above. r_C is a cut for C since any vertex in r_C is path connected to C.
Consider now the cut surface μ(r_C). By definition, for all edges {x,y}∈μ(r_C) we must have d(g(x),g(y))>0, and thus w(e)-∑_L c(L)1_L(e)=0 by eq:either_or. We compute
∑_e∈μ(r_C) w(e) = ∑_L∈𝒫_AB:C c(L) = 𝒜(AB:C)
since all paths starting from C to AB must pass through μ(r_C). Moreover, any such path with c(L)>0 can only intersect μ(r_C) once, since it cannot re-enter r_C once it has left the cut, as this will violate the geodesic condition eq:geodesic. Hence, μ(r_C) is a minimal surface and r_C is a minimal cut. It is different from r_AB^c since v∈ r_AB^c and v∉ r_C. This is a contradiction because we have assumed that the minimal cut is unique.
Let q:V→ P_2n be an optimal solution to the set partition optimization problem Q on graph G with a unique triway cut. Then for any edge e={x,y}∈ E:
d(q(x),q(y)) = (2(n-1)1_μ(r_A:r_B)+n1_μ(r_B:r_C)+n1_μ(r_A:r_C))(e),
where (r_A,r_B,r_C) is the optimal triway cut for (A:B:C).
Since q is optimal we know that the chain in Theorem <ref> collapses, i.e.,
Q(q)=B({b^k(q)},r)=I(ρ,σ),
are all optimal, where r(e)=d(q(x),q(y)), ρ(e) = ⌈1/2d(b(x),b(y)) ⌉ and σ(e)=(r-ρ)(e) for e={x,y}∈ E. Since we have r(e)=(ρ+σ)(e)=d(q(x),q(y)), it suffices to prove eq:dq-sat on ρ+σ for an optimal solution (ρ,σ) of the integer program I.
Now, from Lemma <ref> we know that for any optimal (ρ,σ) one can always construct a new optimal (ρ',σ') with ρ+σ=ρ'+σ' that satisfies the conditions of Lemma <ref>. Now we write
I(ρ,σ) = 1/2∑_k=0^2(n-1)-1(w(μ(A_k))+w(μ(B_k)))+w(μ(V'))
=∑_k=0^2(n-1)-1( (n-1)/(2n)𝒜(A_k:B_k:(V\ A_k\ B_k)) + 1/(2n)𝒜(γ_k:(V'\γ_k):(V\ V')) )
=𝒜(A:B:C) ≡𝒜(r_A:r_B:r_C)
where we have used Lemma <ref> and replaced the bound by equality since (ρ,σ) is optimal. The cut surfaces A_k,B_k and γ_k are defined as in the proof of Theorem <ref>.
Now since 𝒜(A:B:C) is the unique optimal triway cut, both terms in the sum of eq:triway-sum must be equal to 𝒜(r_A:r_B:r_C) for all k. Hence we have
A_k = γ_k = r_A, B_k = (V'\γ_k) = r_B, (V\ A_k\ B_k) = (V\ V') = r_C
which implies that within V',
ρ'(e)=0, σ'(e) = 2(n-1)1_μ(r_A:r_B)(e)
and outside V',
(ρ'+σ')(e) = 1_μ(V')(e) + (n-1)(1_μ(α_k) + 1_μ(β_k))(e) = n(1_μ(r_A:r_C) +1_μ(r_B:r_C))(e),
since any other configuration cannot reproduce an optimal value of M.
Combining eq:rhosigma1 and eq:rhosigma2 together, we have proven the Lemma.
Let q:V→ P_2n be an optimal solution to the set partition optimization problem Q on graph G with a unique triway cut. Then for any vertex v∈ V:
q(v) =
q_A, v ∈ r_A
q_B, v ∈ r_B
𝕀, v ∈ r_C
where (r_A,r_B,r_C) is the optimal triway cut for (A:B:C).
Since (r_A,r_B,r_C) is an optimal triway cut, if v∈ r_A then it is connected to A by some path L in r_A. The reason is as follows: Suppose that there exists a region r⊂ r_A not connected to A. Then r must be connected to at least one of r_B or r_C, otherwise r would be a totally disconnected region. Without loss of generality, suppose it is r_B. We can then construct a new triway cut (r_A\ r,r_B∪ r, r_C) whose weight is smaller than the original. This is a contradiction since we assumed (r_A,r_B,r_C) is optimal.
From Lemma <ref> we know that ∑_e={x,y}∈ L d(q(x),q(y))=0 along the path L. Then by repeated use of the triangle inequality we can show that d(q(v),q_A)=0 and thus q(v)=q_A. Similar arguments also apply to subregion r_B and r_C.
We are now ready to complete the proof for our main result of this subsection. To begin with, we restate Theorem <ref> in terms of the notations used here:
Let g:V→ S_mn be an optimal solution to the reflected entropy permutation group optimization problem R on graph G with a unique triway cut (r_A,r_B,r_C) and a unique entanglement wedge r_AB. Then for any vertex v∈ V:
g(v) =
g_A, v ∈ r_A
g_B, v ∈ r_B
X, v ∈ r_AB∖ r_A ∖ r_B ≡ r_X
𝕀, v∈ r^c_AB
There are four regions that we need to take care of. The claim for the region r^c_AB follows immediately from Lemma <ref>.
For the other regions: Note that since g is optimal for R, q_X(g) must be optimal for Q, therefore q_X(g) must coincide with eq:q(v)-sat. Moreover, the saturation conditions require d(g(x),g(y))=d(P(g(x)),P(g(y)))=d(q(x),q(y)) for all e={x,y} completely within r_AB (see the remark before Lemma <ref>).
Let's first consider r_A. There must be a path L completely within r_A such that d(g(x),g(y))=d(q(x),q(y))=0 along adjacent vertices {x,y}∈ L (cf. the proof of Corollary <ref>). Applying the triangle inequality along the path we see d(g(v),g_A)=0 which implies g(v)=g_A. A similar argument holds for v∈ r_B.
For the remaining region r_X, we split the contribution of R(g) as
R(g) = ∑_e∈ E, e⊂ r_AB w(e) d(g(x),g(y)) + ∑_v ∈∂ r_AB w(e) d(g(v),𝕀)
where ∂ r_AB≡ V[μ(r_AB)]∩ r_AB are the vertices on the cut surface μ(r_AB) and within r_AB. We have used the fact that g(v)=𝕀 for v∈ r_AB^c.
Since q_X(g(v))=𝕀 for v∈ r_AB we have #(g∨ X)=#(g), and P(g(v)) lies on the geodesic between set partitions P(X) and P(𝕀) on P_mn:
d(P(X),P(g(v)))+d(P(g(v)),P(𝕀)) = #(X)-2#(X∨ g(v))+#(𝕀) = d(P(X),P(𝕀))
We claim that this geodesic condition naturally extends to Cayley distances on S_mn.
The reasoning is as follows: The region r_X must be path connected to either A or B, since if it is only connected to C then we know such region is subset of r_AB^c and thus must be empty. Now consider a connected component r_i of this region. Without loss of generality we may set g=g_i for all v∈ r_i. Consider an edge {x,y} on the common cut surface of r_A (or r_B respectively) and r_i such that g(x)=g_i and g(y)=g_A/B. We know that since g is optimal and x,y∈ r_AB,
d(g_i,g_A/B) = d(P(g_i),P(g_A/B))+ 2G_g_A/B(g_i) = d(q_X(g_i),q_A/B),
which implies that the genus G_g_A/B(g_i)=0. But since P(g_A/B)≥ P(X) we have G_g_A/B(g_i)≥ G_X(g_i) [This fact follows from the construction of admissible surfaces: Given an admissible surface of g_i based on g_A (or g_B) one can “pinch” the cycles of g_A (resp. g_B) to form cycles of X without destroying any connections. See Appendix A. of Ref. <cit.> for details.] and thus G_X(g_i)=0. Also G_g_i(𝕀)=0 trivially. Thus d(P(g_i),P(𝕀))=d(g_i,𝕀) and d(P(g_i),P(X))=d(g_i,X) and we have
R(g) = ∑_e∈ E, e⊂ r_AB w(e) d(g(x),g(y)) + ∑_v ∈∂ r_AB w(e) (d(X,𝕀)-d(X,g(v)))
= ∑_e∈ E, e⊂ r_AB w(e) d(g(x),g(y)) + 𝒜(AB:C)d(X,𝕀) - ∑_v ∈∂ r_AB w(e) d(X,g(v))
Now since g is optimal we know R(g)=Q(q_X(g))+𝒜(AB:C)d(X,𝕀). Using d(g(x),g(y))=d(q(x),q(y)) within r_AB we obtain
∑_v ∈∂ r_AB w(e) d(X,g(v)) = 0
and thus g(v)=X for v∈ r_AB on the cut surface μ(r_AB). Since Cayley distance vanishes within r_X we must have g(v)=X within the whole region. This completes our proof.
In general, when using the permutation optimization to determine the moments of the reflected density matrix, one must sum over all the configurations that minimize the free energy functional, i.e. statmech.
Physically speaking, what we have proven here is that such a minimum is unique as long as one is sufficiently far away from the phase transitions, which are signalled by a degenerate minimal surface. In fact, there are two different kinds of phase transitions in play here: one being the entanglement wedge phase transition, in which the reflected entropy suffers a discontinuous jump; and the other being the EW cross-section phase transition, in which there are two cross-sectional candidates with the same area. In the latter case, the change of reflected entropy across the transition is still continuous. Our result here only applies when the system is far away from both kinds of transitions.
However, based on various pieces of evidence, we conjecture that S^(n)_R still converges as stated in Theorem <ref> for the second kind of transition. In this case the multiplicity factor is no longer unity but some integer d^(n)_m>1. We conjecture that d^(n)_m is independent of m in this scenario. Therefore, the techniques we will be using for analytic continuation still carry over. The multiplicity factor will introduce a ln d^(n)∼ O(χ^0) correction to the entropy, which goes away as one takes χ→∞.
Our uniqueness assumption on the triway cut problem would then be unnecessary in Theorem <ref>.
§ CONTINUATION IN M
To finish the proof of Theorem <ref>, we need to “analytically continue” away from integer m/2.
Since the answer we have for R(g) only depends trivially on m through normalization (it is independent of m upon correctly normalizing the reflected density matrix),
it seems like this task would be simple. For example, one might expect a simple application of Carlson's theorem. Unfortunately applying Carlson's theorem
after taking the limit χ→∞ turns out to be rather difficult – a fact that is often not discussed in the AdS/CFT literature. The basic issue is that
one does not know if the natural analytic continuation of Tr ( σ_AA^⋆^(m))^n in m remains an analytic function upon
taking the limit χ→∞. Compounding this difficulty is the need to divide the function one wishes to continue by the expected answer - this division is necessary for convergence, but will often introduce non-analytic dependence on m. Indeed it is well known that partition functions do not remain analytic in the thermodynamic limit due to phase transitions and the condensation of zeros.
That said, such phase transitions in m are not present here since the expected answer has a simple analytic dependence on m.
Even so, it is not obvious how to establish pointwise convergence of the sequence as χ→∞ and this is necessary in order to establish analyticity of the limit.
Thus we follow a different approach here – that of the method of moments. One immediate difficulty is that the quantity of interest Tr ( σ_AA^⋆^(m))^n is not obviously a moment for m (it is a moment for n, but here we are fixing n to be an integer.) However we can write it as a moment for some operator spectral measure <cit.>:
Tr_AA^⋆ ( σ_AA^⋆^(m))^n = < 1_AB^⊗ n| Σ_A^†ϱ^m/2⊗ϱ^m/2Σ_A | 1_AB^⊗ n>
where ϱ = ρ_AB^⊗ n and Σ_A is the usual n-twist operator.
Consider the operator:
𝒪 = ϱ⊗ϱ
and the associated spectral measure E_λ from which we define a new measure μ_Ψ(λ) = < Ψ| E_λ| Ψ> with | Ψ> = Σ_A | 1_AB^⊗ n>. More specifically we can write:
𝒪 = ∑_i λ_i | v_i > < v_i |
and the measure for finite χ will have a discrete decomposition:
dμ_Ψ(λ) = ∑_i |< Ψ| v_i >|^2 δ( λ - λ_i) dλ
We recover the quantity of interest:
Tr_AA^⋆ ( σ_AA^⋆^(m))^n = ∫_0^∞ d μ_Ψ(λ) λ^m/2
We have computed the moments for m/2 integer away from the phase transition, schematically:
∫_0^∞ d μ_Ψ(λ) λ^m/2 ⟶ χ^ - 2(n-1) 𝒜(A:B:C) + n(m-2) 𝒜(AB:C) as χ→∞
We have not computed the m=0 moment. This makes the moment problem somewhat more involved since we have less
control over the limit of the measure μ_Ψ near λ =0.
Instead, we will work with a weak form of measure concentration for general random tensor networks that works as long as the entanglement wedge is appropriately unique:
Given a random tensor network with boundary ∂ = AB ⊔ C then the entanglement wedge r_AB is called unique if for any other AB:C cut r ≠ r_AB:
𝒜(AB:C) ≤𝒜(r : r^c) - g
for some non-zero gap g > 0.
Consider a random tensor network state with boundary ∂ = AB⊔ C and with a unique entanglement wedge
for AB, then:
‖ρ_AB - π_ρ_AB /χ^𝒜(AB:C)‖_1 = 𝒪(χ^-g/2)
where π_ρ_AB is the projection operator onto the support of ρ_AB.
The result presented here is a much weaker bound than the standard measure concentration where the probability of finding a minimum non-zero eigenvalue away from the peak is exponentially small in χ^#.
Such measure concentration results were proven for a single random tensor [See e.g. <cit.>, Theorem 5.6.] – multiple random tensors seem much harder to work with, hence we have not succeeded in deriving the much stronger result.
This weaker result is however sufficient for our purposes.
We start with a computation of the projected trace distance on the subspace π_ρ_ABℋ_AB
where Tr π_ρ_AB≤χ^𝒜(AB) for any tensor network state.
Then from Hölder's inequality we have:
‖ρ_AB - π_ρ_AB/χ^𝒜(AB)‖_1^2 ≤χ^𝒜(AB)‖ρ_AB - π_ρ_AB/χ^𝒜(AB)‖_2^2
= χ^𝒜(AB) Tr (ρ_AB - π_ρ_AB/χ^𝒜(AB))^2
= χ^𝒜(AB)( Trρ_AB^2 - 2 /χ^𝒜(AB) + Trπ_ρ_AB/χ^2𝒜(AB))
≤χ^𝒜(AB) Trρ_AB^2 - 1
Averaging gives the required estimate:
χ^𝒜(AB) Trρ_AB^2 - 1
= 𝒪(χ^-g)
by explicit computation of the replica statistical mechanics model, and application of the gap condition in Definition <ref>.
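For intuition, the content of this Lemma is easy to test numerically in the simplest setting mentioned in the remark above, a single Haar-random tensor: the reduced state of a random pure state on H_AB⊗ H_C concentrates on the maximally mixed state on its support. The following sketch (the dimensions are illustrative choices) compares the trace distance with the Hölder-type bound used in the proof.

```python
import numpy as np

rng = np.random.default_rng(0)
d_ab, d_c = 8, 1024                        # d_ab plays the role of chi^{A(AB:C)}
psi = rng.normal(size=(d_ab, d_c)) + 1j * rng.normal(size=(d_ab, d_c))
psi /= np.linalg.norm(psi)                 # Haar-random pure state on H_AB (x) H_C

rho_ab = psi @ psi.conj().T                # partial trace over C
delta = rho_ab - np.eye(d_ab) / d_ab       # here pi_{rho_AB} is the identity on H_AB
trace_distance = np.abs(np.linalg.eigvalsh(delta)).sum()

# Hoelder-type bound from the proof: ||rho - pi/chi^A||_1^2 <= chi^A Tr rho^2 - 1
bound = np.sqrt(d_ab * np.trace(rho_ab @ rho_ab).real - 1.0)
print(trace_distance, bound)               # both of order sqrt(d_ab/d_c) ~ 0.09
```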
To proceed, we define a specific form of convergence in probabilities as follows. We say that:
α_χ→^ Pr c
if for all ϵ>0:
lim_χ→∞ Pr( |α_χ-c | ≥ϵ) = 0
where α_χ is a sequence of real-valued random variables (on different probability spaces equipped with the Haar measure with dimension determined by χ)
and c is simply a constant.
We now state our main result of this section:
For integer n:
S_R^(n)(A:B) - lnχ( 1/(n-1)𝒜_𝐭(A:B:C) - n/(n-1)𝒜(AB:C) ) →^ Pr 0
as χ→∞.
The proof we give for Lemma <ref> is rather lengthy and technical so we present it in Appendix <ref>.
We now use the above result to prove Theorem <ref>.
The only thing remaining is to move from convergence in probability to convergence in mean after dividing by lnχ.
Certainly:
S_R^(n)(A:B)/lnχ - ( 1/(n-1)𝒜_𝐭(A:B:C) - n/(n-1)𝒜(AB:C) ) →^ Pr 0
as χ→∞, follows from Lemma <ref>. We now show that S_R^(n)(A:B)/lnχ is a uniformly bounded random variable.
We use monotonicity, established in Ref. <cit.> for integer n ≥ 2:
S_R^(n)(A:B) = S_n(AA^⋆)_ρ_AB^1/2≤ S_n(AA^⋆)_ρ_ABC^1/2 = S_R^(n)(A:BC) = 2 S_n(A)
Using the Swingle bound gives:
S_R^(n)(A:B) /lnχ≤ 2 𝒜(A:BC)
Both of these bounds
apply to all instances of the ensemble. Convergence in probability for a bounded random variable implies convergence in mean, and so we are done.
§ DISCUSSION
In conclusion, we have rigorously demonstrated that the (m,n)-Rényi reflected entropies in random tensor networks are computed by saddles involving triway cuts as shown in fig:network-multi-cut for arbitrary m and integer n. Moreover, there is a natural analytic continuation of the triway cut problem for non-integer n which leads to the holographic proposal relating to the entanglement wedge cross section. We now comment on various aspects of our work.
§.§ Bit Threads
As discussed earlier, bit threads provide a vivid picture of the entanglement structure of holographic states. Based on the structure of bit thread configurations, Ref. <cit.> conjectured that the three party entanglement structure of holographic systems is dominated by bipartite entanglement. However, a version of this conjecture is in conflict with the S_R=2EW proposal <cit.>, for which we have found further evidence in this paper. While this is conclusive, we would nevertheless like to discuss a way to see how one can get as close to bit threads as possible.
As reviewed in sub:minmax, the RT formula can be recast as a max flow problem by convex duality. Moreover, the natural form of the min-cut problem from the RTN perspective is an integer program where the optimization is over domain walls with integer energy costs, representing the different permutations that contribute to the free energy minimization problem. For the entanglement entropy, the problem can be relaxed to a linear program over the real numbers since there isn't an integrality gap for this problem. Once relaxed, convex duality naturally leads to a max-flow problem, i.e., bit threads.
In the context of reflected entropy however, we showed that the triway cut problem is equivalent to an integer program with a non-trivial integrality gap. While the Rényi reflected entropies are computed by triway cuts, which are related to the entanglement wedge cross section, the non-integer program allows relaxation to surfaces that are related to the original RT surfaces as shown in fig:relax (e.g. for n=2). For RTNs, this is identical to the mutual information, as found from assuming the mostly bipartite conjecture. Thus, we can think of the relaxation of the integer program as the “incorrect” step that led to bit threads, as well as the mostly bipartite conjecture.
It has been conjectured that one can amend the mostly bipartite story from bit threads by considering a generalized “hyperthread” optimization program <cit.> (see also <cit.>). Hyperthreads are threads connecting between multiple (k≥ 3) boundary regions and it is conjectured that the optimal value of a k-thread program may be a measure of k-partite entanglement. In the case of 3-threads, the optimal configuration saturates a minimal triway cut. In this sense, our results serve as a firsthand bridge that connects the 3-thread problem to a concrete quantum information measure. We think it should be possible to derive the hyperthread optimization as a dual problem of the various integer programs appearing in this paper – indeed the triway cut does not admit a bit thread dual but it may still be able to be “dual” to a more exotic program such as the hyperthreads. On the other hand, reflected entropy can be naturally generalized to accommodate k-party systems <cit.>. It would be interesting to see if the formalism we developed in this paper extends to such case and if there is possible connection to the k-thread programs. We leave these investigations to future works.
§.§ More General Tensor Networks
There are various aspects of RTNs that make them good models of holography such as their relation to fixed-area states <cit.>. However, there are other aspects that are missing such as the lack of mutual backreaction between domain walls, as well as the commutativity of area operators <cit.>. Thus, it is interesting to analyze the extent to which variants of random tensor networks can model holography.
For instance, the RT formula can be reproduced by choosing tensors that are 2-designs where the average is the same as the Haar average up to the second moment <cit.>. Random stabilizer tensor networks are an example which form at most a projective 3-design <cit.>. However, they satisfy the bipartite dominance conjecture <cit.> and thus do not accurately model the reflected entropy for holographic states. Our paper certainly makes use of larger moments, so the discrepancy is no surprise. In fact, our results suggest that the integrality gap would become visible to a projective 4-design, at least when computing the (2,2) Rényi reflected entropy (involving canonical purifications of the density matrix ρ_AB^2/ Trρ_AB^2). The implications for the Rényi reflected entropy and the Markov gap are less clear, and we leave it as an important open question to understand at what level of k-design the Markov gap becomes large, of order lnχ.
Another possible generalization of the RTN that can be considered is to use non-maximally entangled edge states <cit.>. In this case, the calculation for the (m,n)-Rényi reflected entropy is similar except the energy costs on the domain wall become functions of the entanglement spectrum of the edge states. In this case, it is harder to prove anything about the analytic continuation, but we expect similar saddles to play an important role.
Even more ambitious would be to use tensor networks with edge degrees of freedom, as in <cit.>, with a possible payoff of more realistic backreaction.
Another generalization we can consider is that of hypergraph RTNs. Hypergraphs are a natural generalization of graphs where edges connecting 2 vertices are generalized to hyperedges potentially connecting more than 2 vertices. States that satisfy the RT formula on hypergraphs were discussed in Refs. <cit.>. Such states have a natural construction in terms of the RTN where hyperedges are formed by projecting onto GHZ states coupling multiple vertex tensors <cit.>. One could then ask what the reflected entropy for such states is. Although we do not have a proof analogous to the one for usual RTNs, assuming that triway cut-like configurations dominate, we can compute the Rényi reflected entropy. The important ingredient is the generalization of the free energy optimization problem. The free energy cost of an edge in the RTN for vertex permutations g_1 and g_2 is weighted by #(g_2 g_1^-1). It is easy to work out the generalization for hyperedges. For instance, for a 3-edge with vertex permutations g_1, g_2 and g_3, the free energy cost is #(g_2g_1^-1∨ g_3g_1^-1), which measures the number of orbits of the relevant elements.
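For concreteness, the orbit count #(·∨·) entering these hyperedge costs can be computed with a simple union-find over the permutations' actions. The snippet below is only an illustration of this counting, with permutations represented as lists mapping i to g(i); the specific permutations are toy choices.

```python
def num_orbits(perms, n):
    # #(g_1 v g_2 v ...): orbits of the group generated by the given permutations
    # of range(n); each permutation is a list with perm[i] = image of i.
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for g in perms:
        for i in range(n):
            ri, rj = find(i), find(g[i])
            if ri != rj:
                parent[ri] = rj
    return len({find(i) for i in range(n)})

def compose(f, g):                      # (f o g)(i) = f[g[i]]
    return [f[g[i]] for i in range(len(g))]

def inverse(g):
    inv = [0] * len(g)
    for i, j in enumerate(g):
        inv[j] = i
    return inv

g1 = [0, 1, 2, 3]                       # identity
g2 = [1, 0, 2, 3]                       # transposition (01)
g3 = [0, 1, 3, 2]                       # transposition (23)
print(num_orbits([compose(g2, inverse(g1))], 4))                            # 2-edge cost: 3 orbits
print(num_orbits([compose(g2, inverse(g1)), compose(g3, inverse(g1))], 4))  # 3-edge cost: 2 orbits
```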
§.§ Holographic States
Having obtained rigorous results for RTNs, it is natural to guess that similar saddles, which are different from the naive saddles written down in Ref. <cit.>, will also play an important role in holography, resolving the issues with analytic continuation found in Ref. <cit.>.
In particular, this would imply that geometric saddles involving backreacted versions of triway cuts would compute the (m,n)-Rényi reflected entropy in holographic states.
It is useful to contrast this with the case of the multi-entropy, a symmetric multipartite entanglement measure defined by Ref. <cit.>.
In particular, for the case of three parties, the bulk dual was proposed to be a triway cut.
However, it was argued by Ref. <cit.> that despite the multi-entropy being computed by a triway cut in RTNs, the corresponding configurations were not gravitational saddles in AdS/CFT.
The basic reason is that in a smooth gravitational saddle, the local neighborhood of any point is a Euclidean ball.
This turns out to be true for the triway cut configuration at n=2.
However, for the triway cut configurations at n>2, the topology is such that the junction of the triway cut is not smooth.
A similar argument also holds for the entanglement negativity and the odd entropy <cit.>.
Given this, one may worry that while triway cuts are relevant for RTNs, they may not compute the (m,n)-Rényi reflected entropy in holographic states.
However, one can check that for the (m,n)-Rényi reflected entropy, the junction of the triway cut is consistent with the topology of a Euclidean ball, thus avoiding the above failure mode.
Thus, we still expect the topologies of the RTN saddles leading to triway cuts to continue to be relevant for holographic states.
A particular case where this is known to be true is that of the (2,n)-Rényi reflected entropy computed in Ref. <cit.>.
§.§ Effective Description
In Ref. <cit.>, we discussed an effective description of the canonical purification as a superposition over states with different values of area for the squeezed cross sections. E.g., for hyperbolic RTNs we have
[graphic omitted: the canonical purification written as a superposition over states with different squeezed cross-section areas for hyperbolic RTNs.]
It is natural to expect a generalization of this result to general RTNs where the squeezed cross sections are solutions to the triway cut problem at different values of n.
While this effective description was proposed in the context of RTNs, where there is no backreaction and the superposition is simply over different sets of vertices in the graph, it is natural to ask if it can be understood in holographic systems as well. For the special case of two intervals in the vacuum state at m=2, it was found in Refs. <cit.> that the Rényi reflected entropies for arbitrary n can be computed by the torus partition function. This makes analytic continuation easy and is related to the fact that for m=2, there is no independent X element. The calculation of Rényi reflected entropy can then similarly be decomposed into fixed-area sectors of the entanglement wedge cross section, which correspond to different horizon areas in the BTZ saddle. It is then easy to see that the effective description with different squeezed cross sections is represented by these fixed-area states which induce varying conical defects at the horizon, which translate into the angle subtended at the triway cut at different values of n (see fig:torus_slice).
TF would like to thank Alejandro Dominguez-Garcia for discussions. SL would like to thank Shiliang Gao, Jingwei Xu, and Kiran Kumar A.S. for useful discussions. PR would like to thank Abhijit Gadde for discussions.
PR is supported in part by a grant from the Simons Foundation, by funds from UCSB, the Berkeley Center for Theoretical Physics; by the Department of Energy, Office of Science, Office of High Energy Physics under QuantISED Award DE-SC0019380, under contract DE-AC02-05CH11231 and by the National Science Foundation under Award Number 2112880.
This material is based upon work supported by the Air Force Office of Scientific Research under award number FA9550-19-1-0360.
§ SPECIFIC GROUP AND PERMUTATION ELEMENTS
In this appendix we give a description of the important permutation and partition elements that appear in our proof of the main theorem.
We first describe the permutation group elements g_A and g_B∈ S_mn.
Denote a specific replica by (k,ℓ) where k=1… n and ℓ=1… m; τ_m^[k] as the m-cyclic permutation that shifts (k,ℓ)→(k,ℓ+1) and likewise τ_n^[ℓ] the n-cyclic permutation that shifts (k,ℓ)→(k+1,ℓ).
We write
g_B = ∏_k=1^nτ_m^[k], g_A = γ^-1 g_B γ
where γ=∏_ℓ>m/2τ^[ℓ]_n shifts the lower (ℓ>m/2) elements cyclically in the n-direction.
A graphical representation of these elements are shown in fig:gAgB.
There is a unique element X∈ S_mn that lies on the joint geodesic of 𝕀→ g_A and 𝕀→ g_B while being farthest away from 𝕀. Its form is given by
X = ∏_k=1^n (τ_m/2^U τ_m/2^L)_k
where τ^U_m/2 and τ^L_m/2 define permutations in S_m by cyclically permuting the upper m/2 and the lower m/2 elements, respectively. See fig:X.
The Cayley distances between these elements and the identity are
d(g_A,g_B) = 2(n-1), d(𝕀,g_A/B)=n(m-1), d(𝕀,X)=n(m-2)
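These distances are straightforward to verify numerically with the standard Cayley distance d(g,h) = mn - #(g h^{-1}), where # counts cycles. The sketch below builds g_A, g_B and X as dictionaries on the replica labels (k,ℓ), for the illustrative choice m=4, n=3, and checks the three distances above.

```python
from itertools import product

def compose(f, g):                 # (f o g)(x) = f(g(x)); permutations as dicts
    return {x: f[g[x]] for x in g}

def inverse(f):
    return {v: k for k, v in f.items()}

def num_cycles(f):
    seen, count = set(), 0
    for x in f:
        if x in seen:
            continue
        count += 1
        while x not in seen:
            seen.add(x)
            x = f[x]
    return count

def cayley(f, g):                  # d(f, g) = mn - #(f g^{-1}) on S_{mn}
    h = compose(f, inverse(g))
    return len(h) - num_cycles(h)

m, n = 4, 3
V = list(product(range(1, n + 1), range(1, m + 1)))       # replica labels (k, l)
ident = {x: x for x in V}
g_B = {(k, l): (k, l % m + 1) for (k, l) in V}             # prod_k tau_m^[k]
gamma = {(k, l): (k % n + 1, l) if l > m // 2 else (k, l) for (k, l) in V}
g_A = compose(inverse(gamma), compose(g_B, gamma))         # gamma^{-1} g_B gamma
half = m // 2
X = {(k, l): (k, (l % half) + 1 if l <= half else half + ((l - half) % half) + 1)
     for (k, l) in V}                                      # prod_k tau^U tau^L
print(cayley(g_A, g_B), 2 * (n - 1))       # 4 4
print(cayley(ident, g_A), n * (m - 1))     # 9 9
print(cayley(ident, X), n * (m - 2))       # 6 6
```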
For the sake of completeness we also include the coarse grained version of the permutation group elements q_A≡ q_X(g_A) and q_B ≡ q_X(g_B) here:
[graphic omitted: diagrammatic representation of the set partitions q_A and q_B.]
where we have represented each dot as an element in ℤ_2n, each corresponding to a cycle in the blocking permutation X, with position arranged in a similar fashion as in fig:X. We connect two dots by a line if they belong to the same subset in the set partition.
Note that q_X(X)=q_X(𝕀)={{1},…,{2n}}, the finest element in P_2n which is also denoted by 𝕀∈ P_2n.
The distances between these set partitions are
d(q_A,q_B) = 2(n-1), d(𝕀,q_A)=d(𝕀,q_B)=n
§ PROOFS
§.§ Lemma <ref>: Structure of optimal ρ,σ
For quick reference we sketch the integer program, Definition <ref>, we are trying to solve:
[graphic omitted: sketch of the integer program nlip and its path constraints.]
plus the additional even condition on paths A → B (see the original definition.)
Let us introduce some notation. For a path L, not necessarily edge disjoint, define:
f(L) = ∑_e ∈ L f(e)
for some function of the edges f: E→ℝ. This is slightly different (but consistent with) the notation introduced previously, because of the possibility that
paths L intersect an edge more than once. We sum each edge in the sequence defining L, which then includes such multiplicities.
We say e is ρ-binding if either ρ(e) =0 or, when ρ(e) >0, there exists a path L ∈𝒫_AB,C with e ∈ L such that
ρ(L) = n.
For the constraint eveness involving the σ variable we need a refined notion:
An edge e is called σ-binding within E' for some E' ⊂ E if either σ(e) =0 or, when σ(e) > 0, there exists a path L ∈𝒫_A,B with e ∈ L and L ⊂ E' for which σ(L) + ρ(L) = 2 (n - δ_ρ(L),0). We say that e is simply σ-binding for the case E' = E.
Given some function ρ on the edges E, we define the graph distance induced by ρ as:
d_ρ(x,y) = min_L ∈𝒫_x,yρ(L)
and similarly for the distances between subsets of V. We define the distance to be ∞ should there be no such path L in the minimization.
Given some ρ, define the region:
(AB)_0 = { x∈ V : d_ρ(x, AB) = 0}
and similarly define
E_0 = {{x,y}∈ E : x∈ (AB)_0 and y∈ (AB)_0}
to be the edges that lie entirely within (AB)_0
[Note that E_0 is not the same as E[(AB)_0], which is the set of edges that have some vertex in (AB)_0.].
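On a concrete graph, the induced distance d_ρ and the region (AB)_0 can be computed with a standard Dijkstra pass over the edge weights ρ(e). The sketch below uses hypothetical adjacency and weight containers and is only meant to make the definitions above explicit.

```python
import heapq

def d_rho(adj, rho, sources):
    """Graph distance induced by rho (Dijkstra, valid since rho >= 0).
    adj: vertex -> iterable of neighbours; rho: frozenset({x, y}) -> weight;
    sources: set of starting vertices (e.g. the boundary vertices of A and B)."""
    dist = {v: float("inf") for v in adj}
    heap = [(0.0, s) for s in sources]
    for s in sources:
        dist[s] = 0.0
    heapq.heapify(heap)
    while heap:
        d, x = heapq.heappop(heap)
        if d > dist[x]:
            continue
        for y in adj[x]:
            nd = d + rho[frozenset((x, y))]
            if nd < dist[y]:
                dist[y] = nd
                heapq.heappush(heap, (nd, y))
    return dist

def zero_wedge(adj, rho, ab_vertices):
    # (AB)_0: vertices reachable from AB along edges with vanishing rho,
    # i.e. vertices x with d_rho(x, AB) = 0.
    dist = d_rho(adj, rho, ab_vertices)
    return {v for v, d in dist.items() if d == 0.0}
```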
We will construct an optimal pair (ρ,σ) with some nice properties and such that V' = (AB)_0 will be our choice for V' in Lemma <ref>.
We note, however, that at this point there is no obvious reason for ρ(e) =0 for all edges within (AB)_0.
Given some feasible (ρ,σ), consider an edge e ∈ E_0,
then we have the estimate:
∀ L ∈𝒫_AB,C with e ∈ L : ρ(L) - ρ(e) ≥ n
For all paths L ∈𝒫_AB,C passing through e= { x, y} in the order AB → x → y → C
we note that ρ(L) ≥ρ(L_y,C)+ ρ(e) where L_y,C∈𝒫_y,C, after dropping the AB→ x portion of the path. But since y ∈ (AB)_0 there is some path L^⋆_AB,y from AB to y
with ρ(L^⋆_AB,y) = 0. We can combine the two paths, denoted as L^⋆_AB,y∪ L_y,C∈𝒫_AB,C,
and use this to estimate:
ρ(L) ≥ρ(L_y,C)+ ρ(e)= ρ(L^⋆_AB,y∪ L_y,C ) + ρ(e)
≥ n + ρ(e)
by feasibility of ρ.
For a feasible (ρ,σ)
an edge e ∈ E_0 is ρ-binding iff ρ(e) = 0.
The if statement is obvious from the definition of ρ-binding.
Now for the only if statement: Assume the edge e is ρ-binding.
Then either ρ(e)=0 or ∃ L∈𝒫_AB,C containing e such that ρ(L)=n.
If ρ(e)=0 then we are done.
Assuming that it is the other case, we use Pe to obtain
n - ρ(e) ≥ n ⟹ρ(e) = 0
which is a contradiction. So we must have ρ(e)=0.
This motivates introducing:
Ê_0 = { e ∈ E_0 : ρ(e) = 0 }
as the ρ-binding edges in E_0. Note that Ê_0 ⊂ E_0.
One might have expected that for an optimal solution there are simply no edges that are not ρ-binding, otherwise one could
get a smaller free energy by making ρ smaller. One cannot directly do this because one has to consider the other constraint for paths 𝒫_A,B that also involves ρ. We will eventually show this is possible. That is we will prove that E_0 = Ê_0, but we cannot do this before we prove some results on the behavior of σ, our next goal.
Define the two σ distance measures for paths within (AB)_0:
d_σ^0(x,y) = min_L ∈𝒫_x,y : L ⊂ E_0σ(L), d̂_σ^0(x,y) = min_L ∈𝒫_x,y : L ⊂Ê_0σ(L)
which can be thought of as distance measures on truncated graphs. We continue to define the distance to be ∞ should the set of such paths above be empty.
The next lemma pertains to the region (AB)_0 and edges E_0 and Ê_0:
Given any optimal solution (ρ',σ'), there exists an optimal solution (ρ,σ) such that ρ+σ=ρ'+σ' and it satisfies the following properties that we prove sequentially:
(a) For all e ∈Ê_0 then e is σ-binding within Ê_0.
(b)
There exists a function k : (AB)_0 →ℤ with 0 ≤ k(x) ≤ 2(n-1) such that
σ(e) = |k(x) - k(y)| for all e = {x,y}∈ E_0 and where:
k(x) = d̂_σ^0(x,A) , d̂_σ^0(x,A) < ∞
2(n-1) , d̂_σ^0(x,A) = ∞
and
k(x) = 2(n-1) - d̂_σ^0(x,B) , d̂_σ^0(x,B) < ∞
0 , d̂_σ^0(x,B) = ∞
If 𝒫_A,B is non-empty, then for all e ∈ E_0\Ê_0 we have ρ(e) ∈ 2+ 2ℤ_≥ 0.
(c) All edges e ∈ E_0 are ρ-binding. In other words E_0 = Ê_0.
This implies that d̂_σ^0 can be replaced by d_σ^0 in tokk and tokk2.
(a) Consider some optimal (ρ',σ'). We use the notation (AB)'_0 and E_0' for the region as in ab0, defined with respect to d_ρ', and Ê_0'⊂ E_0' as the set of edges inside (AB)'_0 with ρ'(e) =0.
Define E_ fail as the edges e ∈Ê_0' which are not σ'-binding
within Ê_0'.
Then set
(ρ(e), σ(e) ) = ( ρ'(e) + σ'(e) , 0 ) , e ∈ E_ fail
(ρ'(e),σ'(e)) , otherwise
Note that (AB)_0 ⊂ (AB)'_0
since ρ(e) ≥ρ'(e). Also E_0 ⊂ E_0' and Ê_0 ⊂Ê_0' for the same reason. And Ê_0 ∩ E_ fail = ∅ since ρ(e)=(ρ'+σ')(e)>0 ∀ e∈ E_ fail.
We aim to show that (ρ,σ)
is (i) feasible, (ii) optimal and (iii) satisfies the statement under investigation in (a). We start with (iii): Consider any edge e ∈Ê_0 with σ(e) ≠ 0. Note that σ(e) = σ'(e) since only edges in E_ fail are changed.
Since this edge is σ'-binding within Ê_0', there is a path L ∈𝒫_A,B intersecting e such that L ⊂Ê_0' and σ'(L) = 2(n-1).
We just need to show that L ⊂Ê_0 so that e is σ-binding inside Ê_0.
Note that L ∩ E_ fail = ∅ since this would otherwise contradict the definition of E_ fail (L being a saturating path).
Thus ρ(L) = ρ'(L) since these edges are not changed, and ρ'(L) =0 since L ⊂Ê_0'.
This implies that ρ(e) =0 for all e ∈ L implying that L ⊂Ê_0, and we are done. For later use, we note that we just proved:
∀ L ∈𝒫_A,B with L ⊂Ê_0' : L ∩ E_ fail = ∅⟹ρ(L) = 0
(i) Since ρ≥ρ' any L ∈𝒫_AB,C is clearly still feasible
for ρ. Also since (σ' + ρ')(e)
= (σ + ρ)(e) for all edges, we need only check paths L ∈𝒫_A,B such that δ_ρ'(L),0=1 and δ_ρ(L),0 = 0.
The condition ρ'(L) = 0 implies that L ⊂Ê_0', but then the converse of toconverse implies that ρ(L) ≠ 0 ⟹ L ∩ E_ fail≠∅, and thus σ'(L) > 2(n-1) by the failure of binding.
Recall that the failure of saturating for the 𝒫_A,B paths costs +2 in the definition of the integer program, so actually σ'(L) ≥ 2n. Thus (σ + ρ)(L) = σ'(L) ≥ 2n = 2(n- δ_ρ(L),0) as required.
(ii) Optimality is obvious since (ρ+σ)(e)=(ρ'+σ')(e).
(b) We start with an optimal (ρ',σ') satisfying (a). Note that any point x ∈ (AB)_0' has at least one path to either A or B contained in Ê_0' by the definition of these regions. Assume that d̂_σ'^0(x,A) < ∞, or in other words, assume there is some path from x to A inside Ê_0', and define:
k(x) = d̂_σ'^0(x,A)
where we recall the definition of this hatted distances uses paths inside Ê_0'.
We now aim to find a consistent set of equations tokk and tokk2 (with σ replaced by σ' for now).
We note that if d̂_σ'^0(x,B) = ∞ then we must have k(x) = 0 or otherwise we will violate (a) with some non σ'-binding edge along the path from x to A. Furthermore if d̂_σ'^0(x,B) < ∞ then consider the edge e with σ'(e) ≠ 0 along the piecewise minimal path L: A → x → B that is closest to x (either towards A or towards B)
2(n-1) ≤σ'(L) = d̂_σ'^0(A,x) + d̂_σ'^0(x,B) ≤σ'(L') = 2(n-1)
where L' is a saturating path for e that exists by (a). The first inequality is feasibility and the second comes from deforming the minimal paths in the distance
functions to the path L' (see fig:sigmadeform).
Thus:
k(x) = 2(n-1) - d̂_σ'^0(x,B)
The only case we have not covered for tokk and tokk2 is when d̂_σ'^0(x,A) = ∞. In this case we set k(x) = 2(n-1) and this is consistent with kx2n
since d̂_σ'^0(x,B) =0 is again necessary in order to not violate (a).
For edges e = {x, y}∈Ê_0' we now aim to compute:
|k(x) - k(y)|
There are three cases. Either (i) both {x, y} have infinite distance to B, or (ii) both {x, y} have infinite distance to A or (iii) all distances are finite.
In the first two cases we have k(x) = k(y). We also have σ'(e) = 0 for these cases due to (a).
In the last case we can estimate using the triangle inequality:
| k(x) - k(y) | ≤σ'(e)
If σ'(e) =0 then k(x) = k(y), however if σ'(e) > 0, then (a) implies the existence of a saturating path L such that:
2(n-1) = σ'(L) ≥d̂_σ'^0(A,x) + σ'(e) + d̂_σ'^0(y,B)
where e = {x,y} and the saturating path behaves as L : A → x → y → B. Thus:
σ'(e) ≤ k(y) - k(x) ≤ | k(y) - k(x) |
which combining with triin proves equality. We have now established equality σ'(e) = |k(x) - k(y)| for all three cases above (i-iii).
We now consider edges e ={x,y}∈ E_0' that are not in Ê'_0. We construct a new (ρ,σ) that differs from the original ones on these edges:
( ρ(e) , σ(e) ) = (ρ'(e) + σ'(e) - |k(x) - k(y)|, |k(x) - k(y)| ), e ∈ E_0'\Ê_0'
(ρ'(e),σ'(e) ), otherwise
We need to show that (ρ,σ) is (i) feasible (ii) optimal and (iii) satisfies the requirements in both (a) and (b).
For (i) we first establish that ρ(e) >0 for edges e ={x,y}∈ E'_0\Ê_0'. There are several cases to deal with depending
on whether the d̂_σ' distances are finite or not. See fig:prop0-b.
* If x,y have infinite d̂_σ'^0-distance both to A or both to B then
|k(x) - k(y) | =0 and ρ(e) > 0 since ρ'(e) > 0 on such an edge.
We will need to improve this bound for later use when establishing the statement in (b): Suppose that the d̂^0_σ' distances to B is infinite. If there is any path L ∈𝒫_A,B then we can combine the minimal paths L_A,x⊂Ê_0' and L_y,A⊂Ê_0', that are used
to compute the respective distances d̂_σ' (A,x) = d̂_σ' (A,y) =0,
to form a path L_A,x∪ e ∪ L_y,A∪ L ∈𝒫_A,B giving the estimate:
(ρ' + σ')( L_A,x∪ e ∪ L_y,A∪ L) = ρ'(e) + σ'(e) + (ρ'+ σ')(L) ∈ 2ℤ_≥0
since ρ' and σ' vanish on these minimal paths. Feasibility gives the even condition. But (ρ' + σ')(L) ∈ 2ℤ_≥0 also by feasibility, thus ρ(e) ∈ 2 + 2 ℤ_≥ 0.
The other case where d̂^0_σ'(x,A)=d̂^0_σ'(y,A)=∞ follows in a similar way.
* If x has infinite d̂_σ'^0-distance to A and y has infinite d̂_σ'^0-distance to B then we can construct a new path L : A → y → x → B using the minimal paths from A → y and the minimal
path from x → B both contained in Ê_0'. Since these two minimal d̂_σ'^0-distances vanishes for this new path we must have:
2n + 2 ℤ_≥ 0∋σ'(L) + ρ'(L) = σ'(e) + ρ'(e)
where we used feasibility of (ρ',σ') with the fact that ρ'(e) > 0. In this case we have | k(x) - k(y) | = 2(n-1), implying that ρ(e) ∈ 2 +2 ℤ_≥ 0.
* If both x,y have finite d̂_σ'^0-distances to A and B then, picking k(y) ≤ k(x), we can construct a path L : A → y → x → B, as above, and feasibility now implies that:
2n + 2 ℤ_≥ 0∋σ'(L) + ρ'(L) = σ'(e) + ρ'(e) + k(y) + 2(n-1) - k(x)
= σ'(e) + ρ'(e) + 2(n-1) - |k(x) - k(y)|
or ρ(e) ∈ 2 +2 ℤ_≥ 0.
We have covered all possibilities and established that ρ(e) >0 for such edges, and
furthermore we have established the final condition in (b). A corollary is that (AB)_0 = (AB)_0' (since this only depends on the pattern of zeros in ρ(e)) and thus E_0 = E_0' and Ê_0 = Ê'_0.
We now prove the rest of feasibility (i). Assume by contradiction that there is a path L ∈𝒫_AB,C such that ρ(L) < n, then this path must pass through
at least one edge e with e ∈ E_0'\Ê'_0 because ρ' was feasible so L must go through one of the edges where ρ is changed.
Consider the first such edge e on the journey from C → AB.
It is possible to use this path to construct a new path L' that uses L from C → e and then follows e → AB through Ê_0 = Ê_0' (since both vertices in e are in (AB)_0' this is always possible). Thus this new path L' only intersects one edge in E'_0\Ê_0', namely e. Then:
ρ(L') ≤ρ(L) < n
is still not feasible for ρ. We use Lemma <ref> which states that for such an edge e and feasible ρ':
ρ(L') > ρ'(L') - ρ'(e) ≥ n
which is a contradiction. The first inequality in rhopp follows from ρ(e) > 0, and the fact that this is the only edge that is changed relative to ρ' along the path.
For paths L ∈𝒫_A,B that pass through a deformed edge
we maintain δ_ρ(L),0 = δ_ρ'(L),0 = 0 and σ + ρ = σ' + ρ', so feasibility for these paths is clear.
Optimality (ii) is also clear.
Now we show (iii) that the new (ρ,σ) satisfies conditions (a) and (b) of the Lemma. (a) is clear since Ê_0 = Ê_0' and we have not changed any σ(e)
inside Ê_0. For (b) we firstly note that σ(e) = |k(x) - k(y)| for all e = {x , y}∈ E_0 by construction. Also d̂_σ^0(x,y) = d̂_σ'^0(x,y)
since Ê_0' = Ê_0 and σ' = σ on these edges. We are done with (b).
(c) We consider an optimal (ρ,σ) satisfying the properties in (a) and (b). We work by contradiction. That is we assume there is at least one edge e ∈ E'_0\Ê'_0 and prove a contradiction. Consider any one of these edges and call it e_⋆. Construct a new solution:
( ρ'(e) , σ'(e) ) = (0, σ(e)) , e = e_⋆
(ρ(e),σ(e) ) , otherwise
This clearly has a smaller objective, so if we can show that (ρ',σ') is feasible we prove a contradiction since (ρ,σ) is optimal.
For paths L ∈𝒫_AB,C that intersect e_⋆ – without loss of generality we may consider paths that intersect e_⋆ only once.
We then use Lemma <ref> which becomes ρ'(L) ≥ n as required.
Now consider paths L ∈𝒫_A,B with e_⋆∈ L. If there are no such paths, then there is nothing more to prove. If there is at least one such path we know from (b) that
ρ(e_⋆) ∈ 2 ℤ_+ and this implies that the even condition in eveness is preserved for (ρ',σ'). Thus we now consider only the inequality implied in eveness.
We set e_⋆ ={x, y} and pick these vertices so that L : A → x → y → B.
Define the subpaths as L_A,x and L_y,B so that L = L_A,x∪ e_⋆∪ L_y,B .
We must consider four cases depending on whether the distances d̂_σ^0(x,B) or d̂_σ^0(y,A) are finite:
* If d̂_σ^0(x,B) < ∞ and d̂_σ^0(y,A) < ∞ then construct two new paths that by-pass e_⋆ as follows. Consider the minimal paths for these finite distances: from A → y and x → B. Join these, respectively, to L_y,B and L_A,x (see fig:estardeform). These minimal paths are contained in Ê_0 (so ρ vanishes on them). We apply feasibility for (ρ,σ) to these two new paths:
d̂_σ^0(A,y) + (ρ + σ)(L_y,B) ≥ 2(n- δ_ρ(L_y,B),0)
(ρ + σ)(L_A,x) + d̂_σ^0(x,B) ≥ 2(n- δ_ρ(L_A,x),0)
adding these two bounds together and using the distance function k(x) we have:
(ρ' + σ)(L) - σ(e_⋆) + k(y) - k(x) ≥ 2(n + 1- δ_ρ(L_y,B),0 - δ_ρ(L_A,x),0)
then σ(e_⋆) = | k(y) - k(x) | ≥ k(y) - k(x) (by condition (b)) and:
1-δ_ρ(L_y,B),0 - δ_ρ(L_A,x),0≥ - δ_ρ(L_y,B),0δ_ρ(L_A,x),0 = - δ_ρ'(L),0
combines to give:
(ρ' + σ)(L) ≥ 2(n - δ_ρ'(L),0)
the required feasibility statement.
* If d̂_σ^0(x,B) = ∞ and d̂_σ^0(y,A) = ∞ then the form of the function k(x) from (b) requires that
k(x) = 0 and k(y) = 2(n-1). This implies σ(e_⋆) = 2(n-1).
Then we note the bound:
(ρ + σ)(L) ≥ (ρ + σ)(e_⋆) + 2 (1- δ_ρ(L_y,B),0δ_ρ(L_A,x),0)
= ρ(e_⋆) + 2 (n- δ_ρ'(L),0)
which follows by dropping all contributions from the path L_y,B and L_A,x except for a crude estimate counting a minimal contribution if either ρ(L_y,B)
or ρ(L_A,x) is non-zero. The bound that we get from this minimal contribution must be even, since (ρ + σ)(e_⋆) is even, and (ρ + σ)(L) is even by feasibility. So any gap between them must be even. So again we get newfeas.
* If d̂_σ^0(x,B) < ∞ and d̂_σ^0(y,A) = ∞ (the reverse case follows a similar argument) then we must have k(y) = 2(n-1).
We also must have d̂_σ^0(x,A) = ∞ since otherwise we could construct a path A → x → B → y inside Ê_0 and this
would violate the condition d̂_σ^0(y,A) = ∞. So k(x) = 2(n-1) and hence d̂_σ^0(x,B) = 0 and σ(e_⋆) = 0. We estimate:
(ρ + σ)(L) ≥ (ρ + σ)(L_A,x) + ρ(e_⋆) + 2 ( 1- δ_ρ(L_y,B),0)
where we again crudely dropped all contributions from L_y,B except if ρ(L_y,B) is non-zero. Evenness also demands the gap in the bound is 2.
We also consider a combined path that bypasses e_⋆ using L_A,x and the minimal d̂_σ^0-distance path from y to B inside E_0
and apply feasibility:
(ρ + σ)( L_A,x) + d̂_σ^0(x,B) ≥ 2(n - δ_ρ(L_A,x),0)
Combining onebound and twobound gives:
(ρ' + σ)(L) ≥ 2(n + 1- δ_ρ(L_y,B),0 - δ_ρ(L_A,x),0) ≥ 2 (n- δ_ρ'(L),0)
as required.
We have now established feasibility for all possible cases and so we find the desired contradiction. We conclude that there are no such edges and
E_0 = Ê_0.
We now move on to prove Lemma <ref>.
We use the optimal (ρ,σ) constructed in Lemma <ref>.
We set V' = (AB)_0 and E' = E_0. The above result establishes that ρ vanishes on E' where only σ is non-zero.
We now use this as an input to the following half integer program that lives on the complementary reduced graph defined as:
G^c = (V^c, E^c), V^c = (V \ V') ∪ (AB)', E^c = E \ E'
with (AB)' = V' ∩ V_G[μ_G(V')] the vertices of V' that lie on the cut surface μ_G(V').
Given an optimal (ρ,σ) satisfying the properties in Lemma <ref>, then
ϱ(e) = (ρ(e)+ σ(e))|_E^c - 1_μ(V')(e)
is feasible for the following half integer program on the graph G^c = (V^c, E^c):
M ≡ min_ϱ∑_e ∈ E^c w(e) ϱ(e)
subject to ∀ L ∈𝒫_Γ_k ,Γ_k' : ϱ(L) ∈ |k-k'| + ℤ_≥ 0
and ∀ L ∈𝒫_Γ_k,C : ϱ(L) ∈ (n-1) + ℤ_≥ 0
for all k,k'=0,…,2(n-1)
where ϱ(e) ∈ℤ_≥ 0/2 and Γ_k = { x ∈ (AB)': k(x) = k } and C, are boundary vertices on the reduced graph. Thus:
I ≥ M + w(μ(V')) + ∑_e ={x,y}∈ E' w(e) |k(x) - k(y)|
We firstly check feasibility for paths 𝒫_Γ_k, Γ_k'. We start by picking a subset of
such paths (possibly empty) 𝒫_Γ_k, Γ_k' for all k,k', with the extra condition that
the path only intersects the boundary edges μ( V') twice (this need not always be the case for all paths, even ones that are edge disjoint).
Let k' < k (the case k' = k is trivial) and consider L∈𝒫_Γ_k, Γ_k' . We can construct a path in 𝒫_A,B by attaching minimal curves for the distance d_σ^0 through E'.
We consider such curves from A →Γ_k' and Γ_k → B.
We apply feasibility to the combination:
(σ + ρ)(L) + (k' -k) + 2(n-1) ≥ 2 n
Using 1_μ(V')(L) = 2 for such curves gives
ϱ(L) ≥ |k- k'|
Now any curve L ∈𝒫_Γ_k, Γ_k' can be constructed as a sequence of these restricted L curves from Γ_k →Γ_k_1→…Γ_k_N→Γ_k' for arbitrary k_i : i =1 … N. Thus:
ϱ(L) ≥ |k -k_1| + | k_1 - k_2 | + … | k_N - k'| ≥ | k - k'|
by the triangle inequality and widetilde. That concludes feasibility for 𝒫_Γ_k, Γ_k'.
For paths in 𝒫_Γ_k, C we again have to deal with possible multiple intersections with μ(V').
We again restrict to 𝒫_Γ_k, C such that these paths intersect μ(V') only once.
Consider L∈𝒫_Γ_k, C and attach a minimal curve to AB in the E' graph.
Then:
ϱ(L) = ρ(L) + σ( L) -1 ≥ρ(L) -1 ≥ n -1
where in the first inequality we simply dropped the σ contribution and in the second we used feasibility for the combined curve ∈𝒫_AB,C (the minimal curve part sits in the region with ρ = 0.) Again any curve in the more general set L ∈𝒫_Γ_k, C can always be written as a combination of curves Γ_k →Γ_k_1→…Γ_k_N→ C. Thus we have:
ϱ(L) ≥ |k-k_1| + … + | k_N-1 - k_N| + n-1 ≥ n-1
as required. We conclude that ϱ is feasible for this intersecting cut problem. The estimate bdbd now follows by plugging in plugin to tohere and finally using the form of σ implied by Lemma <ref> on the rest of the edges E'.
The half integer relaxation for this program is convenient in the next section. The above feasible ϱ is integer valued, but the optimal solution might be half integral.
However this does not bother us, since at this stage we only strive for an inequality. Also once the chain of Theorem <ref> collapses the optimal solution will be integral.
We have now proven Lemma <ref>, as can be seen by picking and choosing results from Lemma <ref> and Lemma <ref>.
§.§ Lemma <ref>: An intersecting cut problem
We now study the ℓ-intersecting cut problem defined in Definition <ref>.
We will solve this problem by a recursive reduction of the ℓ-intersecting cut problem to the (ℓ-1)-intersecting cut problem, as described below in Lemma <ref>.
We will use a feasible solution to the ℓ-intersecting cut problem to construct a feasible solution to the (ℓ-1)-intersecting cut problem. See fig:int-programs.
Given a feasible ϱ for the ℓ-intersecting cut problem with {Γ_0, Γ_1, …, Γ_ℓ} then
there exists a cut, α, for (Γ_0 ∪Γ_1 ∪…Γ_ℓ-1) : (Γ_ℓ∪ C )
and a cut, β, for
Γ_ℓ : (Γ_0 ∪Γ_1 ∪…Γ_ℓ-1∪ C) where β is disjoint to α such that:
* for ℓ > 1 there is a feasible ϱ' for the (ℓ-1)-intersecting
cut problem for {Γ_0, Γ_1, …, Γ_ℓ-1∪Γ_ℓ} with:
M(ϱ) = M(ϱ') + 1/2( w( μ(α) ) + w( μ(β) ) )
* for ℓ=1 we simply have the bound:
M(ϱ) ≥1/2( w( μ(α) ) + w( μ(β) ) )
Before we present a proof, we use the above result to prove Lemma <ref>.
Starting from an optimal solution ϱ for the ℓ-intersecting cut problem M(ϱ), we apply Lemma <ref> repeatedly to arrive at
M(ϱ) ≥1/2∑^ℓ-1_k=0( w(μ(α_k))+w(μ(β_k)) )
where α_k is a cut for (Γ_0∪…Γ_k):(Γ_k+1∪…Γ_ℓ∪ C) and β_k is a cut for (Γ_k+1∪…Γ_ℓ):(Γ_0∪…Γ_k∪ C).
Minimizing over all such cuts α_k and β_k then gives
M(ϱ) ≥1/2∑^ℓ-1_k=0( w(μ(α'_k))+w(μ(β'_k)) )
where α'_k and β'_k are the minimal cuts.
We now use them to construct a new ϱ', defined by
ϱ'(e) = 1/2∑_k=0^ℓ-1(1_μ(α'_k)+1_μ(β'_k))(e)
It is clear that ϱ' is feasible from the topology of the cuts (see fig:rhofeasible), so M(ϱ')≥ M(ϱ) by the optimality of ϱ.
Also,
M(ϱ')=1/2∑_e∈ Ew(e)∑_k=0^ℓ-1(1_μ(α'_k)+1_μ(β'_k))(e) = 1/2∑^ℓ-1_k=0( w(μ(α'_k))+w(μ(β'_k)) )
so M(ϱ)≥ M(ϱ') by the bound above, since α'_k and β'_k are minimal cuts.
Thus we have inequalities in both ways and it must be that
M=M(ϱ) = M(ϱ') = 1/2∑^ℓ-1_k=0( w(μ(α'_k))+w(μ(β'_k)) )
We first consider the region β. Define this as:
β = { x ∈ V : d_ϱ(x, Γ_ℓ) = 0 }
It is clear that this satisfies the cut properties stated in the Lemma.
It is also clear that ϱ̃(e) := ϱ(e) - (1/2) 1_μ(β)(e) ≥ 0. We show that ϱ̃ is feasible for the following 1/2 integer
program:
M̃ ≡min_ϱ̃ M̃(ϱ̃), M̃(ϱ̃) = ∑_e ∈E w(e) ϱ̃(e)
subject to ∀L ∈𝒫_Γ_k,Γ_k': ϱ̃(L) ∈|k-k'| + ℤ_≥0
and ∀L ∈𝒫_Γ_k,C: ϱ̃(L) ∈ℓ/2 + ℤ_≥0
subject to ∀L ∈𝒫_Γ_k ,Γ_ℓ: ϱ̃(L) ∈(ℓ-k-1/2) + ℤ_≥0
and ∀L ∈𝒫_Γ_ℓ,C: ϱ̃(L) ∈(ℓ-1)/2 + ℤ_≥0
for all k,k'=0,⋯,ℓ-1.
This program is defined for ℓ≥ 1. The last constraint is trivial if ℓ=1. We sketch this program in fig:k-half-program.
Feasibility is clear for paths that intersect μ(β) the minimal number of times, that is the subset of paths defined below:
(I) 𝒫_Γ_k, Γ_k' = { L ∈𝒫_Γ_k, Γ_k' : 1_μ(β)(L)= 0}: ϱ̃(L) ≥|k-k'|
(II) 𝒫_C, Γ_k = { L ∈𝒫_C, Γ_k :1_μ(β)(L) = 0}: ϱ̃(L) ≥ℓ/2
(III) 𝒫_Γ_k, Γ_ℓ = { L∈𝒫_Γ_k, Γ_ℓ : 1_μ(β)(L) = 1}: ϱ̃(L) ≥(ℓ-k-1/2)
(IV) 𝒫_C, Γ_ℓ = { L ∈𝒫_C, Γ_ℓ : 1_μ(β)(L) = 1 }: ϱ̃(L) ≥(ℓ-1)/2,
for all k,k'=0,⋯,ℓ-1. We have listed the constraints for the ϱ̃ problem on the right and on the left we have given labels to the various cases of paths.
It is also clear we maintain the integer condition for ϱ̃ in tildeM due to the topology of the paths, so we need only consider the inequalities below.
Any other path not in this class can be decomposed using these paths. There are four different cases to consider (see fig:midtypes):
(I)
If L ∈𝒫_Γ_k, Γ_k' with 0 ≤ k < k' ≤ℓ-1 we can bound these paths via:
ϱ̃(L) + ϱ̃(L_x, Γ_ℓ) + ϱ̃(L_x', Γ_ℓ) ≥ϱ̃(L̃_Γ_ℓ, Γ_k)
+ ϱ̃(L̃_Γ_ℓ, Γ_k')
≥ (ℓ-k-1/2) + (ℓ-k'-1/2) ≥ (k' -k)+1
where x and x' are the first and last points inside β where the path L enters and leaves. We have sewn on paths L_x, Γ_ℓ and L_x', Γ_ℓ
to these points, where we can pick these paths as the ones minimizing the distance d_ϱ(x,Γ_ℓ) = 0 and d_ϱ(x',Γ_ℓ) =0.
In particular for these paths ϱ̃(P_x, Γ_ℓ) = ϱ(P_x, Γ_ℓ) = 0 and similarly for x'. The first inequality in ineqPP drops the mid portion
of the curve and applies the bound bdIII for a curve that intersects μ(β) once, that is L̃_Γ_ℓ, Γ_k and L̃_Γ_ℓ, Γ_k'.
(II)
If L ∈𝒫_C, Γ_k for 0 ≤ k ≤ℓ-1 we can, in a similar manner as above, split this into two and show:
ϱ̃(L)
≥ (ℓ-k-1/2) + (ℓ-1)/2 ≥ℓ/2
where we applied bdIII and bdIV.
(III)
If L ∈𝒫_Γ_k, Γ_ℓ for 0 ≤ k ≤ℓ-1, then we simply drop the portion of the path after the first intersection with x ∈β along the path Γ_k → x →Γ_ℓ. Adding the curve ϱ̃(L_x, Γ_ℓ) =0 gives the estimate:
ϱ̃(L)
≥ (ℓ-k-1/2)
where we applied bdIII.
(IV)
If L ∈𝒫_C, Γ_ℓ we do the same and drop the portion of the path after the first intersection to give:
ϱ̃(L)
≥ (ℓ-1)/2
where we again applied bdIV.
This completes the proof that ϱ̃ is feasible for tildeM.
We now introduce the region:
α^c = { x : d_ϱ̃ (x, Γ_ℓ) + d_ϱ̃ (x,C) = (ℓ-1)/2}∪{ x : d_ϱ̃(x,Γ_ℓ) = 0 }∪{ x: d_ϱ̃(x,C) = 0 }
We check that it satisfies the cut properties. It is clear that C∪Γ_ℓ⊂α^c by definition.
Assume that Γ_k ∈α^c for some 0 ≤ k ≤ℓ-1. Thus either:
(ℓ-1)/2 = d_ϱ̃ (Γ_k, Γ_ℓ) + d_ϱ̃ (Γ_k,C) ≥ (ℓ-k-1/2) + ℓ/2 ≥ (ℓ+1)/2
which is not possible. Or d_ϱ̃ (Γ_k, Γ_ℓ) = 0, which is not possible since d_ϱ̃ (Γ_k, Γ_ℓ) ≥ (ℓ-k-1/2); or d_ϱ̃ (Γ_k, C) = 0, which is also not possible since d_ϱ̃ (Γ_k, C) ≥ℓ/2.
Thus we have a contradiction and Γ_k ⊂α. This establishes the cut properties stated.
We now define:
ϱ'(e) = ϱ̃(e) - 1/21_μ(α)(e)
We aim to show that ϱ'(e) ≥ 0. Consider an edge e={x,y}∈μ(α) with x ∈α^c and y ∈α. The triangle inequality to C states that:
ϱ̃(e) = d_ϱ̃(x,y) ≥ (d_ϱ̃(y,C) - d_ϱ̃(x,C))
So if d_ϱ̃(x,C) = 0 then ϱ̃(e) ≥ d_ϱ̃(y,C) > 0 since y ∈α and so must have this strictly greater than 0.
Similarly for Γ_ℓ:
ϱ̃(e) ≥ d_ϱ̃(x,y) = (d_ϱ̃(y,Γ_ℓ) - d_ϱ̃(x,Γ_ℓ))
So if d_ϱ̃(x,Γ_ℓ) = 0 then ϱ̃(e) > 0. Finally if d_ϱ̃ (x, Γ_ℓ) + d_ϱ̃ (x,C) = (ℓ-1)/2 we add the two inequalities above to show that:
2ϱ̃(e) ≥ d_ϱ̃(y,C) + d_ϱ̃(y,Γ_ℓ) - (ℓ-1)/2 > 0
Thus in all cases we have ϱ̃(e) ≥ 1/2, by the half-integrality of ϱ̃. Indeed ϱ'(e) ≥ 0.
We thus have:
M(ϱ) = M(ϱ') + 1/2( w( μ(α) ) + w( μ(β) ) )
as required. For ℓ =1 we simply bound M(ϱ') ≥ 0 and we are done. For ℓ >1 we need to check feasibility of ϱ' for the (ℓ-1)-intersecting cut program.
Paths that cross μ(α) a minimal number of times are clearly feasible:
(I) 𝒫_C, Γ_ℓ = { L ∈𝒫_C, Γ_ℓ : 1_μ(α)(L) = 0 }: ϱ'(L) ≥(ℓ-1)/2
(II) 𝒫_C, Γ_k = { L ∈𝒫_C, Γ_k :1_μ(α)(L) = 1}: ϱ'(L) ≥(ℓ-1)/2
(III) 𝒫_Γ_k, Γ_ℓ = { L ∈𝒫_Γ_k, Γ_ℓ : 1_μ(α)(L) = 1}: ϱ'(L) ≥(ℓ-k-1)
(IV) 𝒫_Γ_k, Γ_k' = { L ∈𝒫_Γ_k, Γ_k' : 1_μ(α)(L)= 0}: ϱ'(L) ≥|k-k'|
for all k,k'=0,⋯,ℓ-1, where we have listed the constraints for the ϱ' problem on the right and on the left we have given labels to the various cases of paths.
We now prove a basic result that will seed the rest of our discussion. We consider a path L ∈𝒫_C,Γ_ℓ but now with 1_μ(α)(L) ≥ 2, see fig:twointersections.
Consider the first edge e in μ(α) the path crosses (starting at C.) Let e ={x,y} with x ∈α^c and y ∈α.
We know that:
d_ϱ̃(y,C) + d_ϱ̃(y,Γ_ℓ) > (ℓ-1)/2
Using the minimality of these paths and comparing these to the two segments of L split at y we find:
ϱ̃(L) > (ℓ-1)/2 ⟹ϱ̃(L) ≥ (ℓ-1)/2 + 1
where we used the fact that the gap for such paths is 1 (see seegap.) If this curve had only two intersections with α (1_μ(α)(L) = 2) we would be done since then we have shown that ϱ'(L) ≥ (ℓ-1)/2.
Another obvious bound applies to any path L:
ϱ̃(L) ≥1/21_μ(α)(L)
since we know ϱ̃ on these edges. This bound is too crude to be used on its own, but we will still make use of it below when we start performing surgery on the paths and we find paths that start and end on the same boundary region. In this later case the bound we just derived can be tight.
We now address the four types of paths:
(I) Consider a path L ∈𝒫_C,Γ_ℓ intersecting N times with μ(α), where N ≥ 2 is even. We aim to show that ϱ̃(L) ≥ (ℓ-1)/2 + N/2. We do this by induction. We have proved the case N=2 in torepeat. We assume it is true for N-2 and prove it for N. Consider the second edge e = {x,y} that intersects μ(α) along the path C → y → x →Γ_ℓ, where x ∈α^c. There are now three cases (a,b,c) to consider depending on which set x belongs to in threesets.
(a) If d_ϱ̃(x,C) + d_ϱ̃(x,Γ_ℓ) = (ℓ-1)/2 then consider the minimal paths L_x,C, L_x,Γ_ℓ defining these two distances.
We show that both of these curves L_x,C, L_x,Γ_ℓ lie entirely inside α^c. If not there would be some y ∈α (the first vertex where
either L_x,C, L_x,Γ_ℓ leaves α^c) with
d_ϱ̃(y,C) + d_ϱ̃(y,Γ_ℓ) = (ℓ-1)/2 and this is a contradiction.
We use L_x,C, L_x,Γ_ℓ to perform surgery on L as in the first figure shown in fig:typeI.
That is, we start with L and L_x,C∪ L_x,Γ_ℓ and end up with two paths in 𝒫_C,Γ_ℓ. These latter curves intersect μ(α) two times and N-2 times respectively. Thus:
ϱ̃(L) + (ℓ-1)/2 = ϱ̃(L) + ϱ̃(L_x,C∪ L_x,Γ_ℓ)
≥((ℓ-1)/2 + 1) + ((ℓ-1)/2 + (N-2)/2 )
Thus ϱ̃(L) ≥ (ℓ-1)/2 + N/2 as required.
(b) If d_ϱ̃(x,C) = 0, we instead perform surgery with two copies of this minimal path. It is clear this path remains inside α^c. These paths do not cost anything.
See the second figure in fig:typeI for the pattern.
In particular we find, after surgery, a curve that starts and ends in C
and that intersects μ(α) twice, and
a curve in 𝒫_C,Γ_ℓ intersecting N-2 times. For the former curve we use the estimate obvious and find:
ϱ̃(L) ≥( 1 ) + ( (ℓ-1)/2 + (N-2)/2 ) = (ℓ-1)/2 + N/2
as required.
(c) If d_ϱ̃(x,Γ_ℓ) =0 then we again use two copies of this minimal path to perform surgery:
see the third figure in fig:typeI for the pattern.
The right hand side is now a path in 𝒫_C,Γ_ℓ with two intersections and a path from 𝒫_Γ_ℓ,Γ_ℓ with N-2 intersections.
Thus:
ϱ̃(L) ≥( (ℓ-1)/2+1 ) + ( (N-2)/2 ) = (ℓ-1)/2 + N/2
where we used torepeat and obvious. We have completed the induction step.
(II) Consider a path L ∈𝒫_C,Γ_k for some 0 ≤ k ≤ℓ-1 and intersecting N times with μ(α), where N ≥ 3 is odd. We aim to show that ϱ̃(L) ≥ (ℓ-1)/2 + N/2. Consider the last edge e = {x,y} that intersects μ(α) along the path C → y → x →Γ_k, where x ∈α^c. There are again three cases (a,b,c) to consider:
(a) If d_ϱ̃(x,C) + d_ϱ̃(x,Γ_ℓ) = (ℓ-1)/2, we again perform surgery as shown in fig:typeII.
from which we arrive at a path in 𝒫_C,Γ_ℓ intersecting N-1 times and
a path in 𝒫_C,Γ_k with one intersection. We dealt with the later path at the start and the former we have already bounded in (I) above. Thus:
ϱ̃(L) + (ℓ-1)/2 ≥( (ℓ-1)/2 + (N-1)/2 ) + ( ℓ/2 )
implying ϱ̃(L) ≥ (ℓ-1)/2 + N/2.
(b) If d_ϱ̃(x,C) = 0, we take these minimal paths and join in the obvious way to find a path in 𝒫_C,C with N-1 intersections and one in
𝒫_C,Γ_k with one intersection. Thus:
ϱ̃(L) ≥( (N-1)/2 ) + ( ℓ/2 ) = (ℓ-1)/2 + N/2
(c) If d_ϱ̃(x,Γ_ℓ) =0 we use these minimal paths to construct a path in 𝒫_Γ_k, Γ_ℓ with one intersection
and one in 𝒫_C, Γ_ℓ with N-1 intersections. Hence:
ϱ̃(L) ≥( ℓ-k-1/2 ) + ( (ℓ-1)/2 + (N-1)/2 ) ≥ (ℓ-1)/2 + N/2
where we used ℓ -k -1 ≥ 0. And we claim victory for these paths.
(III) Consider a path L ∈𝒫_Γ_k,Γ_ℓ for some 0 ≤ k ≤ℓ-1 and intersecting N times with μ(α), where N ≥ 3 is odd.
We now consider the second edge e = {x,y} along the path from L : Γ_k → x → y →Γ_ℓ
with x ∈α^c. As usual, there are three cases:
(a) If d_ϱ̃(x,C) + d_ϱ̃(x,Γ_ℓ) = (ℓ-1)/2, we again perform surgery as shown in fig:typeIII.
The resulting paths are in 𝒫_Γ_k,Γ_ℓ with one intersection
and in 𝒫_C, Γ_ℓ with (N-1) intersections. Thus:
ϱ̃(L) + (ℓ-1)/2 ≥( ℓ-k-1/2 ) + ( (ℓ-1)/2 + (N-1)/2 )
Or ϱ̃(L) ≥ (ℓ-k-1) + N/2 as required.
(b) Now if d_ϱ̃(x,C) = 0, our surgery results in a path in 𝒫_C,Γ_k with one intersection
and a path in 𝒫_C,Γ_ℓ with N-1 intersections. Hence:
ϱ̃(L) ≥( ℓ/2 ) + ( (ℓ-1)/2 + (N-1)/2 ) = ℓ-1 + N/2 ≥ (ℓ-k-1) + N/2
(c) Now if d_ϱ̃(x,Γ_ℓ) = 0, our surgery results in a path in 𝒫_Γ_k,Γ_ℓ with one intersection
and a path in 𝒫_Γ_ℓ,Γ_ℓ with N-1 intersections. Hence:
ϱ̃(L) ≥( ℓ-k-1/2 ) + ( (N-1)/2 ) = (ℓ-k-1) + N/2
(IV) Consider a path L ∈𝒫_Γ_k,Γ_k' for some 0 ≤ k < k' ≤ℓ-1 and intersecting N times with μ(α), where N ≥ 2 is even.
We now consider the second edge e = {x,y} along the path from L : Γ_k → x → y →Γ_k'
with x ∈α^c. As usual there are three cases:
(a) If d_ϱ̃(x,C) + d_ϱ̃(x,Γ_ℓ) = (ℓ-1)/2,
surgery results in a path in 𝒫_Γ_k,Γ_ℓ with one intersection and another path in 𝒫_C,Γ_k' with N-1 intersections, see fig:typeIV. Thus, using (II) we have:
ϱ̃(L) + (ℓ-1)/2 ≥( ℓ-k-1/2 ) + ( (ℓ-1)/2 + (N-1)/2 )
implying ϱ̃(L) ≥ (ℓ-k-1)+ N/2 ≥ (k' -k) +N/2 where we used ℓ-1 ≥ k'. This is the required bound.
(b) Now if d_ϱ̃(x,C) = 0, our surgery results in a path in 𝒫_Γ_k,C with one intersection and a path in 𝒫_Γ_k',C
with (N-1) intersections. Thus:
ϱ̃(L) ≥( ℓ/2 ) + ( (ℓ-1)/2 + (N-1)/2 ) = ℓ-1 + N/2 ≥ (k'-k) + N/2
where we again used (II).
(c) Now if d_ϱ̃(x,Γ_ℓ) = 0, our surgery results in a path in 𝒫_Γ_k,Γ_ℓ with one intersection
and a path in 𝒫_Γ_ℓ,Γ_k' with N-1 intersections. Hence:
ϱ̃(L) ≥( ℓ-k-1/2 ) + ( ℓ-k'-1 +(N-1)/2 ) ≥ (k'-k) +N/2
where we used (III) and ℓ-k'-1 ≥ -(ℓ-k'-1). And we are done.
The above bounds establish feasibility of ϱ' for the (ℓ-1)-intersecting cut problem.
(The integer gap conditions are again all automatic because the even/oddness of the number of
intersections is fixed by the topology of the path.)
§.§ Lemma <ref>:
Probabilistic convergence for S_R
Consider the renormalized operator:
𝒪 = ( ϱ⊗ϱ) χ^ 2 n 𝒜(AB:C)
and define the renormalized measure:
d μ̂_Ψ(λ̂) = χ^ 2 (n-1) 𝒜(A:B:C) -2 n 𝒜(AB:C) ∑_i |< Ψ| . v_i >|^2 δ( λ̂ - λ̂_i) d λ̂
where λ̂ = λχ^ 2 n 𝒜(AB:C).
We know that new measure satisfies
lim_χ→∞∫_0^∞ d μ̂_Ψ(λ̂) λ̂^m/2 = 1
for m/2 ∈ℤ_≥ 1. Note that we do not know the zeroth moment of λ̂.
Pick some cut-off Λ > 1 and define
Δ_m/2 = ∫ dμ̂_Ψ(λ̂) λ̂^m/2θ(λ̂ -Λ)≤Λ^-m/2∫ dμ̂_Ψ(λ̂) λ̂^mθ(λ̂ -Λ)≤Λ^-m/2∫ dμ̂_Ψ(λ̂) λ̂^m
From now on we will set Λ =2. Then we have:
lim_χ→∞Δ_m/2≤lim_χ→∞Δ_m'/2≤ 2^-m'/2
for all m'∈ℤ and m' ≥ m ≥ 1. Taking m' →∞ proves that lim_χ→∞Δ_m/2=0 for m∈ℤ_≥ 1.
In particular, for any polynomial function f(λ̂) we have that
∫_2^∞ dμ̂_Ψ(λ̂) f(λ̂)
= ∑_k∈ℕf^(k)(0)/k!Δ_k
χ→∞→ 0
In other words, the measure μ̂_Ψ(λ̂) is highly concentrated in the interval [0,2] and it suffices to only consider test functions with compact support on the interval.
Let us approximate the square root function on [0,2] as:
p_M(x) ≡∑_μ=1^M √(2μ/M) b_μ,M(x/2)
where b_μ,M(x) are the Bernstein polynomials of degree M.
Since only b_0,M(x) has the constant monomial in it, we only need the higher moments. For all δ > 0 there exists some integer M such that
√(x) - p_M(x) _L^∞[0,2] < δ
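The following short numerical check (our own illustration, not part of the argument; plain Python with NumPy assumed) evaluates p_M directly and confirms that the sup-norm error on [0,2] shrinks as M grows:

```python
# Numerical sanity check (ours, illustrative only) of the Bernstein approximation
# p_M(x) = sum_{mu=1}^{M} sqrt(2 mu / M) b_{mu,M}(x/2) to sqrt(x) on [0,2];
# the sup-norm error decreases with M (slowly near x = 0).
import numpy as np
from math import comb

def p_M(x, M):
    t = x / 2.0
    return sum(np.sqrt(2.0 * mu / M) * comb(M, mu) * t**mu * (1.0 - t)**(M - mu)
               for mu in range(1, M + 1))

x = np.linspace(0.0, 2.0, 2001)
for M in (8, 64, 512):
    print(M, np.max(np.abs(np.sqrt(x) - p_M(x, M))))
```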
Now we write
| ( ∫_0^∞ dμ̂_Ψ(λ̂) λ̂^1/2) - 1 | ≤∫_2^∞ dμ̂_Ψ(λ̂)λ̂^1/2_C_1
+ | ( ∫_0^2 dμ̂_Ψ(λ̂) p_M(λ̂) ) - 1 |_C_2
+ | ∫_0^2 dμ̂_Ψ(λ̂) (λ̂^1/2 - p_M(λ̂) ) |_C_3
Using Markov inequality along with eq:Delta_m we can show that for any integer m,
Pr(C_1 ≥ϵ)
≤Δ_m/ϵχ→∞→ 0
For C_2 we consider:
C_2 ≤| ( ∫_0^2 dμ̂_Ψ(λ̂) p_M(λ̂) ) - ( ∫_0^2 dμ̂_Ψ(λ̂) p_M(λ̂) )|_C^ first_2
+ | ( ∫_0^2 dμ̂_Ψ(λ̂) p_M(λ̂) ) -1 |_C_2^ second
We can bound the first term using Chebyshev's inequality:
Pr(C_2^ first≥ϵ) ≤σ^2/ϵ^2
where
σ^2 = Var(C_2^ first)
≊Var( ∫^∞_0 dμ̂_Ψ(λ̂) p_M(λ̂) )
= ∑_m=1^M p_m Var( ∫^∞_0 dμ̂_Ψ(λ̂) λ̂^m )
where p_m are Taylor coefficients of p_M(λ).
Note that we have extended the integration limit in eq:c2_var. The error from doing so can be shown to vanish in the limit χ→∞ by application of eq:vanishing. The expectation values of the double moments are related to
( ∫^∞_0 dμ̂_Ψ(λ̂) λ̂^m )^2
= χ^(4n-1)𝒜(A:B:C)-4n𝒜(AB:C)⟨Ψ|^⊗ 2O^m⊗O^m|Ψ⟩^⊗ 2
which can be computed by a different symmetry group optimization problem defined on G. We now minimize over g∈ S_2mn in this graph such that the boundary conditions are g̃_A≡ g^(1)_A g^(2)_A, g̃_B≡ g^(1)_B g^(2)_B and g̃_C =𝕀, where g^(1)_A,B permutes the first mn replicas and leaving the second mn copies invariant; whereas g^(2)_A,B permutes the second mn replicas and leaving the first invariant. Our analysis in sec:main largely carries over.
The main difference is that we must now coarse-grain using the new element X̃≡g̃_A∧g̃_B = X^(1)X^(2) with X^(i) defined similarly as above.
We need the following generalization of Lemma <ref>:
For any q∈ P_4n we have
d(q̃_A,q)+d(q̃_B,q) ≥ d(q̃_A,q̃_̃B̃) + 2(1-δ_#^(1)_1(q),0) + 2(1-δ_#^(2)_1(q),0) +
2δ_q∨τ, ℤ_4n
where q̃_A,B=q_X̃(g̃_A,B), #^(1,2)_1(·) counts the number of singlets in the first (second) sets of 2n elements, and τ={ℤ_2n,ℤ_2n} is the maximal element in P_4n that is disconnected between the two sets of 2n elements.
If q∈ P_2n× P_2n then δ_q∨τ,ℤ_4n=0 and we can simply break down the problem into two smaller problems on disconnected copies of 2n elements. Applying Lemma <ref> on each copy proves the result.
Now suppose that q∈ P_4n\ P_2n× P_2n. Then there must exist some u_ij∈ P_4n\ P_2n× P_2n such that q∨ u_ij = q, where u_ij is the unique partition with a doublet connecting element i from the first copy to element j in the second copy and singlets at every other position. We write
d(q̃_A,q) =d(q̃_A,q∨ u_ij)
= #(q̃_A)+#(q∨ u_ij)-2#(q̃_A∨ q ∨ u_ij)
= 1+#(q̃_A∨ u_ij)+#(q∨ u_ij)-2#(q̃_A∨ q ∨ u_ij)
=d(q̃_A∨ u_ij,q) + 1
and similarly for d(q̃_B,q). Thus
d(q̃_A,q)+d(q,q̃_B) = d(q̃_A∨ u_ij,q) + d(q,q̃_B∨ u_ij) + 2
≥ d(q̃_A∨ u_ij,q̃_B∨ u_ij) + 2
by triangle inequality.
This bound can be strengthened in a similar fashion as in the proof of Lemma <ref>.
The bipartite graph of q̃_A∨ u_ij and q̃_B ∨ u_ij is now connected with two cycles, each corresponding to a 2n-element copy.
A singlet in the first copy will break the first cycle and leads to an enhancement of the bound by 2, and likewise for a singlet in the second copy. Since d(q̃_A∨ u_ij,q̃_B∨ u_ij)=d(q̃_A,q̃_B) we obtain
d(q̃_A,q)+d(q,q̃_B) ≥ d(q̃_A,q̃_B) + 2(1-δ_#^(1)_1(q),0) + 2(1-δ_#^(2)_1(q),0) + 2
And this completes the proof.
Using Lemma <ref> we see that we may restrict to the disconnected elements q∈ P_2n× P_2n (and hence g∈ S_mn× S_mn as the coarse-graining retains this information) in the optimization problems, since for any path L∈𝒫_A:B and vertex v in the path, any q(v)∈ P_4n\ P_2n× P_2n will lead to a stricter bound in the integer program.
Thus, we find that the optimal value of our problem is simply twice that of the original problem:
( ∫ dμ_Ψ(λ) λ^m/2)^2
= χ^-4(n-1)𝒜(A:B:C)+2n(m-2)𝒜(AB:C)(1+O(1/χ))
And we have that σ^2 = O(1/χ) and C^ first_2 Pr→ 0 as χ→∞.
For the second term we have
C_2^ second ≤| ( ∫_0^∞ dμ̂_Ψ(λ̂) p_M(λ̂) ) -1 | + | ( ∫_2^∞ dμ̂_Ψ(λ̂) p_M(λ̂) )|
χ→∞⟶| p_M(1)-1 | ≤δ
where we have used eq:mu_hat_norm to rewrite the moment integrals in the first line, and the bound | p_M(1) - 1| ≤ p_M(x) - √(x)_L^∞[0,2]=δ. The second term in the first line vanishes by eq:vanishing in the χ→∞ limit.
For C_3 we write
C_3 ≤∫_0^2 dμ̂_Ψ(λ̂) | λ̂^1/2 - p_M(λ̂) |
≤∫_0^2 dμ̂_Ψ(λ̂) λ̂^1/2 f(lnλ̂)
+ ∫_0^2 dμ̂_Ψ(λ̂)(1- f(lnλ̂) )| λ̂^1/2 - p_M(λ̂) |
≤∫_0^∞ dμ̂_Ψ(λ̂) λ̂^1/2 f(lnλ̂)_C_3^ first
+ δ∫_0^2 dμ̂_Ψ(λ̂) (1- f(lnλ̂) )_C^ second_3
where passing from the first line to the second we used the fact that p_M(x) approaches √(x) from below, which follows from the positivity of p_M(x).
We have also introduced a semi-positive function f with f(x) = 0 for x≥ 0 and whose other properties we will enumerate below. For the second term, we suppose that:
(1-f(lnλ̂)) λ̂^-1_L^∞_[0,2]≡ f_1 < ∞
So that
C_3^ second≤ f_1 δ∫^2_0 dμ̂_Ψ(λ̂) λ̂≤ f_1 δ∫^∞_0 dμ̂_Ψ(λ̂) λ̂
To proceed we need to following Lemma:
Consider a density matrix ρ on a finite dimensional Hilbert space ℋ and supported on π_ρ with Tr π_ρ≡Λ. Then,
for a real positive semidefinite function f ∈ C^∞(ℝ) such that f(x) = 0 for all x ≥ 0
and such that f' is a rapidly decaying Schwartz function f' ∈𝒮(ℝ), we have:
|< η_1 | f( lnρ + lnΛ) | η_2 > |≤𝔉(f')_L^1ρ - π_ρ /Λ_1
with normalized η_1,2∈ℋ and where 𝔉 denotes the Fourier transform.
Since ρ will have a minimum non-zero eigenvalue λ_ min we can cut off f_a(x) = f(x) w_a(x)
where w_a ∈ C^∞ is a smooth
cutoff function:
w_a(x) = w( x / a)
with w(x) = 0 for x < -2 and w(x) = 1 for x > -1 and generally 0 ≤ w ≤ 1 and also w' ∈𝒮. Choosing a > - lnλ_ min - lnΛ≥ 0 we can replace
f with f_a. The result f_a is a smooth function of compact support so this has a Fourier transform. Thus:
f_a( lnρ + lnΛ) =
∫ d s 𝔉(f_a)(s) Λ^isρ^i s
If ρ and ρ' commute then we can simultaneously diagonalize these and
|< η_1 | (ρ^is - (ρ')^is) | η_2 > |
= | ∑_i (exp( - is E_i) - exp( - is E_i')) < η_1 . | i > < i . | η_2 > |
≤∑_i | 1 - exp( - is (E_i' - E_i)) | ≤ |s| ρ - ρ' _1
where E_i≡ -lnλ_i, and the inequality follows from the bound |sin(x)| < |x|.
Thus:
| < η_1 | f( lnρ + lnΛ) | η_2 > - < η_1 | f( lnρ' + lnΛ) | η_2 > |
≤ s 𝔉(f_a)(s) _L^1ρ - ρ' _1
We will show that we can remove the cutoff function w_a, i.e.:
lim_a →∞ s 𝔉(f_a)(s) _L^1 = 𝔉(f')(s) _L^1
for functions f and w that were specified in the statement and above.
Following claimedlimit, if we set ρ' = π_ρ/Λ and use the properties of the function in the statement we find f( lnρ' + lnΛ) =0 away from the subspace of support of ρ, and thus we have proved our claim.
To prove claimedlimit we write
i s 𝔉(f_a)(s) = 𝔉(f_a')(s) = 𝔉( f' w_a) (s) + 𝔉(f w_a') (s)
We have two remainder terms to analyze:
| s 𝔉(f_a)(s) _L^1 - 𝔉( f') _L^1| ≤𝔉( f w_a') _L^1 + 𝔉( f' (1-w_a)) _L^1
For the second term:
𝔉( f' (1-w_a)) _L^1 ≤ (1+s^2)^-1_L^1 (1+s^2) 𝔉( f' (1-w_a))(s) _L^∞
≤π( f' (1-w_a) _L^1 + (f' (1-w_a))”_L^1 )
≤π1-w_a1+x^2_L^1( f'(1+x^2)_L^∞ + f”'(1+x^2)_L^∞)
+ π(2 f”_L^1w_a' _L^∞ + f' _L^1w_a”_L^∞)
≤π^2 ( sup_x≤ -a|f'(x)(1+x^2)| + sup_x≤ -a|f”'(x)(1+x^2)| )
+ π(2 f”_L^1w_a' _L^∞ + f' _L^1w_a”_L^∞)
where passing to the second line we used the Hausdorff-Young inequality and we also used Hölder's inequality throughout.
But w_a' _L^∞ = (1/a) w' _L^∞→ 0 and similarly w_a”_L^∞ = (1/a^2) w”_L^∞→ 0
as a →∞ and also:
sup_x≤ - a | (1+x^2) f'(x) | < a^-2sup_x≤ - a | x^2 (1+x^2) f'(x) | < a^-2 x^2 (1+x^2) f'(x) _L^∞→ 0
similarly for the f”'(x) term.
For the first term use:
𝔉(f w_a')(s) = 𝔉( (f-c_a) w_a')(s) + c_a ∫_-∞^∞ d x e^i x s w_a'(x)
= 𝔉( (f-c_a) w_a')(s) + c_a ∫_-∞^∞ d x e^i x s a w'(x)
= 𝔉( (f-c_a) w_a')(s) + c_a 𝔉( w')(s a)
for some constant c_a.
We now pick c_a = inf_x < -a f(x). Using the same strategy as above we write
𝔉( (f -c_a) w_a')_L^1 ≤π ( (f-c_a) w_a' _L^1 + ((f-c_a) w_a')”_L^1 )
≤π ( f- c_a_L^∞_(-∞,-a] w_a' _L^1 + (f-c_a) _L^∞_(-∞,-a] w_a”' _L^1)
+ π f”_L^1w_a' _L^∞ + 2π f' _L^1w_a”_L^∞
The last two terms are dealt with as above. Note that:
f - C _L^∞ ≤f _L^∞ + C =C + sup_x | f(b) + ∫_b^x dy f'(y) |
≤ C+ |f(b)| + sup_x∫_b^x dy |f'(y)| ≤ C+ |f(b)| + f' _L^1
which is clearly finite. This analysis implies that c_a is finite and f - c_a _L^∞ is finite. Thus we need to compute:
w_a' _L^1 = w' _L^1 w_a”' _L^1 =1/a^2 w”' _L^1
the latter of which vanishes. Since the first term does not vanish we instead note that:
sup_x < -a |f(x)- c_a| = sup_x< -a f(x) - inf_x < -a f(x)
≈ f(x_s) - f(x_i)
= ∫_x_i^x_s d x f'(x) ≤∫_x_i^x_s d x | f'(x)| ≤∫_-∞^-a d x | f'(x)|
≤ a^-2∫_-∞^-a d x x^2 | f'(x)| ≤ a^-2 x^2 f'(x) _L^1→ 0
where x_i and x_s approximate the location of the inf and sup respectively. This approximation is what we mean by ≈ and this can be removed after taking limits.
All that is left to do is to compute:
𝔉( w')( a · ) _L^1 = ∫_-∞^∞ ds | 𝔉( w')( a s) | = a^-1𝔉( w') _L^1→ 0
Thus we have established the limit claimedlimit.
We now write the first term of eq:C_3 out:
C_3^ firstχ^ -2 (n-1) 𝒜(A:B:C) +n 𝒜(AB:C)
= < 1_AB^⊗ n| Σ_A^† (ϱ^1/2⊗ 1 ) f( ln (ϱ⊗ϱ) + 2 n 𝒜(AB:C) lnχ ) (1 ⊗ϱ^1/2) Σ_A | 1_AB^⊗ n>
≤𝔉(f')_L^1ϱ⊗ϱ - π /χ^ 2 n 𝒜(AB:C)_1 (1 ⊗ϱ^1/2) Σ_A | 1_AB^⊗ n> ( ϱ^1/2⊗ 1) Σ_A | 1_AB^⊗ n>
where we have applied eq:wmc1 in the second line.
Note that (1 ⊗ϱ^1/2) Σ_A | 1_AB^⊗ n> =( ϱ^1/2⊗ 1) Σ_A | 1_AB^⊗ n>=1.
Setting f_2 ≡𝔉(f')_L^1 we have:
C_3 ≤ f_1 δ∫^∞_0 dμ̂_Ψ (λ̂) + f_2 ϱ⊗ϱ - π /χ^ 2 n 𝒜(AB:C)_1 ∫_0^∞ dμ̂_Ψ(λ̂) λ̂^1/2
≤ f_1 δ∫^∞_0 dμ̂_Ψ (λ̂) + f_2 ϱ⊗ϱ - π /χ^ 2 n 𝒜(AB:C)_1 ( 1 + | ( ∫_0^∞ dμ̂_Ψ(λ̂) λ̂^1/2) - 1 | )
Since the same quantity appears on the left hand side of eq:C123 we should subtract and write:
| ( ∫_0^∞ dμ̂_Ψ(λ̂) λ̂^1/2) - 1 | ( 1 - f_2 ϱ⊗ϱ - π /χ^ 2 n 𝒜(AB:C)_1 )
≤ C_1 + C_2 + C_3'
with
C_3' ≤ f_1 δ∫^∞_0 dμ̂_Ψ (λ̂) + f_2 ϱ⊗ϱ - π /χ^ 2 n 𝒜(AB:C)_1 χ→∞→ f_1δ
where we have used eq:wmc2 in the limit.
Now suppose that 0 < f_2 < 1/2 then using the fact that the trace distance is bounded by 2:
| ( ∫_0^∞ dμ̂_Ψ(λ̂) λ̂^1/2) - 1 |
≤C_1 + C_2 + C_3'/1 - 2 f_2
Since δ can be made arbitrarily small, we have C_1Pr→0 by eq:Markov, C_2Pr→0 by Chebyshev's inequality and the vanishing of the variance of moments and eq:C2_second, and C'_3Pr→0 by eq:C3'.
The rest of the proof is fairly standard and we find:
χ^ 2 (n-1) 𝒜(A:B:C) -n 𝒜(AB:C) Tr( ρ_AA^⋆^(1/2))^n = ( ∫_0^∞ dμ̂_Ψ(λ̂) λ̂^1/2) →^Pr 1
The map x → -1/n-1ln x is continuous for x > 0, which is where the random variable is defined, so by the continuous
mapping theorem we prove cont:prob.
We have imposed various properties on f in the above proof. To finish we must show there exists a function with these properties.
We desire:
* f∈ C^∞(ℝ) with f≥ 0 and f(x) = 0 for x ≥ 0.
Also f' ∈𝒮(ℝ).
* f_1 ≡(1-f(ln x))x^-1_L^∞_[0,2]=sup_x≤ 0 | (1- f(x) ) e^-x |< ∞.
* f_2 = 𝔉(f') _L^1 < 1/2.
We will show existence by explicit construction. Consider a smooth bump function b≥ 0 compactly supported on (-1,0) and set
f(x) = 1/Lb_L^1∫^0_x dx b(x/L)
for some constant L>0 to be determined.
We now check: 1. is trivially satisfied. 2. We have f(x)=1 for x<-L and
f_1 = 1/Lb_L^1sup_-L < x ≤ 0| e^-x∫^x_-L dy b(y/L) | < ∞
3. We calculate
f_2 = 𝔉(b(x/L))(s)_L^1/Lb_L^1
= ∫ ds |∫ dx e^-isxb(x/L)|/Lb_L^1
= 𝔉(b)_L^1/Lb_L^1
Thus to satisfy 3. we simply pick L>2𝔉(b)_L^1/b_L^1. Thus, we complete the proof.
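As a sanity check of this construction, the following sketch (ours; the particular bump below is just one admissible choice of b) estimates the L^1 norms of b and of its Fourier transform numerically, and hence the threshold on L that makes f_2 < 1/2:

```python
# Numerical check (ours, illustrative) of the explicit construction above:
# estimate ||b||_{L^1} and ||F(b)||_{L^1} for a concrete bump supported on (-1,0)
# and print the threshold L > 2 ||F(b)||_{L^1} / ||b||_{L^1} that guarantees f_2 < 1/2.
import numpy as np

x = np.linspace(-1.0, 0.0, 4001)
dx = x[1] - x[0]
u = 2.0 * x + 1.0                      # maps the support (-1,0) to (-1,1)
with np.errstate(divide="ignore"):
    b = np.where(np.abs(u) < 1.0, np.exp(-1.0 / (1.0 - u**2)), 0.0)

b_L1 = b.sum() * dx

# F(b)(s) = int dx e^{-i s x} b(x); the transform decays rapidly in s,
# so truncating the s-integration is harmless for this estimate.
s = np.linspace(-300.0, 300.0, 3001)
ds = s[1] - s[0]
Fb = np.array([np.sum(b * np.exp(-1j * si * x)) * dx for si in s])
Fb_L1 = np.abs(Fb).sum() * ds

print("||b||_L1    ~", b_L1)
print("||F(b)||_L1 ~", Fb_L1)
print("f_2 < 1/2 requires L >", 2.0 * Fb_L1 / b_L1)
```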
| There is compelling evidence that quantum gravity should be thought of as an emergent phenomenon with the underlying geometry being described by the entanglement structure of a dual/holographic wave-function. The Ryu-Takayanagi (RT) formula <cit.> expresses this idea, connecting areas of minimal surfaces and von Neumann entropies: a measure of bipartite entanglement in pure states. Minimal surfaces can be equivalently described by bit-threads <cit.>, maximal divergence free locally bounded flows between two boundary regions. Furthermore, these bit-threads give a vivid picture of (pure state) bipartite entanglement in the boundary wavefunction, with a thread corresponding to a distillable EPR pair.
Random tensor network states <cit.> model this behavior with a graph G = (E,V) playing the role of the underlying geometry and boundary vertices ∂⊂ V containing the dual Hilbert space. Edges e ∈ E contain maximally entangled states with large bond dimensions χ(e) and vertices are described by randomly chosen tensors that are contracted with the edge states. The RT formula arises as a minimal cut through the graph with edges weighted by w(e) ∝lnχ(e). The cut divides the vertices V into the two disjoint sets each containing the corresponding sets of boundary vertices whose von Neumann entropy we wish to compute. Bit-threads correspond to dual maximal flows between the two sets of boundary vertices with flow capacities set by w(e). This correspondence is a version of the max-flow min-cut theorem that can be proven using strong duality theorems from the theory of linear/convex programs.
Bit-threads however tend to give a misleading picture of multipartite entanglement. For example, a bipartite dominance conjecture was formulated for three party holographic states based on the existence of such bit-thread configurations <cit.>. However, the conjecture of Ref. <cit.> contradicts other measures of tripartite entanglement beyond von Neumann entropies, most notably for this work, that of reflected entropy. Ref. <cit.> argued using the reflected entropy that holographic states have large amounts of tripartite entanglement and thus, disproved a version of the bipartite dominance conjecture.
Generally, minimal areas are only a limited probe of the underlying geometry, and one might expect other geometric objects – such as surfaces of various co-dimensions – to play an important role in a putative correspondence between geometry and quantum information. For example, computational complexity is believed to be associated to co-dimension 0 or 1 regions in spacetime <cit.>. There are now also hints that a class of tripartite entanglement should be associated to spatial co-dimension 2 objects <cit.>. In this paper, we find further evidence for the latter by proving a correspondence between reflected entropy and a minimal triway cut. Triway cuts generalize the bipartite cuts described above, and are likely the closest graph analog of a co-dimension 2 object that is defined for any graph.[The cuts themselves are co-dimension 1, however the three cuts meet at some locus that might be considered co-dimension 2.]
Triway cuts are integer optimization programs that cannot be dualized to a bit-thread description.[What we mean here is that the (Lagrange) dual flow programs do not have the same optimal value, i.e., there is a duality gap. However, it is possible to find a “dual” if one considers more exotic optimization problems. See sec:disc for more discussion on this topic.]
Relaxing the integer constraint gives a linear program that underestimates the cut. The ratio between these values, the output of the integer program over that of the linear program, is generally a difficult quantity to compute, and is called the integrality gap.
In fact, computing the integrality gap is an NP-complete problem <cit.>.
We now introduce our main result in more detail.
The reflected entropy S_R <cit.> of a state ρ_AB is defined as the entropy of AA^⋆ in the canonical purification | ρ_AB^1/2> ∈ℋ_AA^⋆ BB^⋆.
The Rényi generalization of S_R is a simple one parameter family of quantum information measures:
S^(n)_R(A:B) = -1/n-1ln Trρ_AA^⋆^n , where ρ_AA^⋆ = Tr_BB^⋆| ρ_AB^1/2> < ρ_AB^1/2|
We will prove:
For integer n > 1, the reflected entropy of a random tensor network state at large bond dimension, with a unique entanglement wedge for AB:C and a unique triway cut for A:B:C (with tensions specified below), satisfies:
lim_χ→∞ S_R^(n)(A:B) /lnχ = 1/n-1𝒜_𝐭(A:B:C) - n/n-1𝒜(AB:C)
where 𝒜_𝐭(A:B:C) is the area of a multiway cut with tensions 𝐭≡(t_A:B,t_B:C,t_C:A) = (2(n-1),n,n)
[In this paper we mostly consider triway cuts defined with this tension. For this reason, we will often abbreviate the triway cuts simply as 𝒜(A:B:C) when it is unambiguous.]
and 𝒜(AB:C) is the area of the minimal cut (with tension 1) for AB:C.
Averaging is taken with respect to the Haar measure over unitary matrices that are applied to vertex states in the graph. See Definition <ref> for the precise construction. The triway cut is defined in the same way as a cut: we split the vertices into three disjoint subsets containing respectively boundary vertices A,B,C. The area is then the sum over the edges e which intersect two of the three regions, weighted by the respective tensions and w(e). See Definition <ref> and fig:network-multi-cut.
The Markov gap is defined as <cit.>:
h(A:B) = S_R(A:B) - I(A:B) ≥ 0
where I(A:B) is the mutual information. The Markov gap vanishes iff the three party state has a particular structure: a classical superposition of states with only bipartite entanglement between the three different parties <cit.>. The Markov gap thus detects a certain class of non-trivial tri-partite entanglement.[h vanishes on GHZ states, so it does not detect all kinds of tripartite entanglement <cit.>. A refined version based on the entanglement of purification does better <cit.>. This is generally harder to compute but the results of this paper help compute it for a class of RTN states <cit.>.]
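To make these definitions concrete, the following self-contained numerical toy (our own example on a random two-qubit state, not a tensor-network computation) builds the canonical purification, reduces it onto AA^⋆, and evaluates the reflected entropy, its n=2 Rényi version, the mutual information and the Markov gap:

```python
# Toy illustration (ours) of the definitions above for a random two-qubit rho_AB.
import numpy as np

dA = dB = 2
rng = np.random.default_rng(0)

X = rng.normal(size=(dA * dB, dA * dB)) + 1j * rng.normal(size=(dA * dB, dA * dB))
rho_AB = X @ X.conj().T
rho_AB /= np.trace(rho_AB)

def entropy(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-14]
    return float(-(w * np.log(w)).sum())

# amplitudes of |rho_AB^{1/2}> are the matrix elements of rho_AB^{1/2};
# the "bra" indices play the role of the mirror factors A*, B*
w, V = np.linalg.eigh(rho_AB)
sqrt_rho = (V * np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T
M = sqrt_rho.reshape(dA, dB, dA, dB)                 # indices (a, b, a*, b*)

# reduce onto A A*: contract the B and B* indices
rho_AAs = np.einsum("abxy,cbzy->axcz", M, M.conj()).reshape(dA * dA, dA * dA)

rho4 = rho_AB.reshape(dA, dB, dA, dB)
rho_A, rho_B = np.einsum("abcb->ac", rho4), np.einsum("abad->bd", rho4)
I_AB = entropy(rho_A) + entropy(rho_B) - entropy(rho_AB)

S_R = entropy(rho_AAs)                               # reflected entropy (n -> 1)
S_R2 = -np.log(np.trace(rho_AAs @ rho_AAs).real)     # S_R^{(2)} = -ln Tr rho_{AA*}^2
print("S_R =", S_R, " S_R^(2) =", S_R2, " I(A:B) =", I_AB, " h =", S_R - I_AB)
```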
In particular, we have the following lower bound:
Under the uniqueness assumption for n=2 in Theorem <ref>,
the (normalized) Markov gap (MG) of a random tensor network state at large bond dimension is lower bounded by:
MG ≡lim_χ→∞h(A,B)/lnχ≥ 2 𝒜_𝐬(A:B:C) - 𝒜(A:BC) - 𝒜(B:AC)- 𝒜(C:AB)
where 𝒜_𝐬 is the standard minimal triway cut with equal tensions 𝐬=(1,1,1).
This follows from ∂_n S_R^(n)≤ 0 and Theorem <ref> applied at n=2.
We will show that the right-hand side of eq:MG-bound is determined by the integrality gap of the integer program <cit.>:
min_ρ∑_eρ(e) w(e)
∀ L ∈𝒫_A,B∪𝒫_A,C∪𝒫_B,C : ∑_e ∈ Lρ(e) ≥ 1
where ρ(e) ∈ℤ_≥ 0 and 𝒫_x,y refers to all paths through the edges of the network connecting vertices x and y. The integrality gap IG is the ratio between the two optimal values of the program before and after relaxing the integer constraint on ρ, a standard concept in the theory of integer programming <cit.>. The original program computes 𝒜(A:B:C) with equal tensions, while the relaxed program allows the domain walls to split into pairs and form three minimal cuts with tensions 1/2 (see fig:relax). The relaxed program is dual to the multicommodity flow problem <cit.> on three parties for which efficient algorithms exist. In this sense, we can interpret the integrality gap as an obstruction to obtaining a bit threads picture. Explicitly our bound relates the two “gaps”:
MG ≥ (IG-1) × ( 𝒜(A:BC) + 𝒜(B:AC) + 𝒜(C:AB))
Roughly speaking IG-1 ≥ 0 measures how computationally hard the integer program is, and so this gives an intriguing link between the Markov gap and complexity.[Note that this is logically different from the computational complexity of the state often discussed in AdS/CFT.]
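As a toy illustration of program-intro (our own example, assuming NumPy and SciPy are available): on a unit-weight star graph with three boundary legs, the integer program returns the equal-tension triway cut, the relaxation is saturated by ρ = 1/2 on every edge (half the sum of the three bipartite cuts, as described above), and the integrality gap is 4/3.

```python
# Toy example (ours, not from the paper): program-intro and its LP relaxation
# on the star graph A-O, B-O, C-O with unit weights.
import itertools
import numpy as np
from scipy.optimize import linprog

edges = [("A", "O"), ("B", "O"), ("C", "O")]
w = np.ones(len(edges))
paths = [[0, 1], [0, 2], [1, 2]]          # A-O-B, A-O-C, B-O-C as edge-index lists

A_path = np.zeros((len(paths), len(edges)))
for i, p in enumerate(paths):
    A_path[i, p] = 1.0

# integer program: brute force over rho(e) in {0,1} (larger values never help here)
int_opt = min(w @ np.array(r) for r in itertools.product([0, 1], repeat=len(edges))
              if np.all(A_path @ np.array(r) >= 1))

# linear relaxation: rho(e) >= 0 real
lp = linprog(c=w, A_ub=-A_path, b_ub=-np.ones(len(paths)),
             bounds=[(0, None)] * len(edges), method="highs")

print("integer optimum:", int_opt, " LP optimum:", lp.fun, " IG:", int_opt / lp.fun)
```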
While an obvious and natural continuation in n away from the integers exists for the triway cut problem, we have not yet rigorously established that this still computes the n-Rényi reflected entropy. We have no reason to believe otherwise[Any lingering doubt might be used to challenge the conclusion <cit.> that the bipartite dominance conjecture <cit.> is false. However, independent of the EW duality, using gaps we have now rigorously proven the bipartite dominance conjecture false for any random tensor network with IG≠ 1 for the program program-intro.], and so it is worth noting that the limit n → 1 reproduces the entanglement wedge cross-section:
lim_n → 1 ( 1/n-1𝒜(A:B:C) - n/n-1𝒜(AB:C) ) = 2 EW(A:B)
where EW(A:B) is simply the minimal cut dividing A:B on the sub-graph defined by the “entanglement wedge” of the cut AB:C (see fig:EW).
The calculation of reflected entropy in holography involves analytically continuing a two-parameter (usually called m,n) replica trick computation. It was shown in Ref. <cit.> that the continuation of the naive saddles proposed in Ref. <cit.> suffers from an order of limits issue, an analog of which exists for RTNs as well <cit.>. By incorporating new saddles, rigorously performing an analytic continuation in m and proposing an analytic continuation in n via the triway cut problem, we have resolved this issue in general RTNs. This motivates a similar prescription with the inclusion of new saddles even in AdS/CFT.
A summary of this paper is as follows. In sec:sum, we state our main theorems
pertaining to the computation of
Rényi reflected entropies for the state |ρ_AB^m/2⟩ with m ≥ 2 an even integer.
In sec:prelim we give some required background, including results on standard network flows, minimal cuts, and the mathematics of permutations and set partitions that make appearances in various statistical mechanics models that we consider. sec:main proves that the optimal solution to the reflected entropy statistical mechanics model can be found in a series of coarser models, finally ending in the multiway cut problem. In sec:cont we continue the m parameter away from even integers to m → 1, where we make contact with the reflected entropy. In this step, we use the method of moments in conjunction with a weak form of measure concentration for random tensor network states. In sec:disc, we discuss various aspects of our work such as bit threads, the relation to entanglement of purification as well as generalization to hypergraphs. Several lengthy proofs are relegated to Appendices.
We end this introduction with a list of common notations used in this article:
Glossary of symbols and notations.
Term Description First defined in
G={E,V} graph Def. <ref>
w(e) weight of an edge e∈ E after Def. <ref>
E_G[V'] or E[V'] set of edges that have some vertex in V' before Def. <ref>
V_G[E'] or V[E'] set of vertices that lie in E' before Def. <ref>
𝒫_A,B set of paths from A to B before Def. <ref>
𝒫_A,B set of edge-disjoint paths from A to B before Def. <ref>
r_A or r_A:B cut region containing A or dividing A:B before eq:mu_def
μ(r)⊂ E cut surface of a region r ⊂ V eq:mu_def
𝒜(A:B) minimal cut for A and B Def. <ref>
𝒜(A:B:C) minimal triway cut among A, B and C Def. <ref>
1_s(x) indicator function of the set s after cpfeas
S_N symmetric group of order N before eq:Cayley
P_N partitions of a set of order N before eq:q(g)
B_N set of string of N Boolean algebras before eq:bk(q)
P(g) coarse graining P: S_mn→ P_mn before eq:q(g)
q_g_0(p) coarse graining q_g_0: P_mn→ P_#(g_0) eq:q_g0
q_X(p) coarse graining q_X: P_mn→ P_2n eq:q(g) and eq:q_g0
s(q) coarse graining s: P_2n→ s(P_2n) ≃ B_2n before eq:S_metric
b^k(q) coarse graining b^k: P_2n→ (B_2n)^k ≃ℤ_2 before d1s
u_k largest partition with a singlet at location k after eq:bk(q)
#(x) cycle (x∈ S_N) or block (x∈ P_N) counting function eq:Cayley and eq:P_metric
#_1(q) number of singlets in q∈ P_N before eq:S_metric
a ∧ b meet of a and b beginning of sec:ppb
a ∨ b join of a and b beginning of sec:ppb
d(g_1,g_2) (g_1,g_2 ∈ S_N) Cayley distance in S_N eq:Cayley
d(q_1,q_2) (q_1,q_2 ∈ P_N) distance on semimodular lattice P_N eq:P_metric and eq:P_metric_2
d_1(s_1,s_2) (s_i=s(p_i):p_i∈ P_N) singlet distance on P_N eq:S_metric
d(b_1,b_2) (b_1,b_2 ∈ B_N) Hamming distance on B_N eq:B_metric and d1s
d_ρ(x,y) (x,y∈ E) graph distance induced by ρ:E→ℝ eq:d_graph
Throughout this article, we will define various distance functions on different sets. Most of these distances will be termed universally by d(·,·), with an understanding that we use different definitions based on the set in context, as shown in Table <ref>.
For a function ρ(e) on the set of edges ρ:E→ℝ we will sometimes write ρ(E') ≡∑_e∈ E'ρ(e) for a subset E'⊂ E.
Note: An alternate proposal for the entanglement wedge cross section (EW) was made in Refs. <cit.> relating it to the entanglement of purification (E_P). In Ref. <cit.>, we used the results obtained in this paper to prove the E_P=EW conjecture for specific RTNs.
Also note that triway cuts in holography have also been discussed in Refs. <cit.> as a candidate holographic dual to a different quantity, the multi-entropy. We will comment more on the relation to this work later in the paper. | null | null | null | In conclusion, we have rigorously demonstrated that the (m,n)-Rényi reflected entropies in random tensor networks are computed by saddles involving triway cuts as shown in fig:network-multi-cut for arbitrary m and integer n. Moreover, there is a natural analytic continuation of the triway cut problem for non-integer n which leads to the holographic proposal relating to the entanglement wedge cross section. We now comment on various aspects of our work.
§.§ Bit Threads
As discussed earlier, bit threads provide a vivid picture of the entanglement structure of holographic states. Based on the structure of bit thread configurations, Ref. <cit.> conjectured that the three party entanglement structure of holographic systems is dominated by bipartite entanglement. However, a version of this conjecture is in conflict with the S_R=2EW proposal <cit.>, for which we have found further evidence in this paper. While this is conclusive, we would nevertheless like to discuss a way to see how one can get as close to bit threads as possible.
As reviewed in sub:minmax, the RT formula can be recast as a max flow problem by convex duality. Moreover, the natural form of the min-cut problem from the RTN perspective is an integer program where the optimization is over domain walls with integer energy costs, representing the different permutations that contribute to the free energy minimization problem. For the entanglement entropy, the problem can be relaxed to a linear program over the real numbers since there isn't an integrality gap for this problem. Once relaxed, convex duality naturally leads to a max-flow problem, i.e., bit threads.
In the context of reflected entropy however, we showed that the triway cut problem is equivalent to an integer program with a non-trivial integrality gap. While the Rényi reflected entropies are computed by triway cuts, which are related to the entanglement wedge cross section, the non-integer program allows relaxation to surfaces that are related to the original RT surfaces as shown in fig:relax (e.g. for n=2). For RTNs, this is identical to the mutual information, as found from assuming the mostly bipartite conjecture. Thus, we can think of the relaxation of the integer program as the “incorrect" step that led to bit threads, as well as the mostly bipartite conjecture.
It has been conjectured that one can amend the mostly bipartite story from bit threads by considering a generalized “hyperthread” optimization program <cit.> (see also <cit.>). Hyperthreads are threads connecting between multiple (k≥ 3) boundary regions and it is conjectured that the optimal value of a k-thread program may be a measure of k-partite entanglement. In the case of 3-threads, the optimal configuration saturates a minimal triway cut. In this sense, our results serve as a firsthand bridge that connects the 3-thread problem to a concrete quantum information measure. We think it should be possible to derive the hyperthread optimization as a dual problem of the various integer programs appearing in this paper – indeed the triway cut does not admit a bit thread dual but it may still be able to be “dual” to a more exotic program such as the hyperthreads. On the other hand, reflected entropy can be naturally generalized to accommodate k-party systems <cit.>. It would be interesting to see if the formalism we developed in this paper extends to such case and if there is possible connection to the k-thread programs. We leave these investigations to future works.
§.§ More General Tensor Networks
There are various aspects of RTNs that make them good models of holography such as their relation to fixed-area states <cit.>. However, there are other aspects that are missing such as the lack of mutual backreaction between domain walls, as well as the commutativity of area operators <cit.>. Thus, it is interesting to analyze the extent to which variants of random tensor networks can model holography.
For instance, the RT formula can be reproduced by choosing tensors that are 2-designs where the average is the same as the Haar average up to the second moment <cit.>. Random stabilizer tensor networks are an example which form at most a projective 3-design <cit.>. However, they satisfy the bipartite dominance conjecture <cit.> and thus, do not accurately model the reflected entropy for holographic states. Our paper certainly makes use of larger moments, so the discrepancy is no surprise. In fact, our results suggest that the integrality gap would become visible to a projective 4-design, at least when computing the (2,2) Rényi reflected entropy (involving canonical purifications of the density matrix ρ_AB^2/ Trρ_AB^2). The implications for the Rényi reflected entropy and the Markov gap are less clear, and we leave it as an important open question to understand at what level of k-design the Markov gap becomes large, of order lnχ.
Another possible generalization of the RTN that can be considered is to use non-maximally entangled edge states <cit.>. In this case, the calculation for the (m,n)-Rényi reflected entropy is similar except the energy costs on the domain wall become functions of the entanglement spectrum of the edge states. In this case, it is harder to prove anything about the analytic continuation, but we expect similar saddles to play an important role.
Even more ambitious would be to use tensor networks with edge degrees of freedom, as in <cit.>, with a possible payoff of more realistic backreaction.
Another generalization we can consider is that of hypergraph RTNs. Hypergraphs are a natural generalization of graphs where edges connecting 2 vertices are generalized to hyperedges potentially connecting more than 2 vertices. States that satisfy the RT formula on hypergraphs were discussed in Refs. <cit.>. Such states have a natural construction in terms of the RTN where hyperedges are formed by projecting onto GHZ states coupling multiple vertex tensors <cit.>. One could then ask what the reflected entropy for such states is. Although we do not have a proof analogous to the one for usual RTNs, assuming that triway cut-like configurations dominate, we can compute the Rényi reflected entropy. The important ingredient is the generalization of the free energy optimization problem. The free energy cost of an edge in the RTN for vertex permutations g_1 and g_2 is weighted by #(g_2 g_1^-1). It is easy to work out the generalization for hyperedges. For instance, for a 3-edge with vertex permutations g_1, g_2 and g_3, the free energy cost is #(g_2g_1^-1∨ g_3g_1^-1), which measures the number of orbits of the relevant elements.
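For concreteness, the orbit-counting function #(·∨·) entering these costs can be evaluated with a few lines of union-find; the snippet below (our own helper, with an arbitrary small example) illustrates it:

```python
# Small helper (ours, illustrative): #(a ∨ b ∨ ...) for permutations of
# {0,...,N-1} equals the number of orbits of the group they generate,
# computed here with a union-find pass over the cycles.
def orbit_count(*perms):
    n = len(perms[0])
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]      # path halving
            i = parent[i]
        return i
    for p in perms:
        for i in range(n):
            ri, rj = find(i), find(p[i])
            if ri != rj:
                parent[ri] = rj
    return sum(1 for i in range(n) if find(i) == i)

a = [1, 2, 0, 3]          # the 3-cycle (0 1 2) acting on {0,1,2,3}
b = [0, 1, 3, 2]          # the transposition (2 3)
print(orbit_count(a))     # 2: orbits {0,1,2} and {3}
print(orbit_count(a, b))  # 1: together the two permutations act transitively
```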
§.§ Holographic States
Having obtained rigorous results for RTNs, it is natural to guess that similar saddles, which are different from the naive saddles written down in Ref. <cit.>, will also play an important role in holography, resolving the issues with analytic continuation found in Ref. <cit.>.
In particular, this would imply that geometric saddles involving backreacted versions of triway cuts would compute the (m,n)-Rényi reflected entropy in holographic states.
It is useful to contrast this with the case of the multi-entropy, a symmetric multipartite entanglement measure defined by Ref. <cit.>.
In particular, for the case of three parties, the bulk dual was proposed to be a triway cut.
However, it was argued by Ref. <cit.> that despite the multi-entropy being computed by a triway cut in RTNs, the corresponding configurations were not gravitational saddles in AdS/CFT.
The basic reason is that in a smooth gravitational saddle, the local neighborhood of any point is a Euclidean ball.
This turns out to be true for the triway cut configuration at n=2.
However, for the triway cut configurations at n>2, the topology is such that the junction of the triway cut is not smooth.
A similar argument also holds for the entanglement negativity and the odd entropy <cit.>.
Given this, one may worry that while triway cuts are relevant for RTNs, they may not compute the (m,n)-Rényi reflected entropy in holographic states.
However, one can check that for the (m,n)-Rényi reflected entropy, the junction of the triway cut is consistent with the topology of a Euclidean ball, thus avoiding the above failure mode.
Thus, we still expect the topologies of the RTN saddles leading to triway cuts to continue to be relevant for holographic states.
A particular case where this is known to be true is that of the (2,n)-Rényi reflected entropy computed in Ref. <cit.>.
§.§ Effective Description
In Ref. <cit.>, we discussed an effective description of the canonical purification as a superposition over states with different values of area for the squeezed cross sections. E.g., for hyperbolic RTNs we have
[displayed equation rendered as a graphic in the original].
It is natural to expect a generalization of this result to general RTNs where the squeezed cross sections are solutions to the triway cut problem at different values of n.
While this effective description was proposed in the context of RTNs, where there is no backreaction and the superposition is simply over different sets of vertices in the graph, it is natural to ask if it can be understood in holographic systems as well. For the special case of two intervals in the vacuum state at m=2, it was found in Refs. <cit.> that the Rényi reflected entropies for arbitrary n can be computed by the torus partition function. This makes analytic continuation easy and is related to the fact that for m=2, there is no independent X element. The calculation of Rényi reflected entropy can then similarly be decomposed into fixed-area sectors of the entanglement wedge cross section, which correspond to different horizon areas in the BTZ saddle. It is then easy to see that the effective description with different squeezed cross sections is represented by these fixed-area states which induce varying conical defects at the horizon, which translate into the angle subtended at the triway cut at different values of n (see fig:torus_slice).
TF would like to thank Alejandro Dominguez-Garcia for discussions. SL would like to thank Shiliang Gao, Jingwei Xu, and Kiran Kumar A.S. for useful discussions. PR would like to thank Abhijit Gadde for discussions.
PR is supported in part by a grant from the Simons Foundation, by funds from UCSB, the Berkeley Center for Theoretical Physics; by the Department of Energy, Office of Science, Office of High Energy Physics under QuantISED Award DE-SC0019380, under contract DE-AC02-05CH11231 and by the National Science Foundation under Award Number 2112880.
This material is based upon work supported by the Air Force Office of Scientific Research under award number FA9550-19-1-0360. | null |
http://arxiv.org/abs/2409.17544v1 | 20240926052216 | Optimizing the Induced Correlation in Omnibus Joint Graph Embeddings | [
"Konstantinos Pantazis",
"Michael Trosset",
"William N. Frost",
"Carey E. Priebe",
"Vince Lyzinski"
] | stat.ML | [
"stat.ML",
"cs.LG",
"math.ST",
"stat.ME",
"stat.TH"
] |
| null | null | null | null | null | null |
http://arxiv.org/abs/2409.17464v1 | 20240926015128 | Computation of $\langle Φ^2\rangle$ and quantum fluxes at the polar interior of a spinning black hole | [
"Noa Zilberman",
"Marc Casals",
"Adam Levi",
"Amos Ori",
"Adrian C. Ottewill"
] | gr-qc | [
"gr-qc"
] |
[email protected]
Department of Physics, Technion, Haifa 32000, Israel
Princeton Gravity Initiative, Princeton University, Princeton NJ 08544, USA
[email protected]
Institut für Theoretische Physik, Universität Leipzig,
Brüderstraße 16, 04103 Leipzig, Germany
School of Mathematics and Statistics, University College Dublin, Belfield, Dublin 4, D04 V1W8, Ireland
Centro Brasileiro de Pesquisas Físicas (CBPF), Rio de Janeiro, CEP 22290-180, Brazil
[email protected]
Department of Physics, Technion, Haifa 32000, Israel
[email protected]
Department of Physics, Technion, Haifa 32000, Israel
[email protected]
School of Mathematics and Statistics, University College Dublin, Belfield, Dublin 4, D04 V1W8, Ireland
§ ABSTRACT
Renormalization of physical quantities for quantum field theories in curved spacetimes can be achieved via the subtraction of counterterms
in a consistent manner within a regularization scheme such as a point-splitting method.
Pragmatic mode-sum regularization (PMR) is a point-splitting method which is particularly suitable for rotating black hole spacetimes.
We extend and tailor the t-splitting variant of PMR specifically for the interior of a Kerr black hole on the axis of rotation, focusing on a minimally-coupled massless scalar field in the physically-motivated Unruh state. The method addresses unique challenges within the black hole interior that do not occur outside. In particular, while the infinite sum over multipolar number l converges in the black hole exterior, it diverges in the interior, necessitating the subtraction of a so-called intermediate divergence which includes introducing an additional “small” split in the direction of the polar angle θ. This procedure is outlined and justified, along with the standard PMR method's subtraction of counterterms mode-by-mode.
We apply this method to calculate the renormalized energy-momentum fluxes ⟨ T_uu⟩ _ren^U, ⟨ T_vv⟩ _ren^U (where u and v are the standard Eddington coordinates) and the renormalized field square ⟨Φ^2⟩ _ren^U throughout the Kerr black hole interior, spanning from (just off) the event horizon to (just off) the inner horizon. Special emphasis is placed on the vicinity of the inner horizon, where the t-splitting results for ⟨ T_uu⟩ _ren^U and ⟨ T_vv⟩ _ren^U asymptote to those obtained directly at the inner horizon using a different method in a previous work.
In an Appendix, we develop an alternative variant of the t-splitting PMR method, dubbed the analytic extension variant, which does not include the intermediate divergence subtraction. We utilize it to perform independent computations that are used to verify the standard t-splitting variant presented in the main text.
Computation of ⟨Φ^2⟩
and quantum fluxes at the polar interior of a spinning
black hole
Adrian C. Ottewill
September 28, 2024
======================================================================================
§ INTRODUCTION
Within the framework of semiclassical gravity, the gravitational field
is kept classical whereas the matter fields are quantized. Semiclassical
gravity is expected to be a valid framework in the limit that the
physical scales are much larger than the Planck scales and, as such, it has
provided significant results. For example, in black hole (BH) settings,
semiclassical
gravity has led to the pioneering discovery by Hawking <cit.>
that astrophysical BHs emit quantum thermal radiation in their exterior.
In its turn, in the interior region of BHs, recent work within semiclassical
gravity has unveiled an irregularity of the so-called Cauchy
horizon (CH) <cit.> (see <cit.> in the non-rotating case), which (at least naively[Semiclassical analyses on fixed Reissner-Nordström and Kerr metrics (as well as their corresponding de Sitter variants) indicate that the quantum energy-momentum fluxes typically diverge at the CH like V^-2 (where V is a regular Kruskal coordinate vanishing at the CH) – which is stronger than the divergence of energy-momentum perturbations in the analogous classical problem. However, when attempting to translate this observation to the near-CH backreaction analysis, one should recall that a semiclassical BH in the Unruh state undergoes evaporation (which is manifested already at the event horizon). This needs to be taken into account when attempting to evolve the Einstein equations from the event horizon towards the CH.]) suggests dominance
over that due to classical effects <cit.> (see, e.g., <cit.> in the non-rotating case).
As mentioned, within semiclassical gravity, matter fields are treated
as Quantum Field Theories (QFTs). As is well-known, however, QFTs
suffer from ultraviolet divergences and, hence, the expectation values
of most physical quantities need to be appropriately renormalized. Most importantly,
the renormalized expectation value of the stress-energy tensor
(RSET),
⟨ T_μν⟩ _ren^Ψ[Typically, a hat is placed over a quantity in order to distinguish its quantum version over its classical version since, mathematically, they are objects of very different types. In order to reduce cluttering, however, we will not make such a distinction. Therefore, in particular, Φ will equally denote a classical scalar field or its quantum version (which is an operator-valued distribution), and similarly for the stress energy tensor T_μν. The distinction between classical and quantum quantities should be clear from the context.], when the field is in a quantum state Ψ, is the quantity which appears on the right hand side of the
semiclassical Einstein equations:
G_μν=8π⟨ T_μν⟩ _ren^Ψ ,
where G_μν is the Einstein tensor and we take units where c=G=1.
That is, the RSET replaces the classical stress-energy
tensor T_μν in the classical Einstein equations. Ideally,
one would
evaluate
the RSET
and the Einstein tensor
in the same spacetime
but that is a very tall order. Thus, typically, one
follows a perturbative approach whereby the RSET is calculated on
a background spacetime and one would then solve the semiclassical
Einstein equations for the backreacted metric; such a procedure could be carried out iteratively to arbitrary order. In curved spacetimes,
Wald <cit.> established axioms which a physically-meaningful RSET should satisfy.
Henceforth we shall focus on the case that the matter
field is a scalar field Φ. In this case, a
quantity which is easier to calculate than the RSET but which is also of physical significance is the renormalized field square (or vacuum polarization) ⟨Φ^2(x)⟩ _ren^Ψ: in particular, it is important for spontaneous symmetry breaking (see, e.g., <cit.> in the context of black holes).
There exist several methods for renormalization
but the one of main interest in this paper involves using the so-called point-splitting regularization
method <cit.> (see, e.g., <cit.> for a review). Point-splitting-based renormalization methods <cit.> are particularly useful for calculational purposes and can be used for both the renormalized field square and the RSET, satisfying Wald's physical axioms in the latter case.
The point-splitting method essentially consists of the following.
First, the expectation value at a
spacetime point x of a physical quantity
which is quadratic
in the quantum field
(which, mathematically, is an operator-valued distribution) and its derivatives
is temporarily made a bi-tensor[There is a freedom in the choice of such bi-tensor, while the final physical result is independent of that choice.] by evaluating each
one of the two field factors at a different spacetime point: one factor at x
and the other factor at, say, x'.
Such a bi-tensor is then
regular
as long as the
two spacetime points do not coincide (and are not connected by a null
geodesic <cit.>).
One then subtracts
from this unrenormalized
bi-tensor
a
so-called counterterm[The counterterm is typically expressed as a sum of truly divergent subterms and a finite subterm.]
which is purely-geometrical (and so state-independent).
Finally, one takes the
coincidence limit (x'→ x) in the result of such subtraction,
yielding the renormalized expectation value of the quantity of interest.
In the case that the quantity
of interest is the field square
Φ^2 or the stress-energy tensor
T_μν, we
respectively
denote
the
unrenormalized bi-tensor by
1/2G_Ψ(x,x')
or ⟨ T_μν(x,x')⟩^Ψ,
the counterterm by
1/2G^CT(x,x') or T_μν^CT(x,x'),
and
the point-splitting regularization
procedure then
amounts to
⟨Φ^2(x)⟩ _ren^Ψ=1/2lim_x'→ x(G_Ψ(x,x')-G^CT(x,x'))
or ⟨ T_μν(x)⟩ _ren^Ψ=lim_x'→ x(⟨ T_μν(x,x')⟩ ^Ψ-T_μν^CT(x,x')).
In this paper, for the two-point function G_Ψ(x,x') we shall later make the choice of the anticommutator G_Ψ^(1)(x,x')≡⟨{Φ(x),Φ(x')}⟩ _Ψ, also called the Hadamard two-point function (HTPF), although other choices for G_Ψ(x,x') are also possible.
An expression for the HTPF in Kerr in terms of modes which are amenable to practical computations was derived in <cit.> for the exterior of the BH and by us <cit.> for the interior.
From the fact that the
classical stress-energy tensor may be obtained by applying a certain
differential operator quadratic in the field (see Eq. (<ref>) below),
it follows that the RSET may be obtained by applying a related differential
operator to G_Ψ(x,x')-G^CT(x,x') and afterwards
taking the coincidence limit.
Unfortunately, from a technical point of view, such a renormalization
procedure is notoriously hard to carry out in practice, at least in
the case of BH background spacetimes. The main reason is that, typically,
one calculates the unrenormalized bi-tensor (1/2G_Ψ(x,x') or
⟨ T_μν(x,x')⟩ ^Ψ ) via a full
(Fourier and angular) infinite mode decomposition whereas the counterterms
(1/2G^CT(x,x') or T_μν^CT(x,x')) are
instead known in terms of geometrical quantities (they are usually
known as an expansion for small geodesic distance between the two
spacetime points[The geodesic distance between the two spacetime points is unique as
long as the points are `close' enough.] x and x'). Only in very special (highly-symmetric) spacetimes can one analytically obtain both
the modes of the unrenormalized quantity and the corresponding
mode sums in closed form; one can then subtract from the closed form expression the counterterm and finally take the coincidence limit x'→ x, thereby performing the entire renormalization procedure analytically. In the other cases, which include all 4-dimensional BH spacetimes, one must resort to a numerical evaluation of the full mode sum, which can be rather challenging since the convergence of the infinite mode sum slows
down as x' approaches x (and the mode sum diverges in the actual limit
x'→ x).
Therefore, one typically seeks to find a mode decomposition of
the counterterm, so that the renormalization subtraction can be carried
out mode-by-mode, thus improving the convergence of the mode sum.
For that purpose, it is useful to separate the points x and x'
in a coordinate direction which corresponds to a symmetry (in cases where
there is one) of the background spacetime and then re-express the
counterterm as a mode sum decomposition with respect to the associated coordinate.
Commonly, the background spacetime is stationary and either spherically-symmetric or only axisymmetric, and so, accordingly, one separates the points in
the time direction (so-called t-splitting) or corresponding angular
directions (so-called θ- or φ-splitting, depending
on whether the direction of separation is along the polar angle or
the azimuthal angle, respectively). In the case of t-, θ-
or φ-splitting, the corresponding decomposition of the counterterm is in terms
of, respectively, Fourier frequency ω-modes, spherical/spheroidal l-harmonics or azimuthal m-modes.
In its turn, the unrenormalized bi-tensor involves all sums: an (infinite) integral over ω, an (infinite) sum over l and a (finite) sum over m.
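The gain from a mode-decomposed counterterm is easy to see in a simple one-dimensional toy model (our own illustration, unrelated to the actual Kerr mode functions): a logarithmically divergent mode sum together with a closed-form "counterterm" whose exact mode decomposition happens to be known.

```python
# Purely illustrative toy (ours).  "Unrenormalized" mode sum:
# a_l(eps) = cos(l*eps)/(l+1), whose sum diverges logarithmically as the point
# split eps -> 0.  "Counterterm": B(eps) = -ln(2 sin(eps/2)), with the exact
# mode decomposition b_l(eps) = cos((l+1)*eps)/(l+1).  Mode-by-mode subtraction
# converges uniformly in eps, while subtracting the closed-form counterterm
# only after truncating the sum requires keeping L >> 1/eps modes.
import numpy as np

def naive(eps, L):
    l = np.arange(L)
    A = np.sum(np.cos(l * eps) / (l + 1))
    return A + np.log(2.0 * np.sin(eps / 2.0))          # A(eps) - B(eps)

def mode_by_mode(eps, L):
    l = np.arange(L)
    return np.sum((np.cos(l * eps) - np.cos((l + 1) * eps)) / (l + 1))

L = 10_000
for eps in (1e-1, 1e-3, 1e-5):
    print(f"eps={eps:.0e}  naive={naive(eps, L):+.4f}  "
          f"mode-by-mode={mode_by_mode(eps, L):+.4f}")
# The exact eps -> 0 limit of A - B in this toy is 0.  The mode-by-mode column
# tracks A(eps) - B(eps) accurately for every eps (and is finite even at eps=0),
# whereas the naive column is reliable only once L greatly exceeds 1/eps.
```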
It is worth mentioning that the point-splitting method is expected to fail at the spacetime regions where the Killing vector associated with the direction of symmetry with respect to which the splitting is carried out has zero norm (since the splitting would be along a null direction, along which the two-point function diverges).
In particular, this would mean that in a Kerr spacetime φ-splitting
might fail on the pole and t-splitting on the boundary of the ergoregion, which is where the Killing vector ∂_t becomes spacelike. (In particular, at the axis of rotation the ergoregion boundary meets the horizons, leading to the failure of the method there, as we empirically see.)
If the background is static and the quantum state is thermal, t-splitting may render the expressions
particularly amenable to computations if one further takes advantage
of a Euclideanization technique,
whereby the spacetime is made Riemannian via a Wick rotation of the time coordinate.
Another option for performing the renormalization is to calculate differences
of renormalized expectation values
in two different
states: because the counterterms are state-independent, it is clear
that such difference is equal to the coincidence limit of the difference
between the unrenormalized bi-tensors in the two different states
(e.g., ⟨Φ^2(x)⟩ _ren^Ψ_1-⟨Φ^2(x)⟩ _ren^Ψ_2=1/2lim_x'→ x(G_Ψ_1(x,x')-G_Ψ_2(x,x'))
and ⟨ T_μν(x)⟩ _ren^Ψ_1-⟨ T_μν(x)⟩ _ren^Ψ_2=lim_x'→ x(⟨ T_μν(x,x')⟩ ^Ψ_1-⟨ T_μν(x,x')⟩ ^Ψ_2),
for two states Ψ_1 and Ψ_2). Such differences are already regular
and so no actual renormalization needs to be carried out. Such a calculation
is particularly useful if one happens to know through some other means
the value of the renormalized expectation value in some reference
state, from which (together with the calculation of the difference
between unrenormalized bi-tensors) the value of the renormalized expectation
value in another state could be thus obtained. We call this the state
subtraction method.
Let us from now on focus only on BH spacetimes, for which the main relevant quantum states are the following.
The Unruh state <cit.>
describes an astrophysical BH evaporating via the emission of
Hawking radiation and is hence the state of interest in this paper.
In Schwarzschild or Reissner-Nordström (RN), the Hartle-Hawking (HH) state <cit.>
is meant to describe a
BH in thermal equilibrium with its own Hawking radiation and is the only state
for which the Euclideanization procedure turns the Fourier ω-integral into a much more computationally practical discrete sum.
In Kerr,
a proposal for this state was made in <cit.> but,
unfortunately, off the symmetry axis it is not well-defined for bosons <cit.>, whereas the analogous state for massless fermions only exists near the event horizon (EH) <cit.>;
possibly relatedly,
the Euclideanization procedure is
in principle
not immediately applicable
in Kerr.
Finally, the Boulware state <cit.>
is irregular
on the EH and is (only) appropriate to describe the quantum fields
around a star-like object.
Due to symmetries of the metric and quantum state under consideration, in spherically-symmetric BH spacetimes (e.g. Schwarzschild and RN) the quantities computed (RSET and vacuum polarization) may depend only on the radial coordinate r, and in the axially-symmetric (e.g. Kerr) case the dependence may only be on r and polar angle θ. In particular, a computation at the CH is valid generally at the inner horizon (IH, whose ingoing section is the CH), with some θ-dependence in the rotating case.
Despite the above-mentioned technical difficulties for renormalization
in BH background spacetimes,
significant progress has been made over the years.
Renormalization in QFT in BH spacetimes has a long history starting
in the late 1970's, which we next review classified by method and spacetime, starting with spherically-symmetric BHs and afterwards moving on to rotating BHs.
First, via the method of state subtraction and expected behavior of a reference quantum state in some asymptotic region, the
renormalized expectation values of physical quantities
for the Unruh, Hartle-Hawking and Boulware states
in Schwarzschild spacetime were obtained when
approaching the EH or radial infinity <cit.>.
Recently, Refs. <cit.>
used the state subtraction method to obtain renormalized expectation values on the CH of an RN-de Sitter (dS) BH.
Another early calculation of the RSET in a BH spacetime was carried out in <cit.> for the Hartle-Hawking state in Schwarzschild spacetime. This work made use of Euclideanization and then regularization was implemented at the level of the heat kernel representation for the Euclidean Green function, while taking the spacetime coincidence limit in the kernel, although unfortunately this calculation contained a slip unrelated to the renormalization process as noted by Howard <cit.>.
Turning to the point-splitting
method, the t-splitting variant together with the Euclideanization technique
was used by various authors
in order to obtain renormalized expectation values
in static, spherically-symmetric BH spacetimes, namely,
Schwarzschild (e.g., <cit.>), RN (e.g., <cit.>) and
(lukewarm) RNdS (e.g., <cit.>).
Typically, in order to speed up the convergence of the mode sums, these works use WKB asymptotics for large multipole number l and/or Euclidean frequency (such WKB asymptotics are not readily generalizable to the case of Lorentzian frequency ω).
It is also worth noting a new variant of the point-splitting method, called the extended coordinate method, which involves splitting in both the angular direction
and
in the time direction combined with the Euclideanization technique. The extended coordinate method has been applied
in (4- and higher-dimensional) Schwarzschild <cit.> and in
Schwarzschild-anti-dS <cit.>; see <cit.> for renormalization also via multiple-direction splitting but without Euclideanization (and so is in principle directly generalizable to Kerr) in the case of Bertotti-Robinson spacetime.
Of most relevance for this paper is the so-called pragmatic mode-sum regularization (PMR) method, which also has the advantage of not using Euclideanization while still using point-splitting.
PMR has been
developed and used to calculate renormalized expectation values for
a scalar field in the following cases: using θ-splitting,
in Schwarzschild <cit.>,
in RNdS <cit.>
and
inside the EH
of RN <cit.>;
using t-splitting, in Schwarzschild <cit.>;
separately using t-, θ- and φ-splitting, in
Schwarzschild <cit.>.
The literature results mentioned so far were for spherically-symmetric BHs. In
the astrophysically most important case of a stationary, rotating
(Kerr) BH, because of the lack of spherical symmetry, θ-splitting
is not possible and, because of the lack of staticity, the Euclideanization
technique
is in principle not implementable
either. Thus, progress in Kerr was for a long time made only by calculating
differences of renormalized expectation values of quantities in two
different states. This was some times combined with the expected behavior
of the RSET for one of the states in some specific spacetime region (namely, at infinity
or near a horizon) in order to
apply the state subtraction method so as to
gain knowledge about the behavior of the RSET for the other state in that
region: see Refs. <cit.> outside the EH and Ref. <cit.> on the CH (and <cit.> on the CH of Kerr-dS).
An exception to that is the axis of symmetry of Kerr, where no rotation is
`felt', and that was used to directly obtain the RSET in
a `formal' Hartle-Hawking
state on the pole of the EH in Refs. <cit.>. An important technical
breakthrough in renormalization was achieved using the t- and φ-splitting
variants of PMR in <cit.>,
where the authors managed to calculate the RSET outside the EH of
Kerr. This is so far the only time that the calculation
of an RSET has been carried out outside a Kerr BH.
In the current
paper, we present the method and results for an analogous calculation
of certain components of the RSET and the renormalized field square inside
a Kerr BH, all the way from (just off) the EH to (just off) the IH.
We have mentioned our calculation in <cit.> of the RSET energy-flux components on the CH of Kerr using state subtraction (which was done for an array of values of the polar angle θ and the BH angular momentum). In fact, in <cit.> (see especially its Supplemental Material) we also presented t-splitting results for the same RSET components on the pole
(i.e., θ=0[Even though θ=π is also a pole, the value of the scalar field is the same at θ=π as at θ=0, and so we indistinctively refer to “the pole" or “the axis of rotation".])
very near -but off- the IH (as the method we use is inapplicable at exactly the IH itself). The extrapolation of these results onto the IH allowed us to check our result exactly on the IH, which was independently obtained with the state-subtraction method. In particular, this comparison provided a crucial test for the state subtraction procedure used directly at the IH, which was based on a non-conventional reference state.
In this paper we describe in detail the method that we used in <cit.> off the IH, and here we
also use it to obtain new results.
Specifically, the method that we develop here is the t-splitting variant of PMR for
the calculation of renormalized quantities on the pole
between (just off) the EH and (just off) the IH of a Kerr spacetime g_αβ
for a minimally-coupled massless scalar field in the Unruh state |0⟩ _U
(we use throughout a U subscript or superscript
to indicate that the field is in the Unruh state). The quantities that we give computationally-amenable expressions
for are the renormalized expectation value of the field square (the vacuum polarization), ⟨Φ^2⟩ _ren^U,
and the energy flux components[The Eddington coordinates are null in spherical symmetry. In the rotating case, they become null on the pole (and on the horizons), which facilitates the interpretation of
T_uu and T_vv as the energy flux components in the context of this paper.] of the RSET, ⟨ T_yy⟩ _ren^U,
where y∈{u,v}, and u and v are the Eddington coordinates [see
Eq. (<ref>) below]. The significance of these flux components lies in their role in understanding backreaction near the CH, as outlined in Ref. <cit.>.
Broadly speaking, it is more convenient to treat the trace-reversed stress-energy tensor, denoted by T_αβ, which is related to the original tensor T_αβ by
T_αβ≡ T_αβ-1/2g_αβT_ μ^μ ,
since it admits the following simple form:
T_αβ=Φ_,αΦ_,β .
It is also worth noting that when taken as sources to the Einstein equation, there is no advantage to using the stress-energy tensor over its trace-reversed counterpart. Indeed, one may work with the alternative version of the Einstein
equation R_αβ=8πT_αβ, where R_αβ is the Ricci tensor, which involves the trace-reversed stress-energy tensor directly.
However, at the focus of this paper is the pole of Kerr – where g_uu=g_vv=0 – hence trace-reversal does not change the flux components. That is, at the pole, the following holds:
T_yy(r,θ=0)=T_yy(r,θ=0) .
Hence, in this paper, we treat the flux components directly, and they are given in terms of the field derivatives by T_yy=Φ_,yΦ_,y.
As mentioned,
t-splitting in BH background spacetimes involves a separation of the spacetime points in the t direction
(which is a symmetry of the background)
and a decomposition of the unrenormalized bitensor, and so also of the corresponding renormalized quantity, that involves two infinite sums in the multipolar number l and frequency ω, as well as a finite sum in the azimuthal number m.
We note that the interior of
the BH reveals some distinct features which give rise to specific technical challenges
which do not appear in the exterior (as in the calculation in <cit.>). In particular, expressions for renormalized
quantities inside the BH contain the mentioned double infinite sums such that the
innermost, multipolar l-sum diverges. We refer to this as the intermediate divergence (ID)
problem and we show how to deal with it (namely, by including a `small'
split in the polar angle direction, on the top of the split in the
t direction).
We then use the t-splitting PMR method in order to derive the results
on the pole for the renormalized energy fluxes ⟨ T_yy⟩ _ren^U
on approaching the IH
that were already shown in Ref. <cit.> (agreeing with the state-subtraction results computed directly at the IH therein),
as well as to obtain new results. The new results on the pole are these energy fluxes all the way between the two horizons (see Figs. <ref>–<ref>)
as well as the renormalized field square ⟨Φ^2⟩ _ren^U (see Figs. <ref>–<ref>).
Apart from the IH vicinity, we also focus on the EH vicinity at the pole, where we obtain numerical support for regularity of the Unruh state there (reflected in the vanishing
of ⟨ T_uu⟩ _ren^U as (r-r_+)^2 in the r→ r_+ limit), which is a property that has not yet been rigorously proven in the case of Kerr.
In Table <ref>
we provide a summary of the numerical values of various
quantities of physical interest (such as the fluxes and the field square) at the pole of the horizons for the values of the Kerr BH angular momentum that we considered in this paper.
The rest of this paper is organized as follows. In Sec. <ref>
we introduce Kerr spacetime, the scalar field and its Eddington modes, the Unruh state
as well as a conserved quantity. In Sec. <ref>, the t-splitting procedure is extended and customized to the Kerr interior (at the pole, θ=0), accommodating for the complexities arising there and describing the required extra steps in the procedure. In Sec. <ref> we present the aforementioned numerical results, and we end with a discussion in Sec. <ref>.
The Appendices complement the rest of the paper, and are as follows:
Appendix <ref> gives the asymptotic
expressions for large multipole number l (and m=0) for the interior radial function ψ_ω l^int
and the exterior reflection coefficient ρ_ω l^up, which
are then used in Sec. <ref>; Appendix
<ref> illustrates and justifies our treatment of the intermediate divergence problem arising in the sum over l; Appendix <ref>
focuses on the numerical methods implemented in the current work; finally,
Appendix <ref> offers an alternative
approach to the computation, which we call the analytic extension variant of t-splitting,
following computations of several quantities which may
be compared with their standard t-splitting counterparts.
Supplementing this paper is a Mathematica notebook which includes the PMR t-splitting counterterms for the field square and for the full stress-energy tensor, given at a general spacetime point in a Kerr spacetime. (The results for the full stress-energy tensor are given in Boyer-Lindquist coordinates, and are then translated to the flux components in coordinates (u,v,θ,ϕ) at the pole.)
We use
metric signature (-+++) and units where c=G=1.
§ PRELIMINARIES
In this section, we introduce various issues at the basis of this paper: the background Kerr
BH spacetime, the wave equation satisfied by the scalar field propagating
on Kerr spacetime, computationally-convenient modes of the
scalar field (namely, the interior and exterior Eddington modes), and a brief presentation of the Unruh state.
We finish the section by noting the existence of a conserved quantity and its interpretation in the Unruh state.
§.§ The Kerr metric
We begin with the Kerr metric, a solution to the vacuum Einstein equations
describing a spinning BH of mass M and angular momentum J, given
in Boyer-Lindquist coordinates (t,r,θ,φ)
by the line element
ds^2=-(1-2Mr/ρ^2)dt^2+ρ^2/Δdr^2+ρ^2dθ^2+(r^2+a^2+2Mra^2/ρ^2sin^2θ)sin^2θdφ^2-4Mra/ρ^2sin^2θdφdt,
where a≡ J/M and
ρ^2 ≡ r^2+a^2cos^2θ ,
Δ ≡ r^2-2Mr+a^2 .
In this paper we only consider the sub-extremal case, that is, in
which the BH parameters satisfy |a|/M<1.
We note that Δ given above may be written as
Δ=(r-r_+)(r-r_-)
where the roots
r_±=M±√(M^2-a^2)
mark the locations of the EH (at r=r_+) and the
IH (at r=r_-). We dub[Strictly, one needs two patches of Boyer-Lindquist coordinates in order to cover both the interior and the exterior regions, since these coordinates are irregular on the horizons; see Eqs. (<ref>), (<ref>) and (<ref>) for the coordinates {U,V,θ,φ_+ }, which are regular across the EH.] the region r>r_+ the
BH exterior (or “outside the BH”), and the region r_-<r<r_+
– the BH interior (or “inside the BH” – corresponding
to the shaded region in Fig. <ref>).
Both r=r_+ and r=r_- correspond to null causal surfaces.
Regarding the IH, however, we point out that in the (physically-realistic)
case of gravitational collapse, only the ingoing section (the
right one of the two segments denoted “IH” in Fig. <ref>)
maintains the causal role of a Cauchy horizon, being the boundary
of the domain of predictability for an initial data surface reaching spacelike infinity (in the external universe).[In the case of an eternal BH, the relevant initial data surface reaches spacelike infinity in both external universes (including the parallel universe), and then both sections of the IH are Cauchy horizons.]
The surface gravity parameter κ_± corresponding to the
horizon at r_± is given by
κ_±≡r_+-r_-/2(r_±^2+a^2) .
We introduce the “tortoise coordinate” r_* defined through
dr/dr_*=Δ/(r^2+a^2). In this paper, we pick
a constant of integration such that
r_*=r+1/2κ_+log(|r-r_+|/r_+-r_-)-1/2κ_-log(|r-r_-|/r_+-r_-) .
Note that r_* diverges at both horizons; in particular, r_*→-∞
at r=r_+ and r_*→∞ at r=r_-.
The future-directed Eddington coordinates, u and
v, may be defined in the BH exterior by
u_ext≡ t-r_*, v≡ t+r_*
and in the BH interior by
u_int≡ r_*-t, v≡ r_*+t .
The Eddington coordinates are null at the pole of Kerr (where g^uu=g^vv=0) and off the pole at both r=r_+ and r=r_- (but not elsewhere).[Although u and v are not necessarily regular at the horizons,
they are null there in the sense that they are co-directed with the
corresponding Kruskal coordinates (which are regular and null at the
corresponding horizon).]
While v parameterizes the EH, the interior and exterior u coordinates
diverge there (see Fig. <ref>). This motivates defining
the Kruskal coordinates U and V, which remain
regular across the EH. In the BH exterior they are defined by
U(u_ext)≡-1/κ_+exp(-κ_+u_ext), V(v)≡1/κ_+exp(κ_+v) ,
and in the BH interior by
U(u_int)≡1/κ_+exp(κ_+u_int), V(v)≡1/κ_+exp(κ_+v) .
The V(v) coordinate is the same in the interior and
exterior regions. Regarding U(u), the interior U(u_int)
is a smooth (and in fact analytic) continuation of the exterior U(u_ext).
An analogous set of Kruskal coordinates may be defined to expose the
regularity of the metric at the IH, but these IH coordinates are not
required in this paper.
Finally, we note that in Kerr, all free-falling observers share the
same asymptotic value of dφ/dt on approaching r→ r_±,
Ω_±≡a/2Mr_± .
Since the t coordinate diverges at both horizons (as in the spherically
symmetric case), the above fact implies that φ also diverges
there (unlike in the spherically symmetric case). Hence, we use Ω_± to define an azimuthal coordinate
which stays regular at the horizon at r_±:
φ_±≡φ-Ω_±t .
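For a quick numerical orientation, the closed-form quantities above are straightforward to evaluate. The following minimal Python sketch computes r_±, κ_±, Ω_± and the tortoise coordinate r_* of Eq. (<ref>); the parameter choice M=1, a=0.8 is an assumption made purely for illustration (it matches one of the two cases considered in the numerical results below).

import numpy as np

M, a = 1.0, 0.8                      # illustrative BH parameters (assumption)

r_p = M + np.sqrt(M**2 - a**2)       # event horizon radius r_+
r_m = M - np.sqrt(M**2 - a**2)       # inner (Cauchy) horizon radius r_-
kappa_p = (r_p - r_m) / (2.0 * (r_p**2 + a**2))   # surface gravity at r_+
kappa_m = (r_p - r_m) / (2.0 * (r_m**2 + a**2))   # surface gravity at r_-
Omega_p = a / (2.0 * M * r_p)        # horizon angular velocity at r_+
Omega_m = a / (2.0 * M * r_m)        # horizon angular velocity at r_-

def r_star(r):
    """Tortoise coordinate, with the integration constant chosen as in the text."""
    return (r
            + np.log(np.abs(r - r_p) / (r_p - r_m)) / (2.0 * kappa_p)
            - np.log(np.abs(r - r_m) / (r_p - r_m)) / (2.0 * kappa_m))

print(r_p, r_m, kappa_p, kappa_m, Omega_p, Omega_m, r_star(0.9))

For these values one finds r_+=1.6M, r_-=0.4M, κ_+=0.1875/M, κ_-=0.75/M, Ω_+=0.25/M and Ω_-=1/M.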
§.§ The wave equation and its separation
We consider a massless, minimally-coupled scalar field Φ,
obeying the wave equation
□Φ=0,
where □ is the covariant d'Alembertian.
Due to separability of this equation on a Kerr background, we decompose
the field into (ω lm) modes
Φ_ω lm(t,r,θ,φ)=const·ψ_ω lm(r)/√(r^2+a^2)e^-iω tZ_lm^ω(θ,φ) .
The angular functions Z_lm^ω(θ,φ)
are the spheroidal harmonics, given by
Z_lm^ω(θ,φ)=1/√(2π)S_lm^ω(θ)e^imφ ,
where S_lm^ω(θ) is the (real) spheroidal
wave function (see Ref. <cit.> and references
therein),
satisfying the eigenvalue equation:
1/sinθd/dθ(sinθdS_lm^ω(θ)/dθ)+(a^2ω^2cos^2θ-m^2/sin^2θ+E_lm(aω))S_lm^ω(θ)=0,
with E_lm(aω) the corresponding eigenvalue, obtained
by requiring regularity at θ=0,π.
We normalize the spheroidal wave functions as in Eq. (2.10) Ref. <cit.>, namely:
∫_0^π[S_lm^ω(θ)]^2 sinθ dθ =1.
Of particular practical relevance for this paper is the fact that the angular function is zero at the poles if m≠ 0, i.e., S_lm^ω(θ=0,π)∝δ_m^0.
The so-called radial function ψ_ω lm(r)
solves a simple scattering-like equation which we term the radial
equation:
d^2ψ_ω lm/dr_*^2+V_ω lm(r)ψ_ω lm=0,
with the effective potential
V_ω lm(r)≡[K_ω m^2(r)-λ_lm(aω)Δ]/(r^2+a^2)^2-G^2(r)-dG(r)/dr_*
where
K_ω m(r)≡(r^2+a^2)ω-am, λ_lm(aω)≡ E_lm(aω)-2amω+a^2ω^2, G(r)≡rΔ/(r^2+a^2)^2 .
Note that V_ω lm involves the frequency ω in a non-trivial
way via the angular eigenvalue. Carrying Eq. (<ref>)
to the asymptotic domains outside and inside the BH, r→∞,
r→ r_+ and r→ r_-, we obtain three different limiting
values:
V_ω lm≃ω^2, r→∞ (r_*→∞),
ω_+^2, r→ r_+ (r_*→-∞),
ω_-^2, r→ r_- (r_*→∞),
where we define
ω_±≡ω-mΩ_± .
For that reason (and due to the potential being short-range), the
asymptotic behavior of solutions to Eq. (<ref>) is generally
of the form e^± iω r_* as r→∞, e^± iω_+r_*
as r→ r_+, and e^± iω_-r_* as r→ r_-,
corresponding to free waves in these r_*→±∞ domains.
§.§ The Eddington modes
The Eddington modes are solutions to Eq. (<ref>) which conform
with the general separated form of Eq. (<ref>) and which
admit initial data that uniformly oscillate with either u or v
along the relevant initial null hypersurface. Their decomposition
allows for a convenient numerical computation of these modes, requiring
one to merely solve the ODE given in Eq. (<ref>) for
the radial function (along with the standard computation of the spheroidal
harmonics).
We briefly introduce two families of Eddington modes (each consisting
of an ingoing set and an outgoing set): the exterior Eddington modes
which are defined in the BH exterior (r_+<r<∞) and the interior
Eddington modes which are defined in the BH interior (r_-<r<r_+).
For a more detailed introduction of the various modes, see Sec. III
in Ref. <cit.>.
The exterior Eddington modes
We begin by defining two sets of exterior radial functions [solutions
to Eq. (<ref>) in the BH exterior], “in” functions
denoted ψ_ω lm^in and “up” functions denoted
ψ_ω lm^up, which are determined by the boundary
conditions:
ψ_ω lm^in(r)≃τ_ω lm^ine^-iω_+r_* r_*→-∞
e^-iω r_*+ρ_ω lm^ine^iω r_* r_*→∞ ,
ψ_ω lm^up(r)≃
e^iω_+r_*+ρ_ω lm^upe^-iω_+r_* r_*→-∞
τ_ω lm^upe^iω r_* r_*→∞ .
The coefficients τ_ω lm^Λ and ρ_ω lm^Λ
(with Λ either “in” or “up”) are respectively the
transmission and reflection coefficients, and may be determined numerically.
The “in” and “up” exterior Eddington modes, respectively denoted
by f_ω lm^in and f_ω lm^up,
are then defined in terms of ψ_ω lm^in and ψ_ω lm^up
as
f_ω lm^in(x) =1/√(4π|ω|(r^2+a^2))Z_lm^ω(θ,φ)e^-iω tψ_ω lm^in(r) ,
f_ω lm^up(x) =1/√(4π|ω_+|(r^2+a^2))Z_lm^ω(θ,φ)e^-iω tψ_ω lm^up(r) ,
where x denotes a spacetime point. The prefactors are determined
such that the modes are normalized to unity (with respect to the Klein-Gordon
inner product; see e.g. Ref. <cit.>).
The interior Eddington modes
Since the interior Eddington modes are defined exclusively in the
BH interior, where r and t switch roles as temporal and spatial
coordinates, defining the two spanning sets of modes will only require
a single (internal) radial function, denoted ψ_ω lm^int,
defined as a solution of Eq. (<ref>)
equipped with the initial condition at r→ r_+,
ψ_ω lm^int≃ e^-iω_+r_*, r→ r_+ .
The “right” and “left” interior Eddington modes, respectively
denoted f_ω lm^R and f_ω lm^L, are then defined
using ψ_ω lm^int and its complex conjugate as
f_ω lm^R(x) =1/√(4π|ω_+|(r^2+a^2))Z_lm^ω(θ,φ)e^-iω tψ_ω lm^int(r) ,
f_ω lm^L(x) =1/√(4π|ω_+|(r^2+a^2))Z_lm^ω(θ,φ)e^-iω tψ_ω lm^int*(r) ,
where, again, the prefactors ensure Klein-Gordon normalization.
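As an illustration of how ψ_ω l^int may be obtained in practice, the following Python sketch integrates the m=0 radial equation (<ref>) from just inside the EH, with the initial condition (<ref>), out to a chosen interior radius. It is only a schematic stand-in for the high-precision computation described in Appendix <ref>: in particular, it approximates the spheroidal eigenvalue by λ_l(aω)≈ l(l+1)+a^2ω^2/2 [cf. Eq. (<ref>)], and the parameter values are assumptions made for illustration only.

import numpy as np
from scipy.integrate import solve_ivp

M, a = 1.0, 0.8                          # illustrative parameters (assumption)
r_p, r_m = M + np.sqrt(M**2 - a**2), M - np.sqrt(M**2 - a**2)
kappa_p = (r_p - r_m) / (2 * (r_p**2 + a**2))
kappa_m = (r_p - r_m) / (2 * (r_m**2 + a**2))

def r_star(r):
    # tortoise coordinate for the interior, r_- < r < r_+
    return (r + np.log((r_p - r) / (r_p - r_m)) / (2 * kappa_p)
              - np.log((r - r_m) / (r_p - r_m)) / (2 * kappa_m))

def V_eff(r, omega, l):
    """m=0 effective potential; lambda_l approximated by l(l+1) + a^2 w^2 / 2."""
    Delta = (r - r_p) * (r - r_m)
    s = r**2 + a**2
    lam = l * (l + 1) + 0.5 * a**2 * omega**2        # approximation (see text)
    K = s * omega
    G = r * Delta / s**2
    dG_dr = (Delta + r * (2 * r - 2 * M)) / s**2 - 4 * r**2 * Delta / s**3
    return (K**2 - lam * Delta) / s**2 - G**2 - dG_dr * Delta / s

def psi_int(omega, l, r_stop):
    """Integrate d^2 psi/dr_*^2 + V psi = 0 from just inside the EH to r_stop."""
    r0 = r_p - 1e-8                                  # starting point near the EH
    x0, x1 = r_star(r0), r_star(r_stop)
    def rhs(x, y):                                   # y = (psi, dpsi/dr_*, r)
        psi, dpsi, r = y[0], y[1], y[2].real
        Delta = (r - r_p) * (r - r_m)
        return [dpsi, -V_eff(r, omega, l) * psi, Delta / (r**2 + a**2)]
    # initial condition psi ~ exp(-i omega_+ r_*); for m = 0, omega_+ = omega
    y0 = [np.exp(-1j * omega * x0), -1j * omega * np.exp(-1j * omega * x0), r0 + 0j]
    sol = solve_ivp(rhs, (x0, x1), y0, rtol=1e-10, atol=1e-12)
    return sol.y[0, -1], sol.y[1, -1]                # psi and dpsi/dr_* at r_stop

print(psi_int(omega=1.0, l=2, r_stop=0.9))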
§.§ Field quantization and Unruh state
In this subsection we sketch the quantization of the field and the
definition of our quantum state of interest, namely the Unruh state.
For details, we refer the reader to Secs. III and IV in Ref. <cit.>. The
scalar field Φ is typically quantized by expanding it in terms
of a choice of modes and then promoting the coefficients in the mode
expansion to (creation and annihilation) operators. In the previous
subsection we introduce the Eddington modes since they are convenient
for practical calculations. However, the Unruh state is instead more
naturally defined in terms of some other modes, the so-called Unruh
modes, which consist of the union of the following two subsets of
modes.
The first subset, which merges with the Eddington f^in_ω lm in the BH exterior, is defined as having no upward excitations coming from {U∈ℝ,V=0} (i.e., from the union of H_L and H_past, see Fig. <ref>) and having positive frequencies with respect to the Eddington coordinate v along {U→ -∞, V>0} (i.e., along PNI, see Fig. <ref>); the second subset
is defined by being positive frequency with respect to the Kruskal
coordinate U along H_L∪ H_past and having no upward excitations from PNI. The Unruh state
<cit.> is then defined as the quantum state which is annihilated
by the annihilation operator coefficients when expanding the quantum
field in Unruh modes. The Unruh state is the state of relevance for
astrophysical BHs since it models a BH evaporating via the emission
of quantum, thermal (Hawking) radiation.
Regularity of the Unruh state at the EH has been anticipated for decades.
We note, however, that it has not yet actually been proven in the case of a scalar field in
Kerr, but our numerical results here (see Fig. <ref>)
provide – for the first time, to the best of our knowledge – numerical
support for it.[As mentioned in the Introduction, in Ref. <cit.> we calculated the renormalized flux components in the Unruh state on the CH by the method of state subtraction. We note that the reference state we used is similarly expected to be regular on the CH, which is also supported by numerical calculations, but no actual rigorous proof is known.]
Furthermore, regularity of the Unruh state on the EH and up to, but excluding, the CH, has been proven in <cit.> for massless fermions in Kerr (and in <cit.> for scalars in Kerr-dS for sufficiently small angular momentum of the BH).
We denote by ⟨ T_μν(x)⟩ _ren^U
the RSET at the spacetime point x when the field is in the Unruh
state. As mentioned in the Introduction, in this paper we are interested
in the calculation of, specifically, the energy flux components
of the RSET in the Unruh state, ⟨ T_yy(x)⟩ _ren^U,
where y=u,v, as well as the Unruh-state renormalized field square ⟨Φ^2(x)⟩ _ren^U.
§.§ The conserved quantity
As reflected from Eq. (<ref>) along with the fact that
g_uu=g_vv [in coordinates (u,v,θ,φ̃)
where φ̃ may be φ, φ_- or φ_+],
the difference between the two flux components T_uu and T_vv (at any angle θ)
equals its trace-reversed counterpart:
T_uu(r,θ)-T_vv(r,θ)=T_uu(r,θ)-T_vv(r,θ) .
Energy-momentum conservation, along with stationarity of the background
and of the quantum state, implies r-independence of this quantity (now applied to the RSET) times
r^2+a^2 (related to an area element), that is
d/dr[(r^2+a^2)(⟨ T_uu(r,θ)⟩ _ren-⟨ T_vv(r,θ)⟩ _ren)]=0 .
We accordingly define the r-independent (yet θ-dependent)
quantity ℱ(θ), which we sometimes dub “the conserved quantity”:
ℱ(θ)≡(r^2+a^2)(⟨ T_uu(r,θ)⟩ _ren-⟨ T_vv(r,θ)⟩ _ren) .
In the Unruh state, carrying ℱ(θ)
to infinity (where only the outflux ⟨ T_uu⟩ _ren exists) shows it coincides with the Hawking energy outflux (per
unit solid angle in the θ direction). The corresponding mode-sum
expression (see Eq. (B42) in Ref. <cit.>) is
ℱ(θ)=ħ/8π^2∑_l=0^∞∑_m=-l^l∫_0^∞[S_lm^ω(θ)]^2ω[coth(πω_+/κ_+)-1](1-|ρ_ω lm^up|^2)dω .
Evidently, the RHS only depends on scattering in the BH exterior (via
ρ_ω lm^up) – but is independent of the radial
function (and of r).
In this paper, we mainly focus on the axis of rotation of Kerr, i.e. θ=0. Plugging θ=0 in Eq. (<ref>), only m=0 contributes to the m-sum (as mentioned, due to S_lm^ω(θ=0)∝δ_m^0). Hence, the Hawking energy outflux per unit solid angle in the polar direction is
ℱ_0≡ℱ(θ=0)=ħ/8π^2∑_l=0^∞∫_0^∞[S_ω l(0)]^2ω[coth(πω/κ_+)-1](1-|ρ_ω l^up|^2)dω .
where S_ω l and ρ^up_ω l are, respectively, S^ω_lm and ρ^up_ω lm restricted to m=0, as defined later in Eq. (<ref>).
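In practice, Eq. (<ref>) is evaluated by truncating the l-sum and performing the ω-integral numerically. A minimal sketch of this step is given below, assuming the frequency grid omega, the tabulated |ρ^up_ω l|^2 (rho_up_sq) and the polar values [S_ω l(0)]^2 (S0_sq) for l=0,…,l_max have already been computed elsewhere (they are placeholders here), and working in units with ħ=1.

import numpy as np
from scipy.integrate import trapezoid

def hawking_outflux_pole(omega, rho_up_sq, S0_sq, kappa_p):
    """F_0 = (1/8 pi^2) sum_l int dw [S_wl(0)]^2 w (coth(pi w/kappa_+) - 1)(1 - |rho^up|^2),
    with hbar = 1.  omega: 1D grid starting at a small positive value;
    rho_up_sq, S0_sq: arrays of shape (l_max+1, len(omega))."""
    thermal = 1.0 / np.tanh(np.pi * omega / kappa_p) - 1.0      # coth(x) - 1
    integrand = S0_sq * omega * thermal * (1.0 - rho_up_sq)
    return trapezoid(integrand, omega, axis=1).sum() / (8.0 * np.pi**2)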
§ THE PMR T-SPLITTING PROCEDURE INSIDE KERR
The t-splitting method generally involves splitting the point of interest in the t direction, and utilizing the symmetry of a t-independent background to decompose the known counterterms into frequency modes and perform the regularization procedure on a frequency mode-by-mode basis.
This method has been used by, e.g., <cit.> for computations outside
and inside static spherical BHs. However, the methods involved are generally inapplicable in Kerr (off the axis of symmetry).
In recent years, the t-splitting variant of the PMR method was
developed <cit.>, allowing practical computations on stationary backgrounds. It has been since implemented outside a
rotating BH <cit.>, as well as outside
and inside a spherical (charged or neutral) BH <cit.>. In this section, we describe the implementation
of the t-splitting variant of PMR for computing ⟨Φ^2⟩ _ren^U
and the renormalized fluxes, ⟨ T_uu⟩ _ren^U
and ⟨ T_vv⟩ _ren^U,
in the interior (up to, but excluding, the IH) of a Kerr BH, at the pole (that
is, r_-<r<r_+ and θ=0). There are two notable differences
between the BH exterior and interior, which are then reflected in
some aspects of the corresponding t-splitting scheme: (i)
Unlike its exterior behavior, for large values of l the effective
potential (<ref>) inside the BH acts as a potential well
rather than a potential barrier (see Fig. <ref>,
which allows appreciating this difference visually for a selected
mode). Then, while outside the BH the field modes decay exponentially
in l for fixed ω (making the numerical implementation very
efficient), in the interior we encounter a diverging mode-sum
(see Eq. (<ref>); this problem will be discussed in what follows and in Appendix <ref>).
(ii) Outside the BH there exist null geodesics connecting the points x and x' involved in the splitting,
which introduce an oscillatory behavior of the ω-integrand
(see Ref. <cit.>). This is not the case inside, where
t changes its nature and becomes spacelike. (See, however, the analytic extension variant in Appendix <ref>.)
We now present a schematic overview of the t-splitting procedure, specializing in its PMR implementation
inside a Kerr BH, at the pole. For that matter, we denote the quantity
of interest (either ⟨Φ^2⟩^U or the
fluxes ⟨ T_yy⟩^U) generally by P, whose individual mode contribution is generally denoted by E_ω lm. The latter
is comprised of an angular dependence (being the squared spheroidal
wavefunction, [S_lm^ω(θ)]^2)
times a function of r [composed of the internal radial function
ψ_ω lm^int (<ref>) and its derivative
dψ_ω lm^int/dr_*]. We shall use the term `bare' expression for an expectation value P
(or, loosely, just `bare quantity')
when such expectation value has not yet been renormalized, in other words,
a bare expression is the
formal expression[Henceforth, the expressions for all bare quantities are understood
to be merely formal expressions.] for the corresponding unrenormalized bi-tensor evaluated at coincidence (x=x'). Since this paper's focus
is on the pole, only m=0 has a non-vanishing contribution to the
sum over m (since, as indicated earlier, S_l(m≠0)^ω(0)=0).
Thus we may remove the m index, and write the bare
mode-sum for a quantity P at the pole as
P_bare(x)=∫_0^∞(∑_l=0^∞E_ω l(x))dω ,
where E_ω l is E_ω lm reduced to m=0.
We would like to regularize this sum using t-splitting, in which
we introduce a split in the t direction, denoted ε≡ t'-t.
Outside the BH <cit.>, the renormalized quantity P_ren
is given by
P_ren(x)=lim_ε→0[∫_0^∞cos(ωε)(∑_l=0^∞E_ω l(x))dω-C(ε,x)],
where C(ε,x) is the counterterm (specifically,
G^CT(x,x')/2 or T_μν^CT(x,x') mentioned
in the Introduction in the cases of, respectively, ⟨Φ^2⟩
or the fluxes) <cit.> translated
to be given in terms of ε as described in Refs. <cit.>. However, the sum ∑_l=0^∞E_ω l
fails to converge inside the BH: In fact, we find that at large
l the sequence E_ω l behaves as
E_ω l^div≡ c_0+c_1ω^2+c_2l(l+1)
with an additional O(1/l(l+1)) piece,
where the coefficients c_0,c_1 and c_2 are independent
of l and ω (but are functions of position). We establish
this form (analytically for the leading order, being c_0 for
the field square and c_2l(l+1)
for the fluxes,
and empirically for the remaining terms) in Sec. <ref>
and the figures within. In particular, it is crucial that the O(l^0)
term, c_0+c_1ω^2, presents the exact dependence
on ω, not just a leading order in an expansion in ω.
The diverging piece E_ω l^div constitutes what
we shall call the intermediate-divergence (ID) problem, and
we hereafter refer to E_ω l^div as the ID[We note that, in Refs. <cit.>, divergences in the l-sum for fixed frequency were also observed in the Euclidean Green function for points separated along the time direction outside a Schwarzschild black hole. These were referred to as “superficial" divergences and they were removed by subtracting Dirac-δ distributions (and derivatives of them).
We also note that, on the other hand, still outside a BH but in the Lorentzian case and either in Schwarzschild or Kerr, no such IDs are present <cit.>.]. The
treatment of this delicate technical issue is presented in Appendix
<ref>, and requires introducing
an additional “small” split in the θ direction (namely,
a split that is taken to zero before closing the split in the t
direction). At this point, we only mention that the ID [which,
crucially, has the form given in Eq. (<ref>)] may be
simply subtracted – postponing the justification of this subtraction
to the mentioned Appendix.
Then, our renormalized quantity is
P_ren(x)=lim_ε→0[∫_0^∞cos(ωε)E^basic(ω,x)dω-C(ε,x)] ,
where
E^basic(ω,x)≡∑_l=0^∞[E_ω l(x)-E_ω l^div(x)],
which we shall refer to as the basic integrand function.
Finally, we next rewrite Eq. (<ref>) in its PMR form.
First, C(ε,x) is expanded for small ε. The O(ε^0) term in the expansion is denoted by e(x). The rest of the expansion of C(ε,x) is Fourier-decomposed and the Fourier modes are denoted by E^sing(ω,x).
This ω-dependent PMR counterterm, E^sing(ω,x), may be subtracted from
the integrand in ω prior to integration over ω, and the so-called finite counterterm e(x) remains to be subtracted after integration. This way, the entire computation
is done at coincidence. The final expression for the PMR renormalized
quantity at a point x inside the BH is then
P_ren(x)=∫_0^∞[E^basic(ω,x)-E^sing(ω,x)]dω-e(x) .
This form, including the construction of the PMR counterterms E^sing(ω,x) and e(x), is established in
Refs. <cit.> for a stationary BH exterior,
and extended here to the interior.
The various components (E_ω l(x), E_ω l^div(x),
and the counterterms E^sing(ω,x) and e(x))
are given explicitly (in our case of the polar Kerr interior), for
both the field square and the fluxes in the Unruh state, in the following
sections: For the individual bare mode contribution E_ω l,
see Sec. <ref>; the ID (the diverging piece
E_ω l^div) is discussed in Sec. <ref>,
with the leading order in l analytically worked out; the PMR
counterterms E^sing(ω,x) and e(x)
are given in Sec. <ref>. The information provided
in these three subsections, along with Eq. (<ref>),
comprises the t-splitting PMR recipe for the computation of ⟨Φ^2⟩ _ren^U
and ⟨ T_yy⟩ _ren^U
at the pole inside a Kerr BH.
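Assuming the bare mode contributions, the ID and the PMR counterterms are available as arrays (the arguments below are placeholders for such precomputed data), the assembly prescribed by Eq. (<ref>) reduces schematically to an l-sum followed by an ω-integral; in the actual computation the l- and ω-tails are treated with the fits of Appendix <ref> rather than by plain truncation. A minimal sketch:

import numpy as np
from scipy.integrate import trapezoid

def pmr_renormalize(E, E_div, E_sing, e_finite, omega):
    """P_ren = int dw [ sum_l (E_wl - E_wl^div) - E^sing(w) ] - e(x).
    E, E_div: arrays of shape (l_max+1, n_omega); E_sing: shape (n_omega,);
    e_finite: the finite counterterm e(x) at the point of interest."""
    E_basic = np.sum(E - E_div, axis=0)          # l-sum after ID subtraction
    return trapezoid(E_basic - E_sing, omega) - e_finite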
§.§ The bare mode contribution
In what follows, we write the individual mode contribution to the
bare mode-sum expression of the quantities of interest – ⟨Φ^2⟩ ^U
and ⟨ T_yy⟩ ^U – for a
general r value inside a Kerr BH, at the pole (θ=0). The
presented results follow immediately from computations done in Ref. <cit.>, as we hereafter describe.
§.§.§ ⟨Φ^2⟩ _bare^U
We begin with the HTPF in the Unruh state at a Kerr BH interior, given
in Eqs. (B8) and (B9) in Ref. <cit.> (along with Eq. (6.38) therein)
as
G_U^(1)(x,x')=ħ∫_0^∞[∑_l=0^∞∑_m=-l^lG_ω lm(x,x')]dω ,
where
G_ω lm(x,x')=
|ω_+|/ω_+[coth(πω_+/κ_+)({ f_ω lm^L(x),f_ω lm^L*(x')} +|ρ_ω lm^up|^2{ f_ω lm^R(x),f_ω lm^R*(x')}).
.+2cosech(πω_+/κ_+)Re(ρ_ω lm^up{ f_ω lm^R(x),f_ω lm^L*(x')})+ω_+/ω|τ_ω lm^in|^2{ f_ω lm^R(x),f_ω lm^R*(x')}] .
From the HTPF one may easily obtain a mode-sum expression for the
bare (i.e., unrenormalized) expectation value of the field square
in the Unruh state,
⟨Φ^2(x)⟩ _bare^U=ħ/2∫_0^∞(∑_l=0^∞∑_m=-l^l[lim_x'→ xG_ω lm(x,x')])dω .
The square brackets yield an expression whose θ-dependence
is factored out as [S_lm^ω(θ)]^2,
which gives rise to a major simplification at the pole: since S_lm^ω(θ=0)∝δ_0m,
we are left only with the m=0 contribution to the sum over m.
To ease notation from this point on, we denote m=0 quantities by
simply removing the m index. For example, we denote
S_ω l(θ)≡ S_l(m=0)^ω(θ), ρ_ω l^up≡ρ_ω l(m=0)^up, ψ_ω l^int(r)≡ψ_ω l(m=0)^int(r)
V_ω l(r)≡ V_ω l(m=0)(r), λ_l(aω)≡λ_l(m=0)(aω), etc.
Plugging the mode functions given in Eq. (<ref>) into
Eq. (<ref>) and taking m=0 followed by the coincidence
limit and θ=0, one obtains the bare mode-sum ⟨Φ^2⟩ _bare^U
at the pole in the interior of a Kerr BH
⟨Φ^2⟩ _bare^U=∫_0^∞(∑_l=0^∞E_ω l)dω ,
where[Here we write ⟨Φ^2⟩ _bare^U
as in Eq. (<ref>), choosing E_ω l to denote
the individual mode contribution. For ⟨ T_yy⟩ _bare^U
which follows next, we choose a different notation as the analog of
E_ω l (to be explained in Sec. (<ref>)).
The same note applies to E_ω l^div in what follows.]
E_ω l =ħ[S_ω l(0)]^2/8π^2ω(r^2+a^2)×
[coth(πω/κ_+)|ψ_ω l^int|^2(1+|ρ_ω l^up|^2)+2 cosech(πω/κ_+)Re(ρ_ω l^up(ψ_ω l^int)^2)+(1-|ρ_ω l^up|^2)|ψ_ω l^int|^2] .
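For concreteness, Eq. (<ref>) translates directly into the following small function (units with ħ=1); the arguments are assumed to come from a separate mode computation and are placeholders here.

import numpy as np

def E_mode(omega, r, psi_int, rho_up, S0, a, kappa_p):
    """Bare mode contribution E_wl to <Phi^2>_bare^U at the pole (hbar = 1)."""
    x = np.pi * omega / kappa_p
    coth, csch = 1.0 / np.tanh(x), 1.0 / np.sinh(x)
    pref = S0**2 / (8.0 * np.pi**2 * omega * (r**2 + a**2))
    return pref * (coth * abs(psi_int)**2 * (1.0 + abs(rho_up)**2)
                   + 2.0 * csch * np.real(rho_up * psi_int**2)
                   + (1.0 - abs(rho_up)**2) * abs(psi_int)**2)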
§.§.§ ⟨ T_uu⟩ _bare^U
and ⟨ T_vv⟩ _bare^U
As in Eq. (B10) in Ref. <cit.>, taking αβ to
be yy, (where, again, y denotes either u or v), we begin
with the following “bare” expression for the trace-reversed flux components at a general θ, with the azimuthal coordinate taken as φ̃ which may be either φ, φ_+ or φ_-:[To conform with the t-splitting procedure described above, we change
the order of summation and integration appearing in Eq. (B10) in Ref.
<cit.> to have the integration over ω performed
last. (Clearly, performing such an exchange there should not matter
since this bare quantity diverges in any case.)
The procedure used here has been constructed and justified for this
specific order of summation and integration.]
⟨T_yy⟩ _bare^U=∫_0^∞(∑_l=0^∞∑_m=-l^lT_yy(ω lm))dω ,
with the individual mode contribution:
T_yy(ω lm)≡ħ/2lim_x'→ x[G_ω lm(x,x')_,yy'] .
In Appendix B of Ref. <cit.> we obtain an expansion of
the latter in powers of Δ [given in Eq. (<ref>)]:
T_yy(ω lm)=T_yy(ω lm)^𝒜+T_(ω lm)^ℬΔ+T_(ω lm)^𝒞Δ^2 ,
with the coefficients T_yy(ω lm)^𝒜,
T_(ω lm)^ℬ and T_(ω lm)^𝒞 given in Eqs. (B35)-(B38) therein, including a dependence on the choice of azimuthal coordinate φ̃.
Focusing now on θ=0 (which is hereafter implied in the notation), a few simplifications occur:
(i) As mentioned earlier, since S_lm^ω(θ=0)∝δ_0m, only m=0 remains in the sum over m.
(ii) The metric components g_yy are identically 0, hence trace-reversal (through Eq. (<ref>)) does not change the flux components, i.e. T_yy=T_yy.
(iii) The flux components T_yy are the same whether the azimuthal coordinate is φ, φ_+ or φ_-.
We may thus write the bare Unruh mode contribution to the fluxes ⟨ T_yy⟩_bare^U [in
the form of Eq. (<ref>), with T_yy(ω l)
taking the role of E_ω l]
⟨ T_yy⟩ _bare^U=∫_0^∞(∑_l=0^∞T_yy(ω l))dω ,
where T_yy(ω l) is the m=0, θ=0 version of T_yy(ω l m) of Ref. <cit.>,
and similarly admits the following expansion in Δ:
T_yy(ω l)=T_yy(ω l)^𝒜+T_(ω l)^ℬΔ+T_(ω l)^𝒞Δ^2 .
Taking m=0 and θ=0 in Eqs. (B35)–(B38) in Ref. <cit.>,
T_uu(ω l)^𝒜=ħ[S_ω l(0)]^2/32π^2ω(r^2+a^2)
(coth(πω/κ_+)[|ψ_ω l,r_*^int|^2+ω^2|ψ_ω l^int|^2+2ω^2+|ρ_ω l^up|^2(|ψ_ω l,r_*^int|^2+ω^2|ψ_ω l^int|^2-2ω^2)].
.+2 cosech(πω/κ_+)Re(ρ_ω l^up[(ψ_ω l,r_*^int)^2+ω^2(ψ_ω l^int)^2])+(1-|ρ_ω l^up|^2)(|ψ_ω l,r_*^int|^2+ω^2|ψ_ω l^int|^2-2ω^2)) ,
T_vv(ω l)^𝒜=ħ[S_ω l(0)]^2/32π^2ω(r^2+a^2)
(coth(πω/κ_+)[|ψ_ω l,r_*^int|^2+ω^2|ψ_ω l^int|^2-2ω^2+|ρ_ω l^up|^2(|ψ_ω l,r_*^int|^2+ω^2|ψ_ω l^int|^2+2ω^2)].
.+2 cosech(πω/κ_+)Re(ρ_ω l^up[(ψ_ω l,r_*^int)^2+ω^2(ψ_ω l^int)^2])+(1-|ρ_ω l^up|^2)(|ψ_ω l,r_*^int|^2+ω^2|ψ_ω l^int|^2+2ω^2)) ,
T_(ω l)^ℬ =-ħ[S_ω l(0)]^2r/16π^2ω(r^2+a^2)^3(coth(πω/κ_+)Re(ψ_ω l^intψ_ω l,r_*^int*)(1+|ρ_ω l^up|^2).
.+2cosech(πω/κ_+)Re(ρ_ω l^upψ_ω l^intψ_ω l,r_*^int)+(1-|ρ_ω l^up|^2)Re(ψ_ω l^intψ_ω l,r_*^int*)) ,
T_(ω l)^𝒞=ħ[S_ω l(0)]^2r^2/32π^2ω(r^2+a^2)^5
(coth(πω/κ_+)|ψ_ω l^int|^2(1+|ρ_ω l^up|^2)+2 cosech(πω/κ_+)Re(ρ_ω l^up(ψ_ω l^int)^2)+(1-|ρ_ω l^up|^2)|ψ_ω l^int|^2) .
Combining Eqs. (<ref>)-(<ref>) with Eqs. (<ref>) and (<ref>),
taking either y=u or v, one obtains the bare mode-sum expression
for the fluxes ⟨ T_uu⟩ _bare^U
and ⟨ T_vv⟩ _bare^U
at the pole inside a Kerr BH.
§.§ Intermediate divergence
As mentioned at the beginning of Sec. <ref>,
the ID captures the large-l diverging behavior of the individual
mode contribution, and is generally (for our quantities of interest,
⟨Φ^2⟩ and ⟨ T_yy⟩)
of the form given in Eq. (<ref>). In this subsection,
we aim to justify this form as well as analytically compute the large-l
leading order ID coefficient for both ⟨Φ^2⟩
and ⟨ T_yy⟩ (being c_0
for ⟨Φ^2⟩ and c_2 for ⟨ T_yy⟩).
§.§.§ The ID of ⟨Φ^2⟩
We begin with Eq. (<ref>), and wish to take its
large-l regime (l≫1). To this end, we denote
l̃≡ l+1/2 ,
and note that, at large l (see e.g. Eq. (67) in Ref. <cit.>),
S_ω l(0)≃√(l̃) .
In addition, large-l m=0 modes outside the BH undergo total
reflection, namely (see Eq. (<ref>))
|ρ_ω l^up|≃1 .
Applying these two very simple facts to Eq. (<ref>),
we are left with the intermediate expression at large l:
E_ω l≃ħl̃/4π^2ω(r^2+a^2)(coth(πω/κ_+)|ψ_ω l^int|^2+cosech(πω/κ_+)Re[ρ_ω l^up(ψ_ω l^int)^2]) .
Putting together the large-l WKB form of the interior radial
function given in Eq. (<ref>), and the large-l reflection
coefficient ρ_ω l^up given in Eq. (<ref>),
one finds the following leading order large-l expressions:
|ψ_ω l^int|^2/ω(r^2+a^2) ≃1/l̃√((r_+-r)(r-r_-))(coth(πω/κ_+)+(-1)^lcosech(πω/κ_+)cos[2l̃g(r)]),
Re(ρ_ω l^up(ψ_ω l^int)^2)/ω(r^2+a^2)≃-1/l̃√((r_+-r)(r-r_-))(cosech(πω/κ_+)+(-1)^lcoth(πω/κ_+)cos[2l̃g(r)]),
where g(r) is given in Eq. (<ref>) below (in
fact, the exact form of g(r) does not matter here).
Plugging these into Eq. (<ref>) (and recalling
coth^2x-cosech^2x=1), we easily obtain the desired
large-l plateau (for fixed ω) of the sequence,
E_ω l≃ħ/4π^2√((r_+-r)(r-r_-)) ,
which is evidently independent of ω and l. That is, casting
into the general form of Eq. (<ref>), we analytically obtain for the
ID of ⟨Φ^2⟩ (the full ID for the field square corresponds to just the leading-order asymptotics),
E_ω l^div=c_0=ħ/4π^2√((r_+-r)(r-r_-)) ,
and
c_1=c_2=0 .
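For orientation, at the representative interior point r/M=0.9 of an a/M=0.8 BH (so that r_+=1.6M and r_-=0.4M) used in the figures below, this constant evaluates to E_ω l^div=ħ/[4π^2√(0.35) M]≈0.043 ħ/M.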
The correspondence of the large-l behavior of E_ω l
and the constant E_ω l^div computed here, as well
as the convergence of the l-sum of E_ω l-E_ω l^div,
is demonstrated in Fig. <ref> for a fixed typical
ω, taken here to be ω=1/M. The top left panel shows
both quantities E_ω l and E_ω l^div as
a function of l, and the top right panel shows their difference
E_ω l-E_ω l^div (which is numerically found to behave like 1/l(l+1), as illustrated by the attached fit).
The partial sums ∑_l=0^l_max(E_ω l-E_ω l^div),
displayed in the bottom panel of the figure, demonstrate the
resulting convergence as l_max→∞, yielding
the (basic) integrand value at that ω, E^basic(ω)
defined in Eq. (<ref>) [this is done through a fit
as described in Appendix <ref>]. This
E^basic value, represented by the dashed horizontal line,
is then highlighted in the left panel of Fig. <ref>
by a bold red point at ω M=1.[All figures in this section, Figs. <ref>-<ref>,
correspond to the specific case r/M=0.9 – which is a typical
point between the horizons – inside an a/M=0.8 BH. They are
aimed to illustrate the regularization procedure. The general picture
emerging from these figures is typical to the case a/M=0.9 as
well, and to generic r values – provided that r is not too
close to the horizons. However, as the horizons are approached, it
becomes increasingly difficult to perform the procedure demonstrated
here (as the singular piece E^sing(ω)
diverges at the horizons, see Sec. <ref>).
In addition, although we did not explicitly check this, we expect
a similar picture to emerge also at other a/M values (as long as
a/M is not too close to 0 or 1).
[In Figs. <ref> and <ref>
we demonstrated the large-l behavior for a specific ω value,
being ω=1/M, but this large-l behavior is shared by all
fixed ω values].]
§.§.§ The ID of ⟨ T_uu⟩ and ⟨ T_vv⟩
The ID for ⟨ T_yy⟩, which
we denote by T_yy(ω l)^div, has been claimed above to take the form given in Eq. (<ref>),
with generally non-vanishing coefficients c_0,c_1 and c_2
[unlike ⟨Φ^2⟩ for which c_1,c_2=0,
see Eq. (<ref>)]. In the current subsection
we justify this form in two parts: first, analytically computing the
large-l leading order coefficient c_2; and second, numerically
establishing the O(l^0) behavior being c_0+c_1ω^2,
as well as the convergence of the sum over l following the ID
subtraction [as in Eq. (<ref>)].
We begin with the individual mode contribution T_yy(ω l)
given in Eqs. (<ref>)-(<ref>), depending
in particular on the radial function ψ_ω l^int.
The latter evolves according to the radial equation (<ref>),
governed by the effective potential V_ω l obtained by setting m=0 in Eq. (<ref>). Notably, the dependence of V_ω l on l
comes entirely from the (m=0) spheroidal-eigenvalue term ∝λ_l(aω),
whose asymptotic behavior at large l takes the form (see Theorem
10 in Ref. <cit.>)
λ_l(aω)=l(l+1)+1/2a^2ω^2+λ̂_ l(aω)
with the remainder term λ̂_ l(aω) vanishing at l→∞
(like 1/l(l+1)). We then obtain the following simple form
for the effective potential holding asymptotically at large-l,
V_ω l≃-l(l+1)Δ/(r^2+a^2)^2 ,
with an O(l^0) remainder. Then, the radial equation
(<ref>) admits the large-l form
d^2ψ_ω l/dr_*^2≃l(l+1)Δ/(r^2+a^2)^2ψ_ω l .
We may thus expect the large-l behavior of the bare flux expression
to be some power of l(l+1) (which will be confirmed
below).
Recall that E_ω l, the mode contribution to ⟨Φ^2⟩ _bare^U
given in Eq. (<ref>), yielded an O(l^0)
ID E_ω l^div (given in Eq. (<ref>)).
However, from Eq. (<ref>), when switching from
⟨Φ^2⟩ to (trace-reversed) RSET components we
encounter (a product of) two differentiations of the mode functions.
These two differentiations may potentially lead to an “amplification”
of the leading order by a factor of l(l+1). Specifically,
such amplification may emerge from derivatives with respect to r_*.
[In principle such amplifications may also arise from derivatives
with respect to θ and φ, but these derivatives do
not exist for the (trace-reversed) energy flux components; and derivatives with respect
to t merely lead to additional ω factors (rather than
l(l+1)).] We shall hence concentrate on contributions involving
ψ_ω l,r_*^int, which indeed involves amplification
by a factor of l̃ compared to ψ_ω l^int
(as one can see, e.g., from its WKB form – see Eq. (<ref>)
or (<ref>), which includes factors of e^± il̃g(r)).
We now look more closely at the expression for T_yy(ω l)
as given in Eqs. (<ref>)-(<ref>). It is the
sum of three different terms: T_yy(ω l)^𝒜,
T_(ω l)^ℬΔ, and T_(ω l)^𝒞Δ^2.
Evidently, T_yy(ω l) depends on the
radial function ψ_ω l^int and its first derivative
ψ_ω l,r_*^int quadratically, either through
(i) |ψ_ω l^int|^2
or (ψ_ω l^int)^2 (appearing in
T_yy(ω l)^𝒜 and T_(ω l)^𝒞),
(ii) combinations involving one derivative, ψ_ω l^intψ_ω l,r_*^int*
or ψ_ω l^intψ_ω l,r_*^int
(appearing in T_(ω l)^ℬ
only), or (iii) combinations of two first-order derivatives,
|ψ_ω l,r_*^int|^2 or (ψ_ω l,r_*^int)^2
(appearing in T_yy(ω l)^𝒜
only). Based on the above, the highest order in l that can potentially
occur may emerge only from these latter terms, quadratic in ψ_ω l,r_*^int
[namely |ψ_ω l,r_*^int|^2 or
(ψ_ω l,r_*^int)^2], which are
anticipated to contribute at order l(l+1). (In particular,
T_(ω l)^ℬ and T_(ω l)^𝒞
do not contribute at this level.) We shall now see concretely that
this is indeed the case.
To proceed, we focus on T_yy(ω l)^𝒜
from Eq. (<ref>) or (<ref>), keeping only the terms
quadratic in the derivatives ψ_ω l,r_*^int
(conveniently, these terms are shared by both the uu and vv
components, as expected from T_uu-T_vv being regular). This
leaves us with the potential ∝ l(l+1) contribution [In the asymptotic expression in Eq. (<ref>), as well as in other ones below (including Eq. (<ref>)) within this subsection, we only claim the leading-order term in l(l+1) to be correct.
]
to T_yy(ω l):
T_yy(ω l)≃ħ[S_ω l(0)]^2/32π^2ω(r^2+a^2)
[coth(πω/κ_+)|ψ_ω l,r_*^int|^2(1+|ρ_ω l^up|^2)+2 cosech(πω/κ_+)Re[ρ_ω l^up(ψ_ω l,r_*^int)^2]+(1-|ρ_ω l^up|^2)|ψ_ω l,r_*^int|^2] .
We may now analyze the O(l(l+1)) content
of the above terms quadratic in ψ_ω l,r_*^int,
by using Eq. (<ref>), from which we obtain
1/2[(ψ_ω l^int)^2]_,r_*r_*=(ψ_ω l,r_*^int)^2+ψ_ω l^intψ_ω l,r_*r_*^int≃(ψ_ω l,r_*^int)^2+l(l+1)Δ/(r^2+a^2)^2(ψ_ω l^int)^2
yielding
(ψ_ω l,r_*^int)^2≃1/2[(ψ_ω l^int)^2]_,r_*r_*-l(l+1)Δ/(r^2+a^2)^2(ψ_ω l^int)^2 .
Similarly, using Eq. (<ref>) and its conjugate,
we obtain
|ψ_ω l,r_*^int|^2≃1/2[|ψ_ω l^int|^2]_,r_*r_*-l(l+1)Δ/(r^2+a^2)^2|ψ_ω l^int|^2 .
Using these two last relations, we may now rewrite Eq. (<ref>)
(again,
only claiming its leading order in l(l+1) to be correct)
as
T_yy(ω l)≃ T_yy(ω l)^(1)l(l+1)+T_yy(ω l)^(2)
where
T_yy(ω l)^(1)=-ħ[S_ω l(0)]^2/32π^2ω(r^2+a^2)Δ/(r^2+a^2)^2
[coth(πω/κ_+)|ψ_ω l^int|^2(1+|ρ_ω l^up|^2)+2 cosech(πω/κ_+)Re[ρ_ω l^up(ψ_ω l^int)^2]+(1-|ρ_ω l^up|^2)|ψ_ω l^int|^2]
and
T_yy(ω l)^(2)=ħ[S_ω l(0)]^2/64π^2ω(r^2+a^2)
[coth(πω/κ_+)(|ψ_ω l^int|^2)_,r_*r_*(1+|ρ_ω l^up|^2)+2 cosech(πω/κ_+)Re(ρ_ω l^up[(ψ_ω l^int)^2]_,r_*r_*)+(1-|ρ_ω l^up|^2)(|ψ_ω l^int|^2)_,r_*r_*] .
Focusing first on T_yy(ω l)^(1),
one may recognize it is proportional to E_ω l, the mode
contribution of ⟨Φ^2⟩ _bare^U
given in Eq. (<ref>):
T_yy(ω l)^(1)=-Δ/4(r^2+a^2)^2E_ω l .
Next, a comparison of Eqs. (<ref>) and (<ref>)
reveals that T_yy(ω l)^(2)
is related to E_ω l via
T_yy(ω l)^(2)=1/8(r^2+a^2) d^2/dr_*^2[(r^2+a^2)E_ω l(r)] .
Taking the large-l limit in the last two equations – recalling
that the l→∞ limit of E_ω l is E_ω l^div
given in Eq. (<ref>) – we obtain
lim_l→∞T_yy(ω l)^(1)=-Δ/4(r^2+a^2)^2 E_ω l^div
and
lim_l→∞T_yy(ω l)^(2)=1/8(r^2+a^2) d^2/dr_*^2[(r^2+a^2)E_ω l^div(r)] .
(Recall that E_ω l^div depends on r but is independent
of l.) We can therefore rewrite the large-l asymptotic behavior
of Eq. (<ref>) as
T_yy(ω l)≃ T_yy(ω l)^(1)l(l+1)≃[-Δ/4(r^2+a^2)^2 E_ω l^div] l(l+1) .
Comparing this result to our “canonical” large-l form given
in Eq. (<ref>) we find that
c_2=-Δ/4(r^2+a^2)^2E_ω l^div .
Plugging in the explicit form of E_ω l^div found
in Eq. (<ref>) (as well as the explicit form of Δ),
we obtain
c_2=ħ√((r_+-r)(r-r_-))/16π^2(r^2+a^2)^2 .
This settles the large-l leading order O(l(l+1))
contribution to the ID.
Numerically exploring the next-to-leading order at large-l, we find that it is ∝ l^0, as demonstrated in the top right panel of Fig. <ref>. We denote this l-independent term by c_01(ω), and write the divergent piece[We see numerically that the next order after l^0 is 1/l(l+1) (and is, in particular, convergent in a sum over l), as illustrated at the bottom left panel of Fig. <ref>.]
T_yy(ω l)^div as[Even though c_2 and c_01 are functions of r, we do not
make that dependence explicit here.]
T_yy(ω l)^div =c_2l(l+1)+c_01(ω) ,
with c_2 as given in Eq. (<ref>). This c_01(ω) term may thus be defined as
c_01(ω)≡lim_l→∞[T_yy(ω l)-c_2l(l+1)] .
A numerical exploration shows that this limit indeed exists, and furthermore, it takes exactly the form
c_01(ω)=c_0+c_1ω^2
with c_0 and c_1 independent of ω and l (as we shortly show numerically). After establishing this form, it indeed becomes justified (as mentioned above, and as demonstrated – partly numerically, partly analytically – in detail in
Appendix <ref>) to subtract T_yy(ω l)^div
from the bare mode contribution T_yy(ω l).
The convergent sequence resulting from this subtraction is consequently summed over l to yield the
basic integrand function, denoted by T_yy^basic(ω):
T_yy^basic(ω)≡∑_l=0^∞(T_yy(ω l)-T_yy(ω l)^div) .
This multiple-step procedure, resulting in the basic integrand value
T_yy^basic for a given ω, is illustrated
in Fig. <ref> for the case y=u. In the top
left panel we present the bare mode contribution T_uu(ω l)
as a sequence in l for a fixed ω value (taken here to be
ω=1/M, as for the ⟨Φ^2⟩ case
in Fig. <ref>). In the top right panel, we subtract
the analytically-computed leading order of the ID, c_2l(l+1)
with c_2 as given in Eq. (<ref>), and are left with
the large-l plateau value c_01(ω)[In particular, this plateau shows that the limit lim_l→∞[T_uu(ω l)-c_2l(l+1)],
defining c_01(ω), indeed exists (namely, there
are no intervening, weaker, large-l divergences).
]. In the bottom left panel we further subtract this c_01(ω)
term (obtained numerically) and are left with a sequence T_uu(ω l)-T_uu(ω l)^div, which is numerically found to behave as 1/l(l+1)
(as demonstrated by a fit). The corresponding convergence of the l-sum of this sequence is demonstrated in the bottom right panel,
which portrays the partial sums ∑_l=0^l_max(T_uu(ω l)-T_uu(ω l)^div) and their approach to T_uu^basic(ω)
as l_max→∞. This T_uu^basic value [which is extracted through a fit as described in Appendix
<ref>], represented by the dashed horizontal
line, is then highlighted in the left panel of Fig. <ref>
by a bold red point at ω M=1.[In practice, in the numerical implementation of the computation we
extract T_yy^basic(ω) and
c_01(ω) simultaneously using a slightly different
(but mathematically equivalent) method – see Appendix <ref>.]
We now turn to empirically establish the form (<ref>)
of the O(l^0) piece of the ID, c_01(ω).
Our claim is that this quantity admits the exact parabolic
form c_0+c_1ω^2. This is numerically demonstrated in the left panel
of Fig. <ref>, by attaching a fit c_0+c_1ω^2
to the numerically extracted (see Appendix <ref>)
c_01(ω) term, presented as a function of
ω. The accuracy of this fit is demonstrated in the right
panel, which portrays the difference between c_01(ω)
and its c_0+c_1ω^2 fit – indicating (together with
the left panel) a relative difference smaller than 10^-94
in the case depicted here. (This was achievable due to the fact that
we had a typical precision of hundreds of figures – at least 250 – for the raw
material ρ^up_ω l,
ψ_ω l^int and ψ_ω l,r^int,
see Appendix <ref>.)
This striking agreement (presumably limited only by numerical precision)
provides solid empirical evidence for the validity of Eq. (<ref>).
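The corresponding check is a simple least-squares fit, linear in ω^2; a minimal sketch (with omega and c01 the frequency grid and the extracted c_01(ω) values, assumed precomputed) is:

import numpy as np

def fit_c01(omega, c01):
    """Fit c01(omega) ~ c0 + c1*omega^2; returns (c0, c1, max abs residual)."""
    c1, c0 = np.polyfit(omega**2, c01, 1)
    res = c01 - (c0 + c1 * omega**2)
    return c0, c1, np.max(np.abs(res))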
In fact, we empirically-numerically found a closed expression for c_1, being[Notably, this expression for c_1 may also be written as c_1=(1/4)(1-v_ω/2)E_ω l^div where E_ω l^div is given in Eq. (<ref>) and the function v_ω is the prefactor of -ω^2 in the subleading large-l form of the effective potential V_ω l, which is v_ω=a^2Δ[2(a^2+r^2)^2]^-1-1. Playing around with these relations is in fact how we arrived at Eq. (<ref>)]
c_1=ħ/8(3-a^2Δ/2(a^2+r^2)^2)1/4π^2√((r_+-r)(r-r_-)) .
[Note, however, that the values of neither c_0 nor c_1 are actually used in our numerical computation (see <ref>).]
It should also be noted that while Figs. <ref> and <ref>
(as well as Fig. <ref> to follow) focus specifically
on T_uu, a similar behavior is seen for T_vv
(see also footnote <ref>).
§.§ The t-splitting PMR counterterms
In the previous subsection we analyzed the
ID for the vacuum polarization and the fluxes, namely
E_ω l^div and T_yy(ω l)^div, respectively. In this subsection we present the PMR counterterms. That is, the `sing' modes E^sing, as well as their finite counterpart e,
derived from the counterterms C(ε,x). In the notation of the beginning of this section, used to illustrate the method for the generic quantity P, the symbols E^sing and e were used generically both for the fluxes and the vacuum polarization. Here, similarly to the above subsection, we use the E^sing and e notation to correspond specifically to the vacuum polarization, and T_yy^sing and e_yy for the fluxes.
We shall first present the PMR counterterms for the vacuum polarization and then for the fluxes.
§.§.§ The t-splitting PMR counterterms for ⟨Φ^2⟩
We first write Eq. (<ref>) for ⟨Φ^2⟩:
⟨Φ^2(x)⟩ _ren^U=∫_0^∞[E^basic(ω,x)-E^sing(ω,x)]dω-e(x) .
The t-splitting PMR counterterms for ⟨Φ^2⟩,
denoted E^sing(ω,x) and e(x),
are obtained as described in Ref. <cit.> – namely, by
expanding
the DeWitt-Schwinger counterterm (see Ref. <cit.>)
in the point separation ε=t'-t up to order ε^0 included, and then Fourier transforming
this expansion into frequency space. See the Supplemental Material for the results of this computation at a general spacetime point in Kerr. In our case (at the pole, where the dependence on the spacetime point reduces to a dependence
on r), this yields simply[Note that the notations used here, which were motivated by a uniform
treatment of ⟨Φ^2⟩ and ⟨ T_yy⟩
in Eq. (<ref>), slightly differ from those used in Ref. <cit.>.
Comparing Eq. (<ref>) with Eqs. (3.15)-(3.16) therein,
our E^sing replaces ħ F_sing and e(x)
replaces ħ d(x). In addition, since we are at the
BH interior (where an ID E_ω l^div exists), ħ F(ω,x)
is replaced by E^basic(ω,x) which is defined
by ∑_l=0^∞[E_ω l(x)-E_ω l^div(x)]
(with E_ω l and E_ω l^div given respectively
in Eqs. (<ref>) and (<ref>)).
Moreover, comparing Eq. (<ref>) with Eq. (3.17) in
Ref. <cit.>, we see that writing the latter for the case considered here (polar Kerr) would come with the
prefactors a(r,θ,φ)=-(a^2+r^2)/4π^2Δ
and c(r,θ,φ)=0. That is, while generally
there is an additional 1/(ω+μ) term in E^sing
(where μ is a scale ambiguity), in our case this term is not
present. ]
E^sing(ω,r)=ħa^2+r^2/4π^2Δω
and the finite counterterm
e(r)=ħ/48π^2M^2/Δ(a^2-r^2)^2/(a^2+r^2)^3 .
Fig. <ref> allows to appreciate the
part of the regularization given in Eq. (<ref>),
in which the singular piece E^sing(ω,r)
is subtracted from the basic integrand and then the remainder is integrated
over ω. The basic integrand
E^basic(ω,x),
extracted numerically per ω (as demonstrated in the bottom
panel of Fig. <ref>), is presented in the
left panel of Fig. <ref> as a function of
ω. Being dominated at large ω by its singular piece
E^sing(ω) [given in Eq. (<ref>)],
it diverges linearly in ω. The right panel shows the regular
difference E^basic(ω)-E^sing(ω)
as a function of ω. This regularized integrand is then integrated
over in order to obtain the final result for ⟨Φ^2⟩ _ren^U,
after subtracting the final counterterm e(r), as prescribed
in Eq. (<ref>).
§.§.§ The t-splitting PMR counterterms for ⟨ T_uu⟩
and ⟨ T_vv⟩
We write Eq. (<ref>) for the flux component ⟨ T_yy⟩
as
⟨ T_yy(x)⟩ _ren^U=∫_0^∞(T_yy^basic(ω,x)-T_yy^sing(ω,x))dω-e_yy(x) ,
where T_yy^basic(ω,x) is
given in Eq. (<ref>), and we denote by T_yy^sing(ω,x)
and e_yy(x) the ⟨ T_yy⟩-versions
of the aforementioned PMR counterterms E^sing(ω,x) and
e(x), respectively.
One may obtain the t-splitting PMR counterterms for the RSET
as described in Ref. <cit.> – which includes translating the Christensen
counterterms given in Ref. <cit.> to be expressed
in terms of the point separation ε (expanded to order ε^0 included), then Fourier decomposing them
to obtain an expansion
in ω. The results of this computation at a general spacetime point in a Kerr background are given for the full stress-energy tensor in Boyer-Lindquist coordinates in the Supplemental Material.
The flux components [in coordinates (u,v,θ,ϕ)] at the pole are then obtained (also given in the Supplemental Material), yielding (the dependence on x reduces to a dependence on r)[By comparing to Ref. <cit.> [see, in particular,
Eqs. (3.8-3.10) therein], our T_yy^sing(ω,x)
stands for ħ F_yy^Sing(ω,x) and
the integrand function F_yy(ω,x) is replaced
by T_yy^basic(ω,x) defined
in Eq. (<ref>) [therein, T_yy(ω l)
is the bare mode contribution given in Sec. (<ref>)
and T_yy(ω l)^div is the
ID given in Eq. (<ref>)]. Casting Eq. (<ref>) into the form of Eq. (3.9) in Ref.
<cit.>, the prefactors in the case considered here (polar Kerr) would be a_yy(r)=(a^2+r^2)/(2π^2Δ),
b_yy(r)=M(a^2(2M-3r)+r^3)/(24π^2(a^2+r^2)^2Δ)
and c_yy=d_yy=0. That is, while generally the singular part
T_yy^sing(ω,r) may also include
lnω and 1/(ω+μ e^-γ) terms
(with μ a scale ambiguity and γ Euler's constant), these
terms are absent in our case.]
T_yy^sing(ω,r)=(ħ/6)(a^2+r^2)/(2π^2Δ) ω^3+ħ M(a^2(2M-3r)+r^3)/[24π^2(a^2+r^2)^2Δ] ω ,
and the finite counterterm
e_yy(r)=
ħ M^2/1440π^2(a^2+r^2)^7Δ[18a^10+a^8(-11M^2+15Mr-216r^2)+r^8(-81M^2+99Mr-32r^2).
.+2a^4r^4(-334M^2+55Mr+68r^2)+2a^2r^6(297M^2-298Mr+73r^2)-2a^6r^2(29M^2-410Mr+138r^2)] .
Note that the t-splitting PMR counterterms, both for ⟨Φ^2⟩
[Eqs. (<ref>) and (<ref>)] and ⟨ T_yy⟩
[Eqs. (<ref>) and (<ref>)] are proportional
to 1/Δ, hence diverge as 1/δ r_±
at the horizons, where
δ r_±≡|r-r_±|/M
is a (dimensionless) radial coordinate distance to the corresponding
horizon [see Eq. (<ref>)]. This implies that t-splitting
will be increasingly difficult to apply as the horizons are approached.
Fig. <ref> allows one to appreciate the final
part of the regularization for ⟨ T_uu⟩ _ren^U,
given in Eq. (<ref>), in which the singular piece
is subtracted from the basic integrand and then integrated over ω.
The basic integrand T_uu^basic(ω),
extracted numerically per ω (as demonstrated in the bottom
right panel of Fig. <ref> and described in Appendix <ref>), is presented in the left panel
of Fig. <ref> as a function of ω.
Being dominated at large ω by its singular piece T_uu^sing(ω)
[given in Eq. (<ref>)], it diverges like ω^3.
The right panel shows the regular difference T_uu^basic(ω)-T_uu^sing(ω)
as a function of ω. This regularized integrand (which numerically
seems to decay at large ω as 1/ω^3) is then integrated
to obtain the final result for
⟨ T_uu⟩ _ren^U(r) at the pole,
after subtracting the corresponding finite counterterm e_uu(r),
as prescribed in Eq. (<ref>).
§ NUMERICAL RESULTS
Using the t-splitting method described in the previous section,
we may now compute ⟨Φ^2⟩ _ren^U
and the fluxes ⟨ T_uu⟩ _ren^U
and ⟨ T_vv⟩ _ren^U
at the pole inside a Kerr BH, covering the range between the EH and
the IH. The main computational methods and various numerical details
are postponed to Appendix <ref>.
Since different a/M values may yield qualitatively-different
behaviors, we choose to work here with two different values: a/M=0.8
and a/M=0.9.[In particular, it turns out that an a/M=0.8 BH has positive IH
flux values at the pole, whereas in the a/M=0.9 case the polar
IH flux values are negative.] For both a/M values we compute ⟨Φ^2⟩ _ren^U
and the fluxes ⟨ T_uu⟩ _ren^U
and ⟨ T_vv⟩ _ren^U
as a function of r between the horizons, as well as pay special
attention to the horizon vicinities.
Notably, the t-splitting method cannot be implemented directly
at the horizons (in particular, the corresponding counterterms diverge
there as 1/Δ, see Sec. <ref>), and
approaching them is increasingly difficult. This limits our ability
to directly explore the very close vicinity of the horizons using
t-splitting, and in what follows we typically approach the horizons
up to a distance in r of 10^-5M or 10^-4M (with precision dropping
rapidly beyond that). Nevertheless, this vicinity suffices to obtain
the asymptotic behavior near the horizons for the fluxes (see Secs.
<ref> and <ref>).
(However, for ⟨Φ^2⟩ _ren^U
near the IH this vicinity does not suffice to expose the final asymptotic behavior, as we discuss in Sec. <ref>).
Some “anchors” are present at the horizons: First, we have the
numerically computed values of ⟨ T_uu⟩ _ren^U
and ⟨ T_vv⟩ _ren^U
at the IH from the state-subtraction computation (see Ref. <cit.>).
In addition, from the expected regularity of the Unruh
state at the EH (see Sec. <ref>), we know that ⟨ T_uu⟩ _ren^U=0
there [seen by transforming to the regular U Kruskal coordinate
at the EH, Eqs. (<ref>) and (<ref>)].
This fact allows obtaining the (negative) value of ⟨ T_vv⟩ _ren^U
at the EH [utilizing the conserved quantity (r^2+a^2)(⟨ T_uu⟩ _ren^U-⟨ T_vv⟩ _ren^U),
which may be computed independently]. Regularity also implies a
δ r_+^2
behavior of ⟨ T_uu⟩ _ren^U
on approaching the EH, and a Taylor series in δ r_+ for
⟨ T_vv⟩ _ren^U.
For ⟨Φ^2⟩_ren^U in the Unruh state we have no a priori knowledge, but an analytic result exists at the EH
in a different quantum state, a `formal' HH state (defined only at the pole) <cit.>, as will be
briefly discussed at the end of this section (see Sec. <ref>).
For a verification of our results at a general r value inside
the BH, it is beneficial to compare them against an independent method
of computation. For that purpose, in addition to the procedure
used here, we have developed an alternative method (the analytic
extension method), which is another variant of the t-splitting PMR method inside
the BH. This method is described in Appendix <ref>
(the difference between the two variants lies in the manner in which the
extension from the exterior to the interior is carried out). We numerically
implemented this alternative procedure for the computation of ⟨Φ^2⟩ _ren^U
and ⟨ T_yy⟩ _ren^U
at two r values in the a/M=0.8 case, and performed a (successful)
comparison with the results obtained via the procedure described
and employed in this paper (see Appendix <ref>
for more details).
Finally, we note that in the presentation of our numerical results in the next two subsections, we begin with the fluxes and then proceed to the field square. We follow this order mainly because we consider the former to be more physically interesting than the latter.
(However, in the previous analytical section Sec. <ref>, we instead followed the reverse order since the
analytical constructions for the fluxes were built on the results for the field square).
§.§ Plots for the fluxes
§.§.§ Between the horizons
Figures <ref> and <ref> present
the fluxes ⟨ T_uu⟩ _ren^U,
⟨ T_vv⟩ _ren^U and
their difference between the horizons in the a/M=0.8 and a/M=0.9
cases, respectively.
In both cases we see that the fluxes, which start off at the EH
as ⟨ T_uu⟩ _ren^U=0
and ⟨ T_vv⟩ _ren^U<0
(as should be), initially become increasingly negative as r decreases,
and then follow a non-trivial behavior, with a few trend and sign
changes, until they reach their (either positive or negative) IH values.
(We leave the near-horizon behaviors to be explored in the next subsections.)
Notably, although the flux values in the a/M=0.9 case are non-positive
at both horizon vicinities, there is a region inside the BH for which
they are positive (that is, each flux component vanishes at two points
inside the BH – unlike the a/M=0.8 case, where there is only
one such point separating the negative and positive domains).
In the r-range computed, we find three extrema in the a/M=0.8 case
(two are clearly seen in Fig. <ref> and the closest
one to the IH is better seen in Fig. <ref>) and four extrema
in the a/M=0.9 case (three are clearly visible in Fig. <ref>
and the one closest to the IH is obtained at around r=r_-+2×10^-4M
and is too subtle to be clearly seen in the figures in their present
scale).
Generally, the fluxes in the a/M=0.8 case are typically an
order of magnitude larger than those in the a/M=0.9 case.
The difference between the fluxes ⟨ T_uu⟩ _ren^U-⟨ T_vv⟩ _ren^U at the pole equals ℱ_0/(r^2+a^2),
where ℱ_0 is given in Eq. (<ref>) and corresponds to the Hawking
radiation outflux per unit solid angle in the polar direction. For a/M=0.8
this conserved quantity is ℱ_0≈-1.7454781×10^-6ħ M^-2,
and for a/M=0.9 it is ℱ_0≈-7.0150098×10^-7ħ M^-2
(as computed at r=r_- from our state subtraction results of Ref. <cit.>).
§.§.§ EH vicinity
For a discussion of the EH vicinity, it is useful to recall the parameter δ r_+, which represents the radial coordinate distance to the EH and is defined as δ r_+≡(r_+-r)/M.
The expected regularity of the Unruh state at the EH implies the vanishing
of ⟨ T_uu⟩ _ren^U there, at
least as δ r_+^2 [otherwise, in the regular Kruskal
U coordinate, ⟨ T_UU⟩ _ren^U would diverge].
In Fig. <ref> we verify and explore this δ r_+^2
behavior of ⟨ T_uu⟩ _ren^U
at the EH vicinity in both a/M values at the pole. The prefactor
c of δ r_+^2 for each case is extracted numerically,
and the plot portrays the c·δ r_+^2 behavior as dashed
lines that run through the numerically computed dots. The two c
values are both negative and are of similar magnitude, being c≈-7.29×10^-6ħ M^-4
for a/M=0.8 and c≈-8.25×10^-6ħ M^-4 for a/M=0.9.
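The extraction of c amounts to a one-parameter least-squares fit of the near-EH samples to a pure quadratic in δ r_+; a minimal sketch (with synthetic data generated from the quoted a/M=0.8 prefactor, purely for illustration) reads:

```python
import numpy as np

# One-parameter least-squares fit of <T_uu>_ren^U ~ c * (delta r_+)^2 near the EH.
# Synthetic mock data built from the a/M = 0.8 value c ~ -7.29e-6 (hbar M^-4) quoted above.
rng = np.random.default_rng(0)
c_true = -7.29e-6
dr = np.geomspace(1e-4, 1e-2, 12)                                   # delta r_+ sample points
tuu = c_true * dr**2 * (1.0 + 1e-3 * rng.standard_normal(dr.size))  # mock <T_uu> samples

c_fit = np.sum(tuu * dr**2) / np.sum(dr**4)                         # least-squares prefactor
print(f"fitted c = {c_fit:.3e}   (input {c_true:.3e})")
```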
The other flux component, ⟨ T_vv⟩ _ren^U,
is not portrayed on this graph, as it does not have any specific anticipated
behavior at the EH (since in particular, unlike u, the Eddington
v coordinate does not diverge at the EH). The near-EH behavior
of ⟨ T_vv⟩ _ren^U
is depicted at the right-hand side of the general-r plots in the
previous subsection [Figs. <ref> and <ref>],
yielding [from Eq. (<ref>) in Appendix <ref>]
finite values at the pole of the EH:
⟨ T_vv⟩ _ren^U=-5.454619×10^-7ħ M^-4
in the a/M=0.8 case, and ⟨ T_vv⟩ _ren^U=-2.442739×10^-7ħ M^-4
in the a/M=0.9 case.
§.§.§ IH vicinity
To discuss the IH vicinity, we recall the dimensionless parameter δ r_-≡(r-r_-)/M.
Unlike the EH, which is regular and hence invites a regular behavior
of the fluxes (in particular
⟨ T_uu⟩ _ren^U=O(δ r_+^2),
as seen in the previous section), the IH vicinity offers a non-regular,
more intricate behavior. We shall briefly explore this behavior, down to the limit of our computed points (δ r_-=10^-5 for a/M=0.8 and δ r_-=10^-4 for a/M=0.9).
In Figs. <ref> and <ref>, we zoom into
the IH vicinity, corresponding to the leftmost side of Figs. <ref>
and <ref>, respectively. (We focus on the ⟨ T_uu⟩ _ren^U
component for convenience, but ⟨ T_vv⟩ _ren^U
shows a similar picture.) This reveals yet another valley in the a/M=0.8
case, and another peak at the leftmost side of the a/M=0.9 plot
– both are very close to the IH. These join a sequence of a few
peaks and valleys on approaching the IH in both cases. The presence
of such minima or maxima points so close to the IH exposes a non-regular
behavior (i.e., not a simple power series in δ r_-, as in
the analogous EH case).
In a previous work <cit.> we computed the
flux components exactly at the IH using the – independent – method of state subtraction for a variety
of a/M and θ values. In particular, the values we found
at the pole for a/M=0.8 are ⟨ T_uu⟩ _ren^U=3.23163918×10^-5ħ M^-4
and ⟨ T_vv⟩ _ren^U=3.01345442×10^-5ħ M^-4,
and for a/M=0.9 they are ⟨ T_uu⟩ _ren^U=-1.702041202×10^-6ħ M^-4
and ⟨ T_vv⟩ _ren^U=-2.323817852×10^-6ħ M^-4.
In Sec. 3 in the Supplemental Material of that Letter, we compared
the r→ r_- limit of the t-splitting results with the state-subtraction
IH result in these two a/M values, reaching an agreement of four
digits of precision. Figs. <ref> and <ref>
visually portray this agreement.
§.§ Plots for ⟨Φ^2⟩
Figures <ref> and <ref>
portray ⟨Φ^2⟩ _ren^U as
a function of r/M between the horizons at the pole of a/M=0.8
and a/M=0.9 Kerr BHs, respectively, with the EH vicinity zoomed-in.
In Figs. <ref> and <ref>
we zoom into the IH vicinity, corresponding to the left side of Figs.
<ref> and <ref>.
At the EH the behavior of ⟨Φ^2⟩ _ren^U
seems perfectly regular, approaching a (positive) extrapolated value
of 2.21×10^-5ħ M^-2 in the a/M=0.8 case and a
(negative) extrapolated value of -1.64×10^-4ħ M^-2
in the a/M=0.9 case (see also Sec. <ref>
which mentions this different sign issue for the field square at the EH on the axis of rotation, although in a different quantum
state).
However, at the IH vicinity ⟨Φ^2⟩ _ren^U
seems to follow a less trivial behavior, in particular a rapid decrease
towards the IH which may raise the suspicion of a divergence there.
We note, however, that the computed results do not in fact reveal the asymptotic form of ⟨Φ^2⟩ _ren^U at the IH vicinity. By employing state subtraction (as in Ref. <cit.>) and considering the regularity of the reference state at the IH, we managed to determine that ⟨Φ^2⟩ _ren^U in fact reaches a finite asymptotic value in the limit r→ r_- along the axis of rotation, which it approaches with an r_*^-3 tail. (The actual limiting value of ⟨Φ^2⟩ _ren^U at the IH remains unknown, since we only computed the state difference).
In addition, while ⟨Φ^2⟩ _ren^U asymptotes to a finite value at the IH on the pole, we have numerical indications that off the pole (θ≠0,π) ⟨Φ^2⟩ _ren^U actually diverges like r_* on approaching the IH. However, this requires further investigation, and this entire very-near IH exploration of ⟨Φ^2⟩ _ren^U remains beyond the scope of this paper (in particular, since this exploration was not done using t-splitting, which is the topic of the current paper).
It is
worth comparing these plots to the corresponding plots in the analogous
RN case with charge parameter
Q/M=0.8, presented and explored in Ref. <cit.>.
In the RN case examined there, ⟨Φ^2⟩ _ren^U
sharply increases towards the IH, as may be seen in Fig. 1 therein.
(Note that here we have a sharp decrease rather than increase on approaching
the IH, but this difference does not seem to be significant for the
present discussion). This trend, continuing up to about δ r_-∼10^-6,
raises the suspicion of a divergence at the IH. However, beyond this
point the trend changes, and a peak is obtained at about δ r_-∼10^-9
(see Fig. 2 therein). Then, a surprising intricate behavior follows,
eventually leading to a finite value at the IH. This finite
asymptotic value is obtained after a sequence of a few minima and
maxima, followed by a final inverse-power decay which is only exposed
as deep as δ r_-∼10^-175. [Reaching this very small
δ r_- domain in the RN case was not done through a straightforward
numerical computation of the radial function, but using a certain
approximation (the so-called semi-asymptotic approximation)
that was tailored to the θ-splitting method used there. Here
we use t-splitting instead (generally using θ-splitting
is not pragmatic in Kerr due to the lack of spherical symmetry), hence
the method that allowed us to approach extremely close to the IH in
the RN case is not available in the Kerr case.]
§.§.§ Testing the EH value of ⟨Φ^2⟩ _ren:
the polar Hartle-Hawking state
We wish to test the t-splitting computation of ⟨Φ^2⟩ _ren^U,
but we do not have any value known a priori in the Unruh state (unlike the anchors we had for the fluxes, as described above). Hence, we may search for simple tests available in other states. The Hartle-Hawking state does not exist in Kerr, due to the existence of superradiant modes <cit.>. However, at the pole, there are no superradiant modes and so we may define a formal “polar
Hartle-Hawking” analog, which we denote with a superscript H; see Refs. <cit.> for mode-sum expressions for the HTPF in this state, which contain a pole singularity at a real frequency value for θ≠ 0,π but are well-defined for θ= 0,π.
Using Wick rotation, Frolov <cit.> found a closed form expression for
⟨Φ^2⟩ _ren^H at the pole (θ= 0,π) of the EH
of an electrically-charged and rotating, Kerr-Newman BH [see Eq. (11) therein], reading
⟨Φ^2(r_+)⟩ _ren^H=1/[24π^2(r_+^2+a^2)](r_+κ_+-a^2/(r_+^2+a^2)) .
Note that in the RN case, with a=0, this quantity is always positive
– whereas in the Kerr case (BH charge Q=0) this quantity is positive for
a/M<√(3)/2 and nonpositive otherwise. We see something similar
happening in the Unruh state (whose difference from the polar-Hartle-Hawking
state is regular): ⟨Φ^2⟩ _ren^U
at the pole of the EH is positive for a/M=0.8 and negative for a/M=0.9 [see
Sec. <ref>]. However, we have only considered these two spin values, so we do not know at which spin value (or values) ⟨Φ^2⟩ _ren^U transitions from positive to negative.
Using the bare mode-sum expression corresponding to the polar-Hartle-Hawking
state at the pole (outside the scope of this paper), and regularizing
via the standard t-splitting procedure, we obtained ⟨Φ^2⟩ _ren^H
at the pole for several r values approaching the EH, for both spin values. Extrapolating
these t-splitting values to the EH in both cases, we reached an
agreement of at least 4 figures with the analytical result of Eq. (<ref>) (being ⟨Φ^2(r_+)⟩ _ren^H≈1.319×10^-4ħ M^-2
for a/M=0.8 and ⟨Φ^2(r_+)⟩ _ren^H≈-9.425×10^-5ħ M^-2
for a/M=0.9).
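As a quick independent check of these numbers, the closed-form expression above can be evaluated directly; a minimal sketch in units ħ=M=1 reads:

```python
import numpy as np

# Frolov's closed form for <Phi^2(r_+)>_ren^H at the pole of the EH (hbar = M = 1).
M = 1.0
for a in (0.8, 0.9):
    r_plus = M + np.sqrt(M**2 - a**2)
    r_minus = M - np.sqrt(M**2 - a**2)
    kappa_plus = (r_plus - r_minus) / (2.0 * (r_plus**2 + a**2))
    phi2_H = (r_plus * kappa_plus - a**2 / (r_plus**2 + a**2)) / (24.0 * np.pi**2 * (r_plus**2 + a**2))
    print(f"a/M = {a}:  <Phi^2(r_+)>_ren^H = {phi2_H:.4e}")
# expected: ~1.319e-4 for a/M = 0.8 and ~-9.425e-5 for a/M = 0.9
```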
§ DISCUSSION
In this paper, we employ the method of point splitting – specifically, t-splitting – to calculate the Unruh-state fluxes ⟨ T_uu⟩ _ren^U, ⟨ T_vv⟩ _ren^U, as well as the field square ⟨Φ^2⟩ _ren^U, for a minimally-coupled massless scalar field, in the interior of a spinning BH on the axis of rotation. These fluxes are particularly crucial for understanding backreaction near the IH, as discussed in Ref. <cit.>.
We calculated ⟨ T_uu⟩^U_ren, ⟨ T_vv⟩^U_ren and ⟨Φ^2⟩^U_ren for two different BH spin values, at the pole θ=0, spanning from near the EH to near the IH. Our results, displayed in the various figures of Sec. <ref>, provide a quantitative picture of the behavior of ⟨ T_uu⟩^U_ren, ⟨ T_vv⟩^U_ren and ⟨Φ^2⟩^U_ren in the BH interior. In particular, our results include a focus on the IH vicinity, validating our state-subtraction results computed directly at the IH (see Ref. <cit.>). In addition, our computations at the EH vicinity provide numerical support for the anticipated regularity of the Unruh state at the EH.
The method presented and employed in this paper is the interior counterpart of the t-splitting method formulated in the BH exterior, generally introduced in Ref. <cit.> and employed outside a Kerr BH in Ref. <cit.> (φ-splitting was also used for computations outside a Kerr BH in that work, but this variant is technically more difficult and in particular is not applicable at the pole). Note, however, that employing t-splitting inside a Kerr BH is significantly harder and involves unique challenges not present in the BH exterior. This is rooted in the form of the centrifugal potential (in particular at large l), see Fig. <ref>. Both inside and outside the BH, the large-l effective potential generally scales as l^2. In the exterior, it acts as a potential barrier. As a consequence, the l-series converges exponentially fast, and the summation over l is easily performed. Inside the BH, however, the potential at large l acts as a potential well. As a consequence, the l-series does not converge, strictly speaking. This constitutes the so-called intermediate divergence (ID) problem. To overcome this problem, we introduce a “small split” in θ, which is taken to vanish before the coincidence limit in the t direction is taken. With the aid of this additional limiting process, we can identify a certain l-sequence (to which we sometimes refer as the ID) which properly captures the large-l divergent piece of the original sequence – and which should be subtracted from that original l-sequence before summation, in order to obtain the correct renormalized result.
As part of this treatment, we show (partly analytically and partly numerically) that the ID attains a specific form, given in Eq. (<ref>).
This ID subtraction procedure is illustrated through various figures (in particular, Figs. <ref> and <ref> for the field square and Figs. <ref>, <ref> and <ref> for T_uu) and justified in an appendix.
After this ID is subtracted, the remaining regular l-series does converge (and thereby yields the basic ω-integrand). However, even post-subtraction, the convergence of the regularized l-series is rather slow, proceeding as 1/l (that is, 1/l^2 for the l-sequence
). This presents a significant numerical challenge, which we overcome by computing the l-sequence up to l=300, and then fitting the sequence as a series of inverse powers 1/l^k,[For a more precise account of the numerical implementation, see Appendix <ref>.] typically reaching k∼100. For this high-order fit to succeed, we had to compute the individual l-contributions with a high precision of more than ∼250 decimal figures. This, in turn, required the computation of the reflection coefficient ρ_ω l^up and the radial function ψ_ω l^int [as well as its derivative ψ_ω l,r^int and also the spheroidal eigenfunctions S_ω l(0)] at that level of precision.
Upon successful computation of the l-sum, we obtained the ω-integrand, paving the way to integration (after subtracting the known PMR counterterms).
This eventually allowed the renormalized quantity (either the field square ⟨Φ^2⟩ _ren^U or the fluxes ⟨ T_uu⟩ _ren^U and ⟨ T_vv⟩ _ren^U) to be computed at a wide range of r values, spanning from (just off) the EH to (just off) the IH.
Close to the horizons (located at r_+ and r_-), the t-splitting computation at the pole becomes more challenging (mainly due to the divergence of counterterms). In particular, t-splitting on the axis of rotation is inapplicable directly at the horizons. Nevertheless, we managed to compute the fluxes ⟨ T_vv⟩ _ren^U and ⟨ T_uu⟩ _ren^U sufficiently close to the horizons, so as to obtain the horizon limits of these two quantities by extrapolation.
In particular, our results extrapolated to the IH (as demonstrated in Figs. <ref> and <ref>) show remarkable agreement with those obtained using the state-subtraction method directly at the IH <cit.>, hence validating the state-subtraction method used therein.
In addition, on approaching the EH, ⟨ T_uu⟩ _ren^U decays to zero as expected (like δ r_+^2, see Fig. <ref>).
For ⟨Φ^2⟩_ren^U, while we were able to successfully obtain the EH limit,
this was not possible at the IH. Obtaining the IH limit of ⟨Φ^2⟩ _ren^U poses difficulties, likely attributed to a non-trivial, non-monotonic behavior near the IH, as observed in the analogous RN case of Ref. <cit.>.[Remarkably, using state subtraction, and taking advantage of the presumed regularity of the reference quantum state of Ref. <cit.>, we indeed found that ⟨Φ^2⟩ _ren^U reaches a finite value (which is yet unknown) at the IH, approaching it as r_*^-3 (as in the analogous RN case; this analysis also reproduced the non-trivial and non-monotonic behavior of ⟨Φ^2⟩ _ren^U on approaching the IH, similarly to that observed in the aforementioned RN case.)]
Still, as our method involves the unique step of ID subtraction, it would be worth testing it against another independent method at general r values inside the BH.
In Appendix <ref>, we present an alternative variant of the t-splitting PMR method, dubbed the analytic extension method. This method leverages an analytic extension of the HTPF mode contributions from the exterior to the interior of the BH, building on the analyticity of the background geometry at the EH. While this approach circumvents the ID problem, it introduces challenges of dealing with growing oscillatory behavior in l and ω, which can be managed by our “oscillation cancellation" procedure. Our results at two specific r values reveal a remarkable agreement between the two variants, which, in particular, provides a crucial test for the t-splitting PMR method described in this manuscript.
As previously discussed, our calculations necessitated computing contributions for l up to approximately l_max = 300. However, since our analysis was confined to the pole, we only needed to compute these contributions for m = 0 modes. To extend our study off the pole, one must compute all m modes across the entire domain -l≤ m≤ l. Consequently, this will significantly increase the number of modes that need to be computed –- and consequently, the required computational resources –- typically by a factor of l_max = 300. Such an increase seems impractical, at least with our current methods.[Note, however, that for a computation directly on the IH we can use the state-subtraction method, which is much more efficient numerically. It requires a smaller l range (say, up to a few dozen), and hence is practical to apply off the pole as well. Indeed, we have performed this computation for an a/M=0.8 BH in the entire range 0≤θ≤π/2, see Ref. <cit.>.]
In Table <ref> we summarize the values of the various quantities at the two horizons at the pole of a Kerr BH which we have given in the main text.
There exist several compelling directions for further research. An immediate extension would be to study a wider range of values for the spin parameter a/M beyond the presently treated values of a/M=0.8 and a/M=0.9. It would also be valuable to extend the method for off-pole computations, although, as mentioned, currently this seems to be numerically challenging (without the adoption of more efficient methods such as state subtraction, which is presently only applicable for computations at the IH). Furthermore, a fuller picture of semiclassical effects inside a Kerr BH will be gained by extending our analysis to other components of the RSET.
In addition, it is highly valuable to extend our analysis to other quantum fields. An exploration of other scalar fields may include allowing a non-vanishing coupling parameter ξ. Of higher physical relevance is the study of the quantum electromagnetic field, which is the actual field observed in nature. It is also worth noting the importance of the linearized-gravitational semiclassical contribution, which should be no less significant than its electromagnetic counterpart (but likely to be technically and conceptually more intricate).
A.O. and N.Z. were supported by the Israel Science Foundation under
Grant No. 600/18. N.Z. also acknowledges support by the Israeli Planning
and Budgeting Committee.
§ LARGE-L ASYMPTOTICS
In this Appendix we briefly describe the large-l analysis of the
interior radial function ψ_ω l^int (see Eq. (<ref>)) and the reflection coefficient
ρ^up_ω l (see Eq. (<ref>)) in a Kerr BH, focusing on the m=0 case, given that only m=0 modes are relevant at the pole. The analysis we describe yields the leading
order expressions which are subsequently used in Sec. <ref>
for the ID computation of ⟨Φ^2⟩.
Generally speaking (for a general m value), the problem of modes
traveling on a Kerr spacetime is very different from the analogous
RN problem. In particular, the fact that the effective potential in
Kerr approaches three different asymptotic values: ω_-^2
at the IH, ω_+^2 at the EH, and ω^2 at infinity
[see Eq. (<ref>)] – whereas in RN all three coincide
– is the root of a cascade of differences between the two cases.
However, the particular instance of m=0 modes in Kerr, in which
all three asymptotic frequencies coincide (ω_+=ω_-=ω),
turns out to be analogous to the RN case (for most relevant problems).
There are still some tiny differences between the polar Kerr and RN
cases, e.g. the form of the surface gravity parameters (<ref>)
or the effective potential (<ref>), involving ω
in a non-trivial way through the angular eigenvalue.
Our analysis for the polar Kerr case thus follows closely the one
presented in Ref. <cit.> (mainly in sections III and V
therein) for the RN case, with slight modifications to be pointed
out.
§.§ The interior radial function in the WKB approximation
Our goal here is to apply the WKB approximation to the m=0 interior
radial function ψ_ω l^int at large l. To this end, we follow
closely the analysis in Sec. V in Ref. <cit.>. The validity
condition of the WKB approximation should be
√(V_ω l)M≫1 .
To pinpoint the difference between the RN and polar Kerr analyses,
we start with the large-l form of the m=0 effective potential
V_ω l given in Eq. (<ref>), and compare
it to its RN counterpart given in Eq. (5.9) in Ref. <cit.>
(note that, due to the way they are defined in the corresponding radial
equation, the RN potential V_l is analogous to minus the
Kerr potential V_ω lm). Clearly, the large-l V_ω l
is analogous to its RN counterpart, in which the denominator (r^2+a^2)^2
is replaced by r^4 and Δ is written out as in Eq. (<ref>).
Furthermore, it suffices in this leading-order analysis to replace l(l+1)
by l̃^2, where l̃=l+1/2.
Then, the large-l WKB approximation is valid as long as
l̃M√((r_+-r)(r-r_-))/(r^2+a^2)≫1 .
The general WKB form of the radial function is
ψ_ω l^WKB=1/V_ω l^1/4(r)[a_+exp(i∫^r_*√(V_ω l(r(r̃_*)))dr̃_*)+a_-exp(-i∫^r_*√(V_ω l(r(r̃_*)))dr̃_*)]
(no lower limits of integration were applied on the phase functions
±∫^r_*√(V_ω l(r(r̃_*)))dr̃_*,
since any constants of integration can be absorbed into the prefactors
a_±). In computing the phase functions, note that since dr_*/dr
cancels the r^2+a^2 factor arising from V_ω l, we
obtain precisely as in RN:
∫^r_*√(V_ω l(r(r̃_*)))dr̃_*≃-l̃arctan((r-M)/√((r_+-r)(r-r_-)))+const .
Hence
ψ_ω l^WKB≃√(r^2+a^2)/√(l̃)[(r_+-r)(r-r_-)]^1/4[a_+e^-il̃g(r)+a_-e^il̃g(r)]
where we denote
g(r)≡arctan((r-M)/√((r_+-r)(r-r_-)))
and the constants of integration are absorbed into the prefactors
a_±, to be determined. The prefactors a_± are then found
by matching with the near-EH solution, which is precisely as its RN
counterpart given in Eq. (5.3) in Ref. <cit.> (as turns
out, when expressed in terms of κ_+). Finally, we obtain
the
large-l WKB approximated form of ψ_ω l^int:
ψ_ω l^WKB≃√(r^2+a^2)/√(l̃)[(r_+-r)(r-r_-)]^1/4√(κ_+/2π)e^iπ/4l̃^iω/κ_+e^-iω r_+Γ(1-iω/κ_+)(i^l̃-1e^-πω/2κ_+e^-il̃g(r)+(-i)^l̃e^πω/2κ_+e^il̃g(r)) ,
which is identical to the RN counterpart given in Eqs. (5.13)-(5.15) in Ref. <cit.>,
up to a prefactor being √(r^2+a^2) instead of r.
§.§ ρ_ω l^up at large l
Following the analysis presented in Sec. III in Ref. <cit.>,
keeping track of all potential factors that may cause a difference
between RN and polar Kerr (e.g. certain powers of r^2+a^2
replacing r^2), the reflection coefficient
ρ_ω l^up at large l [The leading order in the large-l limit obviously coincides with that in the large-l̃ limit (recall l̃≡ l+1/2). More specifically, the leading order in the expansion of ρ_ω l^up for l→∞ (which is our concern here) precisely matches the corresponding leading order in its expansion for l̃→∞. In the expressions below – and in fact throughout the paper – we freely use either l or l̃, as convenient in each place.]
is found to have precisely
the same form as its RN counterpart [given in Eq. (3.17) therein]:
ρ_ω l^up≃l̃^-2iω/κ_+e^2iω r_+Γ(iω/κ_+)/Γ(-iω/κ_+) .
We note that in Eq. (3.5.13) in Ref. <cit.>[We note typographical errors in Eq. (3.5.12) <cit.>, which should read:
I_ω̃≡ e^-iω̃r_+[(4Mr_+-2Q^2)κ_+]^-iω̃/2κ_+[-(4Mr_--2Q^2)κ_-]^-iω̃/2κ_-.
This expression follows the notation in Ref. <cit.> and so, in particular the κ_- in this expression is equal to minus the κ_- in Eq. (<ref>) of the current paper.], an expression for the large-l asymptotics of the radial reflection coefficient is given in the more general case of a mode for generic azimuthal number m of a field with arbitrary spin on Kerr-Newman space-time.
We have checked that our Eq. (<ref>) indeed agrees with Eq. (3.5.12) in Ref. <cit.> (when taking into account footnote <ref> as well as the fact that some quantities – such as the tortoise coordinate – are defined differently here and in Ref. <cit.>), for the particular case of m=0, zero field spin (i.e., scalar field) and zero black hole charge (i.e., Kerr spacetime).
See also Appendix A in Ref. <cit.>, where a similar large-l asymptotic analysis was done in the Schwarzschild case. We have checked that our asymptotic results reduce to their Schwarzschild counterparts given in Ref. <cit.>.
§ THE INTERMEDIATE DIVERGENCE PROBLEM
As described in Sec. <ref>, a crucial
step of our method is the subtraction of a divergent piece E_ω l^div (the ID)
of the form given in Eq. (<ref>), prior to summation
and integration. For ⟨Φ^2⟩ ^U we
demonstrated that c_1=c_2=0 [leaving only an ω- and l-independent
ID, for which we found an analytic expression in Eq. (<ref>)].
For the fluxes ⟨ T_yy⟩ ^U (for which the ID was specifically denoted by T_yy(ω l)^div)
we have all three prefactors c_0,c_1 and c_2 generally
non-vanishing [with c_2 analytically computed in Eq. (<ref>), and also with an empirical expression for c_1 given in Eq. (<ref>)].
In this Appendix, we provide a justification for the ID
subtraction. We shall use the same notation as in the beginning of Sec. <ref>, treating both the vacuum polarization and the fluxes as the generic quantity P (and similarly for all related quantities, such as E_ω l to denote the bare mode contribution, E_ω l^div to denote the ID, etc.).
To overcome the ID problem, we shall introduce an additional split in the θ
direction, denoted by δ≡θ'-θ. This
split is “small” in the sense that we shall always
take the limit δ→0 before taking the limit ε→0
(see below).
The analytical treatment of the ID becomes simpler if we express the
large-l singular behavior in terms of the angular eigenvalue λ_ω l≡λ_ω l(m=0)
[instead of l(l+1)].[Anywhere else in the manuscript, we use the notation λ_l(aω)
[or λ_lm(aω) for a general m, as in
Eq. (<ref>)], since the spheroidal function's dependence
on ω is always through the combination aω. Here, however, we
prefer to separate the dependence on ω,l from the dependence
on the BH parameters a,M. Hence we use a different notation,
λ_ω l, highlighting only the ω,l indices,
keeping the dependence on the parameters a,M implicit (such
as for the other quantities appearing in this Appendix).] In order to relate the two large-l expressions, we recall that λ_ω l takes the following large l asymptotic form <cit.> (stating again Eq. (<ref>) for convenience)
λ_ω l=l(l+1)+1/2a^2ω^2+λ̂_ω l,
with the remainder term λ̂_ω l vanishing like 1/l(l+1) as l→∞.
Thus, we may equally well rewrite the large-l asymptotic behavior
of E_ω l as
c_0+c̃_1ω^2+c_2λ_ω l≡Ẽ_ω l^div
(plus a term whose infinite sum over l converges), where c̃_1≡ c_1-a^2c_2/2.
As already mentioned, the analytical treatment of the intermediate
divergence becomes simpler if expressed in terms of Ẽ_ω l^div.
However, the actual numerical mode-sum procedure becomes more convenient
when done in terms of E_ω l^div [namely, using
l(l+1) instead of λ_ω l, as in Eq. (<ref>)].
For this reason, in Sec. <ref> (presenting
the basic analytical ID treatment) we use the large-l asymptotic
form (<ref>). Then, in Sec. <ref>
we describe the translation of the ID analysis from Ẽ_ω l^div
to E_ω l^div.
§.§ Basic ID analysis
The mode contribution E_ω l may be written (at coincidence)
with the radial and angular parts factored out:
E_ω l=[S_ω l(0)]^2H_ω l(r) ,
for some radial function H_ω l(r).
As mentioned, inside the BH we introduce a small split δ in the polar angle
θ (in addition to the “primary” split ε in
t). Since we focus on the pole, we have θ=0 and θ'=δ.
The renormalized quantity P_ren is then given by:
P_ren=lim_ε→0lim_δ→0[∫_0^∞dω cos(ωε)(∑_l=0^∞H_ω lS_ω l(0)S_ω l(δ))-C(ε)]
where, recall, C(ε) is the counterterm (with the dependence on x henceforth suppressed). Using
E_ω l, this may be written
as
P_ren=lim_ε→0lim_δ→0[∫_0^∞dω cos(ωε)(∑_l=0^∞E_ω lS_ω l(δ)/S_ω l(0))-C(ε)] .
The large-l divergent piece of E_ω l is Ẽ_ω l^div
given in Eq. (<ref>). The residual, denoted Ê_ω l≡ E_ω l-Ẽ_ω l^div,
has a convergent sum over l. We next substitute
E_ω l=Ê_ω l+Ẽ_ω l^div
in Eq. (<ref>). Accordingly, we define
P_ren=P̂+P_div
where
P̂=lim_ε→0lim_δ→0[∫_0^∞cos(ωε)(∑_l=0^∞Ê_ω lS_ω l(δ)/S_ω l(0))dω-C(ε)]
and
P_div=lim_ε→0lim_δ→0∫_0^∞cos(ωε)(∑_l=0^∞Ẽ_ω l^divS_ω l(δ)/S_ω l(0))dω .
Our goal in this section is to show P_div=0, hence establishing P_ren=P̂.
For P̂, since the series ∑_lÊ_ω l is regular,
we assume that we may readily take the δ→0 limit (i.e.,
"closing" the small split in θ) in Ê_ω lS_ω l(δ)/S_ω l(0)
already prior to summation. Then,
P̂=lim_ε→0[∫_0^∞cos(ωε)(∑_l=0^∞Ê_ω l)dω-C(ε)]
[which resembles Eq. (<ref>) for the BH exterior].
Plugging the form (<ref>) of Ẽ_ω l^div
into P_div, we have:
P_div=c_0P_div^(0)+c̃_1P_div^(1)+c_2P_div^(2)
where
P_div^(0)=lim_ε→0lim_δ→0∫_0^∞cos(ωε)∑_l=0^∞S_ω l(δ)/S_ω l(0)dω ,
P_div^(1)=lim_ε→0lim_δ→0∫_0^∞ω^2cos(ωε)∑_l=0^∞S_ω l(δ)/S_ω l(0)dω
and
P_div^(2)=lim_ε→0lim_δ→0∫_0^∞cos(ωε)∑_l=0^∞(λ_ω lS_ω l(δ)/S_ω l(0))dω .
We shall now analyze each of these three terms P_div^(k), k=0,1,2, separately.
We begin with P_div^(0). Denoting
Σ_0(δ,ω)≡∑_l=0^∞S_ω l(δ)/S_ω l(0) ,
we have
P_div^(0)=lim_ε→0lim_δ→0∫_0^∞cos(ωε)Σ_0(δ,ω)dω .
We numerically obtained
the following simple equality:
Σ_0(δ,ω)=J_0(ωsinδ)/(2sin(δ/2)) .
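A simple independent check of this relation is available in the static limit ω→0, where J_0→1 and the spheroidal functions reduce to Legendre polynomials, so that the (Abel-summed) l-sum becomes the classical identity ∑_l P_l(cosδ)=1/(2sin(δ/2)); the following minimal sketch (standard three-term recurrence plus an Abel regulator t→1^-, purely for illustration) evaluates this limit numerically:

```python
import math

# Abel-regulated check of  sum_l P_l(cos(delta)) = 1/(2 sin(delta/2))  (static limit).
delta = 0.3
x = math.cos(delta)
target = 1.0 / (2.0 * math.sin(delta / 2.0))

for t in (0.99, 0.999):
    lmax = int(25.0 / (1.0 - t))        # enough terms for t**l to be negligible
    p_prev, p_curr = 1.0, x             # P_0, P_1
    abel_sum = p_prev + p_curr * t
    for l in range(1, lmax):
        # Bonnet recurrence: (l+1) P_{l+1} = (2l+1) x P_l - l P_{l-1}
        p_next = ((2*l + 1) * x * p_curr - l * p_prev) / (l + 1)
        abel_sum += p_next * t**(l + 1)
        p_prev, p_curr = p_curr, p_next
    print(f"t = {t}:  Abel sum = {abel_sum:.6f}")   # approaches the closed form as t -> 1

print(f"closed form 1/(2 sin(delta/2)) = {target:.6f}")
```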
(We numerically verified this relation for various δ,ω pairs of
values, with more than 20 decimals, using the software Mathematica <cit.>.) In particular, the dependence
on ω is only through the combination ωsinδ
in the argument of the Bessel function. We now show that the right
hand side of Eq. (<ref>) vanishes.
The following equality is known to hold at sinδ≠ε
[Note that in Eq. (<ref>), and similarly in Eqs. (<ref>) and (<ref>)
below, the resulting integral may contain an additional distribution with sharp support
at sinδ=ε. However, this is irrelevant to our analysis,
being outside of the (δ,ε) domain that concerns
us.] [see Eq. (7) in Sec. 13.42 in Ref. <cit.>][We shall assume that both ε and δ (and sinδ
likewise) are positive. This assumption is unnecessary, but if relaxed,
we should replace the step function Θ(sinδ-ε)
by Θ(sin^2δ-ε^2).] :
∫_0^∞cos(ωε)J_0(ωsinδ)dω=1/√(sin^2δ-ε^2)Θ(sinδ-ε) .
Then, in the relevant domain of δ and ε, being
sinδ<ε (because of the order of the limits that
we take), we find
∫_0^∞cos(ωε)Σ_0(δ,ω)dω=0 .
This (partly numerically, partly analytically) establishes that
P_div^(0)=0 ,
hence its subtraction is justified. This settles the ⟨Φ^2⟩
case (for which c̃_1=c_2=0). However, for the fluxes
we still need to address P_div^(1) and P_div^(2).
We next consider P_div^(1), which we write
by plugging Eq. (<ref>) into Eq. (<ref>) and using
Eq. (<ref>) as
P_div^(1) =lim_ε→0lim_δ→01/(2sin(δ/2))[∫_0^∞ω^2cos(ωε)J_0(ωsinδ)dω] .
Now,
using a formal manipulation (interchanging the derivative with the
integration), we may write:
∫_0^∞ω^2cos(ωε)J_0(ωsinδ)dω=-∂^2/∂ε^2∫_0^∞cos(ωε)J_0(ωsinδ)dω.
Thus,
by taking minus the second derivative with respect to ε of
Eq. (<ref>), we obtain the following distributional identity valid for sinδ≠ε: [In this footnote we present an alternative derivation of Eq. (<ref>). Strictly speaking, this integral is not well
defined because the ω^2 factor [or the ω factor
in the analogous situation of the integral appearing in Eq. (<ref>)]
spoils convergence at large ω. Nevertheless, the integral
(like some other integrals involved in our t-splitting method)
becomes well-defined with the procedure of “oscillation cancellation"
(see Ref. <cit.>), and it is this “generalized integral” that we
want to compute here. An alternative (and equivalent) formulation
of this generalized integral is via multiplying the integrand by a
regulating function exp(-c ω) (with c>0) and then taking
the limit c→0 after integration. This alternative definition,
involving the exp(-c ω) factor, is more commonly used in
the mathematical physics literature; however, it is much harder to
numerically implement it (compared to oscillation cancellation). Nevertheless,
for the analytical derivation of this (generalized) integral
– which is our concern here – this second method [i.e. using
the regulating function exp(-c ω)] is very convenient.
Thus, the relevant integrand now becomes ω^2cos(ωε)J_0(ωsinδ)exp(-c ω).
Unfortunately, Mathematica is unable to directly perform this integral.
However, when reduced to the evaluation of lim_c→0∫_0^∞ω^2exp(iωε)J_0(ωsinδ)exp(-c ω),
Mathematica successfully yields the desired result given in Eq. (<ref>).]
∫_0^∞ω^2cos(ωε)J_0(ωsinδ)dω=-(2ε^2+sin^2δ)/(sin^2δ-ε^2)^5/2Θ(sinδ-ε) .
Then, for sinδ<ε (being again the relevant domain
of δ and ε), we have
∫_0^∞ω^2cos(ωε)Σ_0(δ,ω)dω=0 ,
hence also
P_div^(1)=0 .
Finally, we deal with P_div^(2). We denote:
Σ_2(δ,ω)≡∑_l=0^∞(λ_ω lS_ω l(δ)/S_ω l(0))
so that
P_div^(2)=lim_ε→0lim_δ→0[∫_0^∞cos(ωε)Σ_2(δ,ω)dω] .
In order to evaluate Σ_2(δ,ω), we recall
the angular equation satisfied by S_ω l [Eq. (<ref>)
for m=0]:
1/sinθd/dθ(sinθd/dθS_ω l(θ))+(-a^2ω^2sin^2θ+λ_ω l)S_ω l(θ)=0 .
We may therefore write (upon replacing the independent variable θ
by δ)
λ_ω lS_ω l(δ)=D_ω[S_ω l(δ)] ,
where D_ω is the linear differential operator defined by
D_ω≡-1/sinδd/dδ(sinδd/dδ)+a^2ω^2sin^2δ .
We may then express Σ_2(δ,ω) as:
Σ_2(δ,ω)=∑_l=0^∞(D_ω[S_ω l(δ)]/S_ω l(0))=∑_l=0^∞(D_ω[S_ω l(δ)/S_ω l(0)]) .
Noting that the linear operator D_ω is independent of l,
we may interchange it with the sum over l, which yields
Σ_2(δ,ω)=D_ω[∑_l=0^∞(S_ω l(δ)/S_ω l(0))]=D_ω[Σ_0(δ,ω)]=D_ω[J_0(ωsinδ)/(2sin(δ/2))] ,
where we have used Eqs. (<ref>) and (<ref>). Explicitly,
we obtain
Σ_2(δ,ω)=α_0(δ)J_0(ωsinδ)+α_1(δ)ω J_1(ωsinδ)+α_2(δ)ω^2J_0(ωsinδ)
with the (ω-independent) coefficients
α_0(δ) =1/16(sin(δ/2))^-3[-3+cosδ] ,
α_1(δ) =-sinδ/4(sin(δ/2))^-3 ,
α_2(δ) =1/4(sin(δ/2))^-3[1-cosδ] .
We note that the above derivation of Σ_2(δ,ω)
involved a formal manipulation (interchange of the D_ω differential
operator with the sum over l), which we are unable to justify rigorously;
nevertheless, we also verified Eq. (<ref>) numerically (with
more than 40 digits of precision), for several δ,ω pairs.
[The numerical evaluation of Σ_2(δ,ω)
given as an infinite sum in Eq. (<ref>) involved the procedure
of oscillation cancellation (on the partial sums over l). This
procedure is further explained (and implemented as part of the analytic extension method) in Appendix <ref>. Here – and in the analytic extension – we encounter the need for oscillation cancellation
of a discrete sequence, differing from the continuous case discussed
in Ref. <cit.>.]
Taking the expression for Σ_2(δ,ω)
given in Eq. (<ref>) and plugging it into the ω
integral in Eq. (<ref>), we note that both the ∝ J_0(ωsinδ)
and ∝ω^2J_0(ωsinδ) terms have
already been shown to not contribute to the integral in the relevant
domain sinδ<ε [see Eqs. (<ref>) and
(<ref>) respectively]. We are left with the ∝ω J_1(ωsinδ)
term in Σ_2.
First, we note that (see, e.g., Eq. (10.6.6) in Ref. <cit.>)
ω J_1(ωsinδ)=-∂/∂(sinδ)J_0(ωsinδ).
Thus, similarly to Eq. (<ref>), we may formally write:
∫_0^∞ωcos(ωε)J_1(ωsinδ)dω=-∂/∂(sinδ)∫_0^∞cos(ωε)J_0(ωsinδ)dω .
Finally, by differentiating Eq. (<ref>) by minus sinδ, we obtain at sinδ≠ε: [As an alternative derivation, we also obtained Eq. (<ref>) with the aid of Mathematica and the methods
described in footnote <ref>.]
∫_0^∞ωcos(ωε)J_1(ωsinδ)dω=sinδ/(sin^2δ-ε^2)^3/2Θ(sinδ-ε) .
Again, at sinδ<ε we find that this integral vanishes.
All in all, we find that, for sinδ<ε,
∫_0^∞cos(ωε)Σ_2(δ,ω)dω=0 ,
implying the vanishing of the integral on the RHS of Eq. (<ref>),
and hence:
P_div^(2)=0 .
We here mention another, formal
derivation of Eq. (<ref>): Recalling
the relation Σ_2(δ,ω)=D_ω[Σ_0(δ,ω)]
[see Eq. (<ref>)], we may simply apply the differential
operator D_ω to ∫_0^∞cos(ωε)Σ_0(δ,ω)dω.
This last integral vanishes for sinδ<ε [see Eq. (<ref>)], hence
so does the integral in Eq. (<ref>).
To conclude, putting together Eqs. (<ref>), (<ref>),
(<ref>) and (<ref>), we find that
P_div=0 .
Recalling Eq. (<ref>), this in turn implies that
P_ren=P̂ .
Stated in other words, we have shown – partly numerically, partly analytically – that the subtraction of the ID
— namely the singular piece Ẽ_ω l^div
as given in Eq. (<ref>) — is justified.
§.§ Replacing λ_ω l by l(l+1)
We would like to convert the above results (justifying the ID subtraction)
from Ẽ_ω l^div to E_ω l^div,
where, recall,
Ẽ_ω l^div=c_0+c̃_1ω^2+c_2λ_ω l , E_ω l^div=c_0+c_1ω^2+c_2l(l+1)
(and c̃_1=c_1-a^2c_2/2).
The analysis of the previous section established that
P_ren=P̂=lim_ε→0[∫_0^∞cos(ωε)(∑_l=0^∞Ê_ω l) dω-C(ε)] .
We rewrite this result as
P_ren=lim_ε→0[∫_0^∞cos(ωε)Σ̂ dω -C(ε)]
where
Σ̂≡∑_l=0^∞Ê_ω l=∑_l=0^∞(E_ω l-Ẽ_ω l^div) ,
or more explicitly,
Σ̂=∑_l=0^∞[E_ω l-c_0-c̃_1ω^2-c_2λ_ω l] .
Using the large-l asymptotic behavior of λ_ω l
given in Eq. (<ref>), we obtain
Σ̂=∑_l=0^∞[E_ω l-c_0-(c̃_1+a^2/2c_2)ω^2-c_2l(l+1)-c_2λ̂_ω l]=∑_l=0^∞(E_ω l-E_ω l^div-c_2λ̂_ω l).
We split this sum into two (regular) pieces as
Σ̂=∑_l=0^∞(E_ω l-E_ω l^div)-c_2∑_l=0^∞λ̂_ω l .
Note that, unlike the first sum on the RHS, the second sum ∑_l=0^∞λ̂_ω l
does not depend at all on the dynamics of the problem (namely on
ψ_ω l, ρ^up_ω l, etc). We shall now focus on this last sum, writing it explicitly as
∑_l=0^∞λ̂_ω l=∑_l=0^∞[λ_ω l-l(l+1)-1/2a^2ω^2] .
Numerically exploring this quantity we find, quite surprisingly, that
this sum actually vanishes (we are unaware of a source that
demonstrates this analytically, but we have verified it, for a variety of
aω values, up to 40 decimals). This implies that
Σ̂=∑_l=0^∞(E_ω l-E_ω l^div) .
Substituting this result back into Eq. (<ref>), we find that
the desired renormalized quantity P_ren is given by
P_ren=lim_ε→0[∫_0^∞cos(ωε)∑_l=0^∞(E_ω l-E_ω l^div)dω-C(ε)] ,
as stated in Eq. (<ref>) along with Eq. (<ref>).
Thus, when we come to sum the series E_ω l over l, we
are allowed to subtract its large-l singular piece in either of
the forms E_ω l^div≡ c_0+c_1ω^2+c_2l(l+1)
or Ẽ_ω l^div≡ c_0+c̃_1ω^2+c_2λ_ω l
– both subtraction schemes yield the (same) correct result.
Returning to the scheme considered in this paper (described in Sec. <ref>), we conclude that we are indeed
allowed to simply drop the ID, i.e. the term E_ω l^div
in Eq. (<ref>), prior to summation and integration.
§ NUMERICAL METHODS: THE COMPUTATIONAL SIDE
This section deals with the numerical implementation of the t-splitting
regularization procedure in practice — from the computation of
the basic ingredients ρ_ω l^up, ψ_ω l^int
and ψ_ω l,r^int (as well as S_ω l(θ=0))
to the performance of the various
steps (including various challenges that arise), leading to the renormalized
quantities ⟨Φ^2⟩ _ren^U
and ⟨ T_yy⟩ _ren^U
in the cases under consideration (detailed in Sec. <ref>).
It is worth recalling the following fact: while outside the BH the
mode contribution per l and ω decays exponentially in l for fixed ω (owing
to the potential barrier, as already mentioned in Sec. <ref>),
in the BH interior the sum over l does not converge. (Indeed, for
T_yy the sequence in l diverges like l(l+1),
while for Φ^2 it approaches a non-vanishing constant – implying
a divergent l-sum in both cases.) We named this large-l divergent
piece the ID [i.e. E_ω l^div of the form given
in Eq. (<ref>)]. After subtracting the ID, the resulting
sequence decays at large l like 1/l(l+1), yielding
a convergent l-sum.
Still, this slow decay rate poses a challenge to the feasibility of numerical implementation. In order to achieve sufficient accuracy, which is necessary given that many orders of magnitude of precision are still to be lost in the subsequent steps of the regularization procedure, a vast l-range would be required. For instance, it would need to be on the order of ∼10^4, whereas outside the BH, an order of ∼10^1 or at most ∼10^2 would suffice.
We overcome this difficulty by performing a fit on the sequence of
partial sums (to be further described in what follows) with ∼100
(or even more) orders in 1/l (or in 1/(l+1)). This,
in turn, requires taking an l range reaching l=300. In addition,
performing such a high-order fit also requires computing the individual
mode contributions E_ω l, and in turn their basic
constituents ρ^up_ω l, ψ_ω l^int,
ψ_ω l,r^int and S_ω l(θ=0), to a typical precision of hundreds of figures (at least ∼250).
In what follows, we provide a detailed description of our computation of these basic components with the required high precision. We then discuss the implementation of the summation over l and the subsequent numerical integration over ω.
§.§ Computation of ρ^up_ω l, ψ_ω l^int, ψ_ω l,r^int and S_ω l(θ=0)
In order to calculate the outside reflection coefficient ρ^up_ω l (defined via (<ref>) and (<ref>)) and the interior radial solution ψ_ω l^int (defined via (<ref>) and (<ref>)), as well as its derivative ψ_ω l,r^int, we used various methods, which served as a check of our results.
We note that the radial ODE (<ref>) satisfied by ψ_ω l^int has two regular singular points (at r=r_±) and one irregular singular point (at r=∞), and so it is a confluent Heun equation.
This greatly facilitates the computation of the solution between the IH and EH, corresponding to the interval between the two regular singular points.
The radial ODE requires the calculation of the angular eigenvalue λ_l(aω), which we did via the in-built “SpheroidalEigenvalue" function in the software Mathematica. We next briefly describe the various methods we employed to calculate ρ^up_ω l, ψ_ω l^int and ψ_ω l,r^int.
In the first and main method, we expressed the solution
ψ_ω lm^int
in terms of the
in-built “HeunC" function in Mathematica:
ψ_ω lm^int(r) =√(r^2 + a^2/r_+^2 + a^2)e^- i (1 + κ) (ε - m q)/2e^i ε x κ (1 -
x)^i (ε - τ)/2 x ^- i (ε + τ)/2HeunC(q_H, α_H, γ_H, δ_H, ε_H,
x),
where ε=2Mω, τ = (ε - m q)/κ, x=(r_+ - r)/(2 M κ), κ=√(1-q^2), q=a/M and
q_H =λ_lm +τ ^2-ε ^2+i (τ +κε ),
α_H =2 κε (τ -ε +i),
γ_H =-i τ -i ε +1,
δ_H =-i τ +i ε +1,
ε_H =2 i κε.
For obtaining ψ_ω l^int, merely set m=0 in Eq. (<ref>).
The second method that we used is the so-called MST method (after the original authors, Mano, Suzuki and Takasugi; see <cit.> for a review) which was derived for the exterior of the BH and which we extended to the interior. It essentially consists of writing the solutions to the radial ODE (<ref>) as infinite series of hypergeometric functions and finding the coefficients in the series as solutions to three-term recurrence relations. By matching a series representation which converges everywhere outside the EH except at r=∞ with another one which converges everywhere except at r=r_+, a series representation for the outside scattering coefficients, which includes ρ^up_ω l, is obtained. We used such MST series representation to calculate ρ^up_ω l.
We then used our extension inside the EH of the MST series representation for ψ_ω l^int in order to obtain this solution and its r-derivative inside the EH.
As a third and last method, we also used simple power series expansions for ψ_ω l^int about r=r_+ and about r=r_-. In the former case, the boundary condition (<ref>) for ψ_ω l^int at r=r_+ is readily incorporated. In the latter case, in order to incorporate the boundary condition for ψ_ω l^int at r=r_-, we calculated the scattering coefficients of ψ_ω l^int at r=r_- (namely, A_ω lm and B_ω lm in Eq. (3.23) of Ref. <cit.>) by using the above-mentioned extension of the MST method inside the EH.
Clearly, the expansion about r=r_+ thus requires much less computational work than the expansion about r=r_- but, on the other hand, it converges a lot more slowly near r=r_-. We note that the coefficients in both expansions about r=r_± satisfy four-term recurrence relations.
Finally, the spheroidal eigenfunctions S_ω l(θ=0) [defined via Eqs. (<ref>), (<ref>) and (<ref>)] were computed using Mathematica's built-in function SpheroidalPS. In particular, taking normalization into account, our S_ω l(θ=0) corresponds to √((2l+1)/2) SpheroidalPS [l,0,i a ω,1]. The numerical evaluation of the spheroidal eigenfunctions was done to 300 significant digits.
§.§ Numerical methods
As described above, we
have computed ρ_ω l^up, ψ_ω l^int(r), ψ_ω l,r^int(r) and
S_ω l(θ=0) for the chosen a/M (and r/M) values,
typically reaching up to l=300 and ω=10/M with an increment of
dω=1/200M.[Closer to the horizons (i.e., in the regions corresponding to about
δ r_±∼10^-3-10^-5), the usable range in ω (with
the same, fixed l range) drops to ω≲2/M.]
The raw material [ρ_ω l^up, ψ_ω l^int(r),
ψ_ω l,r^int(r) and S_ω l(θ=0)]
is then used to construct the bare mode contribution [given in Sec.
<ref> or Eq. (<ref>)]
per ω,l in the mentioned ranges. This bare mode contribution
is then treated as described in Sec. <ref>,
performing the regularization steps while summing and integrating,
to finally reach the desired renormalized quantity. We now provide
a more detailed account of the numerical implementation of this regularization
procedure.
First, in principle, the ID needs to be subtracted from the bare mode
contribution, to be followed by a summation over
l (per ω) to produce the basic integrand function [defined
in Eq. (<ref>)]. For Φ^2 this ID subtraction
stage (demonstrated in Fig. <ref>) merely includes the removal of the analytically-known O(l^0)
leading order c_0 [see Eq. (<ref>)], leaving
a remainder that decays like 1/l(l+1), which is to be
summed over. As already mentioned above, we overcome this slow decay
by fitting the sequence of partial sums (namely ∑_l'=0^l(E_ω l'-E_ω l'^div))
with ∼100 (or more) orders in 1/l (starting with order l^0).
The fitted coefficient of this l^0 term then constitutes the
desired l-sum. This allows the basic integrand E^basic(ω)
to be extracted to a precision of typically ≳50 digits, depending
on r (the l-sum error is in fact evaluated by comparing fits
with different numbers of orders in 1/l).
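To make this fitting step concrete, the following arbitrary-precision sketch (using mpmath; the toy partial-sum sequence, the fit order and the l-window are illustrative only) extracts the l→∞ limit of a sequence of partial sums by solving exactly against inverse powers of (l+1):

```python
import mpmath as mp

mp.mp.dps = 300   # high working precision, as required for a high-order fit

def lsum_from_partial_sums(partial, K, l_offset):
    """Fit partial sums S_l with sum_{k=0}^{K} b_k/(l+1)^k over K+1 sample points
    and return b_0, the estimated l -> infinity limit (a sketch of the fitting
    step described in the text, not the full production procedure)."""
    ls = list(range(l_offset, l_offset + K + 1))
    A = mp.matrix(K + 1, K + 1)
    rhs = mp.matrix(K + 1, 1)
    for i, l in enumerate(ls):
        for k in range(K + 1):
            A[i, k] = mp.mpf(1) / mp.mpf(l + 1)**k
        rhs[i] = partial[l]
    b = mp.lu_solve(A, rhs)
    return b[0]

# toy check: S_l = 1 - 1/(l+2) converges to 1 like 1/l, mimicking the actual behavior
partial = {l: 1 - mp.mpf(1)/(l + 2) for l in range(200, 240)}
print(mp.nstr(lsum_from_partial_sums(partial, K=30, l_offset=200), 20))  # ~1 to many digits
```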
Let us now turn to the fluxes. First we note that we only directly
calculate T_uu, whereas T_vv is calculated
from it via the conserved quantity given in Eq. (<ref>)
with θ=0, noting Eq. (<ref>),
and whose value we obtain at r=r_- using our state subtraction
results of Ref. <cit.>. Specifically, we calculate T_vv
as
⟨ T_vv(r,0)⟩ _ren=⟨ T_uu(r,0)⟩ _ren+(r_-^2+a^2)/(r^2+a^2)(⟨ T_vv(r_-,0)⟩ _ren-⟨ T_uu(r_-,0)⟩ _ren) .
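A direct transcription of this relation (with the IH values supplied externally from the state-subtraction computation) might read:

```python
def tvv_from_tuu(r, tuu_r, a, M, tuu_IH, tvv_IH):
    """<T_vv>_ren^U(r, theta=0) from <T_uu>_ren^U(r, theta=0) via the conserved
    combination (r^2+a^2)(T_uu - T_vv); tuu_IH and tvv_IH are the inner-horizon
    values obtained by state subtraction (hbar = 1). A sketch of the relation above."""
    r_minus = M - (M**2 - a**2) ** 0.5
    return tuu_r + (r_minus**2 + a**2) / (r**2 + a**2) * (tvv_IH - tuu_IH)
```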
Hence, we now focus on the direct computation of ⟨ T_uu⟩ _ren^U.
For T_uu, the large-l divergent piece T_uu(ω l)^div
has an analytically-known leading-order piece [c_2l(l+1),
with c_2 given in Eq. (<ref>)] and an additional
O(l^0) piece [namely c_01(ω)
defined in Eq. (<ref>) ]. This quantity c_01(ω)
is, in principle, not known analytically, but can be extracted numerically. Thus,
the straightforward procedure would be to compute the l-sum in
two stages: first, numerically extracting c_01(ω),
and using it to obtain the regularized sequence T_uu(ω l)-T_uu(ω l)^div;
and then summing this regularized sequence over l to obtain T_uu^basic(ω). This procedure is demonstrated in Fig. <ref>. However, we
find it simpler and more efficient (and significantly more accurate)
to perform the desired (regularized) mode sum in a single step, in
a manner that does not require the knowledge of c_01(ω).
To this end, we construct the sequence of partial sums ∑_l'=0^l[T_uu(ω l')-c_2l'(l'+1)]
and fit it with ∼100 (or more) orders in 1/(l+1),
starting with order (l+1)^1. The coefficient of (l+1)^0
constitutes the desired regularized mode sum, namely the quantity
T_uu^basic(ω), which is extracted
to a precision of ≳50 digits (depending on r).
As a side product, this fit also yields the unknown parameter c_01(ω)
(as the coefficient of the leading order term (l+1)^1),
to a typical precision of ∼10^2 figures (depending on r).
This allows us to numerically explore its dependence on ω,
and hence to verify its highly-accurate parabolic form (which is crucial
for the justification of the ID subtraction, see Sec.
<ref>), as demonstrated in Fig. <ref>.
In fact, we routinely repeat this parabolicity test in all cases computed.
(An alternative test is taking the third-order numerical derivative
of c_01(ω) with respect to ω, observing
its extremely small deviation from zero.)
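A minimal version of this finite-difference test, on a uniform ω grid and with synthetic input purely for illustration, is:

```python
import numpy as np

# Parabolicity test: third finite differences of c_01(omega) on a uniform grid
# vanish for an exact parabola, so their size (relative to c_01) measures the
# deviation from the parabolic form.
def parabolicity_residual(c01):
    return np.max(np.abs(np.diff(c01, n=3))) / np.max(np.abs(c01))

omega = np.linspace(0.005, 10.0, 2000)
c01_toy = 1.0 + 0.5 * omega**2         # an exact parabola (illustrative input)
print(parabolicity_residual(c01_toy))  # ~ machine precision
```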
In the next step, portrayed in Fig. <ref> for Φ^2 and in Fig. <ref> for T_uu, we subtract the ω-dependent singular piece
(the PMR counterterm E^sing(ω) for Φ^2
or T_yy^sing(ω) for T_yy,
as given in Sec. <ref>) from the basic integrand.
Integrating the resulting integrand also proved somewhat challenging due to its slow convergence, necessitating fitting to a series of inverse powers 1/ω^k.
We carry out the integration over ω, and finally subtract
the finite counterterm as described in Eq. (<ref>).
The integration range is as described above (ω∈[0,10/M]
for a typical r value, decreasing towards the horizon vicinities
down to around ω∈[0,2/M]).
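Schematically, the combination of a trapezoidal integration over the computed range with an analytically integrated inverse-power tail can be sketched as follows (the fit window, the set of powers and the toy integrand are all illustrative choices):

```python
import numpy as np

def integrate_with_tail(omega, f, k_min=2, k_max=6, n_fit=1000):
    """Trapezoidal integral over the tabulated range plus the analytic integral of
    a tail fitted to inverse powers 1/omega^k over the last n_fit points (a sketch
    of the procedure described in the text, not our production code)."""
    body = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(omega))
    w, y = omega[-n_fit:], f[-n_fit:]
    A = np.vstack([w**(-k) for k in range(k_min, k_max + 1)]).T
    d, *_ = np.linalg.lstsq(A, y, rcond=None)
    wmax = omega[-1]
    # integral_{wmax}^{infinity} omega^{-k} d omega = wmax^{1-k}/(k-1)
    tail = sum(dk * wmax**(1 - k) / (k - 1) for dk, k in zip(d, range(k_min, k_max + 1)))
    return body + tail

# toy check: f = 1/(1+omega^2)^2 decays like 1/omega^4; the full integral is pi/4
omega = np.linspace(0.0, 10.0, 4001)
f = 1.0 / (1.0 + omega**2)**2
print(integrate_with_tail(omega, f), np.pi / 4)
```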
The results of this computation for the various cases are presented
in Sec. <ref> and in the figures within. The
absolute error differs from figure to figure, depending on the presented
quantity and range, but overall it is generally less than 1% of the vertical scale range spanned in each
figure. (We do not mention a relative error in the quantities themselves,
since some cross the horizontal axis in the presented ranges). In
particular, we should emphasize that the error in all points shown
in these figures is too small to be visually discernible on the displayed
figures.
§ THE ANALYTIC EXTENSION VARIANT
In this appendix, we develop a variant of the PMR t-splitting method in the
BH interior which differs from the one presented in the rest of the
manuscript. We refer to this variant as the analytic extension
method. This approach is subsequently utilized for comparison and
cross-verification of the results provided in the main text. This
variant of t-splitting is based on the fundamental idea that the
physical quantities we aim to compute – which are regular functions
of r, such as ⟨Φ^2⟩ _ren^U
and ⟨ T_yy⟩ _ren^U
– must exhibit analytic behavior at the EH (see Sec. <ref>) [As far as we know, this analyticity has not been rigorously proven for the Unruh state in Kerr. However, this assumption (commonly accepted in the study of BHs) forms the basis of our approach, and it is crucial for justifying the method adopted in this appendix. In Sec. <ref> we give some evidence towards this assumption.].
This principle suggests that the expressions for the HTPF mode contributions
can be extended past the EH into the BH interior region (a similar
idea was considered in related works, see Refs. <cit.>).
We proceed by deriving explicit expressions for the mode contributions
inside the BH. This involves analytically extending the expressions
for the external HTPF mode contributions into the BH interior. The
resultant mode contributions display oscillatory behavior at large
values of angular momentum (l) and frequency (ω). However,
this issue can be addressed through an “oscillation
cancellation” procedure, to be described. Moreover,
large-ω oscillations are now accompanied by exponential growth,
but this too can be mitigated using our “oscillation
cancellation” technique. Nevertheless, this behavior
introduces significant numerical challenges, especially when attempting
to decrease r, not to mention approaching the IH. Therefore, we
have implemented this method only at two relatively manageable r
values, namely r=1.4M and r=1.5M (at the pole) inside a Kerr
BH of spin a/M=0.8.
Comparing this analytic extension variant of t-splitting
with the standard variant described in the main text reveals
a significant fact: the difficulty encountered in one method is absent
in the other. Specifically, the intermediate divergence in l present
in the standard variant does not exist in the analytic extension variant.
Moreover, the oscillatory behavior (accompanied by exponential growth)
in ω observed in the analytic extension variant is absent
in the standard variant. In other words, the ID subtraction step is
not required in the application of the analytic extension variant,
whereas the oscillation cancellation step is not needed in the application
of the standard variant. Consequently, comparing the results obtained
using both methods serves as a robust tool for verifying the validity
of each, with their unique and non-trivial intermediate steps.
We shall now turn to develop the method, ending with some numerical
results. The method was initially devised and tested in the RN case
before being adapted to Kerr (for general θ). While
this appendix focuses on the pole of a Kerr BH, in accordance with
the rest of the manuscript, it is worth noting that the method can be
readily adapted to these other cases; indeed, it was successfully applied to computations in the RN case as well.
§.§ Analytic extension of the HTPF
We start with the HTPF at the exterior of a Kerr BH, given in Eq. (3.22) in Ref. <cit.> (and in our notation, in
Eq. (5.4) in Ref. <cit.>) in terms of the exterior Eddington
modes f_ω lm^in and f_ω lm^up
of Eqs. (<ref>) and (<ref>):
G^(1)_U(x,x')=ħ∫_0^∞dω∑_l=0^∞∑_m=-l^l{ f_ω lm^in(x),f_ω lm^in*(x')} +ħ∫_0^∞dω_+∑_l=0^∞∑_m=-l^lcoth(πω_+/κ_+){ f_ω lm^up(x),f_ω lm^up*(x')} ,
where curly brackets denote symmetrization with respect to the spacetime point, that is, for two functions ζ and ξ, we define {ζ(x),ξ(x')}≡ζ(x)ξ(x')+ξ(x)ζ(x'). Concentrating on the polar axis (θ=0, hence only m=0 survives),
this reduces to
G^(1)_U(x,x')=ħ∫_0^∞dω∑_l=0^∞[{ f_ω l^in(x),f_ω l^in*(x')} +coth(πω/κ_+){ f_ω l^up(x),f_ω l^up*(x')}] , (r>r_+,θ=0)
where, as stated earlier, eliminating the m index in our notation
is equivalent to taking m=0 (for quantities defined with ω lm
indices).
To extend the “in” and “up” mode contributions analytically beyond
the EH and into the BH, we will examine the contribution of each mode
near the EH (where u diverges) at some constant v. Starting
with f_ω l^in and considering the asymptotic form
of ψ_ω l^in as r_*→-∞ [by setting
m=0 in Eq. (<ref>)], we find that it assumes the
following simple asymptotic form at the EH vicinity:
f_ω l^in≃S_ω l(0)/√(8π^2|ω|(r_+^2+a^2))τ_ω l^ine^-iω v , (EH vicinity) .
This form corresponds to a purely ingoing wave which may be extended
smoothly beyond the EH. However, taking the r_*→-∞ asymptotic
behavior of ψ_ω l^up [by setting m=0 in
Eq. (<ref>)] in the general form of f_ω l^up,
we find that f_ω l^up has two distinct contributions at
the EH: a smooth ingoing ∝ e^-iω v term, and an outgoing
∝ e^-iω u_ext term which is infinitely oscillatory
at the EH, as u_ext→∞ there. (This ∝ e^-iω u_ext
term exists in the entire vicinity of r=r_+, which includes the
vicinity of the EH.) The asymptotic form of the “up” modes at the EH vicinity
is then [
One may notice that the form (<ref>) presented here for the asymptotic behavior of f_ω l^up in the EH limit differs from Eq. (3.15) in Ref. <cit.> by an additional term proportional to e^-iω u_ext. As elaborated in Ref. <cit.>, Eq. (3.15) therein aligns more closely with the intuitive interpretation of a typical wavepacket originating from H_past, then being reflected to the EH and transmitted to FNI (as illustrated in Fig. 2 therein).
However, note that while Eq. (3.15) in Ref. <cit.> conforms with the more intuitive picture commonly accepted in the literature, the form (<ref>) presented here is the exact form (as obtained by simply plugging Eq. (<ref>) into Eq. (<ref>), taking m=0). It is this form that facilitates analytic continuation, as described in the current appendix.
]
f_ω l^up≃S_ω l(0)/√(8π^2|ω|(r_+^2+a^2))(ρ_ω l^upe^-iω v+e^-iω u_ext) , (EH vicinity) .
On the other side of the EH, within the BH interior, there are the
“right” and “left” Eddington modes of Eq. (<ref>)
(in which we set m=0). Considering the asymptotic behavior of ψ_ω l^int
given in Eq. (<ref>), one obtains the asymptotic behavior
of f_ω l^R and f_ω l^L at the EH vicinity: [The same remark made in footnote <ref> regarding the asymptotic behavior of f^up_ω l at the EH is valid here for f^L_ω l, comparing the above Eq. (<ref>) with Eq. (3.19) of Ref. <cit.>. As manifested in the latter (see also Fig. 2 therein), the f^L_ω l Eddington modes are commonly thought of as arising from H_L and having zero initial data at the EH. However, the form given here constitutes the exact asymptotic behavior at r→ r_+,
and hence applies to H_L as well as to the EH. While indeed the f^L_ω l modes are more naturally tied to H_L and may be intuitively thought of as arising from H_L (where u_int varies), in the current context of analytical extension we restrict our attention to the asymptotic behavior at the EH vicinity.]
f_ω l^R≃S_ω l(0)/√(8π^2|ω|(r_+^2+a^2))e^-iω v , (EH vicinity)
f_ω l^L≃S_ω l(0)/√(8π^2|ω|(r_+^2+a^2))e^iω u_int , (EH vicinity)
[where u_int
is now the internal coordinate given in Eq. (<ref>)].
We now wish to match the exterior Eddington modes with the interior
ones, analytically extending beyond the EH. We denote the extension
of exterior quantities to interior quantities by ↦.
Comparing Eq. (<ref>) with Eqs. (<ref>) and (<ref>),
it is clear that the “in” modes extend through the EH in a regular
manner, with f_ω l^in matched to τ_ω l^inf_ω l^R:
f_ω l^in↦τ_ω l^inf_ω l^R .
The extension of the “up” modes is trickier. The ∝ρ_ω l^upe^-iω v
term in Eq. (<ref>) passes through the EH regularly (as
v remains regular there), and matches to ρ_ω l^upf_ω l^R
in the BH interior. However, as u_ext→∞ at the EH, the term ∝ e^-iω u_ext does not pass regularly
to the BH interior. To proceed, we write u_ext=v-2r_*
[see Eq. (<ref>)] with r_* as given in Eq. (<ref>).
Approaching r_+ from the BH exterior, r_* has the form
r_*≃ r_++1/2κ_+log(r-r_+/r_+-r_-) , r→ r_+^(+).
where r→ r_+^(+) denotes the limit of r approaching r_+ from above.
Hence, at the EH vicinity, we may write
e^-iω u_ext≃ A· z^iα ,
where
A≡ e^-iω(v-2r_+) , z≡r-r_+/r_+-r_- , α≡ω/κ_+ .
The z^iα term is singular as z vanishes at r=r_+.
We shall now analytically extend it through the EH.
At z>0 we have the original function g(z)≡exp[iα ln(z)],
and we want to analytically extend it to the negative real axis of
z. We wish to express the resultant analytically-extended function
in the form q exp[iα ln(-z)], where
q is a pre-factor to be determined. To this end, we express z
as z=|z| e^iϕ where ϕ≡arg(z). Then ln(z)=ln|z|+iϕ,
and hence the original function becomes
g(z)=e^iα (ln|z|+iϕ)=e^iα ln|z|e^-αϕ .
Evaluating this function at z=-|z|, which corresponds to taking
ϕ=π [Here we analytically extend along a curve in the complex plane bypassing the r=r_+ singularity from above. A second option would be to bypass the r=r_+ singularity from below. (Note that since the quantity of interest, the HTPF, is real and analytic across the EH, one may analytically extend along any curve of choice in the complex plane, and the result should be independent of the curve chosen.) This second option would result in replacing e^-απ in what follows by e^απ. However, the real part of the final mode-sum expression given in Eq. (<ref>) is in fact invariant under this choice of curve. (The imaginary part changes its sign under this change of curve, but this does not concern us, as discussed in footnote <ref>).], we obtain
g(z)=e^iα ln(-z)e^-απ,
z<0,
and therefore the sought-after “de-amplification factor” q is
q=e^-απ
and the analytic extension of e^-iω u_ext is
e^-iω u_ext↦ Ae^-απ(-z)^iα .
Approaching the EH from the BH interior, we have
r_*≃ r_++1/2κ_+log(r_+-r/r_+-r_-) , r→ r_+^(-)
where r→ r_+^(-) denotes the limit of r approaching r_+ from below.
Hence we have here
e^iω u_int=A(-z)^iα
with A, z and α given in Eq. (<ref>),
and the matching is
e^-iω u_ext↦ e^-απe^iω u_int .
The “up” mode is then analytically extended to the BH interior
as [see Eq. (<ref>)]
f_ω l^up ↦ρ_ω l^upf_ω l^R+e^-πω/κ_+f_ω l^L .
Similarly, one obtains
e^iω u_ext↦ A^*e^απ(-z)^-iα=e^απe^-iω u_int
and
f_ω l^up* ↦ρ_ω l^up*f_ω l^R*+e^πω/κ_+f_ω l^L*.
Now, equipped with the extension of the modes through the EH, we may
return to the exterior expression of the HTPF at the pole [given
in Eq. (<ref>)] and carry it to the BH interior.
Using Eqs. (<ref>), (<ref>) and (<ref>)
and simplifying by means of hyperbolic-function identities, one obtains the interior
HTPF in the analytic extension variant (at the pole), denoted G_ae^U(x,x') (hereafter, a subscript/superscript “ae" denotes the analytic extension):
G_ae^U(x,x') =ħ∫_0^∞dω∑_l=0^∞[coth(πω/κ_+)({ f_ω l^L(x),f_ω l^L*(x')} +|ρ_ω l^up|^2{ f_ω l^R(x),f_ω l^R*(x')})
+2[cosech(πω/κ_+)+sinh(πω/κ_+)]ℜ(ρ_ω l^up{ f_ω l^R(x),f_ω l^L*(x')})+|τ_ω l^in|^2{ f_ω l^R(x),f_ω l^R*(x')}
+2icosh(πω/κ_+)ℑ(ρ_ω l^up{ f_ω l^R(x),f_ω l^L*(x')})] .
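For reference, the hyperbolic-function algebra behind the cross (“R–L”) terms may be sketched as follows. Writing y≡πω/κ_+ and P≡ρ_ω l^up{ f_ω l^R(x),f_ω l^L*(x')}, and noting that ρ_ω l^up*{ f_ω l^L(x),f_ω l^R*(x')}=P^*, the analytically extended “up” contribution yields the cross terms
coth(y)(e^yP+e^-yP^*)=coth(y)[2cosh(y)ℜP+2isinh(y)ℑP]=2[cosech(y)+sinh(y)]ℜP+2icosh(y)ℑP ,
using the identities coth(y)cosh(y)=cosech(y)+sinh(y) and coth(y)sinh(y)=cosh(y).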
As this expression should yield a manifestly real result, the imaginary
part appearing in the third line of Eq. (<ref>) should vanish
(after integration and summation). [Note that the positive sign of the imaginary part appearing in the third line of Eq. (<ref>) was obtained following our choice of analytically extending along a curve bypassing the r=r_+ singularity from above. If one were to bypass the r=r_+ singularity from below, the sign of this imaginary term would be negative. This sign ambiguity does not concern us, given
the expectation for the vanishing of this imaginary piece after summation and integration.]
We may write G_ae^U as
G_ae^U(x,x') =G_stn^U(x,x')+G_dif^U(x,x'),
where G_stn^U(x,x') denotes the interior
HTPF in the standard variant [at the pole, which is the θ=0, m=0 version
of Eqs. (<ref>) and (<ref>)], and G_dif^U(x,x')
is their difference, which reads
G_dif^U(x,x')=2ħ∫_0^∞dω∑_l=0^∞[sinh(πω/κ_+)ℜ(ρ_ω l^up{ f_ω l^R(x),f_ω l^L*(x')})+icosh(πω/κ_+)ℑ(ρ_ω l^up{ f_ω l^R(x),f_ω l^L*(x')})] .
The entire G_dif^U(x,x') quantity is expected to vanish in order for the two variants to yield the same results – see footnote <ref>.
Since we know that the HTPF should be real, from this point on we concentrate only on the real part of G^U_ae (and of G^U_dif).
§.§ Individual mode contributions at coincidence
We now provide computationally-amenable expressions for the mode contributions in the
analytic extension variant, at coincidence (x'→ x), for both
⟨Φ^2⟩ ^U and ⟨ T_yy⟩ ^U. It is in fact more compact to explicitly give the difference in the ω l-mode contributions between the analytic extension and standard variants, derived from (the real part of) G_dif^U(x,x') of Eq. (<ref>). We denote this difference by E_ω l^dif for the field square [given in Eq. (<ref>) below] and by T_yy(ω l)^dif for the fluxes [given in Eq. (<ref>) below].
The summation and integration (with appropriate regularization) of these “dif" quantities are expected
to vanish, in order for the two variants to yield the same result. Since both quantities, E_ω l^dif and T_yy(ω l)^dif, diverge exponentially with ω (due to the sinh(πω/κ_+) function appearing in both expressions), this vanishing a priori seems far from trivial. In the next subsections we explore and numerically show that the difference between the analytic extension variant and standard variant indeed vanishes for both the field square and the fluxes. (In practice, we present the computation of the full analytic extension renormalized quantities and show their agreement with their standard-variant counterparts).
The fact that the difference indeed vanishes then serves as a robust test of
both variants of t-splitting, the standard and the analytic extension
variants, each involving its own unique procedure (as discussed above, as well as summarized in Sec. <ref>). [Here (and in the rest of this appendix) we focus on the quantities of interest, the field square and the fluxes (or their difference), derived from the real part of G_dif^U(x,x') followed by the coincidence limit x'→ x. It is worth noting, however, that the entire G_dif^U(x,x') quantity, prior to taking the coincidence limit, is expected to vanish.
The individual mode contribution to G_dif^U(x,x'), given in Eq. (<ref>),
includes two parts, a real part and an imaginary part, both growing
exponentially with ω as dictated by the hyperbolic functions sinh(πω/κ_+) and cosh(πω/κ_+). We would expect that both real and imaginary parts will vanish separately. However, here we focus on verifying this expectation numerically only for the real part in the coincidence limit.
]
§.§.§ Individual mode contribution to ⟨Φ^2⟩ ^U
Using the explicit forms of the interior Eddington mode functions
of Eq. (<ref>), one obtains the mode contribution to
⟨Φ^2⟩ ^U in the analytic extension,
denoted E_ω l^ae:
E_ω l^ae=E_ω l^stn+E_ω l^dif
where E_ω l^stn is the standard expression used
in this paper, given in Eq. (<ref>), and E_ω l^dif
is the difference given by
E_ω l^dif=ħ[S_ω l(0)]^2/4π^2ω(r^2+a^2)sinh(πω/κ_+)ℜ[ρ_ω l^up(ψ_ω l^int)^2] .
§.§.§ Individual mode contribution to ⟨ T_yy⟩ ^U
Deriving the flux components from the HTPF of the analytical extension
variant given in Eq. (<ref>) is done in the exact same manner
as in Appendix B in Ref. <cit.>. This yields the individual
mode contribution to ⟨ T_yy⟩ ^U
in the analytic extension variant, which we write as
T_yy(ω l)^ae=T_yy(ω l)^stn+T_yy(ω l)^dif,
where T_yy(ω l)^stn is the
standard expression used in the main manuscript [given in Eqs. (<ref>)-(<ref>)], and T_yy(ω l)^dif
is the difference between the two variants, given by
T_yy(ω l)^dif=ħ[S_ω l(0)]^2/16π^2ω(r^2+a^2)sinh(πω/κ_+)ℜ(ρ_ω l^up[ω^2(ψ_ω l^int)^2+(ψ_ω l,r_*^int)^2-2Δ r/(r^2+a^2)^2ψ_ω l^intψ_ω l,r_*^int+Δ^2r^2/(r^2+a^2)^4(ψ_ω l^int)^2]) .
§.§ The regularization procedure in the analytic extension variant
The quantities E_ω l^ae, given in Eqs. (<ref>) and (<ref>),
and T_yy(ω l)^ae, given in
Eqs. (<ref>) and (<ref>), constitute the individual mode
contributions to ⟨Φ^2⟩ ^U and ⟨ T_yy⟩ ^U,
respectively, within the analytic extension variant. However, the mode sums of these quantities are clearly
divergent. These mode sums may be regularized within the analytic extension
variant of t-splitting in several steps, outlined briefly in this
section [it may be compared with the standard PMR t-splitting procedure, outlined in Sec. <ref> and
summarized in Eqs. (<ref>) and (<ref>)].
§.§.§ Regularization of the l sum
As in the standard variant, the first step involves summing over l.
However, whereas the standard variant encounters a diverging sum that
is addressed by the ID subtraction,
in the analytic extension variant the l-sum's failure to converge is due
to growing oscillations.
To obtain these oscillations analytically, we look into the large-l limit of E_ω l^ae and T_yy(ω l)^ae (as we did in Sec. <ref> for the individual mode contributions in the standard variant). While the leading order large-l behavior of E_ω l and T_yy(ω l)
in the standard variant is ∝ l^0 and ∝ l^2, respectively, with r-dependent coefficients found analytically in Eqs. (<ref>) and (<ref>), the analytic extension counterpart has an extra multiplicative factor of ±cosh(πω/κ_+)(-1)^lcos[(l+1/2)s(r)], where s(r) is a function of r to be given and the minus sign goes with the E_ω l^ae case. Explicitly, one finds to leading order in l,
E_ω l^ae≃-1/4π^2√((r-r_-)(r_+-r))cosh(πω/κ_+)(-1)^lcos[(l+1/2)s(r)] , l≫1,
and
T_yy(ω l)^ae≃l^2√((r_+-r)(r-r_-))/16π^2(r^2+a^2)^2cosh(πω/κ_+)(-1)^lcos[(l+1/2)s(r)] , l≫1,
where
s(r)≡2arctan[r-M/√((r_+-r)(r-r_-))] .
Clearly, this introduces oscillatory behavior of the mode contributions, with an amplitude growing as l^2 in the T_yy(ω l)^ae case. This large-l behavior is confirmed numerically for both E_ω l^ae and T_yy(ω l)^ae.
The wavelength of the oscillation at large l, which we hereby denote by λ̃_l(r), can be read from Eqs. (<ref>)-(<ref>). Oscillations with this same wavelength clearly appear also in the sequence of partial sums, whose limit at ∞ is the (generalized) sum we are interested in.
Remarkably, this oscillation may be damped to reveal the sum by the method of oscillation cancellation, which includes operating on the sequence of partial sums with a modified
version (to be adapted to the discrete case) of the self cancellation operator introduced in Ref. <cit.>.
For a function f(x) of a variable x we define the operator
O_Δx[f(x)]≡f(x)+f(x+Δ x)/2 ,
where Δ x is some chosen increment. The limit x→∞ of O_Δx[f(x)] coincides with the x→∞ limit of f(x), if the latter exists. If it does not exist, O_Δx[f(x)] may be used to define a generalized limit of the original function.
For a function exhibiting oscillatory behavior of wavelength λ̃ at large x (possibly times a non-exponential function of x), an application of this operator acts to damp the oscillations while leaving the non-oscillatory content of the function at x→∞ unaffected, hence producing a generalized limit at infinity. In particular, if x is a continuous variable, it is clearly most beneficial to take Δ x to be λ̃/2. Then, one application of O_λ̃/2 suffices to “kill" the oscillation completely in the case of constant amplitude, or more generally, reduce the amplitude to its x derivative if it is a function of x. Applying this operator (repeatedly, in the case of non-constant amplitude) on an accumulation function produces the generalized infinite integral. Similarly, in the discrete case (which is what we have here, as l is a discrete variable), applying this operator on the sequence of partial sums may yield the generalized infinite sum. However, a slight difficulty arises in this case, since half the wavelength is not a whole number and hence can not be taken as the increment for averaging. In that case, we may take Δ x to be the closest integer to λ̃/2 (given that λ̃ is well-enough covered by the discrete set of points). This may affect the convergence rate, perhaps slightly decreasing the efficiency of the damping (i.e. increasing the number of required repetitions). In the cases we computed, in which the wavelength in l is sufficiently long to be well-covered, the effect of l being discrete turned out to be quite negligible.
While the oscillation cancellation described above is indeed very effective in damping the oscillation and revealing the generalized sum, it turns out that fitting the partial sums (at large l) as a power series in 1/l multiplied by a superposition of cos[2π l/λ̃_l] and sin[2π l/λ̃_l] – also globally multiplied by l^2 in the case of fluxes – is significantly more numerically efficient, and this is the method we use in practice (typically with ∼10^2 orders for each of the cos and sin terms).
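As a toy illustration of this discrete oscillation cancellation (with a hypothetical value of s(r) and a synthetic sequence standing in for the partial sums), repeated averaging over roughly half an oscillation wavelength recovers the generalized limit:

import numpy as np

# Synthetic "partial sum" sequence: a target limit plus an oscillation
# of the form (-1)^l cos[(l+1/2)s] with slowly growing amplitude
s = 3.0 * np.pi / 4.0          # hypothetical value of s(r); wavelength in l is 8
l = np.arange(4000)
target = 0.42
seq = target + (1.0 + 0.001 * l) * (-1.0)**l * np.cos((l + 0.5) * s)

# effective oscillation frequency in l is |pi - s|  ->  half a wavelength
lam = 2.0 * np.pi / abs(np.pi - s)
shift = max(1, int(round(lam / 2.0)))

def damp_once(f, d):
    # discrete analogue of O_{Delta x}:  [f(l) + f(l+d)] / 2
    return 0.5 * (f[:-d] + f[d:])

g = seq.copy()
for _ in range(4):             # repeat, since the amplitude grows with l
    g = damp_once(g, shift)

print("recovered generalized limit:", g[-1], "  (target:", target, ")")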
Another remark concerns the relationship between the oscillatory behavior one finds in the BH interior and what occurs in the BH exterior, where the mode contributions decay exponentially in l at fixed ω (due to the potential barrier). The mode contributions we consider here represent the analytic extension of those in the BH exterior to the BH interior. As it appears, the (negative) real exponent of l in the BH exterior is analytically extended to the BH interior as a purely imaginary exponent, thereby converting the exponential decay to oscillations in l.
§.§.§ Regularization of the ω integral
After performing the sum over l per ω via the oscillation cancellation procedure described above (or alternatively via a fit, as described), one may construct the basic integrand function in ω, denoted E^ae(ω) [the analytic-extension analog of Eq. (<ref>)], also denoted by T_yy^ae(ω) for the fluxes. (Dependence on the spacetime point is implied in this notation, and sometimes added explicitly to the parentheses.)
As in the standard variant, the regularization of this analytic-extension basic integrand includes subtracting the PMR counterterms (see Sec. <ref>). However, while in the standard
variant this leaves a converging integrand, here we again remain with
oscillations which interfere with the convergence of the ω-integral (at large ω). These oscillations differ from the single oscillation encountered in the l-sum by introducing two major complications: (i) the oscillatory
behavior is composed of an entire spectrum (which is reminiscent of the spectrum of oscillations one encounters outside the BH, see Ref. <cit.>), and (ii)
most crucially, these oscillations are accompanied by exponential growth! (This feature has no exterior counterpart.)
Hence, while the oscillatory behavior in l was damped via a simple oscillation cancellation procedure (merely averaging points roughly half-a-wavelength away), the large-ω behavior of the basic integrand requires a modified oscillation cancellation procedure – in particular, one that accounts for the exponential growth (this procedure will be described below).
The oscillations we find, which we number by an index i, are of the general form
∝ e^η_iωe^iωΦ_i ,
where η_i>0 and Φ_i∈ℝ.
η_i denotes the exponential growth parameter. We shall refer to Φ_i as the “ω-frequency", and denote its corresponding “ω-wavelength" by λ̃_i=2π/Φ_i.
We order the oscillations by their ω-frequency – from the lowest ω-frequency (longer ω-wavelength) towards higher ω-frequencies (shorter ω-wavelengths).
Notably, these oscillations (including their parameters η_i and Φ_i which are discussed below) are shared by both the field square and the fluxes. The full leading order large-ω asymptotic behavior, however, is numerically found to include multiplication of the above oscillatory (and exponentially-growing) form by ω^1/2 for the field square and ω^5/2 for the fluxes.
Generally speaking, the spectrum of oscillations in ω encountered in the BH interior is induced (through the concept of analytic continuation) by the oscillations in ω existing outside the BH, which are in turn related to a family of non-radial null geodesics connecting the points x and x' associated with the separation in the t direction.
In particular, each such connecting null geodesic determines a certain Δ t≡ t’-t, which in turn is the ω-frequency (see Ref. <cit.> for a detailed treatment of this issue in the Schwarzschild case, but this behavior also carries over to any stationary BH, and in particular to the Kerr case under consideration – see Ref. <cit.>).
Notably, these (real) ω-frequencies, comprising the spectrum of oscillations outside the BH, are r dependent.
By the very nature of the analytic extension method, the (r-dependent) ω-frequencies encountered inside the BH are expected to be related to their external counterparts by analytic continuation from r>r_+ to r<r_+.
Exploring this process of analytic continuation, one finds that all (i≥ 1) ω-frequencies – which are real at r>r_+ – acquire a universal imaginary part -π/κ_+.
In addition, these ω-frequencies at r<r_+ also have a real part, which –- like for their r>r_+ counterparts –- does depend on i and r. As in the BH exterior, this real part of the ω-frequencies may be obtained by numerically analysing certain connecting null geodesics. However, this analysis is beyond the scope of this paper.
The aforementioned imaginary part of the ω-frequencies inside the BH gives rise to a universal (i.e., at any radius r∈ (r_-,r_+)) exponential growth parameter:
η_i =π/κ_+ , for every i≥1,
whereas the real part of the ω-frequencies gives rise to the ω-frequencies Φ_i appearing in Eq. (<ref>). (As an illustration, for a/M=0.8 the first three Φ_i values are 36.8 M, 68.0 M and 99.1 M.) Both these aforementioned parameters, η_i and Φ_i for the i≥1 spectrum of oscillations, match what we find numerically.
In addition to this i≥1 spectrum of oscillations, we also find another (single) oscillation with a different exponential factor and a significantly longer ω-wavelength, hence we denote it by i=0.
The origin of this longer oscillation is not entirely clear to us.
Nevertheless, a simple analysis, based on a somewhat speculative argument,
suggests that the parameters characterizing the oscillation are given by
η_0 =π/κ_--[2a-1/κ_+arctan(a/r_+)+1/κ_-arctan(a/r_-)] ,
Φ_0(r)= [1/2κ_+ln(a^2+r_+^2)-1/2κ_-ln(a^2+r_-^2)-4Mln(r_+-r_-)]-2r_*(r) .
where r_* is as given in Eq. (<ref>).
These expressions for η_0 and Φ_0 seem to match the oscillation we find numerically.
The oscillation cancellation procedure, which accommodates an exponentially growing amplitude, consists of taking averages with suitable weights.
In our case, the suitable self cancellation
operator for a function f(x) that behaves like ∝ e^η xe^(2π x)i/λ̃ is
O_Δ x^exp[f(x)]≡f(x)+exp(-ηΔ x) f(x+Δ x)/1+exp(-ηΔ x),
where Δ x is a chosen increment.
We typically take this operator with Δ x=λ̃/2. The weights were chosen such that with that Δ x, a pure exponentially-growing oscillation e^η xe^(2π x)i/λ̃ is fully cancelled in one application of the operator. If Δ x is not exactly (i.e. slightly deviating from) λ̃/2, the operator still acts to damp the oscillation and yields the same result (even if less efficiently, in the sense that more applications of the operator are needed, hence also a larger range of modes as each repetition shortens the original list by Δ x). Also, note that in our case, the function is not purely of the form e^η xe^(2π x)i/λ̃ but multiplied by certain powers of the variable x. In this case, as discussed for the self cancellation operator of Eq. (<ref>), applying the operator O_λ̃/2^exp does not fully cancel the growing oscillation (but still acts to significantly damp it at each application).
The justification for using this operator is similar to that of the standard oscillation cancellation; see the very brief discussion around Eq. (<ref>). That is, the procedure is constructed to be justified for the accumulation function,
building on the concept of a generalized integral, but it can be translated into a similar procedure for the integrand function itself.[The procedure for the integrand function (rather than accumulation function) involves various subtleties. In particular, in order to preserve the value of the integral, we attach a zero vector as long as the original ω vector to the beginning of the original integrand, prior to any application of O_λ̃/2^exp. That is, we first double the original ω range – the first copy is set to be identically zero and the second copy is the original integrand. The oscillation cancellation procedure is then applied on this lengthened integrand function. Notably, although the processed integrand function depends on the specifics of the oscillation cancellation procedure applied (i.e. the number of operations performed with a certain Δ x), the resulting integral remains invariant.]
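The following toy example (with hypothetical values of η and Φ, and a pure exponentially growing oscillation standing in for the integrand) illustrates the repeated application of this weighted operator; note that in double precision the damping bottoms out at the round-off noise of the input, which is one way to see why extremely accurate mode data are required in practice:

import numpy as np

# Toy exponentially growing oscillation  e^{eta*w} cos(Phi*w)
eta, Phi = 2.0, 6.0
w = np.linspace(0.0, 10.0, 20001)
dw = w[1] - w[0]
f = np.exp(eta * w) * np.cos(Phi * w)

lam = 2.0 * np.pi / Phi                  # omega-wavelength of the oscillation
shift = int(round((lam / 2.0) / dw))     # grid points per half wavelength

def damp_exp(f, shift, eta, dw):
    # O^exp:  [f(w) + exp(-eta*dw_shift) f(w+dw_shift)] / [1 + exp(-eta*dw_shift)]
    q = np.exp(-eta * shift * dw)
    return (f[:-shift] + q * f[shift:]) / (1.0 + q)

g = f.copy()
for _ in range(6):                       # a few applications suffice here
    g = damp_exp(g, shift, eta, dw)

print("max |f| before damping:", np.max(np.abs(f)))
print("max |f| after  damping:", np.max(np.abs(g)))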
Comparing Eq. (<ref>) with Eq. (<ref>), one finds that the shorter oscillations i ≥ 1 are accompanied by a stronger exponent compared to that of the long i=0 oscillation.
Hence, we typically first damp the short oscillations, until the long oscillation is exposed and damped as well.
Applying a multiple-damping procedure (by repeated application of O^exp_Δ x with suitable Δ x at each repetition) effectively damps the exponentially growing oscillations.
In the cases we looked at, it has been used to “kill” more than a hundred orders of magnitude in the ω-integrand.
Clearly, achieving such fantastic damping is only possible with extremely accurate
data, which is indeed what we have
(see Sec. <ref>).
Finally, after the oscillations have been sufficiently damped, the “regularized",
non-oscillatory piece of the integrand is exposed and integrated over.
Then, following a subtraction of the finite PMR counterterm
(see Sec. <ref>), the computation of the renormalized quantity of interest is finally complete.
§.§ Summary of regularization using the analytic extension variant
In the standard variant, Eqs. (<ref>) and (<ref>) summarize the regularization procedure and the components involved. Within the analytic extension variant, the procedure that yields P_ren (where P is the quantity of interest, either the field square or the fluxes), may be written schematically as
P_ren(x)=∫_0^∞(O_[ω][E^ae(ω,x)-E^sing(ω,x)])dω-e(x) ,
where O_[ω] is an operator damping the oscillations in ω (described below),
and E^ae(ω,x) is the analytic-extension basic integrand, defined as
E^ae(ω,x)≡∑_l=0^∞(O_[l][E^ae_ω l(x)]) .
Here, O_[l] is an operator damping the oscillations in l (described below), and the functions E_ω l^ae are the bare mode contributions of the analytic extension method, given for the field square in Eqs. (<ref>) and (<ref>), and replaced by T_yy(ω l)^ae of Eqs. (<ref>) and (<ref>) for the fluxes.
The O_[l] operator mentioned in the above recipe consists of a multiple application of the operator O_Δ x of Eq. (<ref>) with the increment Δ x taken to be as close as possible to half the wavelength λ̃_l/2, as described in Sec. <ref>. The O_[ω] operator is more involved, as it damps the entire complex spectrum of oscillations in ω described above.
It hence consists of a multiple application of the operator O^exp_Δ x of Eq. (<ref>) for each i-oscillation, with a suitable increment Δ x=λ̃_i/2 and an exponential growth parameter η_i, as described in Sec. <ref>.
A comparison of Eqs. (<ref>) and (<ref>) with their standard variant counterparts given in Eqs. (<ref>) and (<ref>) reveals the differences, as well as similarities, between the two methods.
Both methods include the subtraction of t-splitting PMR counterterms, E^sing(ω,x) and the finite counterterm e(x), given in Sec. <ref>. In this sense, both are variants of the PMR t-splitting method.
However, the bare mode contribution in the two variants is different from the outset, leading to distinct asymptotic behaviors at large l and ω, which naturally influence the convergence of the mode sums. Therefore, different treatments are required when performing the integration and summation.
While in the standard variant, the l sum is performed via the subtraction of an ID E^div_ω l, in the analytic extension variant this step is replaced by oscillation cancellation, as described in Sec. <ref>. In addition, while in the standard variant the resulting ω-integrand is convergent following the counterterm subtraction, in the analytic extension variant one still has to deal with exponentially growing oscillations, which is done through a specially tailored procedure of oscillation cancellation, as described in Sec. <ref>.
§.§ Numerical results and comparison with the standard variant
The expressions provided in Sec. <ref>,
given in terms of the numerically-computable ingredients ψ_ω l^int,
ρ_ω l^up and S_ω l(0) (see Sec. <ref>),
may be used to compute ⟨Φ^2⟩ _ren^U
and ⟨ T_yy⟩ _ren^U following the regularization steps of the analytic extension variant outlined in Sec. <ref>. Notably, this includes performing
multiple oscillation cancellations which challenge the numerical implementation of the method, requiring very high accuracy and a wide range of modes (since, as mentioned, each application of an oscillation cancellation operator shortens the original range).
The results of these computations may be subsequently compared with
the standard-variant results.
We focused on ⟨ T_uu⟩ _ren^U and ⟨Φ^2⟩ _ren^U at two selected r values, 1.4M and 1.5M, inside a Kerr BH of spin parameter a/M=0.8 (in which r_-=0.4M and r_+=1.6M), at the pole. (It is worth noting that these quantities are shown as a function of r in Figs. <ref>, <ref> in the main text.)
For ⟨ T_uu⟩ _ren^U at r=1.5M, we found a striking agreement with the standard-variant result of -1.019578781× 10^-7ħ M^-4 up to a relative difference of 6×10^-8.
At r=1.4M the accuracy dropped mainly due to longer ω-wavelengths, particularly the i=0 one,
which necessitates more modes for effective damping (while the range of modes prepared was the same for both r values). Then, for ⟨ T_uu⟩ _ren^U at r=1.4M, we found a relative difference of 2× 10^-5 from the standard-variant result of -5.61533442× 10^-7ħ M^-4.
Similar computations of ⟨Φ^2⟩ _ren^U at these r values have also shown nice agreement with the standard-variant t-splitting results.
This level of agreement is very
impressive, given the many orders of magnitude that needed to be damped in order for the physical results
to be extracted from the bare mode contributions.
Fig. <ref> illustrates the dramatic decrease in orders of magnitude achieved by the oscillation cancellation procedure described above, focusing on the integrand T_uu^ae(ω) at r=1.5M. The integrand is depicted before and after the oscillation cancellation procedure, showing a reduction by roughly 115 orders of magnitude.
(Besides this remarkable reduction in the ω-integrand, a decrease of 10-40 orders of magnitude was already achieved at an earlier stage, when constructing this basic integrand per ω from the corresponding series in l via the associated oscillation cancellation procedure.)
This challenge of damping such a large number of orders of magnitude is unique to the analytic extension. Yet, the two methods yield the same results in the cases we examined (up to the small error mentioned above), thereby providing the sought-after verification for the standard method described in the main text.
Following the mentioned drop in accuracy from r=1.5M to r=1.4M, we did not attempt to decrease r further. The technical difficulty increases with further decreasing r, mainly due to the dependence of the oscillation ω-wavelengths on r [along with
the presence of the (r-independent) exponential growth which drastically increases the numerical requirements of the entire procedure]; as the ω-wavelengths become
longer, a larger range of highly accurate modes is required for effective damping
of the oscillation.
We see that the analytic extension variant is far from being an efficient method of computation, due to the presence of multiple oscillations and exponential growth, which place strong demands on the required numerical data. However, we have demonstrated that it is indeed feasible to use this method to reproduce and verify our standard t-splitting results in a few chosen cases.
Fawcett M. S. Fawcett and B. F. Whiting,
Spontaneous symmetry breaking near a black hole,
Workshop on Quantum Structure of Space and Time, London, England, Cambridge University Press (1982).
KerrIH:2022 N. Zilberman, M. Casals, A. Ori and A. C. Ottewill,
Quantum fluxes at the inner horizon of a spinning black hole,
Phys. Rev. Lett. 129, 261102 (2022).
HTPF:2022 N. Zilberman, M. Casals, A. Ori and A. C. Ottewill,
Two-point function of a quantum scalar field in the interior
region of a Kerr black hole, Phys. Rev. D. 106, 125011
(2022).
Cardoso:2017soq V. Cardoso, J. .L. Costa, K. Destounis, P. Hintz and A. Jansen,
Quasinormal modes and Strong Cosmic Censorship, Phys. Rev. Lett. 120, 031103
(2018).
FluxesIH:2020 N. Zilberman, A. Levi and A. Ori, Quantum
Fluxes at the Inner Horizon of a Spherical Charged Black Hole, Phys. Rev. Lett. 124,
171302 (2020).
KSCH24 C. Klein, M. Soltani, M. Casals and S. Hollands, Infinite Quantum Twisting at the Cauchy Horizon of Rotating Black Holes, Phys. Rev. Lett. 132,
121501 (2024).
2021PhRvD.104b4066Z N. Zilberman and A. Ori,
Quantum fluxes at the inner horizon of a near-extremal spherical charged black hole, Phys. Rev. D. 104, 024066
(2021).
2022PhRvD.106d4060C M. Casals and C. Marinho,
Glimpses of violation of strong cosmic censorship in rotating black holes, Phys. Rev. D. 106, 044060
(2022).
Sbierski:2022cnj J. Sbierski,
Instability of the Kerr Cauchy Horizon Under Linearised Gravitational Perturbations, Ann. PDE 9, 7
(2023).
duffy2008renormalized G. Duffy and A. C. Ottewill,
Renormalized stress tensor in Kerr space-time: Numerical results for the Hartle-Hawking vacuum, Phys. Rev. D. 77, 024007 (2008).
Frolov:1985spb V. P. Frolov and A. I. Zelnikov,
Vacuum polarization of the electromagnetic field near a rotating black hole, Phys. Rev. D. 32, 3150 (1985).
2021PhRvL.127w1301K C. Klein, J. Zahn and S. Hollands,
Quantum (Dis)Charge of Black Hole Interiors, Phys. Rev. Lett. 127, 231301 (2021).
2021PhRvD.104b5009K C. Klein and J. Zahn,
Renormalized charged scalar current in the Reissner-Nordström-de Sitter spacetime, Phys. Rev. D. 104, 025009 (2021).
DeWittBook:1965 B. S. DeWitt, Dynamical Theory of
Groups and Fields, Gordon and Breach, New York (1965).
Jacobson:1998 T. Jacobson, Semiclassical Decay of
Near-Extremal Black Holes, Phys. Rev. D. 57, 4890
(1998).
Carter:1966 B. Carter, Complete Analytic Extension
of the Symmetry Axis of Kerr's Solution of Einstein's Equations,
Phys. Rev. 141, 1242 (1966).
GravesBrill:1960 J. C. Graves and D. R. Brill, Oscillatory
Character of Reissner-Nordström Metric for an Ideal Charged Wormhole,
Phys. Rev. 120, 1507 (1960).
Hawking:1974 S. W. Hawking, Black hole explosions?,
Nature 248, 30 (1974).
Hawking:1975 S. W. Hawking, Particle creation by
black holes, Comm. Math. Phys. 43, 199 (1975).
Tipler F. J. Tipler, Singularities in conformally
flat spacetimes, Phys. Lett. A 64, 8 (1977).
Ori:1992 A. Ori, Structure of the singularity inside
a realistic rotating black hole, Phys. Rev. Lett. 68,
2117 (1992).
Ori:1999 A. Ori, Oscillatory null singularity inside
realistic spinning black holes, Phys. Rev. Lett. 83,
5423 (1999).
Winstanley:Young E. Winstanley and P. M. Young, Vacuum polarization for lukewarm black holes, Phys. Rev. D. 77, 024008 (2008).
th:CasalsPhD M. Casals, Electromagnetic Quantum Field Theory on Kerr-Newman Black Holes, Ph. D. thesis, arXiv:0802.1885 [gr-qc] (2004).
Jensen:Ottewill B. P. Jensen and A. Ottewill, Renormalized electromagnetic stress tensor in Schwarzschild spacetime, Phys. Rev. D. 39, 1130 (1989).
PhysRevLett.91.051301 E. D. Carlson, W. H. Hirsch, B. Obermayer, P. R. Anderson and P. B. Groves, Stress-Energy Tensor for a Massless Spin 1/2 Field in Static Black Hole Spacetimes, Phys. Rev. Lett. 91, 051301 (2003).
PhysRevD.97.104060 O. J. C. Dias, F. C. Eperon, H. S. Reall, and J. E. Santos, Strong cosmic censorship in de Sitter space, Phys. Rev. D 97, 104060 (2018).
BradyDrozMorsnik:1998 P. R. Brady, S. Droz, and S. M. Morsnik,
Late-time singularity inside nonspherical black holes, Phys. Rev. D. 58,
084034 (1998).
Dafermos:2017 M. Dafermos and J. Luk, The interior
of dynamical vacuum black holes I: The C^0-stability of the Kerr
Cauchy horizon, [arXiv:1710.01722 [gr-qc]] (2017).
luk2017strong J. Luk and S.-J. Oh Strong cosmic censorship in spherical symmetry for two-ended asymptotically flat initial data I. The interior of the black hole region, [arXiv:1702.05715 [gr-qc]] (2017).
Hiscock:1981 W. A. Hiscock, Evolution of the interior
of a charged black hole, Phys. Lett. A 83, 110 (1981).
PoissonIsrael:1990 E. Poisson and W. Israel, Internal
structure of black holes, Phys. Rev. D. 41, 1796 (1990).
OriMassInflation:1991 A. Ori, Inner structure of
a charged black hole: An exact mass-inflation solution, Phys. Rev. Lett. 67,
789 (1991).
BradySmith:1995 P. R. Brady and J. D. Smith, Black
hole singularities: a numerical approach, Phys. Rev. Lett. 75,
1256 (1995).
Piran S. Hod and T. Piran, Mass Inflation in Dynamical
Gravitational Collapse of a Charged Scalar Field, Phys. Rev. Lett. 81,
1554 (1998).
Burko:1997 L. M. Burko, Structure of the black
hole's Cauchy-horizon singularity, Phys. Rev. Lett. 79,
4958 (1997).
Dafermos:2005 M. Dafermos, The interior of charged
black holes and the problem of uniqueness in general relativity,
Comm. Pure App. Math. 58(4), 445–504 (2005).
BirrellDavies:1978 N. D. Birrell and P. C. W. Davies,
On falling through a black hole into another universe, Nature (London)
272, 35 (1978).
Hiscock:1980 W. A. Hiscock, Quantum-mechanical
instability of the Kerr-Newman black-hole interior, Phys. Rev. D. 21,
2057 (1980).
Christensen:Fulling S. M. Christensen and
S. A. Fulling,
Trace anomalies and the Hawking effect,
Phys. Rev. D. 15, 2088 (1977).
OttewillWinstanley:2000 A. C. Ottewill and E. Winstanley,
Renormalized stress tensor in Kerr space-time: General results,
Phys. Rev. D. 62, 084018 (2000).
CDNOW M. Casals, S. R. Dolan, B. C. Nolan, A. C. Ottewill and E. Winstanley,
Quantization of fermions on Kerr space-time,
Phys. Rev. D. 87, 064027 (2013).
Casals:Ottewill:2005 M. Casals and A. C. Ottewill,
Canonical quantization of the electromagnetic field on the
Kerr background,
Phys. Rev. D. 71, 124016 (2005).
Boulware D. G. Boulware,
Spin-1/2 quantum field theory in Schwarzschild space,
Phys. Rev. D. 12, 350 (1975).
CasalsOttewill:2015 M. Casals and A. C. Ottewill, High-order
tail in Schwarzschild spacetime, Phys. Rev. D. 92,
124055 (2015).
Frolov:Thorne V. P. Frolov and K. S. Thorne, Renormalized stress-energy tensor near the horizon of a slowly evolving, rotating black hole, Phys. Rev. D. 39, 2125 (1989).
AAt:2015 A. Levi and A. Ori, Pragmatic mode-sum
regularization method for semiclassical black-hole spacetimes, Phys. Rev. D. 91,
104028 (2015).
AAtheta:2016 A. Levi and A. Ori, Mode-sum regularization
of ⟨ϕ^2⟩ in the angular-splitting method, Phys. Rev. D. 94,
044054 (2016).
AARSET:2016 A. Levi and A. Ori, Versatile Method
for Renormalized Stress-Energy Computation in Black-Hole Spacetimes,
Phys. Rev. Lett. 117, 231101 (2016).
LeviRSET:2017 A. Levi, Renormalized stress-energy
tensor for stationary black holes, Phys. Rev. D. 95,
025007 (2017).
Christensen:1976 S. M. Christensen, Vacuum expectation
value of the stress tensor in an arbitrary curved background: The
covariant point separation method, Phys. Rev. D. 14,
2490 (1976).
Christensen:1978 S. M. Christensen, Regularization,
renormalization, and covariant geodesic point separation, Phys. Rev. D. 17,
946 (1978).
LeviEilonOriMeentKerr:2017 A. Levi, E. Eilon, A. Ori
and M. van de Meent, Renormalized Stress-Energy Tensor of
an Evaporating Spinning Black Hole, Phys. Rev. Lett. 118,
141102 (2017).
Candelas:1980 P. Candelas, Vacuum polarization in
Schwarzschild spacetime, Phys. Rev. D. 21, 2185 (1980).
Frolov:1982 V. P. Frolov, Vacuum polarization near
the event horizon of a charged rotating black hole, Phys. Rev. D. 26,
954 (1982).
Fawcett:1983 M. S. Fawcett, The energy-momentum
tensor near a black hole, Comm. Math. Phys. 89, 103
(1983).
Candelas-Howard:1984 P. Candelas and K. W. Howard, Vacuum
⟨ϕ^2⟩ in Schwarzschild spacetime, Phys. Rev. D. 29,
1618 (1984).
Sasaki-Tagoshi:2003 M. Sasaki and H. Tagoshi, Analytic Black Hole Perturbation Approach to Gravitational Radiation, Living Rev. Relativ. 6,
6 (2003).
Howard-Candelas:1984 K. W. Howard and P. Candelas, Quantum
stress tensor in Schwarzschild space-time, Phys. Rev. Lett. 53,
403 (1984).
HowardRSET:1984 K. W. Howard, Vacuum ⟨ T_μ^ ν⟩
in Schwarzschild spacetime, Phys. Rev. D. 30, 2532
(1984).
Candelas_Jensen:1986 P. Candelas and B. P. Jensen, Feynman
Green function inside a Schwarzschild black hole, Phys. Rev. D. 33,
1596 (1986).
Anderson:1989 P. R. Anderson, ⟨ϕ^2⟩
for massive fields in Schwarzschild spacetime, Phys. Rev. D. 39,
3785 (1989).
Jen_Otte:1989 B. P. Jensen and A. C. Ottewill, Renormalized
electromagnetic stress tensor in Schwarzschild spacetime, Phys. Rev. D. 39,
1130 (1989).
Anderson:1990 P. R. Anderson, A method to compute
⟨ϕ^2⟩ in asymptotically flat, static,
spherically symmetric spacetimes, Phys. Rev. D. 41,
1152 (1990).
McLau_Jen_Otte:1992 B. P. Jensen, J. G. McLaughlin
and A. C. Ottewill, Anisotropy of the quantum thermal state
in Schwarzschild space-time, Phys. Rev. D. 45,
3002 (1992).
Otte_Taylor:2011 A. C. Ottewill and P. Taylor, Renormalized
Vacuum Polarization and Stress Tensor on the Horizon of a Schwarzschild
Black Hole Threaded by a Cosmic String, Class. Quant. Grav. 28,
015007 (2011).
Hartle:Hawking:2011 J. B. Hartle and S. W. Hawking, Path-integral derivation of black-hole radiance, Phys. Rev. D. 13,
2188 (1976).
breen2012hadamard C. Breen and A. C. Ottewill, Hadamard renormalization of the stress energy tensor in a spherically symmetric black hole space-time with an application to lukewarm black holes, Phys. Rev. D. 85,
084029 (2012).
Freitas:Casals G. Freitas and
M. Casals, A novel method for renormalization in quantum-field theory in curved spacetime, International Journal of Modern Physics D 27,
1843001 (2018).
2017PhRvD..96j5020T P. Taylor and C. Breen, Mode-sum prescription for vacuum polarization in black hole spacetimes in even dimensions, Phys. Rev. D. 96,
105020 (2017).
2016PhRvD..94l5024T P. Taylor and C. Breen, Mode-sum prescription for the vacuum polarization in odd dimensions, Phys. Rev. D. 94,
125024 (2016).
2018PhRvD..98j5006B C. Breen and P. Taylor, Vacuum polarization for varying quantum scalar field parameters in Schwarzschild-anti-de Sitter spacetime, Phys. Rev. D. 98,
105006 (2018).
taylor2022mode P. Taylor, C. Breen and A. Ottewill, Mode-sum prescription for the renormalized stress energy tensor on black hole spacetimes, Phys. Rev. D. 106,
065023 (2022).
Ander_His_Sam:1995 P. R. Anderson, W. A. Hiscock and
D. A. Samuel, Stress-energy tensor of quantized scalar fields
in static spherically symmetric spacetimes, Phys. Rev. D. 51,
4337 (1995).
hafner2024hadamard D. Häfner and C. Klein, Hadamard property of the Unruh state for massless fermions on Kerr spacetime : the large a case, arXiv:2403.09261 (2024).
SchAssaf:2018 A. Lanir, A. Levi, and A. Ori, Mode-sum
renormalization of ⟨Φ̂^2⟩ for a quantum scalar
field inside a Schwarzschild black hole, Phys. Rev. D. 98,
084017 (2018).
2023AnHP...24.2401K C. K. M. Klein Construction of the Unruh State for a Real Scalar Field on the Kerr-de Sitter Spacetime, Annales Henri Poincaré 24,
2401 (2023).
GroupPhiRN:2019 A. Lanir, A. Ori, N. Zilberman, O. Sela,
A. Maline and A. Levi, Analysis of quantum effects inside
spherical charged black holes, Phys. Rev. D. 99, 061502(R)
(2019).
Group:2018 A. Lanir, A. Levi, A. Ori and O. Sela, Two-point
function of a quantum scalar field in the interior region of a Reissner-Nordstrom
black hole, Phys. Rev. D. 97, 024033 (2018).
HH:1976 J. B. Hartle and S. W. Hawking, Path-integral
derivation of black-hole radiance, Phys. Rev. D. 13,
2188 (1976).
Israel:1976 W. Israel, Thermo-field dynamics of
black holes, Phys. Lett. A. 57, 107 (1976).
Unruh:1976 W. G. Unruh, Notes on black-hole evaporation,
Phys. Rev. D. 14, 870 (1976).
LeviThetaRSET A. Levi, Stress-energy tensor mode-sum
regularization in spherically symmetric backgrounds, in preparation.
OriLinear:1999 A. Ori, Evolution of linear gravitational
and electromagnetic perturbations inside a Kerr black hole, Phys. Rev. D. 61,
024001 (1999).
WatsonBessel G. N. Watson,
A Treatise on the Theory of Bessel Functions, Cambridge University
Press (1944).
Teukolsky:1973 S. A. Teukolsky,
Perturbations of a Rotating Black Hole. I. Fundamental Equations
for Gravitational, Electromagnetic, and Neutrino-Field Perturbations,
Astrophys. J. 185, 635 (1973).
BertiCardosoCasals:2006 E. Berti,
V. Cardoso and M. Casals, Eigenvalues and eigenfunctions of
spin-weighted spheroidal harmonics in four and higher dimensions,
Phys. Rev. D. 14, 024013
(2006).
Kay:Wald B. S. Kay and R. M. Wald, Theorems on the uniqueness and thermal properties of stationary, nonsingular, quasifree states on spacetimes with a bifurcate Killing horizon,
Phys. Rep. 207, 49
(1991).
RokhlinXiao:2007 V. Rokhlin
and H. Xiao, Approximate formulae for certain prolate spheroidal
wave functions valid for large values of both order and back-limit,
Appl. Comput. Harmon. Anal. 22, 105–123 (2007).
Sela:2018 O. Sela, Quantum
effects near the Cauchy horizon of a Reissner-Nordström black hole,
Phys. Rev. D. 98, 024025
(2018).
Mathematica13.2 Wolfram Research,
Inc., Mathematica, Version 13.2, Champaign, IL (2022).
Hollands:2020cqg S. Hollands, R. M. Wald
and J. Zahn, Quantum instability of the Cauchy
horizon in Reissner-Nordström-deSitter
spacetime, Class. Quant. Grav. 37, 115009 (2020).
Hollands:2020prd S. Hollands, C. Klein
and J. Zahn, Quantum stress tensor at the Cauchy horizon
of the Reissner-Nordström-de Sitter
spacetime, Phys. Rev. D. 102 (8), 085004 (2020).
Wald:1977 R. M. Wald, The
Back Reaction Effect in Particle Creation in Curved Spacetime, Commun.
Math. Phys. 54, 1–19 (1977).
Wald:1978 R. M. Wald, Trace anomaly of a conformally invariant quantum field in curved spacetime, Phys. Rev. D 17, 1477–1484 (1978).
BrownOttewill:1986 M. R. Brown and A. C. Ottewill, Photon propagators and the definition and approximation of renormalized stress tensors in curved space-time, Phys. Rev. D 34, 1776–1786 (1986).
KayRadzikowskiWald B. S. Kay,
M. J. Radzikowski and R. M. Wald,
Quantum Field Theory on Spacetimes with a Compactly Generated Cauchy
Horizon, Commun. Math. Phys. 183, 533 – 556 (1997).
Schwinger:1951 J. Schwinger, On Gauge Invariance and Vacuum Polarization, Phys. Rev. 82, 664 (1951).
FrolovThorne:1989 V. P. Frolov and K. S. Thorne, Renormalized
stress-energy tensor near the horizon of a slowly evolving, rotating
black hole, Phys. Rev. D. 39, 2125 (1989).
DamourRuffini:1976 T. Damour and R. Ruffini, Black-hole evaporation in the Klein-Sauter-Heisenberg-Euler formalism, Phys. Rev. D. 14(2), 332 (1976).
BussCasals C. Buss and M. Casals,
Study of a Scalar Field on the Maximally Extended Schwarzschild Spacetime, Journal of Physics: Conference Series, vol. 689, no. 1, p. 012002, IOP Publishing (2016).
NIST:DLMF F. W. J. Olver, A. B. Olde Daalhuis, D. W. Lozier, B. I. Schneider,
R. F. Boisvert, C. W. Clark, B. R. Miller, B. V. Saunders,
H. S. Cohl, and M. A. McClain, eds.,
NIST Digital Library of Mathematical Functions, http://dlmf.nist.gov/.
Within the framework of semiclassical gravity, the gravitational field
is kept classical whereas the matter fields are quantized. Semiclassical
gravity is expected to be a valid framework in the limit that the
physical scales are much larger than the Planck scales and, as such, it has
provided significant results. For example, in black hole (BH) settings,
semiclassical
gravity has led to the pioneering discovery by Hawking <cit.>
that astrophysical BHs emit quantum thermal radiation in their exterior.
In its turn, in the interior region of BHs, recent work within semiclassical
gravity has unveiled an irregularity of the so-called Cauchy
horizon (CH) <cit.> (see <cit.> in the non-rotating case), which (at least naively[Semiclassical analyses on fixed Reissner-Nordström and Kerr metrics (as well as their corresponding de Sitter variants) indicate that the quantum energy-momentum fluxes typically diverge at the CH like V^-2 (where V is a regular Kruskal coordinate vanishing at the CH) – which is stronger than the divergence of energy-momentum perturbations in the analogous classical problem. However, when attempting to translate this observation to the near-CH backreaction analysis, one should recall that a semiclassical BH in the Unruh state undergoes evaporation (which is manifested already at the event horizon). This needs to be taken into account when attempting to evolve the Einstein equations from the event horizon towards the CH.]) suggests dominance
of the semiclassical irregularity over that due to classical effects <cit.> (see, e.g., <cit.> in the non-rotating case).
As mentioned, within semiclassical gravity, matter fields are treated
as Quantum Field Theories (QFTs). As is well-known, however, QFTs
suffer from ultraviolet divergences and, hence, the expectation values
of most physical quantities need to be appropriately renormalized. Most importantly,
the renormalized expectation value of the stress-energy tensor
(RSET),
⟨ T_μν⟩ _ren^Ψ[Typically, a hat is placed over a quantity in order to distinguish its quantum version over its classical version since, mathematically, they are objects of very different types. In order to reduce cluttering, however, we will not make such a distinction. Therefore, in particular, Φ will equally denote a classical scalar field or its quantum version (which is an operator-valued distribution), and similarly for the stress energy tensor T_μν. The distinction between classical and quantum quantities should be clear from the context.], when the field is in a quantum state Ψ, is the quantity which appears on the right hand side of the
semiclassical Einstein equations:
G_μν=8π⟨ T_μν⟩ _ren^Ψ ,
where G_μν is the Einstein tensor and we take units where c=G=1.
That is, the RSET replaces the classical stress-energy
tensor T_μν in the classical Einstein equations. Ideally,
one would
evaluate
the RSET
and the Einstein tensor
in the same spacetime
but that is a very tall order. Thus, typically, one
follows a perturbative approach whereby the RSET is calculated on
a background spacetime and one would then solve the semiclassical
Einstein equations for the backreacted metric; such a procedure could be carried out iteratively to arbitrary order. In curved spacetimes,
Wald <cit.> established axioms which a physically-meaningful RSET should satisfy.
Henceforth we shall focus on the case that the matter
field is a scalar field Φ. In this case, a
quantity which is easier to calculate than the RSET but which is also of physical significance is the renormalized field square (or vacuum polarization) ⟨Φ^2(x)⟩ _ren^Ψ: in particular, it is important for spontaneous symmetry breaking (see, e.g., <cit.> in the context of black holes).
There exist several methods for renormalization
but the one of main interest in this paper involves using the so-called point-splitting regularization
method <cit.> (see, e.g., <cit.> for a review). Point-splitting-based renormalization methods <cit.> are particularly useful for calculational purposes and can be used for both the renormalized field square and the RSET, satisfying Wald's physical axioms in the latter case.
The point-splitting method essentially consists of the following.
First, the expectation value at a
spacetime point x of a physical quantity
which is quadratic
in the quantum field
(which, mathematically, is an operator-valued distribution) and its derivatives
is temporarily made a bi-tensor[There is a freedom in the choice of such bi-tensor, while the final physical result is independent of that choice.] by evaluating each
one of the two field factors at a different spacetime point: one factor at x
and the other factor at, say, x'.
Such a bi-tensor is then
regular
as long as the
two spacetime points do not coincide (and are not connected by a null
geodesic <cit.>).
One then subtracts
from this unrenormalized
bi-tensor
a
so-called counterterm[The counterterm is typically expressed as a sum of truly divergent subterms and a finite subterm.]
which is purely-geometrical (and so state-independent).
Finally, one takes the
coincidence limit (x'→ x) in the result of such subtraction,
yielding the renormalized expectation value of the quantity of interest.
In the case that the quantity
of interest is the field square
Φ^2 or the stress-energy tensor
T_μν, we
respectively
denote
the
unrenormalized bi-tensor by
1/2G_Ψ(x,x')
or ⟨ T_μν(x,x')⟩^Ψ,
the counterterm by
1/2G^CT(x,x') or T_μν^CT(x,x'),
and
the point-splitting regularization
procedure then
amounts to
⟨Φ^2(x)⟩ _ren^Ψ=1/2lim_x'→ x(G_Ψ(x,x')-G^CT(x,x'))
or ⟨ T_μν(x)⟩ _ren^Ψ=lim_x'→ x(⟨ T_μν(x,x')⟩ ^Ψ-T_μν^CT(x,x')).
In this paper, for the two-point function G_Ψ(x,x') we shall later make the choice of the anticommutator G_Ψ^(1)(x,x')≡⟨{Φ(x),Φ(x')}⟩ _Ψ, also called the Hadamard two-point function (HTPF), although other choices for G_Ψ(x,x') are also possible.
An expression for the HTPF in Kerr in terms of modes which are amenable to practical computations was derived in <cit.> for the exterior of the BH and by us <cit.> for the interior.
From the fact that the
classical stress-energy tensor may be obtained by applying a certain
differential operator quadratic in the field (see Eq. (<ref>) below),
it follows that the RSET may be obtained by applying a related differential
operator to G_Ψ(x,x')-G^CT(x,x') and afterwards
taking the coincidence limit.
Unfortunately, from a technical point of view, such a renormalization
procedure is notoriously hard to carry out in practice, at least in
the case of BH background spacetimes. The main reason is that, typically,
one calculates the unrenormalized bi-tensor (1/2G_Ψ(x,x') or
⟨ T_μν(x,x')⟩ ^Ψ ) via a full
(Fourier and angular) infinite mode decomposition whereas the counterterms
(1/2G^CT(x,x') or T_μν^CT(x,x')) are
instead known in terms of geometrical quantities (they are usually
known as an expansion for small geodesic distance between the two
spacetime points[The geodesic distance between the two spacetime points is unique as
long as the points are `close' enough.] x and x'). Only in very special (highly-symmetric) spacetimes can one analytically obtain both
the modes of the unrenormalized quantity and the corresponding
mode sums in closed form; one can then subtract from the closed form expression the counterterm and finally take the coincidence limit x'→ x, thereby performing the entire renormalization procedure analytically. In the other cases, which include all 4-dimensional BH spacetimes, one must resort to a numerical evaluation of the full mode sum, which can be rather challenging since the convergence of the infinite mode sum slows
down as x' approaches x (and the mode sum diverges in the actual limit
x'→ x).
Therefore, one typically seeks to find a mode decomposition of
the counterterm, so that the renormalization subtraction can be carried
out mode-by-mode, thus improving the convergence of the mode sum.
For that purpose, it is useful to separate the points x and x'
in a coordinate direction which corresponds to a symmetry (in cases where
there is one) of the background spacetime and then re-express the
counterterm as a mode sum decomposition with respect to the associated coordinate.
Commonly, the background spacetime is stationary and either spherically-symmetric or only axisymmetric, and so, accordingly, one separates the points in
the time direction (so-called t-splitting) or corresponding angular
directions (so-called θ- or φ-splitting, depending
on whether the direction of separation is along the polar angle or
the azimuthal angle, respectively). In the case of t-, θ-
or φ-splitting, the corresponding decomposition of the counterterm is in terms
of, respectively, Fourier frequency ω-modes, spherical/spheroidal l-harmonics or azimuthal m-modes.
In its turn, the unrenormalized bi-tensor involves all sums: an (infinite) integral over ω, an (infinite) sum over l and a (finite) sum over m.
It is worth mentioning that the point-splitting method is expected to fail at the spacetime regions where the Killing vector associated with the direction of symmetry with respect to which the splitting is carried out has zero norm (since the splitting would be along a null direction, along which the two-point function diverges).
In particular, this would mean that in a Kerr spacetime φ-splitting
might fail on the pole and t-splitting on the boundary of the ergoregion, which is where the Killing vector ∂_t becomes spacelike. (In particular, at the axis of rotation the ergoregion boundary meets the horizons, leading to the failure of the method there, as we empirically see.)
If the background is static and the quantum state is thermal, t-splitting may render the expressions
particularly amenable to computations if one further takes advantage
of a Euclideanization technique,
whereby the spacetime is made Riemannian via a Wick rotation of the time coordinate.
Another option for performing the renormalization is to calculate differences
of renormalized expectation values
in two different
states: because the counterterms are state-independent, it is clear
that such difference is equal to the coincidence limit of the difference
between the unrenormalized bi-tensors in the two different states
(e.g., ⟨Φ^2(x)⟩ _ren^Ψ_1-⟨Φ^2(x)⟩ _ren^Ψ_2=1/2lim_x'→ x(G_Ψ_1(x,x')-G_Ψ_2(x,x'))
and ⟨ T_μν(x)⟩ _ren^Ψ_1-⟨ T_μν(x)⟩ _ren^Ψ_2=lim_x'→ x(⟨ T_μν(x,x')⟩ ^Ψ_1-⟨ T_μν(x,x')⟩ ^Ψ_2),
for two states Ψ_1 and Ψ_2). Such differences are already regular
and so no actual renormalization needs to be carried out. Such a calculation
is particularly useful if one happens to know through some other means
the value of the renormalized expectation value in some reference
state, from which (together with the calculation of the difference
between unrenormalized bi-tensors) the value of the renormalized expectation
value in another state could be thus obtained. We call this the state
subtraction method.
Let us from now on focus only on BH spacetimes, for which the main relevant quantum states are the following.
The Unruh state <cit.>
describes an astrophysical BH evaporating via the emission of
Hawking radiation and is hence the state of interest in this paper.
In Schwarzschild or Reissner-Nordström (RN), the Hartle-Hawking (HH) state <cit.>
is meant to describe a
BH in thermal equilibrium with its own Hawking radiation and is the only state
for which the Euclideanization procedure turns the Fourier ω-integral into a much more computationally practical discrete sum.
In Kerr,
a proposal for this state was made in <cit.> but,
unfortunately, off the symmetry axis it is not well-defined for bosons <cit.>, whereas the analogous state for massless fermions only exists near the event horizon (EH) <cit.>;
possibly relatedly,
the Euclideanization procedure is
in principle
not immediately applicable
in Kerr.
Finally, the Boulware state <cit.>
is irregular
on the EH and is (only) appropriate to describe the quantum fields
around a star-like object.
Due to symmetries of the metric and quantum state under consideration, in spherically-symmetric BH spacetimes (e.g. Schwarzschild and RN) the quantities computed (RSET and vacuum polarization) may depend only on the radial coordinate r, and in the axially-symmetric (e.g. Kerr) case the dependence may only be on r and polar angle θ. In particular, a computation at the CH is valid generally at the inner horizon (IH, whose ingoing section is the CH), with some θ-dependence in the rotating case.
Despite the above-mentioned technical difficulties for renormalization
in BH background spacetimes,
significant progress has been made over the years.
Renormalization in QFT in BH spacetimes has a long history starting
in the late 1970's, which we next review classified by method and spacetime, starting with spherically-symmetric BHs and afterwards moving on to rotating BHs.
First, via the method of state subtraction and expected behavior of a reference quantum state in some asymptotic region, the
renormalized expectation values of physical quantities
for the Unruh, Hartle-Hawking and Boulware states
in Schwarzschild spacetime were obtained when
approaching the EH or radial infinity <cit.>.
Recently, Refs. <cit.>
used the state subtraction method to obtain renormalized expectation values on the CH of an RN-de Sitter (dS) BH.
Another early calculation of the RSET in a BH spacetime was carried out in <cit.> for the Hartle-Hawking state in Schwarzschild spacetime. This work made use of Euclideanization and then regularization was implemented at the level of the heat kernel representation for the Euclidean Green function, while taking the spacetime coincidence limit in the kernel, although unfortunately this calculation contained a slip unrelated to the renormalization process as noted by Howard <cit.>.
Turning to the point-splitting
method, the t-splitting variant together with the Euclideanization technique
was used by various authors
in order to obtain renormalized expectation values
in static, spherically-symmetric BH spacetimes, namely,
Schwarzschild (e.g., <cit.>), RN (e.g., <cit.>) and
(lukewarm) RNdS (e.g., <cit.>).
Typically, in order to speed up the convergence of the mode sums, these works use WKB asymptotics for large multipole number l and/or Euclidean frequency (such WKB asymptotics are not readily generalizable to the case of Lorentzian frequency ω).
It is also worth noting a new variant of the point-splitting method, called the extended coordinate method, which involves splitting in both the angular direction
and
in the time direction combined with the Euclideanization technique. The extended coordinate method has been applied
in (4- and higher-dimensional) Schwarzschild <cit.> and in
Schwarzschild-anti-dS <cit.>; see <cit.> for renormalization also via multiple-direction splitting but without Euclideanization (and so is in principle directly generalizable to Kerr) in the case of Bertotti-Robinson spacetime.
Of most relevance for this paper is the so-called pragmatic mode-sum regularization (PMR) method, which also has the advantage of not using Euclideanization while still using point-splitting.
PMR has been
developed and used to calculate renormalized expectation values for
a scalar field in the following cases: using θ-splitting,
in Schwarzschild <cit.>,
in RNdS <cit.>
and
inside the EH
of RN <cit.>;
using t-splitting, in Schwarzschild <cit.>;
separately using t-, θ- and φ-splitting, in
Schwarzschild <cit.>.
The literature results mentioned so far were for spherically-symmetric BHs. In
the astrophysically most important case of a stationary, rotating
(Kerr) BH, because of the lack of spherical symmetry, θ-splitting
is not possible and, because of the lack of staticity, the Euclideanization
technique
is in principle not implementable
either. Thus, progress in Kerr was for a long time made only by calculating
differences of renormalized expectation values of quantities in two
different states. This was sometimes combined with the expected behavior
of the RSET for one of the states in some specific spacetime region (namely, at infinity
or near a horizon) in order to
apply the state subtraction method so as to
gain knowledge about the behavior of the RSET for the other state in that
region: see Refs. <cit.> outside the EH and Ref. <cit.> on the CH (and <cit.> on the CH of Kerr-dS).
An exception to that is the axis of symmetry of Kerr, where no rotation is
`felt', and that was used to directly obtain the RSET in
a `formal' Hartle-Hawking
state on the pole of the EH in Refs. <cit.>. An important technical
breakthrough in renormalization was achieved using the t- and φ-splitting
variants of PMR in <cit.>,
where the authors managed to calculate the RSET outside the EH of
Kerr. This is so far the only time that the calculation
of an RSET has been carried out outside a Kerr BH.
In the current
paper, we present the method and results for an analogous calculation
of certain components of the RSET and the renormalized field square inside
a Kerr BH, all the way from (just off) the EH to (just off) the IH.
We have mentioned our calculation in <cit.> of the RSET energy-flux components on the CH of Kerr using state subtraction (which was done for an array of values of the polar angle θ and the BH angular momentum). In fact, in <cit.> (see especially its Supplemental Material) we also presented t-splitting results for the same RSET components on the pole
(i.e., θ=0[Even though θ=π is also a pole, the value of the scalar field is the same at θ=π as at θ=0, and so we indistinctively refer to “the pole" or “the axis of rotation".])
very near -but off- the IH (as the method we use is inapplicable at exactly the IH itself). The extrapolation of these results onto the IH allowed us to check our result exactly on the IH, which was independently obtained with the state-subtraction method. In particular, this comparison provided a crucial test for the state subtraction procedure used directly at the IH, which was based on a non-conventional reference state.
In this paper we describe in detail the method that we used in <cit.> off the IH, and here we
also use it to obtain new results.
Specifically, the method that we develop here is the t-splitting variant of PMR for
the calculation of renormalized quantities on the pole
between (just off) the EH and (just off) the IH of a Kerr spacetime g_αβ
for a minimally-coupled massless scalar field in the Unruh state |0⟩ _U
(we use throughout a U subscript or superscript
to indicate that the field is in the Unruh state). The quantities that we give computationally-amenable expressions
for are the renormalized expectation value of the field square (the vacuum polarization), ⟨Φ^2⟩ _ren^U,
and the energy flux components[The Eddington coordinates are null in spherical symmetry. In the rotating case, they become null on the pole (and on the horizons), which facilitates the interpretation of
T_uu and T_vv as the energy flux components in the context of this paper.] of the RSET, ⟨ T_yy⟩ _ren^U,
where y∈{u,v}, and u and v are the Eddington coordinates [see
Eq. (<ref>) below]. The significance of these flux components lies in their role in understanding backreaction near the CH, as outlined in Ref. <cit.>.
Broadly speaking, it is more convenient to treat the trace-reversed stress-energy tensor, denoted by T̄_αβ, which is related to the original tensor T_μν by
T̄_αβ≡ T_αβ-1/2g_αβT_ μ^μ ,
since it admits the following simple form:
T̄_αβ=Φ_,αΦ_,β .
It is also worth noting that when taken as sources to the Einstein equation, there is no advantage to using the stress-energy tensor over its trace-reversed counterpart. Indeed, one may work with the alternative version of the Einstein
equation R_αβ=8πT̄_αβ, where R_αβ is the Ricci tensor, which involves the trace-reversed stress-energy tensor directly.
However, the focus of this paper is the pole of Kerr – where g_uu=g_vv=0 – hence trace-reversal does not change the flux components there. That is, at the pole, the following holds:
T̄_yy(r,θ=0)=T_yy(r,θ=0) .
Hence, in this paper, we treat the flux components directly, and they are given in terms of the field derivatives by T_yy=Φ_,yΦ_,y.
As mentioned,
t-splitting in BH background spacetimes involves a separation of the spacetime points in the t direction
(which is a symmetry of the background)
and a decomposition of the unrenormalized bitensor, and so also of the corresponding renormalized quantity, that involves two infinite sums in the multipolar number l and frequency ω, as well as a finite sum in the azimuthal number m.
We note that the interior of
the BH reveals some distinct features which give rise to specific technical challenges
which do not appear in the exterior (as in the calculation in <cit.>). In particular, expressions for renormalized
quantities inside the BH contain the mentioned double infinite sums such that the
innermost, multipolar l-sum diverges. We refer to this as the intermediate divergence (ID)
problem and we show how to deal with it (namely, by including a `small'
split in the polar angle direction, on the top of the split in the
t direction).
We then use the t-splitting PMR method in order to derive the results
on the pole for the renormalized energy fluxes ⟨ T_yy⟩ _ren^U
on approaching the IH
that were already shown in Ref. <cit.> (agreeing with the state-subtraction results computed directly at the IH therein),
as well as to obtain new results. The new results on the pole are these energy fluxes all the way between the two horizons (see Figs. <ref>–<ref>)
as well as the renormalized field square ⟨Φ^2⟩ _ren^U (see Figs. <ref>–<ref>).
Apart from the IH vicinity, we also focus on the EH vicinity at the pole, where we obtain numerical support for regularity of the Unruh state there (reflected in the vanishing
of ⟨ T_uu⟩ _ren^U as (r-r_+)^2 in the r→ r_+ limit), which is a property that has not yet been rigorously proven in the case of Kerr.
In Table <ref>
we provide a summary of the numerical values of various
quantities of physical interest (such as the fluxes and the field square) at the pole of the horizons for the values of the Kerr BH angular momentum that we considered in this paper.
The rest of this paper is organized as follows. In Sec. <ref>
we introduce Kerr spacetime, the scalar field and its Eddington modes, the Unruh state
as well as a conserved quantity. In Sec. <ref>, the t-splitting procedure is extended and customized to the Kerr interior (at the pole, θ=0), accommodating for the complexities arising there and describing the required extra steps in the procedure. In Sec. <ref> we present the aforementioned numerical results, and we end with a discussion in Sec. <ref>.
The Appendices complement the rest of the paper, and are as follows:
Appendix <ref> gives the asymptotic
expressions for large multipole number l (and m=0) for the interior radial function ψ_ω l^int
and the exterior reflection coefficient ρ_ω l^up, which
are then used in Sec. <ref>; Appendix
<ref> illustrates and justifies our treatment of the intermediate divergence problem arising in the sum over l; Appendix <ref>
focuses on the numerical methods implemented in the current work; finally,
Appendix <ref> offers an alternative
approach to the computation, which we call the analytic extension variant of t-splitting,
following computations of several quantities which may
be compared with their standard t-splitting counterparts.
Supplementing this paper is a Mathematica notebook which includes the PMR t-splitting counterterms for the field square and for the full stress-energy tensor, given at a general spacetime point in a Kerr spacetime. (The results for the full stress-energy tensor are given in Boyer-Lindquist coordinates, and are then translated to the flux components in coordinates (u,v,θ,ϕ) at the pole.)
We use
metric signature (-+++) and units where c=G=1. | null | null | null | In this paper, we employ the method of point splitting – specifically, t-splitting – to calculate the Unruh-state fluxes ⟨ T_uu⟩ _ren^U, ⟨ T_vv⟩ _ren^U, as well as the field square ⟨Φ^2⟩ _ren^U, for a minimally-coupled massless scalar field, in the interior of a spinning BH on the axis of rotation. These fluxes are particularly crucial for understanding backreaction near the IH, as discussed in Ref. <cit.>.
We calculated ⟨ T_uu⟩^U_ren, ⟨ T_vv⟩^U_ren and ⟨Φ^2⟩^U_ren for two different BH spin values, at the pole θ=0, spanning from near the EH to near the IH. Our results, displayed in the various figures of Sec. <ref>, provide a quantitative picture of the behavior of ⟨ T_uu⟩^U_ren, ⟨ T_vv⟩^U_ren and ⟨Φ^2⟩^U_ren in the BH interior. In particular, our results include a focus on the IH vicinity, validating our state-subtraction results computed directly at the IH (see Ref. <cit.>). In addition, our computations at the EH vicinity provide numerical support for the anticipated regularity of the Unruh state at the EH.
The method presented and employed in this paper is the interior counterpart of the t-splitting method formulated in the BH exterior, generally introduced in Ref. <cit.> and employed outside a Kerr BH in Ref. <cit.> (φ-splitting was also used for computations outside a Kerr BH in that work, but this variant is technically more difficult and in particular is not applicable at the pole). Note, however, that employing t-splitting inside a Kerr BH is significantly harder and involves unique challenges not present in the BH exterior. This is rooted in the form of the centrifugal potential (in particular at large l), see Fig. <ref>. Both inside and outside the BH, the large-l effective potential generally scales as l^2. In the exterior, it acts as a potential barrier. As a consequence, the l-series converges exponentially fast, and the summation over l is easily performed. Inside the BH, however, the potential at large l acts as a potential well. As a consequence, the l-series does not converge, strictly speaking. This constitutes the so-called intermediate divergence (ID) problem. To overcome this problem, we introduce a “small split" in θ, that is taken to vanish before the coincidence limit in the t direction is taken. With the aid of this additional limiting process, we can identify a certain l-sequence (to which we sometimes refer as the ID) which properly captures the large-l divergent piece of the original sequence – and which should be subtracted from that original l-sequence before summation, in order to obtain the correct renormalized result.
As part of this treatment, we show (partly analytically and partly numerically) that the ID attains a specific form, given in Eq. (<ref>).
This ID subtraction procedure is illustrated through various figures (in particular, Figs. <ref> and <ref> for the field square and Figs. <ref>, <ref> and <ref> for T_uu) and justified in an appendix.
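To make the logic of the ID subtraction concrete, here is a schematic numerical analogue in Python. The sequence a_l below is an artificial stand-in, not the actual Kerr mode contribution: its constant part plays the role of the ID, and the auxiliary regulator delta plays the role of the small angular split that is removed at the end.

    import math

    # Artificial l-sequence a_l = 1 + 1/l^2: the constant 1 is the "intermediate
    # divergence", the factor exp(-l*delta) mimics the small auxiliary split.
    def regulated_sum(delta, subtract_id, l_max=200000):
        total = 0.0
        for l in range(1, l_max + 1):
            a_l = 1.0 + 1.0 / l ** 2        # stand-in mode contribution
            id_l = 1.0                      # identified large-l divergent piece
            term = a_l - id_l if subtract_id else a_l
            total += term * math.exp(-l * delta)
        return total

    for delta in (1e-2, 1e-3, 1e-4):
        print(delta, regulated_sum(delta, False), regulated_sum(delta, True))
    # Without the subtraction the regulated sum grows like 1/delta; with the ID
    # subtracted term by term it tends to pi^2/6 = 1.6449... as delta -> 0.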
After this ID is subtracted, the remaining regular l-series does converge (and thereby yields the basic ω-integrand). However, even post-subtraction, the convergence of the regularized l-series is rather slow, proceeding as 1/l (that is, 1/l^2 for the l-sequence
). This presents a significant numerical challenge, which we overcome by computing the l-sequence up to l=300, and then fitting the sequence as a series of inverse powers 1/l^k,[For a more precise account of the numerical implementation, see Appendix <ref>.] typically reaching k∼100. For this high-order fit to succeed, we had to compute the individual l-contributions with a high precision of more than ∼250 decimal figures. This, in turn, required the computation of the reflection coefficient ρ_ω l^up and the radial function ψ_ω l^int [as well as its derivative ψ_ω l,r^int and also the spheroidal eigenfunctions S_ω l(0)] at that level of precision.
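The following Python sketch illustrates this tail-fitting step on synthetic data. The closed-form sequence a_l below merely stands in for the actual high-precision mode contributions, and the number of fitted inverse powers is kept artificially small; the point is only the structure of the acceleration: sum the computed values explicitly up to l = L, fit the last few values by inverse powers of l, and sum the fitted tail analytically with the Hurwitz zeta function.

    import mpmath as mp

    mp.mp.dps = 50                                   # working precision
    L = 60                                           # highest "computed" multipole
    a = {l: 2 / mp.mpf(l) ** 2 + 3 / mp.mpf(l) ** 4 for l in range(1, L + 1)}

    # Fit the last four values by sum_k c_k / l^k with k = 2,...,5 (a tiny exact
    # solve; the actual computation fits many more inverse powers to many points).
    ks = [2, 3, 4, 5]
    pts = [L - 3, L - 2, L - 1, L]
    M = mp.matrix([[mp.mpf(l) ** (-k) for k in ks] for l in pts])
    rhs = mp.matrix([a[l] for l in pts])
    c = mp.lu_solve(M, rhs)

    # Explicit sum up to L plus the fitted analytic tail: sum_{l>L} l^{-k} is the
    # Hurwitz zeta value zeta(k, L+1).
    explicit = mp.fsum(a[l] for l in range(1, L + 1))
    tail = mp.fsum(c[i] * mp.zeta(k, L + 1) for i, k in enumerate(ks))
    print(explicit + tail)                           # accelerated estimate
    print(2 * mp.zeta(2) + 3 * mp.zeta(4))           # exact value, for comparison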
Upon successful computation of the l-sum, we obtained the ω-integrand, paving the way to integration (after subtracting the known PMR counterterms).
This eventually allowed the renormalized quantity (either the field square ⟨Φ^2⟩ _ren^U or the fluxes ⟨ T_uu⟩ _ren^U and ⟨ T_vv⟩ _ren^U) to be computed at a wide range of r values, spanning from (just off) the EH to (just off) the IH.
Close to the horizons (located at r_+ and r_-), the t-splitting computation at the pole becomes more challenging (mainly due to the divergence of counterterms). In particular, t-splitting on the axis of rotation is inapplicable directly at the horizons. Nevertheless, we managed to compute the fluxes ⟨ T_vv⟩ _ren^U and ⟨ T_uu⟩ _ren^U sufficiently close to the horizons, so as to obtain the horizon limits of these two quantities by extrapolation.
In particular, our results extrapolated to the IH (as demonstrated in Figs. <ref> and <ref>) show remarkable agreement with those obtained using the state-subtraction method directly at the IH <cit.>, hence validating the state-subtraction method used therein.
In addition, on approaching the EH, ⟨ T_uu⟩ _ren^U decays to zero as expected (like δ r_+^2, see Fig. <ref>).
For ⟨Φ^2⟩_ren^U, while we were able to successfully obtain the EH limit,
this was not possible at the IH. Obtaining the IH limit of ⟨Φ^2⟩ _ren^U poses difficulties, likely attributed to a non-trivial, non-monotonic behavior near the IH, as observed in the analogous RN case of Ref. <cit.>.[Remarkably, using state subtraction, and taking advantage of the presumed regularity of the reference quantum state of Ref. <cit.>, we indeed found that ⟨Φ^2⟩ _ren^U reaches a finite value (which is yet unknown) at the IH, approaching it as r_*^-3 (as in the analogous RN case; this analysis also reproduced the non-trivial and non-monotonic behavior of ⟨Φ^2⟩ _ren^U on approaching the IH, similarly to that observed in the aforementioned RN case.)]
Still, as our method involves the unique step of ID subtraction, it would be worth testing it against another independent method at general r values inside the BH.
In Appendix <ref>, we present an alternative variant of the t-splitting PMR method, dubbed the analytic extension method. This method leverages an analytic extension of the HTPF mode contributions from the exterior to the interior of the BH, building on the analyticity of the background geometry at the EH. While this approach circumvents the ID problem, it introduces challenges of dealing with growing oscillatory behavior in l and ω, which can be managed by our “oscillation cancellation" procedure. Our results at two specific r values reveal a remarkable agreement between the two variants, which, in particular, provides a crucial test for the t-splitting PMR method described in this manuscript.
As previously discussed, our calculations necessitated computing contributions for l up to approximately l_max = 300. However, since our analysis was confined to the pole, we only needed to compute these contributions for m = 0 modes. To extend our study off the pole, one must compute all m modes across the entire domain -l≤ m≤ l. Consequently, this will significantly increase the number of modes that need to be computed –- and consequently, the required computational resources –- typically by a factor of l_max = 300. Such an increase seems impractical, at least with our current methods.[Note, however, that for a computation directly on the IH we can use the state-subtraction method, which is much more efficient numerically. It requires a smaller l range (say, up to a few dozen), and hence is practical to apply off the pole as well. Indeed, we have performed this computation for an a/M=0.8 BH in the entire range 0≤θ≤π/2, see Ref. <cit.>.]
In Table <ref> we summarize the values of the various quantities at the two horizons at the pole of a Kerr BH which we have given in the main text.
There exist several compelling directions for further research. An immediate extension would be to study a wider range of values for the spin parameter a/M beyond the presently treated values of a/M=0.8 and a/M=0.9. It would also be valuable to extend the method for off-pole computations, although, as mentioned, currently this seems to be numerically challenging (without the adoption of more efficient methods such as state subtraction, which is presently only applicable for computations at the IH). Furthermore, a fuller picture of semiclassical effects inside a Kerr BH will be gained by extending our analysis to other components of the RSET.
In addition, it is highly valuable to extend our analysis to other quantum fields. An exploration of other scalar fields may include allowing a non-vanishing coupling parameter ξ. Of higher physical relevance is the study of the quantum electromagnetic field, which is the actual field observed in nature. It is also worth noting the importance of the linearized-gravitational semiclassical contribution, which should be no less significant than its electromagnetic counterpart (but likely to be technically and conceptually more intricate).
A.O. and N.Z. were supported by the Israel Science Foundation under
Grant No. 600/18. N.Z. also acknowledges support by the Israeli Planning
and Budgeting Committee. | null |
http://arxiv.org/abs/2409.18056v1 | 20240926165935 | Hopf formulae for homology of skew braces | [
"M. Gran",
"T. Letourmy",
"L. Vendramin"
] | math.QA | [
"math.QA",
"math.CT",
"math.RA",
"16T25, 18E13, 20N99"
] |
§ ABSTRACT
The variety of skew braces contains several interesting subcategories as subvarieties, as for instance the varieties of radical rings, of groups and of abelian groups. In this article the methods of non-abelian homological algebra are applied to establish some new Hopf formulae for homology of skew braces, where the coefficient functors are the reflectors from the variety of skew braces to each of the three above-mentioned subvarieties. The corresponding central extensions of skew braces are characterized in purely algebraic terms, leading to some new results, such as an explicit Stallings–Stammbach exact sequence associated with any exact sequence of skew braces, and a new result concerning central series.
§ INTRODUCTION
Skew braces appeared originally in connection with the study of set-theoretic solutions to the Yang–Baxter equation <cit.>. Now their applications go far beyond this domain,
as they appear in several different areas; see for example <cit.>.
A skew brace <cit.> is a triple (A,+,∘),
where (A,+) and (A,∘) are groups such that
the compatibility condition
a∘ (b+c)=a∘ b-a+a∘ c holds for all a,b,c∈ A.
Skew braces form a variety of universal algebras 𝖲𝖪𝖡, and generalise at the same time groups and radical rings. Concretely, given a group (G,·) one can give G a skew brace structure by taking +=· and ∘=·. In particular, this means that the variety 𝖦𝗋𝗉 of groups is a subvariety of the variety 𝖲𝖪𝖡 of skew braces. On the other hand, a radical ring (R,+,·) is a (not necessarily commutative) ring without unit such that the operation a∘ b=a+a· b+b is a group operation. In this case, the triple (R,+,∘) is a skew brace. We will recall in Section <ref> a characterisation of radical rings <cit.> that implies that radical rings also form a subvariety 𝖱𝖺𝖽𝖱𝗇𝗀 of the variety 𝖲𝖪𝖡 of skew braces.
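To make these two families of examples concrete, the following small Python sketch (an illustration added here, not part of the original text) builds the skew brace coming from a specific radical ring, namely the nil ring 2ℤ/8ℤ = {0,2,4,6} with the operations inherited from ℤ/8ℤ, and verifies the two group structures and the compatibility condition by brute force.

    from itertools import product

    # The brace of the nil (hence radical) ring 2Z/8Z = {0,2,4,6}:
    # a + b is addition mod 8 and a o b = a + a*b + b mod 8.
    A = [0, 2, 4, 6]
    def add(a, b): return (a + b) % 8
    def neg(a): return (-a) % 8
    def circ(a, b): return (a + a * b + b) % 8

    def is_group(op, elems):
        assoc = all(op(op(x, y), z) == op(x, op(y, z))
                    for x, y, z in product(elems, repeat=3))
        ident = all(op(0, x) == x == op(x, 0) for x in elems)
        inverses = all(any(op(x, y) == 0 for y in elems) for x in elems)
        return assoc and ident and inverses

    # closure of A under both operations, the two group axioms, and the brace axiom
    print(all(add(x, y) in A and circ(x, y) in A for x, y in product(A, repeat=2)))
    print(is_group(add, A), is_group(circ, A))        # True True
    print(all(circ(a, add(b, c)) == add(add(circ(a, b), neg(a)), circ(a, c))
              for a, b, c in product(A, repeat=3)))   # a o (b+c) = a o b - a + a o c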
Since
skew braces form a variety of Ω-groups in the sense of Higgins <cit.>, hence in particular a semi-abelian category <cit.> (in fact, even a strongly protomodular category <cit.>), it is natural
to look at non-abelian homology of skew braces with coefficient functors in the subvarieties 𝖦𝗋𝗉 of groups, 𝖠𝖻 of abelian groups, and 𝖱𝖺𝖽𝖱𝗇𝗀 of radical rings.
Indeed, in recent years there have been some relevant developments in non-abelian homological algebra (see <cit.>, for instance, and the references therein). The comonadic approach to homology theory of algebraic structures <cit.> turns out to be perfectly compatible with the fundamental concept of semi-abelian category, allowing one to extend and improve some classical results in group homology to a general categorical context including compact groups, crossed modules, Lie algebras and cocommutative Hopf algebras, for instance.
This article is a first step in the direction of applying the above approach to non-abelian homology to the variety of skew braces, by providing some precise descriptions of the second homology group of a skew brace in terms of a generalized Hopf formula.
In order to briefly explain this, it is useful to first recall this classical formula in the category 𝖦𝗋𝗉 of groups and its relationship with the notion of central extensions. Consider the adjunction
⊥ ["U"', shift right=2, from=1-1, to=1-3]
[""', shift right=3, from=1-3, to=1-1]
where 𝖺𝖻𝖦𝗋𝗉→𝖠𝖻 is the classical abelianisation functor sending a group G to the quotient 𝖺𝖻(G) = G/[G,G] of G by its derived subgroup [G,G]. Given a free presentation of a group G
0 → K ↣ F ↠^f G → 0
where F is a free group, consider the quotient F/[K,F] of F by the commutator subgroup [K,F] as in the diagram
F ↠ F/[K,F], f : F ↠ G, f̅ : F/[K,F] ↠ G (a commuting triangle),
where the induced morphism f̅ is then a weakly universal central extension of G <cit.>. The Galois group of this central extension f̅ of G turns out to be an invariant of G, called the fundamental group π_1(G) <cit.> of G, that is naturally isomorphic to the second integral homology group of G
π_1(G)≅(K ∩ [F,F])/[K,F]≅𝖧_2 (G)
via the classical Hopf formula, i.e. the right-hand isomorphism. The commutators [K,F] and [F,F] appearing in this formula are “relative” to the chosen subvariety of the variety 𝖦𝗋𝗉 of groups, in this case the variety 𝖠𝖻 of abelian groups, so that they could also be denoted by [K,F]_𝖠𝖻 and [F,F]_𝖠𝖻, respectively. The homology group 𝖧_2 (G) can also be obtained by applying comonadic homology theory <cit.> to the variety 𝖦𝗋𝗉 with respect to the abelianisation functor 𝖺𝖻𝖦𝗋𝗉→𝖠𝖻, so that 𝖧_2 (G) ≅𝖧_2 (G, 𝖺𝖻), the right-hand side homology group now being the one arising from the comonadic approach by taking 𝖺𝖻 as coefficient functor:
(K ∩ [F,F]_𝖠𝖻)/[K,F]_𝖠𝖻≅𝖧_2 (G, 𝖺𝖻)
The comonadic homology theory <cit.> in the semi-abelian context (as developed in <cit.>), allows one to establish several generalized Hopf formulas for homology in other semi-abelian varieties than the variety of groups, relatively to a given subvariety (not necessarily the one of “abelian algebras”, as it is the case in the example above). The main difficulty in computing these formulas is to find an algebraic characterization of the central extensions (in the sense of <cit.>) with respect to the given subvariety, so that the denominator in the formula (<ref>) can be made explicit. We do this in the present article in the variety 𝖲𝖪𝖡 with respect to three particularly interesting subvarieties: the varieties 𝖱𝖺𝖽𝖱𝗇𝗀 of radical rings, 𝖦𝗋𝗉 of groups and 𝖠𝖻 of abelian groups. These are connected by the following adjunctions
F : 𝖲𝖪𝖡⇄𝖦𝗋𝗉 : U_G, G : 𝖲𝖪𝖡⇄𝖱𝖺𝖽𝖱𝗇𝗀 : U_R, 𝖺𝖻 : 𝖦𝗋𝗉⇄𝖠𝖻 : U, F̂ : 𝖱𝖺𝖽𝖱𝗇𝗀⇄𝖠𝖻 : U_A,
where U_A, U_R, U_G and U are inclusion functors, and F̂, G, F and 𝖺𝖻 their left adjoints.
First, in Section <ref>, we characterize the central extensions in 𝖲𝖪𝖡 corresponding to the subvariety 𝖦𝗋𝗉 of groups (Proposition <ref>) that are the surjective homomorphisms f : A → B of skew braces such that [ker(f),A]_𝖦𝗋𝗉=0,
where [ker(f),A]_𝖦𝗋𝗉
is the additive subgroup of A generated by all the elements of the form
{ a*b, b*a, c+b*a-c | a∈ker(f), b ∈ A, c∈ A },
where a*b=-a+a∘ b-b, which is an ideal of A.
Section <ref> is devoted to the study of central extensions of skew braces relative to the subvariety 𝖱𝖺𝖽𝖱𝗇𝗀 of radical rings.
For this purpose we introduce a new object, the radicalator (Definition <ref>).
This is generated as a subgroup of the additive group by the elements (a+b)∘ c -b∘ c +c -a∘ c and a+b-a-b for all a,b,c∈ A.
The radicalator turns out to be an ideal [A,A]_𝖱𝖺𝖽𝖱𝗇𝗀 (Proposition <ref>) playing a similar role with respect to the reflection from 𝖲𝖪𝖡 to 𝖱𝖺𝖽𝖱𝗇𝗀 to the role played by the derived subgroup in the reflection from 𝖦𝗋𝗉 to 𝖠𝖻. Indeed, the quotient A →A/[A,A]_𝖱𝖺𝖽𝖱𝗇𝗀 is the universal one transforming a skew brace A into a radical ring A/[A,A]_𝖱𝖺𝖽𝖱𝗇𝗀, and an extension f A → B of skew braces is central relatively to the subvariety 𝖱𝖺𝖽𝖱𝗇𝗀 if and only if [(f), A]_𝖱𝖺𝖽𝖱𝗇𝗀 =0 (Theorem <ref>).
This pushes forward Rump's connection between skew braces and radical rings
<cit.> (see also <cit.>). We also observe that the central extensions of skew braces relative to the subvariety of braces (= skew braces of abelian type) can also be characterized in algebraic terms (Remark
<ref>).
In Section <ref>, we consider the reflection from 𝖲𝖪𝖡 to the subvariety 𝖠𝖻 of abelian groups, and characterize the corresponding central extensions. For this, given an ideal I of A, the additive subgroup generated by the set
{ [a,b]_+, a*b, [a,b]_∘ | a∈ I, b∈ A}
is shown to be an ideal (Proposition <ref>), denoted by [I,A]_𝖠𝖻 (here [a,b]_+ and [a,b]_∘ are the usual group-theoretic commutators with respect to the operations + and ∘, respectively). An extension f : A → B of skew braces is then central with respect to 𝖠𝖻 if and only if [ker(f), A]_𝖠𝖻=0 (Proposition <ref>).
Note that these central extensions of skew braces have already been studied in <cit.>. We also observe that the commutator [ker(f), A]_𝖠𝖻 coincides with the Huq commutator defined in <cit.> (Proposition <ref>).
In the last section we deduce the corresponding Hopf formulae for homology of skew braces, and
then apply the Stallings–Stammbach exact sequence holding in any semi-abelian variety <cit.> to these contexts. A result relating lower central series to low-dimensional homology concludes the article (Corollary <ref>).
This work opens the way to the study of relative commutators in skew braces in the sense of <cit.>, as well as to the theory of higher central extensions of skew braces, following the recent developments in categorical algebra <cit.>.
§ PRELIMINARIES
Recall that a skew brace <cit.> is a triple (A,+,∘),
where (A,+) and (A,∘) are groups such that
the compatibility condition
a∘ (b+c)=a∘ b-a+a∘ c holds for all a,b,c∈ A. The inverse of an element a∈ A with respect to the circle
operation ∘ will be denoted by a'. We will denote the groups (A,+) and (A,∘) by A_+ and A_∘, respectively.
Let A be a skew brace. There are two canonical actions by automorphisms
λ : A_∘→Aut(A_+), λ_a(b)=-a+a∘ b,
ρ : A_∘→Aut(A_+), ρ_a(b)=a∘ b-a.
We can introduce a binary operation * : A × A → A defined by a*b=-a+a∘ b-b for all a,b∈ A.
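On a small finite skew brace these maps are easy to tabulate. The sketch below (illustrative only) reuses the toy brace A = {0,2,4,6} ⊂ ℤ/8ℤ from the sketch in the Introduction and checks, by brute force, that each λ_a is an automorphism of (A,+), that ρ_a is the conjugate of λ_a, and that for this brace a*b is simply the ring product.

    from itertools import product

    A = [0, 2, 4, 6]                                  # toy brace in Z/8Z, as before
    def add(a, b): return (a + b) % 8
    def neg(a): return (-a) % 8
    def circ(a, b): return (a + a * b + b) % 8

    def lam(a, b): return add(neg(a), circ(a, b))     # lambda_a(b) = -a + a o b
    def rho(a, b): return add(circ(a, b), neg(a))     # rho_a(b)   = a o b - a
    def star(a, b): return add(lam(a, b), neg(b))     # a * b = -a + a o b - b

    # each lambda_a is a bijective, additive map of (A, +), i.e. an automorphism
    print(all(sorted(lam(a, b) for b in A) == A and
              all(lam(a, add(b, c)) == add(lam(a, b), lam(a, c))
                  for b, c in product(A, repeat=2))
              for a in A))                            # True
    # rho_a(b) = a + lambda_a(b) - a
    print(all(rho(a, b) == add(add(a, lam(a, b)), neg(a))
              for a, b in product(A, repeat=2)))      # True
    # for this particular brace a * b is just the ring product ab mod 8
    print(all(star(a, b) == (a * b) % 8 for a, b in product(A, repeat=2)))  # True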
By definition, a normal subgroup I of A_+
is an ideal of A if
a*x ∈ I and x*a ∈ I for all a∈ A and x∈ I.
A normal subgroup I of the group (A,+) such that I is a subgroup of A_∘ and A*I⊂ I is called a strong left ideal.
Let A be a skew brace. Then the center Z(A_+) is a strong left ideal of A.
We call extension of B a surjective morphism f:A→ B in 𝖲𝖪𝖡, and we also denote this extension by (A,f).
Since skew braces form a variety 𝖲𝖪𝖡 of Ω-groups in the sense of Higgins <cit.>, any subvariety 𝒳 of 𝖲𝖪𝖡 is admissible in the sense of Categorical Galois Theory <cit.>. We denote by I 𝖲𝖪𝖡→𝒳 the reflector to the subvariety 𝒳 of 𝖲𝖪𝖡, left adjoint of the inclusion functor U 𝒳→𝖲𝖪𝖡.
For any A in 𝖲𝖪𝖡, the kernel of the A-component η_A A → UI(A) of the unit η of the adjunction
𝒳 ⊥ 𝖲𝖪𝖡["U"', shift right=3, from=1-1, to=1-3]
["I"', shift right=3, from=1-3, to=1-1]
will be denoted by R(A), so that
0 → R(A) → A → UI(A) → 0
is an exact sequence in 𝖲𝖪𝖡.
Observe that R 𝖲𝖪𝖡→𝖲𝖪𝖡 is an endofunctor of 𝖲𝖪𝖡.
Given an extension f:A→ B we form the pullback
A×_B A --t--> A
   |          |
   s          f
   v          v
   A ---f---> B
where A ×_B A = { (a_1, a_2) ∈ A × A | f(a)=f(b)}, while s and t are the first and second projection, respectively.
The definition of central extension relative to a given subvariety 𝒳 of 𝖲𝖪𝖡 (as in the work of the Fröhlich school <cit.>) is then the following:
For a subvariety 𝒳 of the variety 𝖲𝖪𝖡 of skew braces, an extension (A,f) is central if R(s)=R(t).
As observed in <cit.>, this definition coincides with the categorical one in the case of varieties of Ω-groups, hence in particular in the variety 𝖲𝖪𝖡.
Any subvariety 𝒳 of 𝖲𝖪𝖡 as above
is then admissible in the sense of Categorical Galois Theory for the class of surjective homomorphisms (see Theorem 3.4 in <cit.>).
This means that the left adjoint I in (<ref>) preserves a special type of pullbacks, namely the ones of the form
A×_UI(A) U(B) ------> U(B)
      |                 |
      |                 ϕ
      v                 v
      A ------η_A----> UI(A)
where ϕ is any surjective homomorphism and η_A is the A-component of the unit η of the adjunction. The property of admissibility guarantees the validity of a general Galois theorem <cit.> that in our case allows one to classify the central extensions of a skew brace B in terms of a suitable category of (internal) actions in 𝒳 on the Galois groupoid of a weakly universal central extension (E,p) of B (see the last section in <cit.> for more details).
§ THE SUBVARIETY OF GROUPS
By the classical Birkhoff theorem, the variety of groups can be identified with the subvariety 𝖦𝗋𝗉 of 𝖲𝖪𝖡 determined by the additional identity x + y= x ∘ y. It then follows that there is an adjunction between these two categories
𝖦𝗋𝗉 ⊥ ["U_G"', shift right=3, from=1-1, to=1-3]
["F"', shift right=3, from=1-3, to=1-1]
where 𝖦𝗋𝗉 is the subvariety of 𝖲𝖪𝖡 whose objects are skew braces with the property that the two group operations + and ∘ are equal. This subvariety is isomorphic to the category of groups, of course, and this justifies the slight abuse of notation. The functor U_G is a full inclusion, and in order to describe its left adjoint F in (<ref>) the following definition will be needed:
Let A be a skew brace. We denote by A*A the subgroup of A_+ generated by the elements a*b for all a,b∈ A.
It is well known that A*A is an ideal of A and it is the smallest ideal I such that the quotient A/I is a group.
In our situation the reflector F: 𝖲𝖪𝖡→𝖦𝗋𝗉 is the functor sending the skew brace A to the quotient F(A) = A/(A*A) by the ideal A*A (with obvious definition on morphisms). Note that here the ideal A*A is precisely the kernel R(A) of the A-component of the unit η_A A → A/(A*A) of the adjunction (<ref>). This adjunction is admissible in the sense of Categorical Galois theory, since 𝖦𝗋𝗉 is a subvariety of the variety 𝖲𝖪𝖡 (see Section 5 in <cit.>).
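For the toy brace A = {0,2,4,6} ⊂ ℤ/8ℤ used in the earlier sketches (an illustration, not an example taken from the paper), this reflection can be computed directly: the sketch below finds the additive subgroup generated by the elements a*b and checks that + and ∘ agree on the corresponding cosets, so that the quotient is indeed a group.

    from itertools import product

    A = [0, 2, 4, 6]                                  # toy brace in Z/8Z, as before
    def add(a, b): return (a + b) % 8
    def neg(a): return (-a) % 8
    def circ(a, b): return (a + a * b + b) % 8
    def star(a, b): return add(add(neg(a), circ(a, b)), neg(b))

    def additive_closure(gens):
        gens = set(gens) | {neg(g) for g in gens}
        H = {0}
        while True:
            new = H | {add(x, g) for x in H for g in gens}
            if new == H:
                return H
            H = new

    AstarA = additive_closure(star(x, y) for x, y in product(A, repeat=2))
    print(sorted(AstarA))                             # [0, 4]

    # + and o agree on cosets modulo A*A, so the quotient F(A) = A/(A*A) is a group
    def coset(x): return frozenset(add(x, h) for h in AstarA)
    print(all(coset(add(x, y)) == coset(circ(x, y))
              for x, y in product(A, repeat=2)))      # True
    print(len({coset(x) for x in A}))                 # the quotient has 2 elements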
Let A be a skew brace and I⊂ A an ideal. We define [I,A]_𝖦𝗋𝗉 to be the additive subgroup of A generated by the set
{ a*b, b*a, c+b*a-c | a∈ I, b,c ∈ A}.
Let A be a skew brace and I⊂ A an ideal of A. Then [I,A]_𝖦𝗋𝗉 is an ideal.
Let J=[I,A]_𝖦𝗋𝗉.
Since I is an ideal, we have that J⊂ I. Thus, by definition of J, it is clear that J*A⊂ J and A*J⊂ J.
Moreover,
a*(c+b)= a*c+c+ a*b-c ∈ J
for all a∈ J and b,c∈ A. Thus, c+ a*b-c∈ J, so that J is an ideal of A.
The additive subgroup of A generated by the subset
{ a*b, b*a | a∈ I,b∈ A}
is not an ideal of A, in general.
The database of <cit.> contains a (minimal) counterexample
of size 24.
An extension (A,f) is central with respect to the adjunction (<ref>) if and only if [ker(f),A]_𝖦𝗋𝗉=0.
Assume that (A,f) is a central extension. Take A ×_B A as in (<ref>). Clearly, for any a∈ker(f), (a,0)∈ A ×_B A. If b∈ A, then
(a,0)*(b,b) = (a*b,0)∈R(A ×_ B A).
Since (A,f) is central, by definition a*b=R(s)(a*b,0) = R(t)(a*b,0) = 0. Similarly, one can show that b*a=0. As b ∈ A and a ∈ker(f) were chosen arbitrarily, it follows that [ker(f),A]_𝖦𝗋𝗉=0.
Assume now that [ker(f),A]_𝖦𝗋𝗉=0. Let (a,b) and (a_1,b_1) be two elements in A ×_B A. Since -a_1+ b_1 ∈ker(f), the assumption gives a_1*(-a_1+b_1)=0, hence a_1 ∘ (-a_1+b_1) = a_1+(-a_1+b_1)= b_1, so that -a_1+b_1=a_1'∘ b_1.
It follows that
a*a_1-b*b_1 =-a+a∘ a_1 -a_1+b_1-b∘ b_1 +b
=-a+a∘ a_1 +a_1'∘ b_1-b∘ b_1 +b
=-a+a∘ a_1 ∘ a_1'∘ b_1-b∘ b_1 +b
= -a+ a∘ b_1-b∘ b_1 +b
= -a+ (a∘ b'∘ b ∘ b_1) -b∘ b_1 +b
= -a+ (a∘ b'+ b ∘ b_1) -b∘ b_1 +b
= -a + a∘ b'+b
= -a +a ∘ b' ∘ b
=0,
where we have used the fact that a_1'∘ b_1 ∈ (f) and a∘ b' ∈ (f). It follows that the restrictions R(s) and R(t) of the first and the second projections s and t (with the notations from diagram (<ref>)) are equal, as desired.
§ THE SUBVARIETY OF RADICAL RINGS
We write 𝖱𝖺𝖽𝖱𝗇𝗀 to denote the category of radical rings. As it follows from the results in <cit.> this category can be presented as the subvariety of 𝖲𝖪𝖡 determined by the following additional two identities:
(a + b) ∘ c = a ∘ c - c + b ∘ c
and
a + b = b + a.
This important observation implies that the variety 𝖱𝖺𝖽𝖱𝗇𝗀 of radical rings determines an adjunction
𝖱𝖺𝖽𝖱𝗇𝗀 ⊥ ["U_R"', shift right=3, from=1-1, to=1-3]
["G"', shift right=3, from=1-3, to=1-1]
where U_R is the inclusion functor and G its left adjoint associating, with any skew brace A, its universal radical ring G(A). This adjunction is again admissible in the sense of Categorical Galois theory <cit.> (essentially for the same reasons as for the subvariety 𝖦𝗋𝗉 of 𝖲𝖪𝖡, as explained in the previous section).
The main goal of this section is to characterize the extensions in 𝖲𝖪𝖡 that are central with respect to the adjunction (<ref>).
Let A be a skew brace. We define the right distributor of a,b,c∈ A to be the element
[a,b,c]=(a+b)∘ c -b∘ c +c -a∘ c.
Let A be a skew brace and I an ideal of A. Then for all c∈ I and a,b∈ A, the elements [a,b,c], [c,a,b] and [b,c,a] are all in I.
Let A→ A/I, a↦ā, be the canonical map.
Then the image of [a,b,c] in A/I is
[ā,b̄,c̄]=[ā,b̄,0]=0,
so that [a,b,c]∈ I. Similarly for [c,a,b] and [b,c,a].
Let A be a skew brace, and I⊂ A an ideal. We define [I,A]_𝖱𝖺𝖽𝖱𝗇𝗀 to be the additive subgroup of A generated by the elements [a,b,c], [c,a,b], [b,c,a] and a+b-a-b for all a∈ I and b,c∈ A.
Our next goal is to show that [I,A]_𝖱𝖺𝖽𝖱𝗇𝗀 is an ideal.
If we write J=[I,A]_𝖱𝖺𝖽𝖱𝗇𝗀, then J_+ is a normal subgroup of A_+. Indeed, for any j ∈ J ⊂ I and a ∈ A, the element j + a -j -a belongs to J, so that J is stable under conjugation in A.
Let A be a skew brace, I⊂ A an ideal and J=[I,A]_𝖱𝖺𝖽𝖱𝗇𝗀. Then I_+/J_+ ⊂ Z(A_+/J_+). In addition, if a,b,c∈ A and one of them is in I, then
(a+b)∘ c=a∘ c-c+b∘ c
in I_+/J_+.
This is a direct consequence of the definition of J.
Let A be a skew brace, I⊂ A an ideal and J=[I,A]_𝖱𝖺𝖽𝖱𝗇𝗀. For all a∈ I and b,d∈ A we have the following identities in A_+/J_+:
(b∘ a-b)∘ d-d =b∘ a∘ d-b∘ d,
(a∘ b-b)∘ d -d = a∘ b∘ d-b∘ d,
(b-b∘ a)∘ d -d = b∘ d-b∘ a∘ d,
(b-a∘ b)∘ d -d = b∘ d- a∘ b∘ d.
Apply Lemma <ref> to (b∘ a-b+b)∘ d and (a∘ b-b+b)∘ d and then to (-(b∘ a-b))∘ d and (-(a∘ b-b))∘ d.
Let A be a skew brace, I⊂ A an ideal and J=[I,A]_𝖱𝖺𝖽𝖱𝗇𝗀. Then A*J⊂ J. In particular, the actions λ and ρ of A factor through actions λ and ρ on A_+/J_+.
To see that λ_a(J)⊂ J, it suffices to check on the generators of J. It is clear that λ_d([I_+,A_+])⊂ [I_+,A_+]⊂ J for all d∈ A. Let then a,b,c,d∈ A, we must show that λ_d([a,b,c])=0 in the quotient group A_+/J_+ where a,b,c∈ A and at least one of these elements is in I.
First assume that c∈ I.
By Lemma <ref>,
λ_d([a,b,c]) = (d∘ a+λ_d(b))∘ c -d∘ b∘ c+d∘ c-d∘ a∘ c
=d∘ a∘ c-c+ λ_d(b)∘ c -d∘ b∘ c+d∘ c-d∘ a∘ c
=d∘ a∘ c-c+ c-d∘ c+d∘ b∘ c -d∘ b∘ c+d∘ c-d∘ a∘ c
=0.
Now, if we assume that b∈ I, then
-c+λ_d(b)∘ c∈ I,
-d∘ b∘ c +d∘ c=λ_d(-b∘ c+c)∈ I.
Thus
λ_d([a,b,c]) =d∘ a∘ c-c+ λ_d(b)∘ c -d∘ b∘ c+d∘ c-d∘ a∘ c
= -c+λ_d(b)∘ c-d∘ b∘ c+d∘ c
= -d∘ b∘ c+d∘ c-c+λ_d(b)∘ c
=-d∘ b∘ c+(d+λ_d(b))∘ c
=0.
Similarly, if we assume that a∈ I,
λ_d([a,b,c]) = (d∘ a -d)∘ c-c+d∘ c-d∘ a∘ c
= (d∘ a -d+d)∘ c-d∘ a∘ c
=0.
Let A be a skew brace, and I⊂ A an ideal. The subgroup [I,A]_𝖱𝖺𝖽𝖱𝗇𝗀 is an ideal of A.
Let J=[I,A]_𝖱𝖺𝖽𝖱𝗇𝗀.
It is left to show that J*A⊂ J. It suffices to show that a∘ b-b∈ J for all a∈ J and b∈ A. Since J⊂ I, it is sufficient to show this on the generators of J. Let a∈ J and b,c∈ A. By Lemma <ref>,
(b-a)∘ c=(b-a-b+b)∘ c=(b-a-b)∘ c-c+b∘ c.
Thus
(b-a-b)∘ c-c = (b-a)∘ c - b∘ c
= b ∘ c - c + (-a) ∘ c - b ∘ c
= b∘ c-a∘ c+c-b∘ c
= - a∘ c + c.
Hence
[a,b]_+∘ c-c = a∘ c-c + (b-a-b)∘ c-c=0.
Let d∈ A. By using Corollary <ref>,
[a,b,c]∘ d-d =(b∘((b'∘ a-b')∘ c -c)-b +c -a∘ c)∘ d-d
= (b∘((b'∘ a-b')∘ c -c)-b)∘ d-d +(c -a∘ c)∘ d-d
= b∘((b'∘ a-b')∘ c -c)∘ d-b∘ d+c∘ d -a∘ c∘ d
= b∘((b'∘ a-b')∘ c -c)∘ d-b+b-b∘ d+c∘ d -a∘ c∘ d.
We have shown earlier that the action ρ of A factors through an action ρ on A_+/J_+. Thus
[a,b,c]∘ d-d = ρ_b((b'∘ a-b')∘ c -c)∘ d)+b-b∘ d+c∘ d -a∘ c∘ d
=ρ_b((b'∘ a-b')∘ c∘ d -c∘ d+d)+b-b∘ d+c∘ d -a∘ c∘ d
=ρ_b(b'∘ a∘ c∘ d-b'∘ c∘ d+d)+b-b∘ d+c∘ d -a∘ c∘ d
=a∘ c∘ d-c∘ d+b∘ d-b∘ d+c∘ d -a∘ c∘ d
=0.
To obtain the second and third equality, we used Corollary <ref>. Assume now that b∈ J and a,c,d∈ A. (This situation is similar to the previous one, except that now we use the fact that multiplying and subtracting on the right by d preserves elements of the commutator subgroup [I_+,A_+].) Then
[a,b,c]∘ d-d =((a+b)∘ c-a∘ c -b∘ c +c +[a∘ c, -c+b∘ c]_+)∘ d-d
= ((a+b)∘ c-a∘ c -b∘ c +c)∘ d-d
= (a∘((-a'+a'∘ b)∘ c-c)-a-b∘ c+c)∘ d-d
=0.
Finally, assume that c∈ J and a,b,d∈ A. In this case,
[a,b,c]∘ d-d =(((a+b)∘ c-b-a)+a+b -b∘ c +c -a∘ c)∘ d-d
= (((a+b)∘ c-b-a)+b -b∘ c +c +a-a∘ c)∘ d-d
=0
by Lemma <ref>.
The radicalator of a skew brace A is the ideal
A_R=[A,A]_𝖱𝖺𝖽𝖱𝗇𝗀.
It is now clear that the reflector F: 𝖲𝖪𝖡→𝖱𝖺𝖽𝖱𝗇𝗀 in (<ref>) is the functor sending the skew brace A to the quotient F(A) = A/A_R by the radicalator A_R of A (with obvious definition on morphisms).
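As an illustration (not taken from the paper), consider the skew brace (G,·,·) associated with a group G as in the Introduction. A direct computation shows that all right distributors vanish in this case, so the radicalator reduces to the ideal generated by the additive commutators. The Python sketch below confirms this for G = S_3, where the radicalator comes out as the derived subgroup A_3, so that the universal radical ring quotient is the two-element ring with zero multiplication.

    from itertools import permutations, product

    # The "trivial" skew brace (G, ., .) on G = S_3: both + and o are composition.
    G = list(permutations(range(3)))
    e = (0, 1, 2)
    def op(p, q): return tuple(p[q[i]] for i in range(3))      # used as + and as o
    def inv(p): return tuple(sorted(range(3), key=lambda i: p[i]))

    def distributor(a, b, c):   # (a+b) o c - (b o c) + c - (a o c), written multiplicatively
        return op(op(op(op(a, b), c), inv(op(b, c))), op(c, inv(op(a, c))))

    def commutator(a, b):       # a + b - a - b
        return op(op(a, b), op(inv(a), inv(b)))

    # all right distributors vanish for a brace of this form
    print(all(distributor(a, b, c) == e for a, b, c in product(G, repeat=3)))   # True

    # so the radicalator is the subgroup generated by the commutators: here A_3
    gens = {commutator(a, b) for a, b in product(G, repeat=2)}
    gens |= {inv(g) for g in gens}
    H = {e}
    while True:
        new = H | {op(x, g) for x in H for g in gens}
        if new == H:
            break
        H = new
    print(len(H), sorted(H))    # 3 [(0, 1, 2), (1, 2, 0), (2, 0, 1)]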
For a skew brace A, let
Z_R(A)={ z∈ Z(A_+): [z,b,c]=[b,z,c]=[b,c,z]=0 for all b,c∈ A}.
Let A be a skew brace and z∈ Z_R(A).
Then
[z+a,b,c] =[a,z+b,c]=[a,b,z+c]= [a,b,c]
for all a,b,c∈ A.
Let a,b,c∈ A. To prove
[z+a,b,c]=[a,b,c], we use that z∈ Z(A_+) and compute
[z+a,b,c] = (z+a+b)∘ c-b∘ c+c-(z+a)∘ c
=((a+b)+z)∘ c-b∘ c+c-(a+z)∘ c
=(a+b)∘ c-c+z∘ c-b∘ c+c-(a∘ c-c+z∘ c )
=[a,b,c]+a∘ c-c+b∘ c-c+z∘ c-b∘ c+c-z∘ c+c-a∘ c
=[a,b,c]+a∘ c-c+(b+z)∘ c-(z+b)∘ c+c-a∘ c
=[a,b,c].
A similar calculation shows that
[a,z+b,c]=[a,b,c].
Finally, for the remaining equality, we
note that both
a∘ z-a=λ_a(z) and
b∘ z-b=λ_b(z) are central
elements in the additive group of A. Then, using the skew brace compatibility condition,
[a,b,z+c] = (a+b)∘ (z+c)-b∘ (z+c)+(z+c)-a∘ (z+c)
=(a∘ z-a)+(a+b)∘ c-b∘ c+c-a∘ c+(a-a∘ z)
=(a+b)∘ c-b∘ c+c-a∘ c
=[a,b,c].
Let A be a skew brace. Then Z_R(A) is a subbrace of A.
It follows immediately from
Lemma <ref> that
Z_R(A) is a subgroup of Z(A_+).
We now prove that
Z_R(A) is a subgroup of A_∘.
Let k,k_1∈ Z_R(A) and b,c∈ A. First,
k∘ k_1'+b =(k-k_1+b∘ k_1)∘ k_1'
= (b∘ k_1-k_1+k)∘ k_1'=b+k∘ k_1'.
Hence, k∘ k_1'∈ Z(A_+). Then,
(k∘ k_1'+b)∘ c =(k-k_1+ b∘ k_1)∘ (k_1'∘ c)
=k∘ k_1'∘ c -c+b∘ c.
Therefore [k'∘ k_1,b,c]=0. Similarly one sees that [b,k'∘ k_1,c]=0.
Finally,
(b∘ k∘ k_1'-k∘ k_1'+c∘ k∘ k_1')∘ k_1=b∘ k-k+c∘ k=(b+c)∘ k.
Thus
b∘ k∘ k_1'-k∘ k_1'+c∘ k∘ k_1' = (b+c)∘ k∘ k_1'.
This implies that [b,c,k∘ k_1']=0, and
the proof is concluded.
An extension (A,f) in 𝖲𝖪𝖡 is central with respect to the adjunction (<ref>) if and only if ker(f)⊂ Z_R(A), or equivalently if [ker(f),A]_𝖱𝖺𝖽𝖱𝗇𝗀=0.
Assume that f : A → B is an extension in 𝖲𝖪𝖡 that is central with respect to the adjunction (<ref>). We write
A ×_B A = { (a_1,a_2) ∈ A × A | f(a_1) = f(a_2)}
for the “object part” of the pullback in diagram (<ref>).
Let us first prove that a+b-a-b=0 for any a ∈ker(f), b∈ A. The elements (a,0) and (b,b) are in A ×_B A, hence
(a,0) + (b,b) - (a,0) -(b,b) = (a+b-a-b, b-b)
= (a+b-a-b,0)∈ R(A ×_B A).
By applying the restrictions R(s) and R(t) of the pullback projections we get
a+b-a-b = R(s)(a+b-a-b,0)= R(t) (a+b-a-b,0) =0.
Let then a∈ker(f) and b,c ∈ A. We claim that
[a,b,c]=0. The elements (a,0), (b,b) and (c,c) are all in A ×_B A, so that
((a,0) + (b,b) ) ∘ (c,c) -(b,b) ∘ (c,c) + (c,c) - (a,0) ∘ (c,c)
= (a+b, b) ∘ (c,c) - (b∘ c, b∘ c) +(c,c) -(a ∘ c, c)
= ((a+b)∘ c -b ∘ c +c - a ∘ c, b ∘ c -b ∘ c +c - c)
= ([a,b,c], 0) ∈ R( A ×_B A).
The assumption that R(s)=R(t) gives
[a,b,c]= R(s)([a,b,c], 0) = R(t)([a,b,c], 0)=0.
By a similar argument one can check that [c,a,b]=0 and [b,c,a]=0
for any a∈ker(f) and b,c ∈ A. This proves the first implication.
Conversely, assume now that [ker(f),A]_𝖱𝖺𝖽𝖱𝗇𝗀=0.
Let (a,a+k)∈ A×_B A, (a_1,a_1+k_1)∈ A×_B A and (a_2,a_2+k_2)∈ A×_B A be such that k,k_1,k_2∈ker(f).
It will suffice to prove that the equality R(s) = R(t) holds for all the generators of R(A ×_B A) = [A×_B A,A ×_B A]_𝖱𝖺𝖽𝖱𝗇𝗀. For this, observe that the assumption ker(f) ⊂ Z(A_+) directly implies that
[a+k, a_1+k_1] = a+k + a_1 + k_1 - (a+k) - (a_1 + k_1)
= a + a_1 -a-a_1
= [a, a_1].
Then, as a consequence of Lemma <ref>, we obtain the equality
[a+k,a_1+k_1,a_2+k_2] = [a,a_1,a_2],
which concludes the proof.
The arguments used in the proof of Theorem <ref> also provide a characterization of central extensions of skew left braces with respect to the subvariety 𝖡𝖱 of braces <cit.>, as we now briefly explain. The subvariety 𝖡𝖱 of 𝖲𝖪𝖡 is determined by the additional identity a+b-a-b =0, since braces are precisely left skew braces of abelian type <cit.>. We then have the following adjunction
𝖡𝖱 ⊥ 𝖲𝖪𝖡["U_B"', shift right=3, from=1-1, to=1-3]
[""', shift right=3, from=1-3, to=1-1]
where U_B is the inclusion functor and 𝖻𝗋 its left adjoint sending a skew brace A to the quotient 𝖻𝗋(A) = A/[A,A]_𝖡𝖱, with [A,A]_𝖡𝖱 the ideal of A generated by all the commutators [a,b] = a+b -a-b (for any a,b ∈ A). One can then see that an extension f : A → B of skew braces is central with respect to the subvariety 𝖡𝖱 of braces if and only if its kernel ker(f) satisfies the condition [ker(f), A]_𝖡𝖱= 0, where [ker(f), A]_𝖡𝖱 is the ideal of A generated by all the commutators of the form [k,a]= k+a-k-a, where k ∈ker(f), a ∈ A. Indeed, by looking at the first part of the proofs of the two implications in Theorem <ref> one realizes that the condition [ker(f), A]_𝖡𝖱= 0 is indeed necessary and sufficient for an extension to be central with respect to the adjunction (<ref>).
Similar results can be obtained by considering the subvarieties 𝖭𝗂𝗅_𝗇𝖲𝖪𝖡 and 𝖲𝗈𝗅_𝗇𝖲𝖪𝖡 of 𝖲𝖪𝖡, where 𝖭𝗂𝗅_𝗇𝖲𝖪𝖡 denotes the variety of skew left braces of n-nilpotent type and 𝖲𝗈𝗅_𝗄𝖲𝖪𝖡 the variety of skew braces of n-solvable type (n ≥ 1 is a positive integer). In the group case the characterizations of all these types of relative central extensions were established in <cit.> (Section 9). We shall not pursue this further in the present article, but it would be interesting to describe the corresponding relative commutators of skew braces on the model of what was done in <cit.> in the category of groups.
§ THE SUBVARIETY OF ABELIAN GROUPS
Let A be a skew brace. We denote by A' the subgroup of A_+ generated by the elements a*b=-a+a∘ b -b and a+b-a-b for all a,b∈ A.
A direct calculation shows that A' is an ideal of A and it is the smallest ideal I such that the quotient A/I is an abelian group; see <cit.>.
The reflector F: 𝖲𝖪𝖡→𝖠𝖻 in the adjunction
𝖠𝖻 ⊥ ["U"', shift right=3, from=1-1, to=1-3]
["F"', shift right=3, from=1-3, to=1-1]
sends the skew brace A to the quotient F(A) = A/A' of A by the ideal A' (with obvious definition on morphisms). From the categorical point of view, the reflector F in this adjunction gives the “abelianisation functor”, since the (internal) abelian objects in the variety of skew braces are precisely the skew braces for which the two group structures coincide and are abelian (<cit.>). Note that in the literature these skew braces are also referred to as trivial braces.
Let A be a skew brace, and let I⊂ A be an ideal of A. We define the relative commutator [I,A]_𝖠𝖻 to be the additive subgroup generated by the set
{ [i,b]_+, i*b, [i,b]_∘| i∈ I, b∈ A}.
Let A be a skew brace, and I⊂ A an ideal of A. Then, [I,A]_𝖠𝖻 is an ideal of A.
Since J=[I,A]_𝖠𝖻⊂ I, J contains the commutator subgroup [J_+,A_+] so J_+ is normal in A_+. In addition, J*A⊂ J. To see that A*J⊂ J, it is enough to show that for k∈ J and b∈ A, b*k∈ J. Since -b+b∘ k∈ I, in A_+/J_+ one has
-b+b∘ k=b +-b+b∘ k-b=b∘ k-b.
Therefore,
b*k=-k+b∘ k-b=-k+k∘ b-b = k*b = 0.
An extension (A,f) is central with respect to the adjunction (<ref>) if and only if [ker(f),A]_𝖠𝖻=0.
Assume first that (A,f) is a central extension. Take A ×_B A as in (<ref>). Let a∈ker(f) and b∈ A. Then (a,0)∈ A ×_B A and
(a,0)*(b,b) = (a*b,0)∈R(A ×_ B A).
Since (A,f) is central, by definition a*b=R(s)(a*b,0) = R(t)(a*b,0) = 0. The first part of the proof of Theorem <ref>, where it is shown that [a,b]_+=0, still applies here. The proof that [a,b]_∘=0 is also similar, and is left to the reader.
Conversely, let (a,b) and (a_1,b_1) be two elements in A ×_B A. As we saw in Proposition <ref>, one has a*a_1=b*b_1. Then,
[a,a_1]_+-[b,b_1]_+ =a+a_1-a-a_1+b_1+b-b_1-b
=a+a_1-a+b-a_1-b
=a+a_1-a_1-a+b-b
=0.
Similarly, one has that [a,a_1]_∘∘ [b,b_1]_∘' =0, which implies that the restrictions R(s) and R(t) of the first and the second projections s and t (with the notations from diagram (<ref>)) are equal, as desired.
One can ask whether the commutator [I,A]_𝖠𝖻 defined above coincides with the Huq commutator of normal subobjects
defined in <cit.>. We shall now see that this is indeed the case.
Given an ideal I of a skew brace A,
consider the set-theoretic map c I × A → A defined by c(i,a) = i+a for any i∈ I and a ∈ A.
The Huq commutator [I,A]_𝖧𝗎𝗊 is the smallest ideal J of A
such that the composite map
I× A A A/J["c", from=1-1, to=1-2]
["π", from=1-2, to=1-3]
is a homomorphism of skew braces (here π is the canonical quotient defined by π(a)=ā).
For any ideal I of a skew brace A,
[I,A ]_𝖧𝗎𝗊= [I,A ]_𝖠𝖻.
By definition of [I,A ]_𝖠𝖻, the set-theoretic map
I× A →^c A →^π A/[I,A]_𝖠𝖻
defined by
ϕ(i,a) = i + a
is a homomorphism of skew braces, so that [I,A ]_𝖧𝗎𝗊⊂ [I,A ]_𝖠𝖻. For the other inclusion, consider any quotient π A → A/J having the property that the composite map π c in (<ref>) is a homomorphism of skew braces. For any i ∈ I, a∈ A,
i∘a = π c(i,0) ∘π c (0,a) = π c(i,a) = π c (0,a) ∘π c (i,0) = a∘i,
hence [i,a]_∘ = 0.
Similarly, one checks that [i,a]_+ = 0, and i*a=0, so that
all the generators appearing in the Definition <ref> of [I,A ]_𝖠𝖻 must be sent to 0 by π. It follows that [I,A ]_𝖠𝖻⊂ J, hence in particular [I,A ]_𝖠𝖻⊂ [I,A ]_𝖧𝗎𝗊, as desired.
A different and equivalent description of the Huq commutator of two ideals in the category 𝖲𝖪𝖡 was given in <cit.>, where the Huq commutator was also shown to coincide with the Smith commutator.
§ HOPF FORMULAE FOR HOMOLOGY
The characterization of the central extensions obtained in the previous sections will now provide some new Hopf formulae for the homology of skew braces. Indeed, the variety 𝖲𝖪𝖡 of skew braces is a variety of Ω-groups and the subcategories 𝖦𝗋𝗉, 𝖱𝖺𝖽𝖱𝗇𝗀, 𝖡𝖱 and 𝖠𝖻 are all subvarieties so that the methods of <cit.> apply, as also the recent and more general ones developed in the semi-abelian context <cit.>.
The way one defines the (comonadic) homology of an algebra B in a semi-abelian variety ℂ relatively to a subvariety 𝕏 of ℂ
𝕏 ⇄ ℂ (the inclusion U: 𝕏 → ℂ together with its left adjoint, the reflector F: ℂ → 𝕏)
can be briefly explained as follows (we refer the reader to <cit.> for more details).
The forgetful functor sending the algebra B to its underlying set | B | has a left adjoint, the “free algebra functor”, naturally inducing a comonad
(𝔾: ℂ→ℂ, ϵ: 𝔾 ⇒ 1_ℂ, δ: 𝔾 → 𝔾^2)
on ℂ. The axioms of a comonad allow one to build a simplicial object 𝔾(B) in ℂ <cit.>, with the standard “free presentation” of B being given by ϵ_B: 𝔾(B) → B.
The “homology algebras” 𝖧_i(B,F) of B (with coefficients in the reflector F: ℂ→𝕏 to the subvariety 𝕏) are those of the chain complex N(F(𝔾(B))) obtained from the simplicial object F(𝔾(B)) in 𝕏 via the “Moore normalization” functor associating a chain complex with any simplicial object in 𝕏.
In the special case of the reflector F: 𝖲𝖪𝖡→𝖦𝗋𝗉,
consider a “free” presentation
0 → K ↣ F →^{f} B → 0
of a skew brace B, where F = 𝔾(B) is the “free” skew brace on the underlying set of B.
Consider also the ideal [F,F]_𝖦𝗋𝗉 = F*F of F as defined in Section <ref> and the ideal
[K,F]_𝖦𝗋𝗉 = ⟨{ a*b, b*a, c+b*a-c | a∈ K, b ∈ F, c∈ F }⟩_F.
Then the first homology group 𝖧_1(B,F) of the skew brace B (with coefficients in F) is given by
𝖧_1 (B,F) = B/[B,B]_𝖦𝗋𝗉,
while the
second homology group of B is given by the Hopf formula
𝖧_2 (B,F) ≅K ∩ [F,F]_𝖦𝗋𝗉/[K,F]_𝖦𝗋𝗉.
In particular, this latter is an invariant of the skew brace B, since it does not depend on the choice of the free presentation.
According to the results in the above mentioned articles on the so-called 5-term exact sequence, also referred to as the Stallings–Stammbach sequence, we then get the following:
Any short exact sequence
0 → K ↣ A →^{f} B → 0
in the variety 𝖲𝖪𝖡 of skew braces induces the following exact sequence in 𝖦𝗋𝗉:
𝖧_2(A,F) →^{𝖧_2(f)} 𝖧_2(B,F) → K/[K,A]_𝖦𝗋𝗉 → 𝖧_1(A,F) →^{𝖧_1(f)} 𝖧_1(B,F) → 0.
The same kind of results can be stated by replacing [F,F]_𝖦𝗋𝗉 = F*F with [F,F]_𝖱𝖺𝖽𝖱𝗇𝗀 and [K,F]_𝖦𝗋𝗉 with [K,F]_𝖱𝖺𝖽𝖱𝗇𝗀, so that the second homology radical ring
𝖧_2(B, G) of B (with coefficients in the reflector G: 𝖲𝖪𝖡→𝖱𝖺𝖽𝖱𝗇𝗀) is given by
𝖧_2 (B,G) ≅K ∩ [F,F]_𝖱𝖺𝖽𝖱𝗇𝗀/[K,F]_𝖱𝖺𝖽𝖱𝗇𝗀.
In case one chooses the subvariety 𝖡𝖱 of braces, one gets a new Hopf formula for the second homology of a skew brace
𝖧_2 (B,𝖻𝗋) ≅K ∩ [F,F]_𝖡𝖱/[K,F]_𝖡𝖱,
with coefficient functor 𝖻𝗋: 𝖲𝖪𝖡→𝖡𝖱, where the relative commutators [F,F]_𝖡𝖱 and [K,F]_𝖡𝖱 are described in Remark <ref>.
From this, as shown in <cit.>, one can also deduce a result concerning the lower central series. We explain here only the case of the subvariety 𝖠𝖻 of abelian groups; the same method, however, can be applied to the subvarieties 𝖦𝗋𝗉 of groups and 𝖱𝖺𝖽𝖱𝗇𝗀 of radical rings. One defines A^0 =A, and the (n+1)th term of the series inductively by A^n+1 = [A^n, A]_𝖠𝖻, for any n ≥ 0. Each A^n+1 = [A^n, A]_𝖠𝖻 is an ideal of A by Proposition <ref>, and the resulting series is a central series.
Given a morphism f: A → B in 𝖲𝖪𝖡, assume that
𝖧_1(f): 𝖧_1(A,𝖺𝖻) →𝖧_1(B,𝖺𝖻)
is an isomorphism and
𝖧_2(f): 𝖧_2(A,𝖺𝖻) →𝖧_2(B,𝖺𝖻)
is surjective. Then, for any n≥ 1, the induced morphism A/A^n→B/B^n is an isomorphism.
§.§ Acknowledgements
This work was partially supported by
the project OZR3762 of Vrije Universiteit Brussel, the
FWO Senior Research Project G004124N,
and the Fonds de la Recherche Scientifique under Grant CDR No.
J.0080.23. Letourmy is supported by FNRS.
| Skew braces appeared originally in connection with the study of set-theoretic solutions to the Yang–Baxter equation <cit.>. Now their applications go far beyond this domain,
as they appear in several different areas; see for example <cit.>.
A skew brace <cit.> is a triple (A,+,∘),
where (A,+) and (A,∘) are groups such that
the compatibility condition
a∘ (b+c)=a∘ b-a+a∘ c holds for all a,b,c∈ A.
Skew braces form a variety of universal algebras 𝖲𝖪𝖡, and generalise at the same time groups and radical rings. Concretely, given a group (G,·) one can give G a skew brace structure by taking +=· and ∘=·. In particular, this means that the variety 𝖦𝗋𝗉 of groups is a subvariety of the variety 𝖲𝖪𝖡 of skew braces. On the other hand, a radical ring (R,+,·) is a (not necessarily commutative) ring without unit such that the operation a∘ b=a+a· b+b is a group operation. In this case, the triple (R,+,∘) is a skew brace. We will recall in Section <ref> a characterisation of radical rings <cit.> that implies that radical rings also form a subvariety 𝖱𝖺𝖽𝖱𝗇𝗀 of the variety 𝖲𝖪𝖡 of skew braces.
Since
skew braces form a variety of Ω-groups in the sense of Higgins <cit.>, hence in particular a semi-abelian category <cit.> (in fact, even a strongly protomodular category <cit.>), it is natural
to look at non-abelian homology of skew braces with coefficient functors in the subvarieties 𝖦𝗋𝗉 of groups, 𝖠𝖻 of abelian groups, and 𝖱𝖺𝖽𝖱𝗇𝗀 of radical rings.
Indeed, in recent years there have been some relevant developments in non-abelian homological algebra (see <cit.>, for instance, and the references therein). The comonadic approach to homology theory of algebraic structures <cit.> turns out to be perfectly compatible with the fundamental concept of semi-abelian category, allowing one to extend and improve some classical results in group homology to a general categorical context including compact groups, crossed modules, Lie algebras and cocommutative Hopf algebras, for instance.
This article is a first step in the direction of applying the above approach to non-abelian homology to the variety of skew braces, by providing some precise descriptions of the second homology group of a skew brace in terms of a generalized Hopf formula.
In order to briefly explain this, it is useful to first recall this classical formula in the category 𝖦𝗋𝗉 of groups and its relationship with the notion of central extensions. Consider the adjunction
⊥ ["U"', shift right=2, from=1-1, to=1-3]
[""', shift right=3, from=1-3, to=1-1]
where 𝖺𝖻𝖦𝗋𝗉→𝖠𝖻 is the classical abelianisation functor sending a group G to the quotient 𝖺𝖻(G) = G/[G,G] of G by its derived subgroup [G,G]. Given a free presentation of a group G
0 → K ↣ F →^{f} G → 0
where F is a free group, consider the quotient F/[K,F] of F by the commutator subgroup [K,F], together with the induced morphism f̄: F/[K,F] ↠ G satisfying f̄∘ q = f (where q: F ↠ F/[K,F] is the canonical quotient). The morphism f̄ is then a weakly universal central extension of G <cit.>. The Galois group of this central extension f̄ of G turns out to be an invariant of G, called the fundamental group π_1(G) <cit.> of G, that is naturally isomorphic to the second integral homology group of G
π_1(G)≅K ∩ [F,F]/[K,F]≅𝖧_2 (G)
via the classical Hopf formula, i.e. the right-hand isomorphism. The commutators [K,F] and [F,F] appearing in this formula are “relative” to the chosen subvariety of the variety 𝖦𝗋𝗉 of groups, in this case the variety 𝖠𝖻 of abelian groups, so that they could also be denoted by [K,F]_𝖠𝖻 and [F,F]_𝖠𝖻, respectively. The homology group 𝖧_2 (G) can also be obtained by applying comonadic homology theory <cit.> to the variety 𝖦𝗋𝗉 with respect to the abelianisation functor 𝖺𝖻𝖦𝗋𝗉→𝖠𝖻, so that 𝖧_2 (G) ≅𝖧_2 (G, 𝖺𝖻), the right-hand side homology group now being the one arising from the comonadic approach by taking 𝖺𝖻 as coefficient functor:
K ∩ [F,F]_𝖠𝖻/[K,F]_𝖠𝖻≅𝖧_2 (G, 𝖺𝖻)
The comonadic homology theory <cit.> in the semi-abelian context (as developed in <cit.>), allows one to establish several generalized Hopf formulas for homology in other semi-abelian varieties than the variety of groups, relatively to a given subvariety (not necessarily the one of “abelian algebras”, as it is the case in the example above). The main difficulty in computing these formulas is to find an algebraic characterization of the central extensions (in the sense of <cit.>) with respect to the given subvariety, so that the denominator in the formula (<ref>) can be made explicit. We do this in the present article in the variety 𝖲𝖪𝖡 with respect to three particularly interesting subvarieties: the varieties 𝖱𝖺𝖽𝖱𝗇𝗀 of radical rings, 𝖦𝗋𝗉 of groups and 𝖠𝖻 of abelian groups. These are connected by the following adjunctions
F: 𝖲𝖪𝖡 ⇄ 𝖦𝗋𝗉 :U_G ,   G: 𝖲𝖪𝖡 ⇄ 𝖱𝖺𝖽𝖱𝗇𝗀 :U_R ,   𝖺𝖻: 𝖦𝗋𝗉 ⇄ 𝖠𝖻 :U ,   F̂: 𝖱𝖺𝖽𝖱𝗇𝗀 ⇄ 𝖠𝖻 :U_A ,
where U_A, U_R, U_G and U are inclusion functors, and F̂, G, F and 𝖺𝖻 their left adjoints.
First, in Section <ref>, we characterize the central extensions in 𝖲𝖪𝖡 corresponding to the subvariety 𝖦𝗋𝗉 of groups (Proposition <ref>): these are the surjective homomorphisms f: A → B of skew braces such that [Ker(f),A]_𝖦𝗋𝗉=0,
where [Ker(f),A]_𝖦𝗋𝗉
is the additive subgroup of A generated by all the elements of the form
{ a*b, b*a, c+b*a-c | a∈ Ker(f), b ∈ A, c∈ A },
where a*b=-a+a∘ b-b, which is an ideal of A.
Section <ref> is devoted to the study of central extensions of skew braces relative to the subvariety 𝖱𝖺𝖽𝖱𝗇𝗀 of radical rings.
For this purpose we introduce a new object, the radicalator (Definition <ref>).
This is generated as a subgroup of the additive group by the elements (a+b)∘ c -b∘ c +c -a∘ c and a+b-a-b for all a,b,c∈ A.
The radicalator turns out to be an ideal [A,A]_𝖱𝖺𝖽𝖱𝗇𝗀 (Proposition <ref>) playing, with respect to the reflection from 𝖲𝖪𝖡 to 𝖱𝖺𝖽𝖱𝗇𝗀, a role similar to that played by the derived subgroup in the reflection from 𝖦𝗋𝗉 to 𝖠𝖻. Indeed, the quotient A →A/[A,A]_𝖱𝖺𝖽𝖱𝗇𝗀 is the universal one transforming a skew brace A into a radical ring A/[A,A]_𝖱𝖺𝖽𝖱𝗇𝗀, and an extension f: A → B of skew braces is central relative to the subvariety 𝖱𝖺𝖽𝖱𝗇𝗀 if and only if [Ker(f), A]_𝖱𝖺𝖽𝖱𝗇𝗀 =0 (Theorem <ref>).
This pushes forward Rump's connection between skew braces and radical rings
<cit.> (see also <cit.>). We also observe that the central extensions of skew braces relative to the subvariety of braces (= skew braces of abelian type) can also be characterized in algebraic terms (Remark
<ref>).
In Section <ref>, we consider the reflection from 𝖲𝖪𝖡 to the subvariety 𝖠𝖻 of abelian groups, and characterize the corresponding central extensions. For this, given an ideal I of A, the additive subgroup generated by the set
{ [a,b]_+, a*b, [a,b]_∘ | a∈ I, b∈ A}
is shown to be an ideal (Proposition <ref>), denoted by [I,A]_𝖠𝖻 (here [a,b]_+ and [a,b]_∘ are the usual group-theoretic commutators with respect to the operations + and ∘, respectively). An extension f: A → B of skew braces is then central with respect to 𝖠𝖻 if and only if [Ker(f), A]_𝖠𝖻=0 (Proposition <ref>).
Note that these central extensions of skew braces have already been studied in <cit.>. We also observe that the commutator [Ker(f), A]_𝖠𝖻 coincides with the Huq commutator defined in <cit.> (Proposition <ref>).
In the last section we deduce the corresponding Hopf formulae for homology of skew braces, and
then apply the Stallings–Stammbach exact sequence holding in any semi-abelian variety <cit.> to these contexts. A result relating lower central series to low-dimensional homology concludes the article (Corollary <ref>).
This work opens the way to the study of relative commutators in skew braces in the sense of <cit.>, as well as to the theory of higher central extensions of skew braces, following the recent developments in categorical algebra <cit.>. | null | null | null | null | null |
http://arxiv.org/abs/2409.18130v1 | 20240926175955 | Bridging 4D QFTs and 2D VOAs via 3D high-temperature EFTs | [
"Arash Arabi Ardehali",
"Mykola Dedushenko",
"Dongmin Gang",
"Mikhail Litvinov"
] | hep-th | [
"hep-th",
"math-ph",
"math.MP",
"math.QA",
"math.RT"
] |
The high-temperature limit of the superconformal index, especially on higher sheets, often captures useful universal information about a theory. In 4d 𝒩=2 superconformal field theories with fractional r-charges, there exists a special notion of high-temperature limit on higher sheets that captures data of three-dimensional topological quantum field theories arising from r-twisted circle reduction. These TQFTs are closely tied with the VOA of the 4d SCFT. We study such high-temperature limits. More specifically, we apply Di Pietro-Komargodski type supersymmetric effective field theory techniques
to r-twisted circle reductions of (A_1,A_2n) Argyres-Douglas theories, leveraging their Maruyoshi-Song Lagrangian with manifest 𝒩=1 supersymmetry. The result on the second sheet is the Gang-Kim-Stubbs family of 3d 𝒩=2 SUSY enhancing rank-0 theories with monopole superpotentials, whose boundary supports the Virasoro minimal model VOAs M(2,2n+3). Upon topological twist, they give non-unitary TQFTs controlled by the M(2,2n+3) modular tensor category (MTC). The high-temperature limit on other sheets yields their unitary or non-unitary Galois conjugates. This opens up the prospect of a broader four-supercharge perspective on the celebrated correspondence between 4d 𝒩=2 SCFTs and 2d VOAs via interpolating 3d EFTs. Several byproducts follow, including a systematic approach to 3d SUSY enhancement from 4d SUSY enhancement, and a 3d QFT handle on Galois orbits of various MTCs associated with 4d 𝒩=2 SCFTs.
§ INTRODUCTION AND SUMMARY
The tools of two-dimensional Conformal Field Theory (CFT) <cit.> and three-dimensional Topological Quantum Field Theory (TQFT) <cit.> permeate physics in diverse dimensions.
Vertex operator algebras (VOAs) and their representation categories play a special role here, both controlling the structure of 2d CFT and 3d TQFT, and appearing in other dimensions.
In particular, a far-reaching correspondence was discovered a decade ago between a subsector of 4d 𝒩=2 superconformal field theories (SCFTs) and 2d vertex operator (super)algebras <cit.>.
We refer to it as the SCFT/VOA correspondence in this paper.
Recently, it has been put in a broader context <cit.>, relating it with the 3d TQFT and 2d CFT in a natural way.
Exploring this link in greater detail holds promise of putting the world of (super)conformal field theories under better control.
The SCFT/VOA correspondence inspired a plethora of developments in recent years.
It has guided, on the one hand, explorations of the landscape of 𝒩=2 superconformal field theories, and on the other hand, foundational developments in logarithmic vertex operator algebras. A sample of works in the first direction is
<cit.> and in the second <cit.>. See the surveys <cit.> for more context and related directions of research.
In this paper, building on <cit.> on the one hand, and <cit.> on the other, we present a 3d effective field theory (EFT) bridge between the two sides that sheds significant light on the correspondence in the rational case.[The high-temperature expansions in <cit.> suggest that a single 3d EFT might fall short of capturing logarithmic modules of the VOA, and a direct sum of multiple 3d EFTs may be needed in that case. This possibility is currently being studied.] In particular, while in the original 4d/2d context, non-vacuum modules of the VOA correspond to surface defects in 4d <cit.>, whose fusion category is rather poorly understood, in the 3d EFT, they correspond to line operators, which are under much better control (cf. <cit.>).
The 3d bridge arises via an R-twisted circle reduction <cit.> of the 4d SCFT as follows. We begin with the holomorphic-topological (HT) twist <cit.> of the 4d SCFT on a (holomorphic) Riemann surface Σ times a (topological) cigar C. This yields a commutative vertex Poisson algebra on Σ <cit.>. We then quantize the latter into a non-commutative VOA by localizing the transverse excitations to the tip of the cigar via Ω-deformation <cit.> along C <cit.>. Reduction to 3d is now possible along the angular direction of the cigar, where fields acquire U(1)_r-twisted boundary conditions due to the unit U(1)_r flux involved in the topological twist <cit.>.
We restrict the discussion here to what may be considered the simplest setting of the 4d/2d correspondence, where the VOA is just the Virasoro minimal model. On the 4d side, we have the (A_1,A_2n) series of Argyres-Douglas theories <cit.>, and on the 2d side – the M(2,2n+3) series of minimal models <cit.>. By applying the R-twisted reduction to (A_1,A_2n), we will derive as the 3d bridge the 𝒯_n series of Gang-Kim-Stubbs <cit.> rank-0 3d 𝒩=4 superconformal theories. The category of Wilson lines in these 3d theories is under good control (via systems of polynomial Bethe equations in the 3d A-model <cit.>, for instance,) and known to be in correspondence with that of the modules in M(2,2n+3) <cit.>. Upon the 3d topological A-twist (the mirror of B-twist, also known as Rozansky-Witten or Blau-Thompson twist) <cit.>, these theories lead to semi-simple non-unitary 3d TQFTs. Such TQFTs are captured by some modular tensor categories (MTC) <cit.>, and in our case, not surprisingly, these are the categories of modules for M(2,2n+3).
§.§ R-twisted circle reduction
The R-symmetry group of the 4d 𝒩=2 superconformal group is SU(2)_R× U(1)_r . For our purposes, the main result of <cit.> can be summarized as follows: the 4d 𝒩=2 SCFT reduced on a circle of length β, with U(1)_r-twisted (supersymmetric) boundary conditions around the circle on all fields ϕ:
ϕ(x+β)= e^iπ (F+2r) ϕ(x),
yields a 3d 𝒩=4 SCFT, whose topological A-twist supports the desired 2d VOA on its holomorphic boundary. The quantum numbers F and r in (<ref>) are the fermion number and the U(1)_r charge of the field ϕ.
Through this device, the 4d/2d correspondence reduces to the more familiar realm of 3d/2d correspondences between TQFTs and their boundary VOAs, which have longer history and are far more developed since their inception in <cit.>. Among other things, the emergence of infinite-dimensional chiral symmetry is now largely clarified.
More precisely, starting from a generic 4d 𝒩=2 SCFT, the twisted reduction followed by the topological twist may yield a TQFT with local operators. This happens when the 3d theory has Higgs and/or Coulomb branches of vacua <cit.>. Such TQFTs constitute a more general type of bulk/boundary correspondence <cit.>. (See also <cit.>) In our setting, however, the (A_1, A_2n) theories have no 4d Higgs branch to begin with, and the 4d Coulomb branch is lifted via the R-twisting, so our TQFTs will have no local operators (even in the presence of line defects), and we will be in the standard territory.
It remains to identify the 3d 𝒩=4 SCFT arising from the R-twisted reduction, or better, its A-twisted version supporting the VOA on its boundary. The main result of the present paper is that this remaining challenge is overcome efficiently via the EFT tools forged in the context of Cardy limits of the 4d superconformal index <cit.>.
§.§ SUSY index on the second sheet
Consider the 4d 𝒩=2 superconformal index defined as <cit.>:
ℐ_t(p,q,t):=Tr^_S^3(-1)^F p^j_1+j_2+rq^j_1-j_2+rt^I_3-r,
where I_3 is the generator of the Cartan of SU(2)_R, and j_1,j_2 are the Lorentz spins.
Going to the (γ+1)st sheet of the index via p→ p e^2π iγ, we get (cf. <cit.>):
ℐ^γ_t(p,q,t):=ℐ_t(p e^2π iγ,q,t) =Tr (-1)^F e^2π i γ (j_1+j_2+r) p^j_1+j_2+rq^j_1-j_2+rt^I_3-r
=Tr (-1)^F e^iπγ(F+2r) p^j_1+j_2+rq^j_1-j_2+rt^I_3-r,
where on going to the second line we used (-1)^F+2(j_1+j_2)=1.
The index can be computed via localization of the path-integral on a Hopf surface with topology S^3× S^1 <cit.>. Writing p=e^2π iσ, q=e^2π iτ, the complex-structure moduli σ,τ of the Hopf surface encode the length β of the circle, the squashing parameter b of the unit three-sphere, and two angular twist parameters Ω_1,2 as (cf. <cit.>)
σ=iβ/2π(b+i Ω_1) , τ=iβ/2π(b^-1+i Ω_2) .
The insertion of e^iπγ(F+2r) inside the trace corresponds to the 4d fields having twisted boundary conditions around the S^1 (cf. Eq.(1.6) of <cit.>):
ϕ(x+β)= e^iπγ(F+2r) ϕ(x).
For γ=1, corresponding to the 2nd sheet, this is exactly our desired U(1)_r-twist in (<ref>). This twist appeared in <cit.>, within the Ω-deformation approach to the SCFT/VOA correspondence, as a combination of the usual (-1)^F (due to the NS spin structure arising from folding the topological plane of the HT twist into a cigar) with an e^2π i r arising from the unit U(1)_r flux involved in the topological twist. Higher sheets, as will be explained below, allow making contact with the Galois orbits discussed in <cit.>.
In the body of the paper we focus on the 𝒩=1 limit of ℐ_t^γ(p,q,t), which we denote by ℐ^γ(p,q). This is the index that has been better understood from an EFT perspective in <cit.>; other choices will be discussed in Section <ref>. The 𝒩=1 limit corresponds to setting t=(pq)^2/3, and thus our index of interest is[Note that the 2nd sheet index of <cit.> is instead ℐ_there=ℐ_t(p e^2π i,q,(pq)^2/3e^4π i/3). Different notions of “2nd sheet” correspond to different subgroups of the R-symmetry used in the twisting.]
ℐ^γ=1(p,q):=ℐ_t(p e^2π i,q,(pq)^2/3).
In fact, since we are primarily interested in topological structures descending from 4d to 3d, we further set b=1, Ω_1,2=0, implying p=q∈ℝ. The relevant index is hence
ℐ^γ=1(q):=ℐ^γ=1(q,q)=ℐ_t(q e^2π i,q,q^4/3).
With the R-twisting implemented as above, we can now perform the reduction via the Cardy limit q→1 of the index ℐ^γ=1(q).
§.§ EFT data from the Cardy limit
The tools and formulas developed for analyzing the Cardy limit of the 4d 𝒩=1 index in <cit.> will be upgraded here to an efficient procedure for finding the data of the 3d EFT arising from the R-twisted reduction. The procedure requires as an input a 4d 𝒩=1 Lagrangian description of the 4d 𝒩=2 SCFT, which for the (A_1,A_2n) series is available thanks to the work of Maruyoshi and Song <cit.>.
There are three steps to the procedure:
* EFT field content is found via a (patch-wise) saddle-point variant of the real-analytic methods used in <cit.>. This variant, as explained in Section <ref>, solves Problem 1 in <cit.>. The resulting holonomy saddles <cit.> of ℐ^γ=1 yield abelian EFTs from the non-abelian Maruyoshi-Song starting points for (A_1,A_2n).
* EFT Chern-Simons couplings are found via formulas from <cit.>. For (A_1,A_2) the Chern-Simons (CS) coupling derived as such, and the field content obtained as above, match with those of the Gang-Yamazaki theory <cit.>, confirming the conjecture in <cit.>. Moreover, as explained in Section <ref>, the theory obtained from 4d in this way comes equipped with a gravitational CS coupling that resolves via inflow the 't Hooft anomaly matching puzzle raised in <cit.>.
* EFT monopole superpotentials are found via an upgraded version of the technique used in <cit.> for diagnosing (multi-) monopole superpotentials from the periodic polynomials (<ref>) and (<ref>) (the former vanished in <cit.>, and the latter was referred to as the Rains function there). For (A_1,A_4) the monopole superpotential derived as such, together with the field content and CS couplings obtained as above, match those of the Gang-Kim-Stubbs 𝒯_2 theory <cit.>, confirming the conjecture in <cit.>. Moreover, our 4d derivation provides us with a microscopic mechanism for dynamical generation of the monopole superpotential à la Affleck-Harvey-Witten <cit.> in the compactified Maruyoshi-Song theory. As explained in Section <ref>, more generally, the monopole superpotentials of all 𝒯_n theories can be thought of as arising from the Affleck-Harvey-Witten type mechanism in the parent Maruyoshi-Song theory. See <cit.> for more details.
We thus establish the Gang-Kim-Stubbs family 𝒯_n of 3d 𝒩=4 SCFTs (including the Gang-Yamazaki minimal theory <cit.> for n=1) as the 3d bridge in the 4d/2d correspondence between (A_1,A_2n) and M(2,2n+3). Indeed, the A-twisted 𝒯_n theory is known via the half-index calculations <cit.> to host M(2,2n+3) on its boundary. Our approach to the family, however, brings to light additional background CS couplings supplied by the 4d UV completion, which resolve anomaly mismatches of the kind noticed in <cit.>.
§.§ Higher sheets and Galois orbits
Our approach also brings to light, following <cit.>, a whole collection of other three-dimensional QFTs closely related to the 𝒯_n family.
These arise from non-minimal R-twists in (<ref>) corresponding to 1<γ<2n+3, with the upper bound present because γ∈ℤ_2n+3 <cit.>. The associated EFT data can be obtained from the Cardy limit q→1 of the higher-sheet indices
ℐ^γ(q):=ℐ_t(q e^2π iγ,q,q^4/3).
The 3d EFTs we find are either gapped and flow to unitary TQFTs, or enjoy a U(1) flavor symmetry with respect to which the F-maximization gives 𝒩=4 SCFTs whose A-twist yields non-unitary TQFTs. These higher-sheet 3d TQFTs (some of which are unitary and some are not) are labeled by γ∈{1,…,2n+2}, or equivalently, by the roots of unity e^2π i γ/2n+3, and denoted TQFT^γ. It is natural to expect that they are permuted by the Galois group associated to the extension ℚ(ζ), where ζ= e^2π i/2n+3 is a simple root of unity. Such a group is known to be the group of units in ℤ_2n+3:
Gal(ℚ(ζ)/ℚ) = ℤ_2n+3^×.
This suggests a tantalizing connection to the Galois group action on the TQFT data <cit.>, as was noticed in <cit.>. More specifically, when a TQFT is determined by an underlying modular tensor category, the MTC data obeys polynomial equations, and one may consider various Galois groups acting on MTCs <cit.>. The most relevant for us is the Galois group of the modular data, acting on the modular S and T matrices. It is the Galois group of the extension of ℚ by the matrix elements of S and T. This group is usually bigger than ours, given by ℤ_N^×, where N is called the conductor <cit.> (see also <cit.>). If we, however, choose the normalization T_00=1 (rather than e^-2π i c/24), then the Galois group acting on the modular data of our 3d theories TQFT^γ is precisely Gal(ℚ(ζ)/ℚ) given in (<ref>).
Note that when 2n+3 is prime, we have ℤ_2n+3^×≅ℤ_2n+2, and the possible values γ∈{1,…,2n+2} constitute the group's single orbit. For non-prime 2n+3, the set {1,…,2n+2} breaks into multiple orbits of ℤ_2n+3^×. The orbit of our interest in this paper is the one containing γ=1, whose members are labeled by γ coprime to 2n+3. For such γ, the Coulomb branch is fully lifted in the 3d limit. The other orbits would require dealing with unlifted Coulomb branches which are beyond the scope of this work.
We thus obtain from a single 4d theory, by considering different values of γ relatively prime to 2n+3, a sequence of 3d TQFTs:
TQFT^γ=1_non-uni , TQFT^γ=2_(non-)uni , ⋯, TQFT^γ=2n+1_(non-)uni , TQFT^γ=2n+2_non-uni
whose modular S and T matrices are related via the Galois/Frobenius conjugation (cf. <cit.>). The γ=1 TQFT is of course the non-unitary A-twisted bridge in the 4d/2d correspondence <cit.>.
Our evaluation of the TQFT S and T matrices is based on the 𝒩=2 field content and CS levels (see e.g. (<ref>)), using Nekrasov-Shatashvili type Bethe root techniques <cit.>, via the handle-gluing and fibering operators used in the BPS surgery (see Appendix <ref>). At its current stage of development, this approach yields results up to an overall sign ambiguity in S, and an overall phase ambiguity in T. Fixing the normalization of the latter by T_00=1 allows one to unambiguously talk about the Galois group action. The physical T_00, however, is some phase: our ignorance of this phase reflects the possibility of multiplying a TQFT by an invertible TQFT <cit.>.
Like in <cit.>, our main example here is (A_1,A_2) (although we also consider (A_1, A_4)). On the second sheet γ=1, we of course find the Lee-Yang (LY) MTC. In this case, 2n+3=5, the relevant Galois group is ℤ_5^× = ℤ_4, and the Galois orbit also contains, in addition to LY, the conjugate Fibonacci, Fibonacci, and conjugate Lee-Yang modular tensor categories:
[Circular diagram, clockwise: LY ⟶ conjugate Fib ⟶ Fib ⟶ conjugate LY ⟶ back to LY.]
The natural expectation is to see this orbit on our four sheets, for γ=1 to 4, respectively. Computations of S, T in the normalization T_00=1 indeed corroborate this expectation. However, once we try to account for the phase of T_00, things start to slightly deviate from <cit.>.
Our concrete EFT data enable us to take a further step here and access the boundary VOAs of these TQFTs via half-index calculations as in <cit.>. In the (A_1,A_2) case, we obtain the LY≅ M(2,5) characters <cit.> on the second sheet (i.e. γ=1), as expected from the SCFT/VOA correspondence. Similarly, on the fifth sheet (i.e. γ=4) we see evidence of LY, thus realizing half of the Galois/Hecke orbit at the level of VOA characters (cf. <cit.>). On the third and fourth sheet, however, where the 3d EFT is gapped, the half-index computations indicate a spin-TQFT. Since we study the low-energy regimes of supersymmetric gauge theories, this finding should not be too surprising. We perform a half-index calculation for the TQFT^γ=2_uni on the third sheet of (A_1,A_2), with Dirichlet boundary conditions in Section <ref>. The resulting characters are those of a fermionic VOA, namely the fermionized tricritical Ising (known as the supersymmetric minimal model SM(3,5)) times a free Majorana fermion (often denoted as SO(1)_1). On the fourth sheet, we find evidence that TQFT^γ=3_uni is the conjugate of the third sheet spin-TQFT. Thus we propose:
M(2,5) ⟶ (F_4)_1 ?! ⟶ (G_2)_1 ⟶ (E_7 1/2)_1 ?!
[Circular diagram for (A_1,A_2), clockwise: M(2,5) ⟶ SM(3,5)⊗SO(1)_1 ⟶ conjugate SM(3,5)⊗SO(1)_1 ⟶ conjugate M(2,5) ⟶ back to M(2,5).]
At first, this looks very disappointing, as it differs from the Galois orbit shown earlier. However, as we explain in Section <ref>, SM(3,5)⊗SO(1)_1 actually agrees with Fib≅(F_4)_1 up to multiplication by an invertible spin-TQFT. Likewise, SM(3,5)⊗SO(1)_1 agrees with Fib≅(G_2)_1 up to an invertible factor. Therefore, our findings still agree with the Galois orbit proposal formulated up to invertible factors:
Higher-sheet TQFTs form a Galois orbit, up to invertible spin-TQFT
We test this proposal by computing half-indices of various natural boundary conditions and identifying them as characters of the corresponding VOAs. For the third sheet theory, besides the SM(3,5)⊗SO(1)_1 characters on the right Dirichlet boundary, we also find the (G_2)_1 ≅ Fib characters on the left enriched Neumann boundary, as explained in Section <ref>. On the fourth sheet, we find the same, with the left and right boundaries swapped. This implies a level-rank type duality between SM(3,5)⊗SO(1)_1 and (G_2)_1, analogous to <cit.>. Notice also that up to invertible factors, we have Fib and Fib on the opposite boundaries, which is expected.
We also note in passing that the LY and LY theories are mirror duals of each other.
It would be interesting to realize the remaining (F_4)_1≅ Fib <cit.> and (E_7 1/2)_1≅ LY <cit.> characters.
We do not explore the Galois orbit of (A_1,A_4) in as much detail, but we find that besides the Gang-Kim-Stubbs 𝒯_2 theory arising on the 2nd sheet, quite interestingly, its mirror arises on the 3rd sheet, suggesting that more generally, the mirror is always on the Galois orbit of 𝒯_n. We also uncover a fascinating unitary TQFT with non-abelian gauge group and fractional monopoles on the fourth sheet of (A_1,A_4), which may be seen as a harbinger of richer possibilities at larger n.
We hope this concrete 3d perspective on the Galois orbits of 4d 𝒩=2 SCFTs finds further synthesis with related results in <cit.>.
§.§ Outline of the paper
In Section <ref> we outline how the twisted reduction of 4d 𝒩=1 gauge theories can be tracked via EFT tools developed in the context of Cardy-like limits of the 4d superconformal index. While most of Section <ref> is review of known results, the extremely simple relations in (<ref>) are new. We use them here to sharpen the twisted reduction procedure, but they have wider implications that will be explored elsewhere. In Section <ref> we apply the R-twisted reduction procedure to (A_1,A_2), reproducing via the minimal twist the Gang-Yamazaki theory, properly dressed with a gravitational CS coupling that resolves the anomaly mismatch puzzle of <cit.>. With non-minimal R-twists, we obtain the Galois orbit of the Gang-Yamazaki theory, up to invertible spin-TQFT factors. In particular, on the third (resp. fourth) sheet we obtain a spin-TQFT with S and T^2 matrices matching those of the conjugate Fibonacci (resp. Fibonacci) MTC, up to an overall phase of T, that supports SM(3,5)⊗SO(1)_1 characters on a Dirichlet boundary (resp. (G_2)^_1 characters on a Neumann boundary).
In Section <ref> we explore part of the Galois orbit arising from the R-twisted reduction of (A_1,A_4), making contact with the 𝒯_2 theory of Gang-Kim-Stubbs on the second sheet, its mirror on the third sheet, and an intriguing TQFT with non-abelian gauge group on the fourth sheet. Some remaining open questions are discussed in Section <ref>, and various technical calculations are outlined in Appendices <ref> and <ref>. Appendix <ref> summarizes our toolkit for studying 3d 𝒩=2 gauge theories: the 3d superconformal index, which we use as a diagnostic tool to verify that our (possibly twisted) EFTs flow to TQFTs without local operators; the squashed three-sphere partition function, which arises in the Cardy limit of the 4d 𝒩=1 index as explained in Appendix <ref>; the Bethe root techniques, which we use to determine S and T^2 matrices from the EFT data; half-indices with Neumann or Dirichlet boundary conditions, which we use to find characters of the VOAs on holomorphic boundaries of our TQFTs.
§ CARDY LIMIT OF THE 4D INDEX AND 3D MONOPOLES
The superconformal indices ℐ^γ(p,q) of our interest in this work are made out of q-Pochhammer symbol and elliptic gamma function building blocks:
(z;q):=∏_k=0^∞(1-zq^k), Γ(z;p,q):=∏_j,k≥ 0 (1-z^-1p^j+1q^k+1)/(1-z p^jq^k).
We often denote Γ(z;p,q) by Γ_e(z) for simplicity, keeping the dependence on p,q implicit. Note that
1/Γ(z;p,q)=Γ(z^-1pq;p,q).
For a general 4d 𝒩=1 gauge theory with a U(1)_R symmetry, with a semi-simple gauge group G and flavor group F, the index takes the form (cf. <cit.>):
ℐ^γ(q)=(q;q)^2r_G1/|W|∫_𝔥_cl∏_j=1^r_Gd x_j ∏_χ∏_ρ^χΓ_e(z^ρ^χ q^r_χ e^2π i q^χ·ξ)/∏_α_+Γ_e(z^α_+) Γ_e(z^-α_+),
where |W| is the order of the Weyl group and r_G the rank of G. The parameters x_j will be referred to as the holonomies. They parametrize a moduli space denoted 𝔥_cl, which we write explicitly as
𝔥_cl=(-1/2,1/2]^r_G.
The r_G-tuple x_1,…,x_r_G is denoted x, and the r_F-tuple ξ_1,…,ξ_r_F is denoted ξ. The positive roots of G are denoted α_+, and the weights of the gauge group representation of the chiral multiplet χ are denoted ρ^χ. We have defined z^ρ:=z_1^ρ_1⋯ z_r_G^ρ_r_G, and similarly for z^α, with z_j:=e^2π i x_j. The r_F-tuple flavor charge of χ is denoted q^χ, and its U(1)_R charge r_χ. We assume r_χ∈(0,2) for all χ.
This guarantees, among other things <cit.>, a universal large-β (or “low-temperature”) limit <cit.>.
In our application to R-twisted reductions below, we will encounter phases of the form e^2π i q^χ·ξ in the arguments of the gamma functions arising from the replacement p→ q e^2π iγ as in (<ref>). See for example the phases in (<ref>). We do not consider actual flavor fugacities in this work (except in Appendix <ref>, but there we will take ξ to be 𝒪(β), rather than 𝒪(β^0) as in the formal discussion of the present section).
Defining β via q=e^-β, we refer to the q→1^- limit of the index as the Cardy limit. In this limit the Pochhammer symbols in (<ref>) simplify as
log (q;q)=
-π^2/6β-1/2logβ+𝒪(β^0).
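As a quick numerical illustration of this estimate (our own sanity check, not needed in what follows), one can evaluate log(q;q) directly and verify that subtracting the two displayed terms leaves a β-independent remainder as β→0.

```python
# Check (ours) of the small-beta estimate of log(q;q) with q = e^{-beta}:
# the remainder log(q;q) + pi^2/(6 beta) + (1/2) log(beta) should approach a constant.
import math

for beta in (0.2, 0.1, 0.05, 0.02):
    q = math.exp(-beta)
    log_poch = sum(math.log1p(-q ** k) for k in range(1, 20000))
    remainder = log_poch + math.pi ** 2 / (6 * beta) + 0.5 * math.log(beta)
    print(beta, round(remainder, 6))   # tends to (1/2) log(2 pi) ~ 0.918939
```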
Decomposition of the moduli space and the outer patch. The index (<ref>) can be thought of as a supersymmetric partition function on S^3× S^1, with 𝔥_cl/W the (classical) moduli-space of flat connections. From a Kaluza-Klein perspective, 𝔥_cl/W coincides with a middle-dimensional section of the classical Coulomb branch of the 3d field theory living on S^3 (the other half of the Coulomb branch is parameterized by the dual photons). A subset 𝒮 of 𝔥_cl referred to as the “singular set”,
𝒮_g:=⋃_α_+{x∈𝔥_cl| α_+·x∈ ℤ}, 𝒮_χ:=⋃_ρ^χ≠0{x∈𝔥_cl| ρ^χ·x+q^χ·ξ∈ℤ},
𝒮:=⋃_χ𝒮_χ∪𝒮_g,
supports additional massless modes compared to generic x. We excise an ϵ neighborhood 𝒮_ϵ of the singular set, and refer to the rest of 𝔥_cl as the outer patch. The details of the excision scheme are largely irrelevant for our purposes here, but an important point is that for small enough ϵ the outer patch consists of multiple disconnected components out_n separated from each other by 𝒮_ϵ. One can in turn decompose 𝒮_ϵ into various inner patches. We refer the interested reader to <cit.> for precise definitions, and here just illustrate the ideas with a couple of simple examples as in Figures <ref> and <ref>.
The leading asymptotics of the index ℐ^γ(q) in this limit can be found using the estimate <cit.> (cf. <cit.>):
logΓ_e(z q^r) = i 8π^3/β^2κ(x)/12
-4π^2/β1-r/2ϑ(x)-π^2/3β(r-1)+O(β^0),
where
ϑ(x) :={x}(1-{x}),
κ(x) :={x}(1-{x})(1-2{x}),
with {x}:=x-⌊ x⌋.
Using (<ref>) and (<ref>), we find in the β→0 limit (cf. Eq. (3.9) of <cit.>):
ℐ^γ(q)≈ e^-π^2/3βTrR(2π/β)^r_G∫_𝔥_cld^r_Gxe^-4π^2/βL_h(𝐱)+i8π^3/β^2Q_h(𝐱),
where the two functions in the exponent[The role that Q_h,L_h play in the 4d→3d reduction of four-supercharge gauge theories is in some ways analogous to that played by W,Ω in the 3d→2d reduction <cit.>; see Appendices <ref> and <ref>. This is seen most clearly from the 4d A-model perspective <cit.>. In particular, Q_h (resp. W) encodes black hole entropy in AdS_5/CFT_4 (resp. AdS_4/CFT_3) via Cardy limit of the 4d (resp. 3d) superconformal index <cit.>. Alternatively, Q_h,L_h may be thought of as periodic polynomials encoding various dynamical and contact CS couplings of the 3d KK (or thermal) EFT arising from compactification on a circle <cit.>.] are given by
Q_h(𝐱):=1/12∑_χ∑_ρ^χκ(ρ^χ·𝐱+q^χ·ξ),
L_h(𝐱):= 1/2∑_χ(1-r_χ)∑_ρ^χϑ(ρ^χ·𝐱+q^χ·ξ)-∑_α_+ϑ(α_+·𝐱).
Even though ϑ and κ are piecewise quadratic and cubic respectively, anomaly cancellations make L_h and Q_h piecewise linear and quadratic (hence the letters). Moreover, it follows from their building blocks ϑ and κ that L_h is continuous while Q_h has continuous first derivatives.
Note that the inner-patch contributions are neglected in (<ref>), and ϵ is sent to zero (the integration domain is all of 𝔥_cl). This is because we are interested mainly in the leading parts of the asymptotics. As explained in <cit.>, to obtain subleading asymptotics one has to take into account the inner-patches. Typically the small-β expansion of logℐ^γ(q) is of the form:
logℐ^γ(q)=#/β^2+#/β+#log1/β+𝒪(β^0).
The inner-patch contributions become crucial at 𝒪(β^0) and higher. An example of the inner-patch analysis will be discussed in Appendix <ref>.
The integrand in (<ref>) is piecewise analytic, so can be treated via the saddle-point method in its domains of analyticity. The saddle-point method shows in these domains that the leading term of the exponent containing i Q_h/β^2 is the most important. So the saddles correspond to stationary points of Q_h. In fact, since the first derivative of Q_h is continuous, one can forget about non-analyticity and simply look for stationary points of Q_h on all of 𝔥_cl. If the locus of such stationary points is extended, or consists of multiple points, then one has to find the subset of that locus where L_h is minimized. Let us denote this subset by 𝔥_qu. The saddle-point method then yields for the asymptotic of the index (cf. <cit.>):
ℐ^γ(q)≈ e^-π^2/3βTrR(2π/β)^dim 𝔥_qu
e^-4π^2/βL_h(𝐱^∗)+i8π^3/β^2Q_h(𝐱^∗),
where 𝐱^∗ is any point on 𝔥_qu.
The elementary fact that patch-wise application of the saddle-point method (together with the continuity argument for the derivative of Q_h) can determine the asymptotics of (<ref>) was missed in <cit.>, partly because the examples treated there were not rich enough to necessitate complex-analytic tools. The saddle-point analysis outlined above (augmented with a straightforward adaptation of the inner-patch results in <cit.>) resolves Problem 1 in <cit.>.[Problem 1.1 in <cit.> will be addressed by the example in Section <ref>, which is the first case we have encountered where there is conflict between making Q_h stationary and minimizing L_h, so that complex-analytic tools are required for the asymptotic analysis. Problems 2 and 3 there are still open; see Section 4.2 of <cit.> for more comments. Problem 4 there appears straightforward if one imposes gauge-gravity-gravity and gauge-R-R anomaly cancellation.] Note that while the asymptotic relation (<ref>) had appeared in a different guise in <cit.>, the real-analytic derivation there via minimization of Re(iQ_h/β^2) does not work when β∈ℝ, because then Re(iQ_h/β^2) would vanish; the complex-analytic, patch-wise saddle-point derivation here fills that gap.
Dominant patches and their field content. The asymptotic in (<ref>) can be improved to exponential accuracy by incorporating the contributions of all patches that intersect 𝔥_qu. These will be the dominant patches. All other patches make exponentially smaller contributions to the asymptotic of the index.
To each patch, and in particular to each dominant patch, we can associate a 3d 𝒩=2 field content. Besides the r_G photon multiplets present everywhere on 𝔥_cl, there will be light chiral multiplets for every instance of
ρ^χ·x+q^χ·ξ∈ℤ ,
inside the inner patch, and (pairs of) massless vector multiplets for every instance of
α_+·x∈ℤ .
EFT couplings. In the present work we focus on cases with dim𝔥_qu=0. In fact we will be considering examples where 𝔥_qu consists of a single point x^∗ (modulo Weyl orbit redundancies). The small-β asymptotics of the 4d index is then captured by the S^3 partition function of an EFT with field content as described above, and with various induced Chern-Simons couplings in the uv (i.e. at scale ∝ϵ/β). Denote the set of heavy 4d multiplets (those that do not yield any light fields in the 3d EFT) at the saddle x^∗ by H_∗. Denote the complement set, consisting of light 4d multiplets, by L_∗. The induced couplings can be obtained from the formulas <cit.>:
k^∗_ij =-∑_χ∑_ρ^χ∈ H_∗B_1 (ρ^χ·x^∗ +q^χ·ξ) ρ^χ_i ρ^χ_j,
k^∗_j R =-∑_χ∑_ρ^χ∈ H_∗B_1 (ρ^χ·x^∗ +q^χ·ξ) ρ^χ_j (r_χ-1)-∑_α∈ H_∗B_1 (α·x^∗) α_j ,
k^∗_R R =-∑_χ∑_ρ^χ∈ H_∗B_1 (ρ^χ·x^∗ +q^χ·ξ) (r_χ-1)^2,
k^∗_grav =-2∑_χ∑_ρ^χ∈ H_∗B_1 (ρ^χ·x^∗ +q^χ·ξ).
The function B_1(x), displayed in Figure <ref>, is defined as[In computer implementations it may pay off to set B_1=0 in a tiny window (of width say 10^-3) around ℤ, banishing the discontinuities where they are unlikely to interfere with calculations.]
B_1(u):={u}-1/2 for u∉ℤ ,
0 for u∈ℤ .
For the factor of 2 arising for k_grav and absent in k_ij,k_jR,k_RR, see e.g. Appendix A of <cit.>. The corresponding CS terms are in our conventions
i/4π∫d^3x √(g) ϵ^μνρ𝒜^i_μ ∂_ν𝒜^j_ρ ,
i/4π∫d^3x √(g) ϵ^μνρ𝒜^j_μ ∂_ν𝒜^(R)_ρ ,
i/4π∫d^3x √(g) ϵ^μνρ𝒜^(R)_μ ∂_ν𝒜^(R)_ρ ,
i/192π∫d^3x √(g) ϵ^μνρ Tr(ω_μ ∂_νω_ρ-23 ω_μω_νω_ρ) .
From an EFT perspective, the formulas (<ref>) arise from zeta-function regularization of the KK sums over the familiar contributions
δ k_xy=1/2 sign(m) q_x q_y ,
to the one-loop-exact CS coupling generated by integrating out a fermion of real-mass m with charges q_x,y under the gauge fields 𝒜^x,y <cit.>. For the contribution of a ρ^χ∈ H_∗ to the gauge-gauge CS coupling for instance we have <cit.>
∑_n∈ℤ1/2 sign(n+ρ^χ·x^∗ +q^χ·ξ) ρ^χ_i ρ^χ_j ⟶ -B_1 (ρ^χ·x^∗ +q^χ·ξ) ρ^χ_i ρ^χ_j,
where we have used the zeta-regularized sum ∑_n∈ℤ sign(n+x) ⟶ -2 B_1(x).
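The following small numerical sketch (ours) illustrates the regularization just described: damping the KK sum with e^(-ϵ|n+x|) and removing the regulator indeed reproduces -B_1(x) for non-integer x.

```python
# Numerical illustration (ours) of the regularized KK sum: with exponential damping,
# sum_n (1/2) sign(n+x) e^{-eps|n+x|} approaches -B_1(x) as eps -> 0 (x not an integer).
import math

def B1(x):
    return 0.0 if abs(x - round(x)) < 1e-12 else (x - math.floor(x)) - 0.5

def damped_sum(x, eps, N=100000):
    return sum(0.5 * math.copysign(1.0, n + x) * math.exp(-eps * abs(n + x))
               for n in range(-N, N + 1))

for x in (0.2, 0.6, 0.8):
    print(x, round(damped_sum(x, eps=1e-3), 4), -B1(x))   # e.g. 0.2 -> 0.3 = -B_1(0.2)
```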
The breakthrough realization of <cit.>, further corroborated in <cit.>, was that by supersymmetrizing the CS actions (<ref>) (as well as those involving the KK photon) with the zeta-regularized couplings as in (<ref>), the 3d effective action is essentially fixed, at least as far as localization results are concerned.[This is modulo a lingering puzzle in the literature <cit.> pertaining to geometries with nonzero angular twists Ω_1,2. That puzzle seems irrelevant to the present work though, since we set Ω_1,2=0.]
Monopole operators. Using the relations κ(x)=2B_3(x) and ϑ(x)=-B_2(x)+1/6
following from (<ref>), together with B'_n(x)=n B_n-1(x),
we have κ”(x)=12B_1(x) and ϑ'(x)=-2B_1(x). The formulas (<ref>) then imply[Note that Eqs. (<ref>) together with the behavior of B_1 as in Figure <ref>, imply that an inner patch interpolating between two outer patches has average values of k_ij and k_jR.]
∂_i∂_j Q_h = -k_ij ,
∂_j L_h = -k_jR .
We restrict to patches that intersect the stationary loci of Q_h as discussed above. Since Q_h is piecewise quadratic, if its second derivatives vanish on its stationary loci, it would be flat. Since a Coulomb branch in the ith direction of the 3d EFT is possible only if k_ij=0 for all j, we conclude that the 3d EFT can only have viable Coulomb branch directions along flat directions of Q_h. Alternatively, since on outer patches the gauge charges of a 3d monopole V_m are given by (cf. Eq. (3.18) in <cit.>)
c_i(m)=-∑_j k^_ij m_j,
the flat directions of Q_h signal gauge-invariant monopoles in the 3d EFT.
There might still be obstructions to such Coulomb branch directions from contact terms on curved background due to k_jR <cit.>. Alternatively, since on outer patches the R-charge of a 3d monopole V_m is
r(m)=-∑_j k_jR^ m_j,
the slope of L_h along the flat directions of Q_h determines whether there are gauge-invariant monopole operators of the right R-charge (=2) to be dynamically generated in the superpotential, either à la Affleck-Harvey-Witten <cit.>, or due to some KK- or multi-monopole generalization thereof <cit.>. Hence the unlifted Coulomb branch of the 3d EFT is expected to corresponds to 𝔥_qu, both on curved background where there are contact terms associated to non-flat L_h, and on ℝ^3 where contact terms vanish but there can be monopole superpotentials diagnosed by the slope of L_h.
The general formulas valid on all patches (see e.g. <cit.>)
c_i(m)=-∑_j k^_ij m_j-1/2∑_χ∑_ρ^χ∈ L_∗ρ^χ_i|ρ^χ(m)|,
r(m)=-∑_j k^_jR m_j-1/2∑_χ(r_χ-1)∑_ρ^χ∈ L_∗|ρ^χ(m)|-1/2∑_α∈ L_∗ |α(m)|,
together with (<ref>) imply that monopole gauge and R charges are continuous across patches. Therefore inner patches naturally inherit the dynamically generated monopole superpotentials of their neighboring outer patches. In addition, they might admit gauge-invariant superpotentials containing light matter fields of the patch (cf. <cit.>). See <cit.> for an example in a similar spirit to what we will encounter below.
The connection outlined above between Cardy limit of the 4d superconformal index and 3d monopoles was through CS terms in the KK effective action <cit.>. The relation between L_h and R-charges of monopoles can be understood also from an alternative perspective due to Shaghoulian <cit.> (see <cit.> for a sharper BPS version). The idea is that <cit.>
S^1_β→0× S^3≈ S^1× S^3/ℤ_p→∞ ,
through a “modular”
identification
r̃_S^3/p/r̃_S^1=r_S^1/r_S^3 ,
with the tilded parameters those of the latter space. This allows relating the Cardy limit of the index to the supersymmetric Casimir energy <cit.> on S^1× S^3/ℤ_p→∞. There are discrete holonomy sectors associated to the torsion cycles of S^3/ℤ_p, and in the limit p→∞ they can be thought of via Stokes' theorem as monopole sectors on S^2≈ S^3/ℤ_p→∞. The supersymmetric Casimir energy of these flux sectors matches the R-charge of the corresponding monopole operator thanks to the BPS relation between energy and R-charge of 3d chiral operators in radial quantization. This ends up relating the “high-temperature” L_h to the “low-temperature” r(m) as in <cit.>. A similar understanding of Q_h is lacking at present, but the results of <cit.> (see their Eq. (25) in particular) appear to be a promising starting point.
§ (A_1,A_2) AND ITS GALOIS ORBIT
In this section, we analyze the Cardy limit of the index ℐ^γ(q) of the (A_1,A_2) Argyres-Douglas theory on its four inequivalent higher sheets, γ=1,2,3,4. This yields the EFTs arising from various U(1)_r-twisted circle reductions to 3d. These EFTs form the core result of this section. We also explain how they lead to 3d TQFTs and Galois orbits, using half-indices <cit.> of natural boundary conditions in 3d EFTs as a tool for identifying the TQFTs.
On the second sheet (i.e., γ=1) we obtain the Gang-Yamazaki (GY) theory <cit.> as conjectured in <cit.>. The A-twisted (or H-twisted) GY theory supports the Lee-Yang VOA on its boundary <cit.>, as expected from the 4d/2d correspondence <cit.>. We refer to the A-twisted GY theory as the Lee-Yang TQFT. We also talk about the Lee-Yang modular tensor category (MTC), since the 3d TQFT is determined by the choice of MTC 𝒞 and the central charge (compatible with the chiral central charge of 𝒞 mod 8).
On other sheets, using our EFTs, we obtain TQFTs closely related to the Galois conjugates of the Lee-Yang TQFT. Their modular S and T matrices, as evaluated through familiar Bethe root techniques <cit.>, complete the Lee-Yang Galois orbit <cit.>, up to overall phases of T. The Galois orbit of the Lee-Yang MTC contains the Fibonacci MTC, as well as the conjugates of these two. We sometimes denote the elements of the orbit as LY and Fib, together with their conjugates. It was originally conjectured in <cit.> that the second through fifth sheets of (A_1, A_2) give precisely the Lee-Yang TQFT, Fibonacci TQFT, and their conjugates. Our findings corroborate this conjecture only up to multiplication by an invertible TQFT or spin-TQFT. Such invertible factors are responsible for the overall phases mentioned earlier. On the second and fifth sheets, Dirichlet half-indices yield LY and its conjugate, up to an invertible TQFT. On the third and fourth sheets, however, instead of the expected Fibonacci, we find fermionic theories: SM(3,5) (an 𝒩=1 supersymmetric minimal model that also coincides with the fermionized tricritical Ising model) and its conjugate. This mismatch is resolved by realizing that SM(3,5) coincides with Fib multiplied by an invertible spin-TQFT.
Thus the modified proposal is that different sheet TQFTs, after possible multiplication by an invertible (spin-)TQFT, live on the same Galois orbit. To test this proposal, we also perform the half-index calculations yielding (G_2)_1 ≅Fib and fermionic SM(3,5) characters on different boundaries of the third- and fourth-sheet TQFTs. The close connection between the spin-TQFT of SM(3,5) and the bosonic TQFT of (F_4)_1≅Fib is reflected in another close relation between the fermionic VOA SM(3,5) and the bosonic VOA (G_2)_1, which will be discussed in Section <ref>. Namely, they are commutants of each other inside the VOA of free fermions, similar to the case of OSp(1|2)_1 and LY discussed in <cit.>.
We use the 4d 𝒩=1 Maruyoshi-Song Lagrangian for the (A_1, A_2) theory <cit.>. The readers should consult this reference for details. Here we only explicitly quote the 𝒩=2 index (see Eq. (16) in <cit.>):
ℐ_t(p,q,t) =(p;p)(q;q)Γ_e((pq/t)^6/5)Γ_e((pq/t)^1/5)/Γ_e((pq/t)^2/5)×
∮dz/2π i z Γ_e(z^±1(pq)^2/5t^1/10)Γ_e(z^±1(pq)^-1/5t^7/10)Γ_e(z^±2(pq/t)^1/5)/2Γ_e(z^±2) ,
where (· ;·) and Γ_e(·) stand respectively for the q-Pochhammer symbol and the elliptic gamma function, and the integration contour is the unit circle.
§.§ Second sheet: Lee-Yang
Going to the 2nd sheet of the SUSY index by setting γ=1, the resulting twisted 𝒩=1 index becomes[Note that a different “2nd sheet” index, namely ℐ_t(p e^2π i,q,e^4π i/3(pq)^2/3), of the (A_1,A_2) theory was studied in <cit.>. That index corresponds to twisting the boundary condition around the circle with a 4d 𝒩=1 R-charge rather than the 𝒩=2
U(1)_r charge of our interest here.]
ℐ^γ=1(p,q) =ℐ_t(p e^2π i,q,(pq)^2/3)
=(p;p)(q;q)Γ_e(e^12π i/5(pq)^2/5)Γ_e(e^2π i/5(pq)^1/15)/Γ_e(e^4π i/5(pq)^2/15)×
∮dz/2π i z Γ_e(z^±1 e^4π i/5 (pq)^7/15)Γ_e(z^±1 e^-2π i/5(pq)^4/15)Γ_e(z^±2e^2π i/5(pq)^1/15)/2Γ_e(z^±2) .
We further restrict to p=q=e^2π iτ for simplicity and denote the resulting index as ℐ^γ=1(q). The more general index ℐ^γ=1(p,q) can be studied similarly.
§.§.§ The holonomy saddle and the EFT data
We now analyze the Cardy limit q=e^-β→1. For the leading asymptotics, (<ref>) gives:
ℐ^γ=1(q)≈2π/β∫_-1/2^1/2dx e^2π i/(90τ)-4π^2/βL^γ=1_h(x)+i8π^3/β^2Q_h^γ=1(x) ,
Q_h^γ=1(x) =1/12[κ(6/5)+κ(1/5)-κ(2/5)+κ(x+2/5)+κ(-x+2/5)
+κ(x-1/5)+κ(-x-1/5)+κ(2x+1/5)+κ(-2x+1/5)
] ,
L_h^γ=1(x) =(1-4/5)ϑ(6/5)/2+(1-2/15)ϑ(1/5)/2-(1-4/15)ϑ(2/5)/2
+(1-14/15)[ϑ(x+2/5)+ϑ(-x+2/5)]/2+(1-8/15)[ϑ(x-1/5)+ϑ(-x-1/5)]/2
+(1-2/15)[ϑ(2x+1/5)+ϑ(-2x+1/5)]/2
-ϑ(2x) ,
Recall that the functions ϑ(x)={x}(1-{x}) and κ(x)={x}(1-{x})(1-2{x}), with {x}:=x-⌊ x⌋, are 1-periodic.
The result in (<ref>) is obtained using estimates inside the integrand that are uniformly accurate on the outer patch (defined as the union of the disjoint outer patches). Those estimates lose uniform validity inside the inner patches; see Figure <ref>. Nevertheless, (<ref>) locates the holonomy saddle and captures the leading e^#/β^2+#/β asymptotic of ℐ^γ(q) correctly (cf. <cit.>). In Appendix <ref> we discuss how more accurate estimates inside the inner patches can yield the subleading 𝒪(β^0) asymptotic as well.
The leading asymptotic of ℐ^γ=1 is dictated by the stationary point of Q_h^γ=1. In the best case scenario this would coincide with the locus of minima of L_h^γ=1. This best case scenario is realized here, at
x^∗=±0.2 ,
as can be seen from the plots in Figures <ref> and <ref>.
We can hence read the leading asymptotic from (<ref>) as[The remarkable-looking fact that Q^γ=1_h
(.2) = 0, which implies the index does not grow as e^#/β^2, and also the fact that L^γ=1_h
(.2) = 0, will be explained in Section <ref>.]
ℐ^γ=1(q)≈ e^2π i/(90τ)-4π^2/βL^γ=1_h(x=.2)+i8π^3/β^2Q_h^γ=1(x=.2)=e^2π i/(90τ).
The error is multiplicative 𝒪(β^0) (cf. Eq. (2.57) of <cit.>).
We conclude from (<ref>) that the 3d effective theory lives on in_2. Its matter content can be read from (<ref>) near z=e^2π i 1/5: the only light multiplets are that of the photon as well as the chiral multiplet with gauge charge +1 and R-charge 8/15. For the induced gauge-gauge, gauge-R, R-R, and gravitational CS couplings, the formulas in (<ref>) give (see Appendix <ref> for derivations):
k_gg=-3/2 , k_gR=1/30 , k_RR=-31/225 ,
k_grav=2/5 .
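These values can be reproduced directly from the B_1 formulas of the previous section. The sketch below (ours) lists the multiplets that are heavy at x^∗=1/5, as read off from the Maruyoshi-Song index above, and evaluates the induced levels in exact rational arithmetic; the assignment of arguments and R-charges follows the same bookkeeping used for Q_h^γ=1 and L_h^γ=1.

```python
# Sketch (ours) of the induced CS levels from the B_1 formulas at the saddle x* = 1/5.
from fractions import Fraction as F

def B1(u):
    u = F(u)
    return F(0) if u.denominator == 1 else (u - (u.numerator // u.denominator)) - F(1, 2)

xstar = F(1, 5)
chirals = [   # (gauge charge rho, argument u = rho*x* + phase/2pi, R-charge r, multiplicity)
    (0, F(6, 5), F(4, 5), 1), (0, F(1, 5), F(2, 15), 1), (0, F(2, 5), F(4, 15), -1),
    (1, xstar + F(2, 5), F(14, 15), 1), (-1, -xstar + F(2, 5), F(14, 15), 1),
    # (1, xstar - 1/5, 8/15) is light at x* and hence omitted from H_*
    (-1, -xstar - F(1, 5), F(8, 15), 1),
    (2, 2 * xstar + F(1, 5), F(2, 15), 1), (-2, -2 * xstar + F(1, 5), F(2, 15), 1),
]
roots = [2, -2]   # heavy W-bosons

k_gg   = -sum(m * B1(u) * rho * rho     for rho, u, r, m in chirals)
k_gR   = -sum(m * B1(u) * rho * (r - 1) for rho, u, r, m in chirals) \
         - sum(B1(a * xstar) * a        for a in roots)
k_RR   = -sum(m * B1(u) * (r - 1) ** 2  for rho, u, r, m in chirals)
k_grav = -2 * sum(m * B1(u)             for rho, u, r, m in chirals)
print(k_gg, k_gR, k_RR, k_grav)   # -> -3/2  1/30  -31/225  2/5
```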
§.§.§ SUSY enhancement → topological twist → non-unitary TQFT
The 3d EFT found above is a 3d 𝒩=2 U(1)_-3/2 gauge theory with a chiral multiplet Φ of gauge charge +1 and R-charge 8/15. The effective Lagrangian at the uv scale[For inner-patches we distinguish between the “uv” cut-off Λ∼2πϵ/β of the 3d EFT <cit.>, and the “UV” scale ∝1/β>Λ where the theory is four dimensional.] Λ∼2πϵ/β also contains a mixed gauge-R CS coupling k_gR=1/30, as well as k_RR=-31/225 and k_grav=2/5. This theory is the same as the one studied by Gang and Yamazaki in <cit.> (modulo the background CS levels which were irrelevant there). We thus refer to it henceforth as the GY theory.
It was discovered via F-maximization in <cit.> that the GY theory flows at low energies to a 3d 𝒩=4 SCFT whose 𝒩=2 superconformal U(1)_R current is related to the uv R-current by mixing with the topological U(1)_J. This mixing shifts the parameters of the IR SCFT with respect to those in the uv discussed above, such that in the IR we have r_Φ=1/3 for the 3d 𝒩=2 R-charge of Φ, while k_gR=0. (Further mixing of the gauge and U(1)_R symmetries, though physically inconsequential, allows for other “R-charge schemes” with different k_gR and r_Φ. See Eqs. (<ref>) and (<ref>).)
The resulting 3d 𝒩=4 theory can then be subjected to the topological A-twist, also known as the H-twist, which employs the SU(2)_H R-symmetry. Thus it is convenient to use U(1)_H⊂ SU(2)_H as the 3d 𝒩=2 R-symmetry, yielding the following 𝒩=2 uv data (see Appendix <ref> for the definitions of k_gg^+ and k_gR^+):
LY: 𝐔(1)_-3/2 + Φ_+1^𝐫=1 and 𝐤_𝐠𝐑=0 (k_gg^+=-1, k_gR^+=0) .
We have indicated the gauge charge of the chiral multiplet in the subscript, and the R-charge in the superscript. Again, other R-charge schemes are possible via adding a multiple of the gauge charge to the U(1)_R charge.
We note in passing that the 3d A-model data in (<ref>) was derived above via a multi-step procedure: 4d to 3d reduction, then F-maximization, then identification of the U(1)_H R-symmetry appropriate for the A-twist. We will give a one-shot prescription for such derivations in Section <ref>.
§.§ S and T matrices from handle-gluing and fibering operators
The TQFT S and T matrices can be studied via localization and Bethe root techniques. Suppressing the contributions from k_RR and k_grav, we first compute the effective twisted superpotential and the effective dilaton using (<ref>):
W(Z) =-π i Z -1/2 Z^2+Li_2(e^Z) ,
Ω(Z) =0 .
The Bethe equation exp(W'(Z_α^∗))=1 reads in terms of the charge +1 Wilson line[Instead of the holonomy saddle at x=.2, we could have picked the Weyl image at x=-.2, then the gauge charge of the chiral multiplet would become -1, and the Fibonacci fusion relation z^2=1+ z would arise for the charge -1 Wilson line.] variable z:=e^Z as
z^2=1+ z .
This is interpreted as a quantum relation either in the ring of BPS Wilson lines in 3d, or in the twisted chiral ring of the 2d theory obtained from reducing the 3d theory on a circle <cit.>.
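As a quick cross-check of this relation, one can verify directly that exp(W'(Z))=1 is solved precisely by the golden-ratio roots of z^2=1+z. The following is a minimal sympy sketch (purely illustrative; variable names are ours):

```python
import sympy as sp

Z = sp.symbols('Z')
# GY effective twisted superpotential: W(Z) = -i*pi*Z - Z^2/2 + Li_2(e^Z)
W = -sp.I*sp.pi*Z - Z**2/2 + sp.polylog(2, sp.exp(Z))
Wp = sp.expand_func(sp.diff(W, Z))      # W'(Z) = -i*pi - Z - log(1 - e^Z)

phi = (1 + sp.sqrt(5))/2                # golden ratio
for z_star in (phi, 1 - phi):           # the two roots of z^2 = 1 + z
    print(sp.exp(Wp.subs(Z, sp.log(z_star))).evalf())   # both give 1 (up to numerical noise)
```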
BPS surgery via fibering and handle-gluing operators <cit.> yields as in (<ref>) the following S and T matrices (up to an overall sign ambiguity for S and an overall phase for T as in <cit.>):
S =(
[ -√(2/5-√(5)) √(2/5+√(5)); √(2/5+√(5)) √(2/5-√(5)); ]) =
1/√(2+φ)[ -φ 1; 1 φ ],
T =(
[ 1 0; 0 e^2π i (-1/5); ]),
where φ = 1+√(5)/2 is the golden ratio. They can be used to compute partition functions on three-manifolds. For example, the topological S^3 partition function can be either expressed as:
Z^_S^3=S_00=-φ/√(2+φ) ,
or as (<ref>) (up to an overall phase).
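As a simple numerical sanity check (not needed for the derivation; the array layout below is ours), one can confirm that the two presentations of S above coincide, that S is symmetric and squares to the identity, and that S_00 reproduces the S^3 partition function just quoted:

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2
S_rad = np.array([[-np.sqrt(2/(5 - np.sqrt(5))), np.sqrt(2/(5 + np.sqrt(5)))],
                  [ np.sqrt(2/(5 + np.sqrt(5))), np.sqrt(2/(5 - np.sqrt(5)))]])
S_phi = np.array([[-phi, 1], [1, phi]]) / np.sqrt(2 + phi)

print(np.allclose(S_rad, S_phi))               # True: the two presentations agree
print(np.allclose(S_phi @ S_phi, np.eye(2)))   # True: S^2 = 1
print(S_phi[0, 0], -phi/np.sqrt(2 + phi))      # S_00 = -phi/sqrt(2+phi), matching Z_{S^3}
```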
The half-index of a (0,2) boundary condition in a 3d 𝒩=2 QFT <cit.> gives the vacuum character of its boundary VOA. Dressing the half-index with Wilson lines gives access to the non-vacuum modules of the VOA as in <cit.>. In our case, the GY 3d 𝒩=2 QFT possesses enhanced 𝒩=4 SUSY in the IR, and is moreover subjected to the topological A-twist. Hence the boundary conditions we are interested in are (0,4) boundary conditions subject to the Costello-Gaiotto deformation <cit.> that makes them compatible with the A-twist. These are holomorphic boundary conditions in the TQFT, and their half-index also computes the vacuum character of the boundary VOA. How is this half-index related to that of (0,2) boundary conditions in the 3d 𝒩=2 theory? Firstly, the parameter ϵ of the Costello-Gaiotto deformation breaks the R-symmetry of a (0,4) boundary condition, yet it does not enter the half-index. Thus we may simply compute the half-index of the (0,4) boundary condition at ϵ=0, but only with the fugacities for symmetries unbroken by the Costello-Gaiotto deformation present. Secondly, the (0,4) boundary conditions look like (0,2) from the 3d 𝒩=2 perspective. Thus the TQFT half-index must be equal to the usual half-index of (0,2) boundary conditions in the GY theory. We only have to make sure, first, that the (0,2) boundary conditions enhance to (0,4) in the IR, and second, that we only include fugacities for symmetries compatible with the Costello-Gaiotto deformation. For example, the topological symmetry U(1)_J of the GY theory (identified with the anti-diagonal combination of U(1)_H and U(1)_C in the IR) is not one of them. As for the SUSY enhancement of (0,2) boundary conditions to (0,4), we do not study it in detail. We simply consider natural (0,2) boundary conditions, and view various consistency checks, such as the anomaly matching and the expected answers for half-indices, as evidence that we chose the right boundary conditions.
The two characters of M(2,5) have been reproduced via Dirichlet and A-twist on the left boundary in <cit.>, as well as via Neumann and B-twist on the right boundary in <cit.>. With our conventions, the Lee-Yang characters are obtained in the A-twisted theory (<ref>) via Dirichlet on the right boundary, with the computation paralleling the one in <cit.>. Quite interestingly, we were also able to reproduce them via the somewhat subtle Neumann boundary conditions on the right (enriched by the boundary chirals and Fermis), as explained in Section <ref>.
Here, following <cit.>, our focus will be on Neumann conditions in the A-twist on the left boundary. See Section <ref> for more discussion on this point.
Half-index calculations with Neumann boundary condition proceed, as explained in Appendix <ref>,
by first finding the boundary anomaly polynomial. In the present case the anomaly polynomial reads
-3/2𝐟^2+ 2 𝐟·𝐟_x-1/2𝐟^2=-2𝐟^2 + 2 𝐟·𝐟_x .
Above 𝐟_x stands for the field strength of U(1)_J . To cancel the gauge anomaly we introduce two 2d fermi multiplets, one with gauge, U(1)_J, and R charges (-1,1,0), whose anomaly polynomial is (-𝐟+𝐟_x)^2, and the other with charges (1,0,0) and anomaly polynomial 𝐟^2. The formulas in Appendix <ref> give:
II_Neu(q) = (q;q)∮dz/2π i zθ_0(-q^1/2z x^-1;q) θ_0(-q^1/2z^-1;q)/(-q^1/2z^-1;q)
=∑_m∈ℤ_≥0q^m^2+m/(q;q)_m,
producing the vacuum character χ_0^M(2,5)(q) of Lee-Yang, similarly to the computation in Section 4.4 of <cit.>.
The resulting half-index turns out to diverge in the limit of our interest, where the fugacity x associated with U(1)_J is removed, x→1. To regularize the divergence we add a 2d fermi multiplet with charges (0,-1,1); as it turns out, this multiplet also cancels the boundary anomaly -𝐟_x^2 for U(1)_J.
Since we have a 2d chiral multiplet, we need to have a way of dealing with the infinite number of poles in the integral.
It is known how to deal with this problem in the context of the 2d elliptic genera, and the JK prescription appears as the result of carefully dealing with the zero modes <cit.>.
We use the two-step procedure for computing the 3d half-index as in <cit.>: we first compute the half-index with Dirichlet condition on the gauge field (which in the present case implies the same boundary anomaly as in (<ref>)), then dress it with the 2d contributions from anomaly-canceling matter, and finally gauge the boundary U(1) symmetry descending from the bulk gauge field via a 2d gauge multiplet. The result is (after a change z→ z (-q^-1/2) of the gauge variable)
II_Neu(q) =lim_x→1 (x;q)(x^-1q;q) (q;q)^2 ∮_z=x^-1JK resdz/2π i z C(zx;q) 1/(q;q)∑_m∈ℤq^m^2/2 z^m (-q^-1/2x)^m/(z q^m;q)
=lim_x→1 - (x;q)(x^-1q;q)/(q;q) ∑_m∈ℤ q^m^2/2 (-q^-1/2)^m 1 /(x^-1 q^m;q)
=∑_m∈ℤ_≥0q^m^2+m/(q;q)_m,
which is equal to the vacuum character χ_0^M(2,5)(q). In going from the second line to the third we used: lim_x→1(x;q)/(x^-1q^m;q) = -(-1)^m q^m(m+1)/2 1/(q;q)_m for m≤0, and 0 for m>0.
While from the fusion rule (<ref>) we expect that insertion of the charge -1 Wilson line in the half-index would give access to the non-vacuum module of M(2,5), it appears that in the two-step procedure, we have to insert a charge +1 Wilson line. This amounts to an insertion of q^m in the summand of the previous computation <cit.>, changing it to
lim_x→1 - (x;q)(x^-1q;q)/(q;q) ∑_m∈ℤ q^m^2/2+m (-q^-1/2)^m 1 /(x^-1 q^m;q)= ∑_m∈ℤ_≥0q^m^2/(q;q)_m,
which matches the non-vacuum character χ_1^M(2,5)(q).
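The two q-series above are the classical Rogers-Ramanujan sums, so their expansions (and the product forms they are known to equal) provide an independent order-by-order check of the character identifications. A small sympy sketch, with the truncation order N chosen arbitrarily:

```python
import sympy as sp

q = sp.symbols('q')
N = 16  # truncation order (our choice)

def qpoch(n):
    """(q;q)_n"""
    out = sp.Integer(1)
    for k in range(1, n + 1):
        out *= 1 - q**k
    return out

def char_sum(a):
    """sum_{m>=0} q^(m^2 + a*m)/(q;q)_m, expanded to O(q^N)"""
    acc = sp.Integer(0)
    m = 0
    while m*m <= N:
        acc += q**(m*m + a*m) / qpoch(m)
        m += 1
    return sp.series(acc, q, 0, N).removeO()

chi0, chi1 = char_sum(1), char_sum(0)   # the vacuum and non-vacuum sums above

# Rogers-Ramanujan product forms of the two M(2,5) characters (without q^{-c/24} prefactors):
prod0, prod1 = sp.Integer(1), sp.Integer(1)
for n in range(N):
    prod0 *= 1/((1 - q**(5*n + 2))*(1 - q**(5*n + 3)))
    prod1 *= 1/((1 - q**(5*n + 1))*(1 - q**(5*n + 4)))

print(sp.expand(chi0 - sp.series(prod0, q, 0, N).removeO()))   # 0
print(sp.expand(chi1 - sp.series(prod1, q, 0, N).removeO()))   # 0
```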
While our results are certainly encouraging, we leave a clearer derivation (justifying in particular our x→1 regularization and the relevance of the charge +1 Wilson line in the two-step procedure) to future studies.
Finally, note that the induced gravitational CS level k_grav=2/5 in Eq. (<ref>) explains via inflow the boundary gravitational anomaly 48(c-a)=2/5 discussed in <cit.>, resolving the anomaly mismatch puzzle raised in that work.
§.§ Third sheet: conjugate Fibonacci
Going to the 3rd sheet of the index by setting γ=2 we get:
ℐ^γ=2(p,q) =ℐ_t(p e^4π i,q,t=(pq)^2/3)
=(p;p)(q;q)Γ_e(e^24π i/5(pq)^2/5)Γ_e(e^4π i/5(pq)^1/15)/Γ_e(e^8π i/5(pq)^2/15)×
∮dz/2π i z Γ_e(z^±1 e^8π i/5 (pq)^7/15)Γ_e(z^±1 e^-4π i/5(pq)^4/15)Γ_e(z^±2e^4π i/5(pq)^1/15)/2Γ_e(z^±2).
Again for simplicity we restrict to p=q and denote the resulting index as ℐ^γ=2(q).
§.§.§ The dominant patch and the EFT data
Asymptotics of the index (<ref>) is obtained using the estimates in <cit.> as
ℐ^γ=2(q)≈2π/β∫_-1/2^1/2dx e^2π i/90τ-4π^2/βL^γ=2_h(x)+i8π^3/β^2Q_h^γ=2(x),
Q_h^γ=2(x) =1/12[κ(12/5)+κ(2/5)-κ(4/5)+κ(x+4/5)+κ(-x+4/5)
+κ(x-2/5)+κ(-x-2/5)+κ(2x+2/5)+κ(-2x+2/5)
].
L_h^γ=2(x) =(1-4/5)ϑ(12/5)/2+(1-2/15)ϑ(2/5)/2-(1-4/15)ϑ(4/5)/2
+(1-14/15)ϑ(x+4/5)+ϑ(-x+4/5)/2+(1-8/15)ϑ(x-2/5)+ϑ(-x-2/5)/2
+(1-2/15)ϑ(2x+2/5)+ϑ(-2x+2/5)/2
-ϑ(2x),
These functions are plotted in Figures <ref> and <ref>.
Where L_h^γ=2 is minimized, Q_h^γ=2 is not stationary. So we have to asymptotically analyze the integral in (<ref>) via complex-analytic tools. We appeal to the saddle-point method, which in the present context is equivalent to the steepest-descent analysis.
The integrand in (<ref>) is piecewise analytic, so we have to decompose the integration domain to sub-intervals (or patches) where the integrand is analytic. See Figure <ref>. We spell out the details of the computation only for the patch 2/10+ϵ<x<3/10-ϵ, denoted out_2 in Figure <ref>. The contribution of the other patches will be briefly outlined.
For .2<x<.3 we have:
Q_h^γ=2(x) =3/50(1-5x)^2,
L_h^γ=2(x) =1/25(1-5x).
The saddle of the integrand in (<ref>) is found to be at
x^∗_2=1/5+τ/15,
where τ=iβ/2π. The corresponding growth of the index is[Similarly to the 2nd sheet case, an explanation for the fact that Q^γ=2_h(.2)=0, which implies the index does not grow as e^#/β^2, and the fact that L^γ=2_h(.2)=0, will be given in Section <ref>.]
ℐ^γ=2(q)⟶ e^2π i/90τ+𝒪(β^0).
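The saddle location x^∗_2 quoted above follows from a one-line stationarity condition on the patch expressions for Q_h^γ=2 and L_h^γ=2; a minimal sympy check (the x-independent prefactor of the integrand is dropped, and variable names are ours):

```python
import sympy as sp

x, beta = sp.symbols('x beta')
tau = sp.I*beta/(2*sp.pi)

# Patch expressions on 0.2 < x < 0.3 (out_2):
Q = sp.Rational(3, 50)*(1 - 5*x)**2
L = sp.Rational(1, 25)*(1 - 5*x)

exponent = -4*sp.pi**2/beta*L + sp.I*8*sp.pi**3/beta**2*Q
saddle = sp.solve(sp.diff(exponent, x), x)[0]
print(sp.simplify(saddle - (sp.Rational(1, 5) + tau/15)))   # 0, i.e. x* = 1/5 + tau/15
```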
As for the other patches, it turns out that out_1 gives a contribution similar to (<ref>) coming from a saddle on its right end
x^∗_1=1/5,
while out_3 and out_4 give contributions that are exponentially smaller (i.e. suppressed as e^-#^>0/β compared to (<ref>)).
The conclusion is that the 3d effective theory lives on in_1.
The EFT matter content can be read from (<ref>) near z=e^2π i 1/5. The only light multiplets are: (1) the photon multiplet; (2) the chiral multiplet with gauge charge +1 and R-charge 14/15; (3) the chiral multiplet with gauge charge -2 and R-charge 2/15. For the various induced CS couplings we get (see Appendix <ref> for the derivations):
k_gg=-3/2, k_gR=11/10, k_RR=-13/450, k_grav=-1/5.
§.§.§ Mass gap → unitary TQFT
Let us summarize the 3d EFT found in the previous section. It is a 3d 𝒩=2 U(1)_-3/2 gauge theory with two chiral multiplets Φ^r=14/15_+1 and Φ^r=2/15_-2. The effective Lagrangian at the uv scale Λ∼2πϵ/β also contains a mixed gauge-R CS coupling k_gR=11/10, as well as k_RR=-13/450 and k_grav=-1/5. For reasons alluded to in the introduction and spelled out below, we will refer to this theory as the Fib theory.
To determine the low-energy dynamics, we first note that the theory has monopole operators whose charges can be found from (<ref>) and (<ref>):
V_+: gauge charge 3 and R-charge -1/5 ,
V_-: gauge invariant with R-charge 2 .
From the charge assignments above we see that there are two possible gauge-invariant terms of R-charge 2 in the uv:
Φ^2_+1Φ^_-2 , V_- .
Importantly, these are also invariant under the 4d 𝒩=1 flavor symmetry U(1)_f, which is part of the 4d 𝒩=2 R-symmetry: invariance of Φ^2_+1Φ^_-2 follows from the flavor charges 1/10 and -1/5 as seen in (<ref>), while invariance of V_- follows from (<ref>).
Naturalness therefore implies a superpotential
𝒲^_Fib ∼ Φ^2_+1Φ^_-2 + V_- ,
with V_- arising via the Affleck-Harvey-Witten type mechanism <cit.> in the UV SU(2) gauge theory on a circle <cit.>.
The first term in the superpotential prevents a U(1) flavor symmetry in the matter sector, as well as a Higgs branch in the 3d theory, while the second breaks the topological U(1)_J and lifts the Coulomb branch. We thus end up with a theory lacking a moduli-space of vacua, as expected from the R-twisted reduction <cit.>.
With the U(1)_J broken in the 3d EFT due to the monopole superpotential, one may wonder what happens to the 4d 𝒩=1 flavor symmetry U(1)_f. The twisted reduction is not expected to break 4d global symmetries (cf. <cit.>). Since our 3d EFT has no global U(1) symmetry to accommodate the U(1)_f, the only remaining possibility is that U(1)_f acts trivially in the dynamical sector of the EFT. With an analysis of the S^3 partition function, we argue in Appendix <ref> that this possibility is indeed realized.
Computing the 3d superconformal index <cit.>,
Ĩ^(q):=Tr^_S^2(-1)^R q^R/2+j_3,
of the theory via the formula (<ref>),
we find:
Ĩ^_Fib(q)=1 .
This strongly indicates that the 3d EFT is gapped and has a topological vacuum.
§.§ S and T matrices from handle-gluing and fibering operators
In order to leverage Bethe root techniques, we mix gauge and U(1)_R to make the R-charges of the chiral multiplets integer. For the calculation in the present subsection, we choose the mixing scheme
R-charge→ R-charge+1/15×gauge charge,
so that now Φ_+1 has R-charge 1, and Φ_-2 has R-charge 0. The mixing implies also (see Eq. (<ref>)):
k_gR → k_gR+1/15× k_gg=1 .
We thus have:
Fib: 𝐔(1)_-3/2 + Φ_+1^𝐫=1+ Φ_-2^𝐫=0 and 𝐤_𝐠𝐑=1 (k_gg^+=1, k_gR^+=2) ,
with the superpotential as in (<ref>).
The twisted superpotential and the effective dilaton are found from (<ref>) to be:
W(Z) =π i Z + 1/2Z^2+Li_2(e^Z)+Li_2(e^-2Z) ,
Ω(Z) =2Z+log(1-e^-2Z) .
The Bethe equation exp(W'(Z_α^∗))=1 reads in terms of the charge 1 Wilson line variable z:=e^Z as
z^2=1+ z .
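As in the Lee-Yang case, one can check directly that the golden-ratio roots of this relation solve exp(W'(Z))=1 for the twisted superpotential above; a short sympy sketch (illustrative only):

```python
import sympy as sp

Z = sp.symbols('Z')
# Fib effective twisted superpotential: W(Z) = i*pi*Z + Z^2/2 + Li_2(e^Z) + Li_2(e^{-2Z})
W = sp.I*sp.pi*Z + Z**2/2 + sp.polylog(2, sp.exp(Z)) + sp.polylog(2, sp.exp(-2*Z))
Wp = sp.expand_func(sp.diff(W, Z))      # i*pi + Z - log(1 - e^Z) + 2*log(1 - e^{-2Z})

phi = (1 + sp.sqrt(5))/2
for z_star in (phi, 1 - phi):           # the two roots of z^2 = 1 + z
    print(sp.exp(Wp.subs(Z, sp.log(z_star))).evalf())   # both give 1
```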
Equation (<ref>) gives the S and T matrices (up to an overall sign ambiguity for S and an overall phase for T) as:
S =(
[ √(2/√(5)+5) √(2/5-√(5)); √(2/5-√(5)) -√(2/√(5)+5); ]) =
1/√(2+φ)[ 1 φ; φ -1 ],
T =(
[ 1 0; 0 e^2π i(3/5); ]).
As a check, Eq. (<ref>) gives the S^3 partition function as:
|Z^_S^3|=1/√(2+φ) ,
matching S_00, as it should. These indeed correspond, up to the said ambiguities, to the Fib (or Rep(F_4)^_1) modular data <cit.>.
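As a numerical consistency check (not part of the derivation), the matrix above is symmetric and squares to the identity, S_00 reproduces |Z_S^3|, and the Verlinde formula applied to it returns the Fibonacci fusion rule encoded in z^2=1+z:

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2
S = np.array([[1, phi], [phi, -1]]) / np.sqrt(2 + phi)
T = np.diag([1, np.exp(2j*np.pi*3/5)])

print(np.allclose(S @ S, np.eye(2)))                          # S^2 = 1
print(np.isclose(S[0, 0], 1/np.sqrt(2 + phi)))                # S_00 = |Z_{S^3}|
print(np.allclose(np.linalg.matrix_power(T, 5), np.eye(2)))   # T^5 = 1 for these entries

# Verlinde formula N_{ij}^k = sum_a S_{ia} S_{ja} conj(S_{ka}) / S_{0a}
def N(i, j, k):
    return np.sum(S[i]*S[j]*np.conj(S[k])/S[0]).real

print(round(N(1, 1, 0)), round(N(1, 1, 1)))   # 1 1  ->  tau x tau = 1 + tau
```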
§.§ Boundary VOA from the half-index
We now present Dirichlet half-index calculations yielding characters of
SM(3,5)×SO(1)_1 ,
where SM(3,5) is an 𝒩=1 supersymmetric minimal model <cit.>, also known as the fermionized tricritical Ising model M(4,5)/ℤ_2^f, and SO(1)_1 is a free Majorana fermion.
By the RCFT/TQFT correspondence, SM(3,5) yields a spin-TQFT, which is equivalent to (F_4)^_1 = Fib, up to an invertible spin-TQFT factor, and SO(1)_1 is itself an invertible spin-TQFT. See Section <ref> for more on the relation between the TQFTs as well as the VOAs.
The calculation is done with the 𝒩=(0,2) Dirichlet boundary conditions on the gauge multiplet and on Φ_+1, and with modified Dirichlet (or D_c) on Φ_-2,[One may be tempted to impose D_c on Φ_+1 and D on Φ_-2. The scalar potential |ϕ^2_+1|^2+|ϕ_+1ϕ_-2|^2 following from Φ^2_+1Φ_-2∈𝒲_ Fib implies that D_c on Φ_+1 breaks supersymmetry. We have checked that the half-index with such boundary condition vanishes, consistent with supersymmetry breaking.] on the right boundary. The boundary gauge anomaly is:
3/2 𝐟^2 [from -k_gg] - 2 𝐟·𝐫 [from -k_gR] + 1/2 𝐟^2 [from Φ_+1] + 1/2 (-2 𝐟-𝐫)^2 [from Φ_-2] = 4 𝐟^2 .
So we can use the formula (<ref>) with k_gg^+→4 and k_gR^+→0:
II^R_𝒟,D,D(q)=1/(q;q)∑_m∈ℤq^2m^2 z^4m (- z^-1q^1/2-m;q) ( z^2q^1+2m;q).
Sending the gauge fugacity z→1 due to the D_c condition on Φ_-2 breaking the boundary global U(1) descending from the bulk gauge symmetry, we get:
II^R_𝒟,D,D_c(q)=1/(q;q)∑_m∈ℤq^2m^2 (-q^1/2-m;q) (q^1+2m;q)=:χ_0(q).
This is a fermionic character, and we would like to determine the corresponding VOA.
The non-vacuum character can be obtained by inserting a Wilson line of gauge charge 1:
χ_1(q):=1/(q;q)∑_m∈ℤq^2m^2-m (-q^1/2-m;q) (q^1+2m;q).
Introduction of q^-m inside the summand, instead of q^m as prescribed in <cit.>, is because we are considering the right boundary.
Requiring modular covariance of the two characters
[ q̃^-c/24χ_0(q̃); q̃^-c/24+hχ_1(q̃) ]=(
[ √(2/√(5)+5) √(2/5-√(5)); √(2/5-√(5)) -√(2/√(5)+5); ]) ·[ q^-c/24χ_0( q); q^-c/24+hχ_1( q) ],
where q̃:=e^-2π i/τ, we find that
c=6/5,
and h=1/10.
Guided by these data, we recognize that the two characters are actually the NS characters of the following fermionic RCFT:
the FRCFT = SM(3,5)⊗(free fermion RCFT).
More precisely, we have
q^-c/24χ_0(q) =(χ^_(s,t)=(1,1)(q) of SM(3,5))×χ^_F(q),
q^-c/24+hχ_1(q) =(χ^_(s,t)=(1,3)(q) of SM(3,5))×χ^_F(q),
where c=6/5= c(3,5)+1/2 and h=1/10=h_(1,3)^(3,5). The characters can be obtained as a special case of the general SM(p,p') NS character formula <cit.>:
χ^_(s,t)=q^h_(s,t)^p,p'-c(p,p')/24(-q^1/2;q)/(q;q)∑_l∈ℤ(q^l(lpp'+sp'-tp)/2-q^(lp+s)(lp'+t)/2),
with
c(p,p')=3/2(1-2(p'-p)^2/pp'), h^(p,p')_(s,t)=(p's-pt)^2-(p-p')^2/8pp'.
The conformal primaries O_(s,t) are labeled by two integers 1≤ s≤ p-1 and 1≤ t≤ p'-1, with an equivalence relation O_(s,t)=O_(p-s,p'-t). The NS primaries are O_(s,t) with s-t∈2ℤ. See <cit.> for other recent examples of supersymmetric minimal models arising from 3d TQFTs.
By “free fermion RCFT” we mean the free Majorana fermion theory with c=1/2, whose NS character reads
χ^_F=q^-1/48∏_n=0^∞(1+q^n+1/2).
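The values c=6/5 and h=1/10 used in the modular-covariance fit above follow from these formulas by elementary arithmetic; a minimal Python check (illustrative only):

```python
from fractions import Fraction as F

def c_SM(p, pp):              # central charge of SM(p,p')
    return F(3, 2)*(1 - F(2*(pp - p)**2, p*pp))

def h_SM(p, pp, s, t):        # conformal weight h^{(p,p')}_{(s,t)}
    return F((pp*s - p*t)**2 - (p - pp)**2, 8*p*pp)

print(c_SM(3, 5))             # 7/10
print(c_SM(3, 5) + F(1, 2))   # 6/5, after adding the free Majorana fermion
print(h_SM(3, 5, 1, 3))       # 1/10
```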
§.§.§ Bosonic VOA on the opposite boundary
The left enriched Neumann boundary conditions support Fib≅(G_2)^_1, as we now demonstrate.
The 𝒩=(0,2) Neumann boundary conditons on all the multiplets induce the boundary gauge anomaly:
-3/2 𝐟^2 [from k_gg] + 2 𝐟·𝐫 [from k_gR] - 1/2 𝐟^2 [from Φ_+1] - 1/2 (-2 𝐟-𝐫)^2 [from Φ_-2] = -4 𝐟^2 .
This can be cancelled by adding four boundary fermi multiplets of gauge charge -1 (or 1) and R-charge 0. The half-index can be found as in the Appendix <ref> to be:
II^L_𝒩,N,N=(q;q) ∮dz/2 π i zθ_0(-q^1/2z;q)^4/(-q^1/2z;q)(z^-2;q) = 1 + 14 q + 42 q^2 + 140q^3 + …,
matching the (G_2)^_1 vacuum character. See e.g. Table 1 in <cit.>. In evaluating the integral we have excluded the pole at z=1 via an ε-prescription z→ z q^ε with ε<0. Alternatively, one can add some multiple of gauge charge to the R-charge of the bulk chiral multiplets to bring them inside the safe interval 0<r_χ<2.
The non-vacuum character can be obtained by inserting a Wilson line of gauge charge +1:
(q;q) ∮ z dz/2 π i zθ_0(-q^1/2z;q)^4/(-q^1/2z;q)(z^-2;q) = q^1/2(7 + 34 q + 119 q^2 + 322q^3 + …) .
This matches the non-vacuum character of (G_2)^_1, up to the overall q^1/2 factor. Explaining this factor takes two steps: The Wilson line, in the presence of gauge CS level k_gg^+=1, supports a magnetic flux 1, which then, through the mixed CS level k^+_gR=2 in (<ref>), generates an R-symmetry Wilson line. The latter contributes since the half-index background includes the R-symmetry holonomy q^1/2 around the S^1. Thus the line that actually realizes the non-vacuum module is a gauge Wilson line combined with the R-symmetry Wilson line canceling q^1/2 (as in <cit.>).
§.§.§ Discussion
We found that the (left) enriched Neumann boundary supports (G_2)_1 VOA, while the (right) Dirichlet boundary supports SM(3,5)⊗SO(1)_1. Since the former is Fib and the latter is Fib⊗ (invertible spin-TQFT), this is quite natural at the level of MTC, which does not see the invertible factor. If we want to precisely identify the bulk TQFT, the invertible factor matters. Which of the two (if any) captures the bulk TQFT?
We conjecture that
bulk TQFT≅SM(3,5)⊗SO(1)_1
There are a few reasons to believe in this conjecture. First note, at the most naive level, that a uv 3d 𝒩=2 theory depends on the spin structure, so the spin-TQFT SM(3,5)⊗SO(1)_1 is, generically, more natural than the bosonic TQFT (G_2)_1. More seriously, the structures we get are consistent with such a conjecture. In the known examples, the Dirichlet boundary captures the bulk TQFT. For example, consider 3d 𝒩=2 G_k super Yang-Mills with k>h^∨, which is gapped. The 𝒩=(0,2) boundary is known to support the affine VOA L_k-h^∨(𝔤) <cit.>, capturing the IR TQFT given by the G_k-h^∨ bosonic Chern-Simons theory. At the same time, the Neumann boundary with k-h^∨ fundamental Fermi multiplets (say in the G=SU(N) case) would support some fermionic VOA V. By putting the theory on the interval, with the L_k-h^∨(𝔤)-carrying Dirichlet b.c. on the one side and the V-carrying enriched Neumann on the opposite side, we, on the one hand, obviously get free fermion VOA F= Ferm^⊗# in the IR. On the other hand, L_k-h^∨(𝔤) and V embed into F as commutants of each other, V = F / L_k-h^∨(𝔤) and L_k-h^∨(𝔤)=F/V. One can in fact view V = F / L_k-h^∨(𝔤) as a gauging prescription, defining how the bulk TQFT (corresponding to L_k-h^∨(𝔤)) gauges the boundary Fermi multiplets F, resulting in the boundary VOA V. We find similar structures in our case, after putting our theory on the interval with SM(3,5)⊗SO(1)_1 on the right and (G_2)_1 on the left. Due to the boundary Fermi multiplets, SM(3,5)⊗SO(1)_1 and (G_2)_1 are commutants of each other in Ferm^⊗ 4, which will be discussed more in Section <ref>.
We could try to consider different boundary conditions on the chiral multiplets, but we would like to ensure that the boundary U(1) symmetry is broken, since our 3d theory has no flavor symmetries. As we explained, using D_c for Φ_+1 is not an option as it breaks SUSY, so we had to break U(1) by imposing D_c on Φ_-2. We impose D on Φ_+1, and replacing it by Neumann would not be a good idea, as the superpotential would evaluate to Φ_+1^2 c at such a boundary, which breaks SUSY without introduction of additional boundary Fermi multiplets and superpotentials <cit.>. The latter would, however, add extra boundary degrees of freedom not captured by the bulk. Overall, the (𝒟, D, D_c) boundary conditions we use seem like the best choice. Clearly, it would be interesting to study the 3d IR physics of our theory in more detail and verify the conjecture (<ref>) more convincingly.
§.§ Fourth sheet: Fibonacci
Without repeating all the derivation steps, we emphasize that the fourth sheet theory is quite similar to the one on the third sheet, but with the following modifications:
Fib: 𝐔(1)_3/2 + Φ_-1^𝐫=1+ Φ_+2^𝐫=0 and 𝐤_𝐠𝐑=1 (k_gg^+=4, k_gR^+=0) .
The superpotential is
𝒲^_Fib ∼ Φ^2_-1Φ^_+2 + V_- .
The effective twisted superpotential and the effective dilaton are found from (<ref>):
W(Z) =4π i Z + 2Z^2+Li_2(e^-Z)+Li_2(e^2Z) ,
Ω(Z) =log(1-e^2Z) .
The Bethe equation exp(W'(Z_α^∗))=1 reads in terms of the charge 1 Wilson line variable z:=e^Z as
z^2=1+ z .
Equation (<ref>) gives the S and T matrices (up to an overall sign ambiguity for S and an overall phase for T) as:
S =(
[ √(2/√(5)+5) √(2/5-√(5)); √(2/5-√(5)) -√(2/√(5)+5); ]) =
1/√(2+φ)[ 1 φ; φ -1 ],
T =(
[ 1 0; 0 e^2π i(2/5); ]),
and Eq. (<ref>) gives the topological S^3 partition function as:
|Z^_S^3|=1/√(2+φ) ,
matching S_00 as it should. These indeed correspond, up to the said ambiguities, to the Fib (or Rep(G_2)^_1) modular data <cit.>.
§.§ Boundary VOA from the half-index
We first reproduce Fib (or (G_2)^_1) characters on the right boundary. The anomaly with (𝒩,N,N) boundary conditions on the right boundary is:
-k_gg^+ 𝐟^2-2k_gR^+ 𝐟·𝐫=-4𝐟^2 .
This can be cancelled by adding four boundary fermi multiplets of gauge charge -1 (or 1) and R-charge 0. The half-index can be found as in Appendix <ref> to be:
II^R_𝒩,N,N=(q;q) ∮dz/2 π i zθ_0(-q^1/2z;q)^4/(-q^1/2z;q)(z^-2;q) = 1 + 14 q + 42 q^2 + 140q^3 + …,
matching the (G_2)^_1 vacuum character.
The non-vacuum character can be obtained by considering a Wilson line of gauge charge -1 (inserting z instead of z^-1 as prescribed in <cit.> since we are considering the right boundary):
(q;q) ∮ z dz/2 π i zθ_0(-q^1/2z;q)^4/(-q^1/2z;q)(z^-2;q) = q^1/2(7 + 34 q + 119 q^2 + 322q^3 + …) .
This matches the non-vacuum character of (G_2)^_1, again up to the overall q^1/2 factor corresponding to the R-symmetry Wilson loop induced by the Chern-Simons couplings.
§.§.§ Fermionic VOA on the opposite boundary
On the left boundary, using the general formula (<ref>) for the 3d half-index with Dirichlet boundary conditions on all fields we get:
II^L_𝒟,D,D(q)=1/(q;q)∑_m∈ℤq^2m^2 z^4m (- z^-1q^1/2-m;q) ( z^2q^1+2m;q).
Sending z→1 due to the D_c condition on Φ_+2 breaking the boundary global U(1) descending from the bulk gauge symmetry, we get:
II^L_𝒟,D,D_c(q)=1/(q;q)∑_m∈ℤq^2m^2 (-q^1/2-m;q) (q^1+2m;q)=:χ_0(q).
This is the vacuum character of SM(3,5)×Majorana.
The non-vacuum character can be obtained by inserting a Wilson line of gauge charge -1:
χ_1(q):=1/(q;q)∑_m∈ℤq^2m^2-m (-q^1/2-m;q) (q^1+2m;q),
giving the non-vacuum character of SM(3,5)⊗Majorana.
§.§.§ Discussion
We found the same VOAs as on the third sheet, but on the opposite boundaries. Via the same reasoning, we now conjecture that the bulk TQFT is captured by the conjugate of SM(3,5)⊗Majorana.
Next, we obtain the (G_2)_1 characters via half-index calculations with Neumann conditions. The computation begins again with the boundary anomaly polynomial, which for the 𝒩,N,N boundary condition reads:
-3/2𝐟^2 - 2 𝐟·𝐫-1/2(2𝐟-𝐫)^2-1/2𝐟^2=-4𝐟^2 .
Note that we have not turned on 𝐟_x since the monopole superpotential prevents emergence of a U(1)_J.
To cancel the above anomaly, we add four 2d fermi multiplets on the boundary with gauge charge +1 (or -1, it does not matter) and R-charge 0. The general formula (<ref>) for the 3d half-index with Neumann boundary condition on all fields then gives
II^Fib_𝒩,N,N =(q;q)∮dz/2π i zθ_0(-q^1/2z;q)^4/(-z^-1q^1/2;q)(z^2;q)
=1+14q+42q^2+140q^3+350q^4+840q^5+1827q^6+… ,
which matches the (G_2)^_1 vacuum character. In evaluating the integral we have excluded the pole at z=1 via an ε-prescription z→ z q^ε with ε>0.
To access the non-vacuum module we insert the charge -1 Wilson line:
(q;q)∮dz/2π i z z^-1 θ_0(-q^1/2z;q)^4/(-z^-1q^1/2;q)(z^2;q)
=q^1/2(7+34q+119q^2+322q^3+819q^4+1862q^5+…) .
This matches the (G_2)^_1 non-vacuum character, up to an overall q^1/2 factor that we do not have a satisfactory explanation for. It would be nice to relate it to the fact that k_gR=-1, which may imply the Wilson line should be thought of as carrying R-charge as suggested in a similar context in <cit.>. We leave a more systematic understanding of this point for future work as well.
§.§ Fifth sheet: conjugate Lee-Yang
Again, without repeating the derivation steps, we emphasize that the fifth sheet theory is very similar to the one on the second sheet, but with the following modifications:
LY: 𝐔(1)_3/2 + Φ_-1^𝐫=1 and 𝐤_𝐠𝐑=0 (k_gg^+=2, k_gR^+=0) .
This is the uv data of the fifth-sheet theory appropriate for the A-twist. We have checked that the same TQFT (up to an invertible factor) arises from the B-twist of the 2nd-sheet theory. As in Gang-Yamazaki, there is no 3d superpotential.
The twisted superpotential and the effective dilaton are found from (<ref>) to be:
W(Z) =2π i Z +Z^2+Li_2(e^-Z) ,
Ω(Z) =0 .
The Bethe equation exp(W'(Z_α^∗))=1 reads in terms of the charge 1 Wilson line variable z:=e^Z as
z^2=1+ z .
Equation (<ref>) gives the S and T matrices (up to an overall sign ambiguity for S and an overall phase for T) as:
S =(
[ -√(2/5-√(5)) √(2/5+√(5)); √(2/5+√(5)) √(2/5-√(5)); ]) =
1/√(2+φ)[ -φ 1; 1 φ ],
T =(
[ 1 0; 0 e^2π i(-4/5); ]),
and Eq. (<ref>) gives the topological S^3 partition function:
|Z^_S^3|=φ/√(2+φ) ,
which matches S_00, as it should. These indeed correspond, up to the said ambiguities, to the LY (or Rep(E_71/2)^_1 <cit.>) modular data <cit.>.
§.§ Boundary VOA from the half-index
We have not managed to reproduce the corresponding (E_71/2)^_1 characters via (decorated) half-indices. However, we now present half-index calculations on the right Neumann boundary giving characters of OSp(1|2)_1. The corresponding spin-TQFT is known to be equivalent to LY <cit.>, up to an invertible factor.
§ (A_1,A_2n) WITH n≥2
For the (A_1,A_2n) theory, we again use the 𝒩=1 Lagrangian of <cit.>, quoting only the 𝒩=2 index (setting z_j=e^2π i x_j):
ℐ_t^(A_1, A_2n)(p,q,t) =((p;p)(q;q))^n [ ∏_i=1^n Γ_e ( (pq/t)^α_i)/Γ_e ( (pq/t)^β_i)] Γ_e( (pq/t)^1/2n+3)^n
∫d^n x/2^n n![∏_i=1 ^n Γ_e (z_i^±1 (pq/t)^n+1/2n+3 t^1/2) Γ_e (z_i^±1 (pq/t)^-n/2n+3 t^1/2)]
[∏_i=1^n Γ_e (z_i^±2 (pq/t)^1/2n+3)/Γ_e(z_i^±2)][∏_1≤ i<j≤ nΓ_e (z_i^±1z_j^±1 (pq/t)^1/2n+3)/Γ_e(z_i^±1z_j^±1)],
with the integral over -1/2<x_j<1/2, while α_i:=2(n+i+1)/2n+3 and β_i:=2i/2n+3.
From the exponents in the arguments of the gamma functions in (<ref>), or from the lowest common denominator of r-charges being 2n+3, we see that there are 2n+3 inequivalent sheets.
For (A_1,A_4) we get seven sheets. Discarding the trivial sheet corresponding to γ=0 (which is well-understood <cit.>), we end up with six sheets, or three up to conjugation. Below we will study the three sheets corresponding to γ=1,2,3. The conjugate sheets arising for γ=6,5,4 can be studied similarly.
Our main focus will in fact be on the second sheet, γ=1, where we will make contact with the 𝒯_2 theory of Gang-Kim-Stubbs <cit.>. The third and fourth sheets of (A_1,A_4) will be discussed briefly.
§.§ Second sheet: SUSY enhancement with AHW superpotentials
There are two gauge holonomies x_1,x_2 for (A_1,A_4). The associated function Q_h^γ=1 is depicted in Figure <ref>. Although barely visible to the naked eye, it has a flat direction around each minimum, as can be seen more clearly in Figure <ref>. The flat direction signals a gauge-invariant monopole V^out_1,-1 in the corresponding outer patch, where the subscript indicates the magnetic charges of the monopole.
The associated function L_h^γ=1 evaluated along the flat direction of Q_h^γ=1 is depicted in Figure <ref>. It shows that there is a holonomy saddle at (x_1,x_2)=(1/7,2/7), and determines the dominant inner patch.[There are Weyl images of the saddle, one of them visible at x=1/7 in Figure <ref>. Since it is enough to consider one member of the Weyl orbit, we discard the rest.] The slope of L_h^γ=1 being 2 along the flat direction of Q_h^γ=1 implies that the gauge-invariant monopole V^out_1,-1 has R-charge 2.
Having found the holonomy saddle at (x_1,x_2)=(1/7,2/7), the inner-patch EFT field content can be read easily from the index. We have a 3d 𝒩=2 U(1)× U(1) gauge theory with two light chiral multiplets Φ_1,-1 and Φ_0,1. The gauge charges are indicated as subscripts, and the R-charges are 2/21 and 10/21, respectively.
The EFT couplings can be computed using the formulas in Section <ref>. The matrix of gauge-gauge CS couplings comes out
k_ij=-[ 3/2 1/2; 1/2 1 ],
while the mixed gauge-R CS couplings read k_1R=-11/42 and k_2R=4/7.
An instructive consistency check of the inner-patch EFT data is to match the quantum numbers it yields for V^in_1,-1 with those of V^out_1,-1, which is the gauge-invariant monopole of R-charge 2 identified from the plots above. The interested reader can compute the gauge and R charges of V^in_1,-1 via the EFT data using (<ref>) and (<ref>).
We have checked via the formulas (<ref>) and (<ref>) that V:=V^in_1,-1 is also invariant under the 4d 𝒩=1 flavor symmetry. Since V is gauge-invariant, has R-charge 2, and is compatible with the flavor symmetry, it should be included in the superpotential:
𝒲 ∼ V .
This concludes our determination of the 3d EFT.
Let us now compare with the 𝒯_2 theory of Gang-Kim-Stubbs <cit.>. We only need to change the gauge variables and instead of the first U(1) factor of the gauge group, work with the difference of the two factors σ_1-σ_2=:y_1 . Comparison of the CS quadratic forms (appearing, for instance, in the S^3 partition function):
(σ_1,σ_2)· k_ij·(σ_1,σ_2)^T⟷ (y_1,y_2)· K_ij·(y_1,y_2)^T,
by setting σ_1=y_1+y_2 and σ_2=y_2, shows that
K_ij=-[ 3/2 2; 2 7/2 ].
To find the fluxes of the monopole V in the new gauge coordinates, we use the fact that the fluxes are valued in the co-character lattice to write:
m^new_i=∑_j∂ y_i/∂σ_j m_j^old .
This gives
[ m_1; m_2 ]=[ 1 -1; 0 1 ][ 1; -1 ]=[ 2; -1 ].
We thus end up with the description of 𝒯_2 in <cit.>, up to a parity transformation yielding an overall sign for the matrix of CS levels. Additionally, we have here a microscopic mechanism for dynamical generation of the monopole superpotential in the compactified Maruyoshi-Song theory
à la Affleck-Harvey-Witten <cit.>. It should be possible to corroborate this result via the Nye-Singer index theorem <cit.> as in <cit.>, but we do not attempt that here.
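The change of variables and the flux map used above are straightforward to verify; a short sympy sketch (variable names are ours):

```python
import sympy as sp

y1, y2 = sp.symbols('y1 y2')
k = -sp.Matrix([[sp.Rational(3, 2), sp.Rational(1, 2)],
                [sp.Rational(1, 2), 1]])

# sigma_1 = y1 + y2, sigma_2 = y2   (i.e. y1 = sigma_1 - sigma_2, y2 = sigma_2)
sigma = sp.Matrix([y1 + y2, y2])
quad = sp.expand((sigma.T * k * sigma)[0])
K = sp.Matrix(2, 2, lambda i, j: sp.diff(quad, [y1, y2][i], [y1, y2][j])/2)
print(K)                                 # Matrix([[-3/2, -2], [-2, -7/2]])

# Flux map m_new = (dy/dsigma) m_old applied to the monopole V_{1,-1}:
J = sp.Matrix([[1, -1], [0, 1]])
print(J * sp.Matrix([1, -1]))            # Matrix([[2], [-1]])
```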
§.§ TQFT S and T matrices
The uv data of the 2nd-sheet theory appropriate for the A-twist is given by
triLY: 𝐔(1)× 𝐔(1) + Φ_1,-1^𝐫=0 + Φ_0,1^𝐫=1 with 𝐤_1𝐑=-1/2, 𝐤_2𝐑=1/2 ,
with the matrix of gauge-gauge CS couplings as in (<ref>). By triLY we mean tricritical Lee-Yang, due to the connection between M(2,7) and the tricritical Yang-Lee edge singularity <cit.>.
A quick calculation shows:
k^+_ij=[ -1 -1; -1 0 ],
while k^+_1R=-1 and k^+_2R=1. Therefore,
W(Z) =-1/2Z_1^2-Z_1 Z_2-π i Z_1+Li_2(e^Z_1-Z_2)+Li_2(e^Z_2) ,
Ω(Z) =-Z_1+Z_2+log(1-e^Z_1-Z_2) .
We now identify the Wilson lines realizing the simple objects L_α in the triLY MTC. They play a dual role: when piercing the boundary, they give simple modules of the boundary VOA, and when inserted parallel to the boundary, they realize Verlinde lines <cit.>. Wrapping L_α on any loop in S^3 leads to the vev <cit.>:
⟨ L_α⟩^_S^3=S_0α/S_00 ,
which in terms of the handle-gluing operator ℋ and the Bethe roots Z^∗_α reads (cf. <cit.>):
⟨ L_α⟩^_S^3=L_α(Z^∗_0)=±ℋ(Z^∗_α)^-1/2/ℋ(Z^∗_0)^-1/2 .
These equations allow us to identify the appropriate Wilson lines as z_1=e^Z_1 and z̃_2=e^Z_1+Z_2, in addition to the trivial line L_0=1. In terms of these, the Bethe equations read[From the Bethe equations, instead of (<ref>), we actually find z̃_2^2=1+ z_1 z̃_2, but the resulting systems are equivalent under the condition that z_1≠0. We have included (<ref>) instead, because it is a fusion rule. See e.g. <cit.> for a computational commutative algebraic approach to such systems of polynomial Bethe equations.]
z_1^2 =1+ z̃_2 ,
z_1 z̃_2 = z_1+z̃_2 .
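Eliminating z̃_2 from this system yields a cubic for z_1 whose three real roots are 2cos(kπ/7) with k=1,3,5; this is the origin of the quantity d=2cos(π/7) entering the modular data below. A minimal sympy check (illustrative only):

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')       # z2 stands for the line denoted \tilde z_2 in the text
# Eliminate z2 from  z1^2 = 1 + z2  and  z1*z2 = z1 + z2:
cubic = sp.expand((z1*z2 - z1 - z2).subs(z2, z1**2 - 1))
print(cubic)                       # z1**3 - z1**2 - 2*z1 + 1

for k in (1, 3, 5):                # 2*cos(k*pi/7) are its roots
    print(cubic.subs(z1, 2*sp.cos(sp.pi*sp.Rational(k, 7))).evalf())   # ~0 for each k
```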
Next, using
S_αβ=L_α(Z^∗_β)S_0β ,
together with (<ref>) and (<ref>), we find up to an overall sign ambiguity of S and an overall phase ambiguity in T:
S =
2sin(π/7)/√(7)[ d 1 1-d^2; 1 d^2-1 d; 1-d^2 d -1 ],
T =(
[ 1 0 0; 0 e^2π i(-3/7) 0; 0 0 e^2π i(-2/7) ]),
with d=2cos(π/7). The S^3 partition function computed via (<ref>) gives:
|Z^_S^3|=2sin(2π/7)/√(7),
matching S_00 as it should. These indeed correspond to the Virasoro minimal model M(2,7) <cit.>, up to the said ambiguities.
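As on the previous sheets, this modular data can be sanity-checked numerically (a purely illustrative consistency check, with the conventions written above):

```python
import numpy as np

d = 2*np.cos(np.pi/7)
pref = 2*np.sin(np.pi/7)/np.sqrt(7)
S = pref*np.array([[d,        1,        1 - d**2],
                   [1,        d**2 - 1, d       ],
                   [1 - d**2, d,        -1      ]])
T = np.diag([1, np.exp(-2j*np.pi*3/7), np.exp(-2j*np.pi*2/7)])

print(np.allclose(S @ S, np.eye(3)))                           # S is symmetric with S^2 = 1
print(np.isclose(S[0, 0], 2*np.sin(2*np.pi/7)/np.sqrt(7)))     # S_00 = |Z_{S^3}|
print(np.allclose(np.linalg.matrix_power(T, 7), np.eye(3)))    # T^7 = 1 for these entries
```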
§.§ Other Argyres-Douglas theories
That R-twisted reduction of the rest of the (A_1,A_2n) family yields the 𝒯_n theories of Gang-Kim-Stubbs <cit.> will be demonstrated in an upcoming work <cit.>, where also R-twisted reduction of the (A_1,A_2n+1), (A_1,D_2n+1), and (A_1,D_2n+2) families will be shown to yield SUSY enhancing 3d 𝒩=2 Yang-Mills-Chern-Simons-matter theories with monopole superpotentials.
§.§ Higher sheets of (A_1,A_4)
We next discuss the 3rd and 4th sheets of (A_1,A_4). We will skip the other three sheets since their EFT data can be reached via simple conjugations from the sheets discussed.
§.§.§ Third sheet: duality up to an overall phase and the B-twist
Here we get a saddle at (x_1,x_2)=(3/7,-1/7). The EFT is a 3d 𝒩=2 U(1)× U(1) gauge theory with
k=[ 1 1/2; 1/2 -1 ].
The matter content is
Chiral Φ_-1,0 Φ_0,-1 Φ_-1,-1 Φ_0,2
R-charge 10/21 20/21 2/21 2/21
with the gauge charges indicated as subscripts. We also have k_1R=-9/7 and k_2R=-13/21. The monopole superpotential is
𝒲∼ V_1,0+V_0,1 .
As the R-charges indicate, there is also a natural possibility of adding
𝒲 ∋ Φ_0,-1^2Φ_0,2 .
Both (<ref>) and (<ref>) are invariant under the U(1)_f descending from 4d 𝒩=1 flavor symmetry (that is part of the 𝒩=2 R-symmetry). F-maximization with respect to this U(1)_f gives the R-charges 3/14, 13/14, 1/7, 1/7 for Φ_-1,0, Φ_0,-1, Φ_-1,-1, Φ_0,2, respectively, in a gauge-R mixing scheme where k_1R=-33/28 and k_2R=-19/28.
Comparison of the superconformal indices suggests that this third-sheet theory is dual to the 𝒯_2 theory <cit.> that we obtained on the 2nd sheet, except for a difference in background CS couplings. In particular, the third-sheet theory is also an 𝒩=4 SCFT. The differing background CS couplings manifest themselves in overall phases of the S and T matrices, and in any partition function obtained after the A-twist.
This duality up to an overall phase between the second and third sheets of (A_1,A_4) may appear disappointing at first, if one were hoping to find an entirely new TQFT. We will argue below, however, that they are in fact mirror dual to each other, which readers might find more appealing. So for the second-sheet theory 𝒯_2, we find its mirror on the third sheet and its conjugate on the seventh sheet. Note that in the Lee-Yang case, the mirror and the conjugate theories coincided and were identified as the conjugate LY on the fifth sheet of (A_1,A_2). The appearance of the mirror dual on the Galois orbit of a TQFT seems intriguing.
The possibility of a duality between the EFTs on different sheets of (A_1,A_4) is hinted at by the Galois orbit of M(2,7) as well. As in the (A_1,A_2) case, we expect the T matrix of the (γ+1)st sheet to be given by the T matrix of the 2nd sheet raised to the γth power. The T matrix of M(2,7) reads
T_M(2,7)=e^2π i17/42[ 1 ; e^2π i(-3/7) ; e^2π i(-2/7) ],
and its square is, up to an overall phase, simply a permutation of the original matrix:
T^2_M(2,7)=e^2π i17/21[ 1 ; e^2π i(-6/7) ; e^2π i(-4/7) ]=e^2π i5/21[ e^2π i(-3/7) ; e^2π i(-2/7) ; 1 ].
We now present explicit Bethe root calculations corroborating this picture.
§.§ TQFT S and T matrices
The uv data of the 3rd-sheet theory appropriate for the A-twist is given by[Verifying that this is a TQFT data via superconformal index calculations in Mathematica can be simplified with a minor gauge-R mixing so that poles are removed from the unit circle contours.]
triLY: 𝐔(1)× 𝐔(1) + Φ_-1,-1^𝐫=0 + Φ_0,2^𝐫=0 + Φ_-1,0^𝐫=1 + Φ_0,-1^𝐫=1 , 𝐤_1𝐑=-3/2, 𝐤_2𝐑=-1/2 ,
with the matrix of gauge-gauge CS couplings as in (<ref>). It is easily seen from (<ref>) that
k^+_ij=[ 2 1; 1 2 ],
while k^+_1R=-1 and k^+_2R=-1. Therefore
W(Z) =Z^2_1+Z^2_2+Z_1Z_2+2π i(Z_1+Z_2)
+Li_2(e^-Z_1-Z_2)+Li_2(e^2Z_2)+Li_2(e^-Z_1)+Li_2(e^-Z_2) ,
Ω(Z) =-Z_1-Z_2+log(1-e^-Z_1-Z_2)+log(1-e^2Z_2) .
Identifying the Wilson lines corresponding to the simple modules via (<ref>),
we find[The negative sign here is introduced to enforce positive coefficients in the fusion rule (<ref>).] z̃_1=-e^Z_2 and z̃_2=e^-Z_1, in terms of which the Bethe equations read:
z̃_1^2 =1+z̃_2 ,
z̃_1 z̃_2 = z̃_1+z̃_2 .
Next, using (<ref>) together with (<ref>), we find (up to the overall sign ambiguity of S and the overall phase ambiguity in T):
S =
2sin(π/7)/√(7)[ d^2-1 -d 1; -d -1 d^2-1; 1 d^2-1 d ],
T =(
[ 1 0 0; 0 e^2π i(1/7) 0; 0 0 e^2π i(3/7) ]).
The T matrix coincides with the square of the T matrix on the 2nd sheet (<ref>), confirming the general expectation that the T matrix on the (γ+1)st sheet is given by the γth power of the 2nd-sheet T matrix.
From the S matrix note that for |S_00| we get 2sin(π/7)(d^2-1)/√(7), which is different from that of triLY in (<ref>). This implies that the third-sheet TQFT and triLY have different S^3 partition functions, and are hence truly distinct TQFTs. Since they arise from the A-twist of the dual 3d 𝒩=4 SCFTs of the third and second sheets respectively, we suggest that the third-sheet TQFT arises from B-twisting 𝒯_2. In other words, we conjecture that the second- and third-sheet SCFTs are mirror dual, which we confirm by checking that their flavored indices coincide, up to the inversion of the flavor fugacity corresponding to the flip U(1)_H ⟷ U(1)_C. Since the A and B twists are truly distinct (and not just conjugate) for all 𝒯_n with n>1, we conjecture that more generally, B-twisted 𝒯_n will be on the Galois orbit of the A-twisted 𝒯_n.
§.§.§ Fourth sheet: non-abelian TQFT and fractional monopoles
Here we get a saddle at (x_1,x_2)=(2/7,2/7). The EFT is a 3d 𝒩=2 SU(2)× U(1) gauge theory. Before recovering the SU(2), we have a U(1)× U(1) theory with
k=-[ 2 1/2; 1/2 2 ].
The matter content is:
Chiral Φ_1,1 Φ_2,0 Φ_0,2 Φ_-1,0 Φ_0,-1
R-charge 2/21 2/21 2/21 20/21 20/21
as well as the light W-bosons with charges (1,-1) and (-1,1), which will be responsible for the gauge symmetry enhancement. We also have k_1R=k_2R=-34/21. The monopole superpotential is
𝒲∼ V_1,0+V_0,1 .
It is also natural to add:
𝒲 ⊃ 𝒲_Φ=Φ_-1,0Φ_0,-1Φ_1,1+ Φ_-1,0^2Φ_2,0+ Φ_0,-1^2Φ_0,2 .
For going to gauge coordinates where the SU(2) is manifest, consider:
Z^_S^3 =∫dσ_1dσ_2 e^-2π i k_ijσ_iσ_j/2+2π k^_jRσ_j
Γ_h(2/21i+σ_1+σ_2)Γ_h(2/21i+2σ_1)Γ_h(2/21i+2σ_2)Γ_h(20/21i-σ_1)Γ_h(20/21i-σ_2)/Γ_h(±(σ_1-σ_2)).
We change variables to y:=y_1=(σ_1-σ_2)/2 and x:=y_2=(σ_1+σ_2)/2:
Z^_S^3 =∫dy/2dx e^2π i (3y^2/2+5x^2/2)-2π 68/21x
Γ_h(2/21i+2x)Γ_h(2/21i+2x+2y)Γ_h(2/21i+2x-2y)Γ_h(20/21i-x-y)Γ_h(20/21i-x+y)/Γ_h(±2y).
A further shift of x by -i/21 makes the R-charges 0,1, while k_xR=-3 and k_yR=0. We thus get:
TQFT^γ=3_A_1A_4: 𝐒𝐔(2)_-3× 𝐔(1)_-5/ℤ_2 + Φ_𝟑,2^𝐫=0 + Φ_𝟐,-1^𝐫=1 with 𝐤_𝐱𝐑=-3 ,
with the ℤ_2 identification to be explained momentarily. The superpotential becomes:
𝒲 ∼ √(Y_- V_+)+√(Y_+ V_+)+𝒲_Φ → √(Y V_+)+𝒲_Φ ,
where V_+ is the U(1) monopole with flux m_x=1, and Y is the SU(2) monopole with GNO charge m_y=1. To obtain the monopole superpotential in (<ref>) from (<ref>), we have used (<ref>):
[ m_y; m_x ]=[ 1/2 -1/2; 1/2 1/2 ][ m_1; m_2 ].
We have checked that the superconformal index of this theory is trivial:
Ĩ(q)=1 ,
indicating that it is gapped and flows to a TQFT. Note that in the computation of the index, we have to sum over m_y,m_x∈1/2ℤ subject to m_y+m_x∈ℤ, as dictated by
m_1,m_2∈ℤ from the UV completion <cit.>. This restriction is reflected in the ℤ_2 identification in (<ref>).
It would be interesting to perform half-index calculations for this theory and see whether the three-component vector-valued modular forms (vvmfs) from <cit.> mentioned in Section 5.3 of <cit.> arise. The vvmf in <cit.> in particular—see Eq. (6.1) there—appears to be a reasonable target.
Our preliminary calculations do not reproduce the expected fusion rules and modular data from the Bethe root techniques in this case. We leave clarification of the relation between the 4th-sheet TQFT and the triLY-type TQFTs of the second and third sheets to future work.
§.§ TQFT S and T matrices
From the data in (<ref>), we first compute k^+_yy=8, k^+_xx=12, k^+_xR=0 . Then obtain:
W(Z) =4Y^2+8π iY+6X^2+12π iX+Li_2(e^-X-Y)+Li_2(e^-X+Y)
+Li_2(e^2X)+Li_2(e^2X+2Y)+Li_2(e^2X-2Y) ,
Ω(Z) =log(1-e^2X)+log(1-e^2X+2Y)+log(1-e^2X-2Y)
-log(1-e^2Y)-log(1-e^-2Y) .
The resulting Bethe equations turn out to be considerably more unwieldy, and incompatible with the fusion rules on the 2nd and 3rd sheets (<ref>), (<ref>).
§ DISCUSSION AND OPEN QUESTIONS
Building on <cit.>, we have further developed the
4d → 3d → 2d
picture of the SCFT/VOA correspondence <cit.>. We studied here the U(1)_r-twisted circle reductions that leave only finitely many points of the Coulomb branch unlifted in 3d <cit.>, focusing specifically on theories without Higgs branches. Then, either via the topological A-twist (when we have SUSY enhancement to 3d 𝒩=4), or via flowing to gapped phases, we obtained 3d TQFTs without local operators. The former TQFTs are non-unitary, and the latter are unitary, but in either case, they are controlled by some modular tensor categories <cit.>. On their holomorphic boundaries, our TQFTs support VOAs whose characters are accessible via line-decorated half-indices. The minimal U(1)_r twist with γ=1 yields the VOAs of <cit.>, while other choices yield other VOAs related to those of <cit.> via Galois/Hecke-type transformations <cit.>.
At each step in (<ref>), there are various choices to be made that we did not spell out in the main text. We address some of them below.
§.§ Topological twist and Bethe roots technique
We use the Bethe roots technique <cit.> as formulated in <cit.> to compute the TQFT S and T matrices. There is, however, a technical subtlety that we skipped. This technique was developed for the partial topological, or quasi-topological, twist in 3d 𝒩=2 theories, sometimes also called 3d 𝒩=2 A-twist. In this paper, on the other hand, we never work with this twist. We are either interested in the fully topological twist, or we consider gapped theories that are topological in the IR on their own, without any twist. Then are our results reliable? We believe that when studying partition functions on three-manifolds that are total spaces of circle fibrations, this distinction is irrelevant. Applying the 3d 𝒩=2 A-twist to a gapped theory is almost vacuous, and will at most result in the overall phase of T, which we ignore anyways. The distinction between the topological and quasi-topological twist is slightly more subtle. By deforming the metric on the total space of circle fibration, we can make sure that the topological and quasi-topological backgrounds agree almost everywhere, except the location of fibering operators. This implies that the computation of handle-gluing operators is reliable, and our S-matrix is fully correct. At the same time the fibering operators are likely to receive some additional phases in the fully topological background, capturing the overall phase of T. It would be useful to clarify this issue.
§.§ Sensitivity to 2d boundary conditions
In the main text of the paper, we only studied the simplest 𝒩=(0,2) boundary conditions, with either Dirichlet or Neumann on all fields, with the boundary Fermi multiplets canceling anomalies when necessary. The
hope was that such boundary conditions could be used to probe the possible VOAs and the bulk TQFT. However, this does not exhaust the possible boundary conditions. Furthermore, the cigar reduction in <cit.> implied that there exist preferred, or canonical, boundary conditions H_ε for the second sheet theory, guaranteed to carry the VOA of the 4d SCFT. It was also argued that the half-index of H_ε (or, equivalently, the TQFT partition function on the solid torus with the H_ε boundary) computes the Schur index.
In the context of Lagrangian theories, such preferred boundary conditions H_ε were identified in <cit.> as the 𝒩=(0,4) Neumann, deformed to be compatible with the topological 3d A-twist. In the notation H_ε, ε stands for this deformation, referred to as the Costello-Gaiotto deformation <cit.>. Before discussing the possible modifications in the non-Lagrangian context of our main interest here, let us explain how the (0,4) Neumann boundary conditions reproduce the SCFT Schur index in Lagrangian cases.
First, our r-twisting is trivial in Lagrangian theories, meaning that there exists only one sheet, corresponding to the ordinary supersymmetric circle reduction. It always gives a (not necessarily dominant <cit.>) holonomy saddle at the origin, yielding a 3d 𝒩=4 theory with the same field content as that obtained from the naive dimensional reduction.
The (0,4) Neumann boundary conditions amount to (0,2) Neumann on all multiplets, except for the adjoint chirals in the 3d 𝒩=4 vectors that should have (0,2) Dirichlet. For compatibility with the A-twist, we use U(1)_H as the 𝒩=2 R-symmetry, and compute the half-index using formulas from <cit.>. The result is:
ℐ(q)=(q;q)^2r_G/|W|/θ_0(q^1/2;q)^n_ρ_0/2∫_𝔥_cld^r_Gx ∏_αθ_0(z^α;q)/∏_ρ_+^χθ_0(q^1/2z^ρ^χ_+;q),
matching the Schur index of the Lagrangian 4d 𝒩=2 SCFT. The weights ρ_+^χ above go over all the positive weights of the gauge group representation of the chiral multiplets inside the (half-) hypermultiplets, and n^_ρ_0 is the number of zero weights in the chiral multiplets inside (half-) hypers. Note that we are not including among ρ_+^χ the weights of the chiral multiplets inside 4d 𝒩=2 vector multiplets.
From the 3d 𝒩=2 perspective, half the numerator contribution θ_0(z^α; q)=(z^α; q)(q z^-α;q) in (<ref>) comes from the 3d 𝒩=2 vector, while the other half is from its chiral partner.
Can we similarly determine the preferred boundary conditions H_ε in the non-Lagrangian examples of our main interest? The answer is almost certainly yes, though we leave this question to the future work, only explaining the main idea here. Starting with the 4d 𝒩=1 Lagrangians of Maruyoshi-Song <cit.>, we are supposed to dimensionally reduce them on the cigar with the topological U(1)_r twist, following the procedure in <cit.>. The corresponding background and Lagrangians were described in <cit.>. In the 3d limit, like in this paper, the dominant contribution will come from certain gauge field configurations. Namely, there will be a non-zero gauge flux through the cigar, breaking the gauge group down to its maximal torus, and screening the U(1)_r flux in such a way that some 4d chiral multiplets will possess zero modes on the cigar, resulting in the 3d chiral multiplets. Our Φ_+1 in the Gang-Yamazaki theory is one such multiplet, and the boundary condition on Φ_+1 engineered by the tip of cigar clearly is the (0,2) Neumann. Similarly, the surviving U(1) gauge multiplet also obeys the (0,2) Neumann. Such boundary conditions, as we know, are anomalous. Since the starting 4d theory is anomaly-free, there is only one resolution: The tip of the cigar must support additional localized normalizable modes that in the 3d limit become boundary modes. Indeed, such a possibility can be inferred from the analysis in <cit.>.
In the GY theory, imposing (0,2) Neumann boundary conditions on both the gauge and chiral multiplets on the right boundary results in the boundary gauge anomaly
3/2𝐟^2 - 2 𝐟·𝐟_x -1/2𝐟^2 = 𝐟^2 - 2 𝐟·𝐟_x ,
where 𝐟 is the gauge field strength and 𝐟_x is for U(1)_J. We cancel the gauge anomaly by adding a boundary (0,2) chiral multiplet of gauge× U(1)_J× U(1)_R charges (1,-1,1), which carries anomaly -(𝐟+𝐟_x)^2. We also include an extra boundary Fermi multiplet of charges (0, 1, 1) to ensure that the x→ 1 limit is regular (where x is the U(1)_J fugacity).
Now let us compute the half-index. Since we have a 2d chiral multiplet, we need to have a way of dealing with the infinite number of poles in the integral.
It is known how to deal with this problem in the context of the 2d elliptic genera, and the JK prescription appears as the result of carefully dealing with the zero modes <cit.>.
We use the two-step procedure of <cit.> to compute the 3d half-index as in Appendix <ref>. First, compute the half-index with Dirichlet condition on the gauge field, which in the present case implies the same boundary anomaly as in (<ref>). Then dress it with the 2d contributions from the anomaly-canceling matter, and finally gauge the boundary U(1) symmetry descending from the bulk gauge field via a 2d gauge multiplet. The result is (after a change z→ z (-q^-1/2) of the gauge variable)
II_Neu(q) =lim_x→1 θ_0(x^-1;q) (q;q)^2 ∮_z=xJK resdz/2π i z1/θ_0(zx^-1;q)1/(q;q)∑_m∈ℤq^m^2/2 z^m (-q^-1/2x^-1)^m/(z q^m;q)
=lim_x→1 - (x^-1;q)(xq;q)/(q;q) ∑_m∈ℤ q^m^2/2 (-q^-1/2)^m 1 /(x q^m;q)
=∑_m∈ℤ_≥0q^m^2+m/(q;q)_m,
which matches the vacuum character χ_0^M(2,5)(q). In going from the second line to the third we used:
lim_x→1(x^-1;q)/(x q^m;q) = (-1)^m+1 q^m(m+1)/2/(q;q)_m for m≤0, and 0 for m>0.
To get the non-vacuum character, we insert a charge -1 Wilson line. This amounts to an insertion of q^m in the summand of the previous computation, changing it to:
lim_x→1 - (x^-1;q)(xq;q)/(q;q) ∑_m∈ℤ q^m^2/2+m (-q^-1/2)^m 1 /(x q^m;q)= ∑_m∈ℤ_≥0q^m^2/(q;q)_m,
which matches the non-vacuum character χ_1^M(2,5)(q).
Since we found the correct characters, we conjecture that the boundary chiral and Fermi multiplets that we included by hand must appear naturally as normalizable edge modes in the reduction of 𝒩=1 Maruyoshi-Song Lagrangian on the cigar.
We emphasize that these characters can be obtained from the (𝒟, D_c) boundary conditions in the GY theory <cit.>. This suggests that such boundary conditions are dual to the enriched Neumann that we just considered. It would be quite interesting to study these issues further. It is especially interesting to systematically derive the H_ε type boundary conditions via the topological cigar reduction of the Maruyoshi-Song 𝒩=1 Lagrangians. As said earlier, we conjecture that the above Neumann boundary conditions with the boundary chiral and Fermi multiplets should arise in such a way.
Note that the described procedure gives the preferred boundary conditions H_ε for the second sheet theory. In fact, it also works for its conjugate, or “last” sheet, i.e. the fifth sheet in the (A_1,A_2) case. Indeed, the second sheet has the holonomy e^2π i/N originating from the topological twist along the cigar, and the last sheet has e^2π i (N-1)/N=e^-2π i/N, clearly originating from the anti-topological twist along the cigar.[These are the 2d B and B̄ twists along the cigar <cit.>, which are switched by the charge conjugation.] The intermediate higher-sheet theories do not seem to possess preferred boundary conditions like H_ε descending from 4d.
Thus, for the conjugate Fibonacci theory of the third sheet (resp. the Fibonacci theory of the fourth sheet), we studied the Dirichlet as well as Neumann half-indices simply for reasons of naturalness. We also had a prior expectation, based on our experience with the second sheet, as well as other examples, that the Dirichlet boundary conditions (including D_c on some chirals) are likely to capture the bulk TQFT. This reasoning was explained around the conjecture (<ref>). While with Neumann conditions we found the expected <cit.> (G_2)^_1 characters on the left (resp. right) boundary, with Dirichlet conditions we obtained characters of the fermionized tricritical Ising model times a free Majorana fermion on the opposite boundary. This motivated us to conjecture that the bulk TQFT is actually the spin-TQFT SM(3,5)⊗SO(1)_1 as in (<ref>) on the third sheet (or its conjugate on the fourth sheet).
The conjecture is corroborated by the following considerations. First, assuming that the third-sheet TQFT is SM(3,5)⊗SO(1)_1 with c_bulk=6/5 as in (<ref>), we see that appearance of the (G_2)^_1 VOA on its boundary is compatible with our addition of the four boundary Fermi multiplets:[The negative sign is because we are considering the Weyl anomaly induced on the opposite boundary. Compare with Section 4.4 in <cit.>, in particular their Eq. (4.49).]
-c_bulk+c_boundary = -6/5 + 4 = 14/5=c^_(G_2)^_1.
The four added fermions on the opposite boundary yield U(4)_1, which is the only chiral fermionic CFT of central charge 4 <cit.>. The corresponding bulk spin-TQFT is invertible. These considerations point to the possibility of realizing (G_2)^_1 as:
(G_2)_1 = (U(4)_1)/(SM(3,5)⊗SO(1)_1) .
The bosonic counterpart (G_2)^_1 = (E_8)^_1/(F_4)^_1 is of course standard.
We can consider the 3d system on an interval with the enriched Neumann boundary conditions on one boundary and Dirichlet on the other.[For intervals with both boundaries being Dirichlet or Neumann see e.g. <cit.>.]
As those boundaries are mutually exclusive, after the interval reduction, the only surviving degrees of freedom are the boundary fermions.
Indeed, the product of VOAs (G_2)_1 ⊗SM(3,5) ⊗SO(1)_1 conformally embeds into U(4)_1, as (<ref>) suggests.
The G_2 VOA at level 1 has two modules, and we denote the second, non-vacuum, module by L_ω̂_2(G_2).
It is straightforward to check at the level of characters the following decomposition of the four-fermion vacuum module:[We take the liberty of denoting the affine VOA as G_k, and using the same symbol for both the VOA and its representation category. At the same time, U(4)_1 is understood as a spin-TQFT, whose corresponding VOA of free fermions is an extension of the affine VOA of U(4) at level one. We hope this frivolous approach to notations will not cause confusion.]
U(4)_1 = ((G_2)_1 ⊗ V(1,1)^(3,5)⊕ L_ω̂_2(G_2) ⊗ V(1,3)^(3,5)) ⊗ Ff^SO(1),
where Ff^SO(1) denotes the Majorana fermion VOA.
To the best of our knowledge, this relation has not appeared in the literature before.
While we have checked this relation at the level of characters, we believe it indeed holds at the level of VOAs.
This relation is, of course, consistent with the expectation that Wilson lines form bimodules of the boundary algebras, and extend them in the 2d limit (see <cit.>).
This result also implies that the representation categories of those vertex algebras are
braided-reverse equivalent: (G_2)_-1≃SM(3,5), up to an invertible factor. More precisely, they are spin-TQFTs and depend on the choice of spin structure:
(G_2)^_-1⊗ U(4)_1 ≃SM(3,5) ⊗SO(1)_1 .
Since (G_2)^_1= (E_8)^_1/(F_4)^_1 and U(4)_1= SO(8)_1, we can alternatively write:
(F_4)^_1 ⊗ SO(8)_1/(E_8)^_1 = SM(3,5)⊗SO(1)_1 (with central charges c = 26/5, c = -4, and c = 6/5 for (F_4)_1, SO(8)_1/(E_8)_1, and the right-hand side, respectively),
at the level of spin-TQFTs.
This is reminiscent of level-rank dualities, which have been recently discussed in closely related contexts in <cit.>.
The above discussion serves to reinforce the message that we are really dealing with spin-TQFTs. The Galois orbits encountered here and in <cit.> should thus be considered orbits of fermionic MTCs, whose T^2 matrices coincide with those of the bosonic counterparts such as LY and Fib. A more careful derivation of the TQFT S and T matrices, including the overall phases that we have ignored in this paper, may shed light on this aspect of the problem as well.
§.§ Sensitivity to 3d superpotentials
We have discussed two kinds of 3d superpotentials in this work: matter superpotentials 𝒲_Φ and monopole superpotentials 𝒲^_V. We have not considered dressed monopole superpotentials containing both monopoles and matter fields, because in our settings they would either have wrong R-charge or nonzero spin. Whether spin-singlets can be formed from higher powers of such terms with the right R-charge is an intriguing possibility that we leave for future studies.
Already our derivations of 𝒲_Φ and 𝒲^_V may be questioned since they relied largely on naturalness. As for 𝒲_Φ, it can actually be easily checked that they can be obtained from the corresponding superpotentials in 4d <cit.>. For example, the superpotential 𝒲^Fib_Φ=Φ_-1^2Φ_+2 of the Fibonacci theory of Section <ref>, arises from the term denoted tr(pϕ p) (=p_-1^2ϕ_2+p_1^2ϕ_-2-2 p_1p_-1ϕ_0 ) in Eq. (7.3) in <cit.>. Nevertheless, we have also numerically investigated relevance of 𝒲^Fib_Φ to the IR phase of the theory on ℝ^3 as follows. Dropping it, a flavor U(1)_s arises in the 3d theory, under which Φ_-1 has charge 2 and Φ_2 has charge -1 (in a scheme where k_gs=0). Numerically F-maximizing with respect to U(1)_s, we found a superconformal fixed point without extended SUSY. In other words, the fixed point obtained upon dropping 𝒲^Fib_Φ would neither be gapped to yield a unitary TQFT, nor have extended SUSY to yield a non-unitary TQFT after twisting.
As for 𝒲^_V, the fact that the corresponding superpotentials should be generated on the outer patches essentially follows from the Affleck-Harvey-Witten mechanism <cit.>, but why the inner patches always inherit the monopole superpotentials of their neighboring outer patches deserves further scrutiny. They are certainly needed (in all cases discussed above, except for Gang-Yamazaki) if the 3d 𝒩=2 Coulomb branch of the reduced Maruyoshi-Song theory is to be completely lifted on ℝ^3. For example in the Fib theory, as can be seen in Figure <ref>, only the 3d 𝒩=2 Coulomb branch to the right of the x^∗=.2 saddle is lifted by the CS coupling; for the part to the left of the saddle to be lifted on ℝ^3, the monopole superpotential V_- is necessary. On curved backgrounds where the contact terms of <cit.> associated with L_h as in Figure <ref> are active, they would of course suffice for lifting the 3d 𝒩=2 Coulomb branch, and the superpotential V_- would not be needed for that purpose in the theory.
Proper superpotentials can be essential for SUSY enhancement to 𝒩=4, or for a mass gap in the 3d theory.
Most significantly, the superpotentials prevent extra symmetries from emerging in the 3d EFT.
Such symmetries would widen the possibilities of F-maximization, potentially leading to new IR phases for the 3d EFT on ℝ^3 (different from the SUSY enhanced or gapped phases found above). If they had mixed boundary anomalies with the gauge symmetry, they might also necessitate a different set of anomaly canceling boundary multiplets when considering Neumann conditions on the gauge fields. For the purpose of locating the TQFT point on the moduli-space of R-mixings, however, the widened set of possibilities would actually be only a minor inconvenience. In fact even that minor inconvenience can be completely bypassed using a better index as the starting point in 4d, as explained below.
§.§ Sensitivity to 4d background
In this work we have mainly focused on the index
ℐ^γ(q)=ℐ_t(q e^2π iγ,q,q^4/3),
corresponding to the 𝒩=1 background of <cit.>, albeit with the U(1)_r-twisted boundary conditions around the circle. From the point of view of the SCFT/VOA correspondence, there are several other 4d backgrounds that constitute more natural starting points for the twisted reduction. We now discuss some of those and their respective advantages compared to that of ℐ^γ(q).
Cigar_ε × Riemann surface and the 4d A-model. The present work is to a large extent a follow-up to <cit.> and <cit.>. The former used an Ω-deformed cigar × Riemann surface background, reducing on the angular direction of the cigar, while the latter used the 4d A-model, of which the T^2×Σ topologically twisted index <cit.> is a prominent example. For the 4d/3d/2d
picture that we have painted here, the cigar × Riemann surface background appears to be the most appropriate. Using that background as our starting point though, would require evaluation of the contact terms in <cit.> on the corresponding rigid supergravity background, which has not yet been done. The T^2×Σ index on the other hand, has been examined from an EFT perspective in <cit.>, but only partially, and in particular its large gauge flux sectors on Σ remain to be understood. Our focus on ℐ^γ(q) was because its 3d EFT is quite well-understood, and once the 3d EFT is obtained, general local QFT considerations imply that it can be put on any 3d background, including those arising from circle reductions of cigar × Riemann surface and T^2×Σ. The 3d EFTs obtained via reduction on different circles of different backgrounds can of course differ by various flavor-R mixings, which can indeed be significant for our intended applications. This aspect of the problem, however, can be put into sharp focus by considering other limits of the 𝒩=2 index as we discuss next.
The Schur and Coulomb backgrounds.
A natural alternative to ℐ^γ(q) from the viewpoint of the 4d/2d correspondence is the R-twisted Schur index:
ℐ_Schur^γ(q):=ℐ_t(q e^2π iγ,q,q).
Thinking of the Schur index as a partition function on S^3× S^1, taking the Cardy limit shrinks the S^1, which is one of the directions where the VOA lives (cf. <cit.>). So we would be deviating from the picture (<ref>) (cf. <cit.>), but only momentarily, since we expect the resulting EFT can then be placed on other backgrounds, and in particular on the background of the 3d half-index.
Studying the Cardy limit of ℐ_Schur^γ(q) turns out to be quite illuminating. First, it is straightforward to check that ℐ_Schur^γ(q) yields exactly the same Q_h^γ function as ℐ^γ(q). It also yields analogs of the L_h^γ functions that differ from the ones we obtained above only by an overall factor of 3/2 (similarly to what was noted for Schur indices in <cit.>). Consequently, all our results could have been obtained equally well starting from ℐ_Schur^γ(q) instead of ℐ^γ(q).
Although it is not obvious from the way we have defined ℐ_Schur^γ(q) in (<ref>), explicit calculation for the theories we have considered shows that ℐ_Schur^γ(q) exactly coincides (after an insignificant shift of all gauge holonomies by 1/2) with the usual Schur index on its higher sheets: ℐ^γ=0_Schur(q e^2π iγ).
This close connection between ℐ_Schur(q e^2π iγ) and ℐ^γ(q) provides an explanation for what appeared to be remarkable accidents in Sections <ref> and <ref>: that Q_h^γ and L_h^γ vanished on the holonomy saddles. This follows from the fact that the theories we considered have single-valued Schur indices (i.e. ℐ_Schur(q e^2π iγ)=ℐ_Schur(q) for all γ∈ℤ), unlike what their UV Maruyoshi-Song appearances might suggest. As a result, ℐ^γ_Schur(q) should have the same q→1^- asymptotic for all γ. Since for γ=0 they are known to exhibit the Di Pietro-Komargodski-type asymptotic <cit.>, and since the saddle values of Q_h^γ and L_h^γ quantify deviations from that asymptotic formula <cit.>, it follows that Q_h^γ=L_h^γ=0 on the holonomy saddles for all γ∈ℤ.
Another advantage of working with ℐ^γ_Schur(q) is that its Cardy limit lands us directly on the F-maximized point of the 3d EFT! In particular, Cardy limit of the 2nd sheet index ℐ^γ=1_Schur(q) gives directly the R-charges and mixed gauge-R CS couplings of the 3d 𝒩=4 SCFT,[This is known not to be the case on the 1st sheet where γ=0 <cit.>.] relieving us from the burden of F-maximization that we would need to do if we reduced the 𝒩=1 index as in Section <ref> for example. This observation is quite helpful in deriving new families of 3d SUSY enhancing theories from 4d, as will be demonstrated in <cit.>.
Cardy limit of the R-twisted regularized Coulomb index:
ℐ_Coul^γ(q):=lim_t→ q^2ℐ_t(q e^2π iγ,q,t),
does even better: it lands us directly on the A-twisted TQFT. This is in fact how most of the TQFT data in the main text (such as (<ref>)) were obtained. So it is actually the Cardy limit of this index that bridges the 4d/2d correspondence most practically.
The limit in (<ref>) needs a few clarifying remarks. First, convergence requires that it be more precisely q^2/t→1^-. This effectively assigns 4d 𝒩=1 R-charge r=1 to the fundamental chiral multiplets (analogous to the chirals in hypers,) as can be seen from (<ref>), and r→0^+ to all other matter multiplets (which are analogous to adjoint chirals in the 4d 𝒩=2 vector multiplets). This seems to be the intended Z_top.(S^3× S^1) of <cit.>. But it differs slightly from the usual Coulomb index <cit.>, wherein q^2/t is set to a constant u, and then q is sent to zero.
Recall that the Schur index is associated to the 4d Higgs branch <cit.>. We have shown how its q- (or “low-temperature”) expansion can be obtained via the half-index calculation in the A-twist TQFT arising from the Cardy (or “high-temperature”) limit of the R-twisted Coulomb index. Analogous crossed-channel relations were noticed in the context of the 4d A-model in <cit.>. This raises the possibility that the 4d A-model perspective can shed light, via a combination of modularity and mirror symmetry, on the intriguing Coulomb/Higgs relations discovered in <cit.>. We leave clarification of this for the future.
We are grateful to Anirudh Deb for collaboration during the early stages of this work, and to C. Closset, S. Chen, D. Delmastro, N. Garner, H. Kim, Z. Komargodski, J. Kulp, N. Mekareeya, D. Pei, N. Rajappa, L. Rastelli, B. Rayhaun, S. Razamat M. Sacchi, S. Shao, and Y. Zheng for helpful correspondences and conversations related to this project. AA owes his involvement in this project to encouraging remarks from L. Rastelli, and is indebted to Z. Komargodski for clarifying questions. The work of AA was supported in part by the NSF grant PHY-2210533 and the Simons Foundation grants 397411 (Simons Collaboration on the Nonperturbative Bootstrap) and 681267 (Simons Investigator Award). The work of DG is supported in part by the National Research Foundation of Korea grant NRF-2022R1C1C1011979. DG also acknowledges support by Creative-Pioneering Researchers Program through Seoul National University. ML was supported by
the National Science Foundation under Award PHY 2210533.
§ REDUCING THE INDEX TO THE S^3 PARTITION FUNCTION
In this appendix we illustrate how the 2nd sheet index ℐ^γ=1 of the (A_1,A_2) theory can be reduced in the Cardy-like limit to the supersymmetric S^3 partition function of the Gang-Yamazaki theory.
§.§ Subleading asymptotics of the index
The ℤ_2 Weyl redundancy implies we can focus on 0≤ x≤1/2. Therefore we only consider the saddle at x=0.2 and multiply its contribution by two to account for its ℤ_2 image at x=-0.2.
To find the contribution from a small neighborhood of x=0.2, we need to apply inside the integrand of (<ref>) asymptotic estimates that are valid uniformly near x=0.2. As direct examination shows, to all the elliptic gamma functions in (<ref>), except Γ_e(z e^-2π i/5(pq)^4/15) to which we will return shortly, we can apply the following estimate (see Eq. (2.12) of <cit.>):
Γ_e((pq)^r/2e^2π i u) = exp(-2π i ( 1/στ B_3(u)/3!
+ 1/στ(σ+τ/2) (r -1) B_2(u)/2!
+3(r-1)^2 (σ+τ)^2-(σ^2+τ^2)/24στ B_1(u)+𝒪(β)) ) ,
valid for any r∈ℝ, and point-wise for u∈ℝ∖ℤ. The functions B_1,2,3 above are the periodic Bernoulli polynomials, explicitly given by
B_3(u) :=B_3({u})=1/2{u}(1-{u})(1-2{u}),
B_2(u) :=B_2({u})=-{u}(1-{u})+1/6,
B_1(u) :=
B_1({u})={u}-1/2 for u∉ℤ,
0 for u∈ℤ.
It is a simple exercise to show the compatibility of (<ref>) with (<ref>).
The remaining elliptic gamma function in (<ref>), namely Γ_e(z e^-2π i/5(pq)^4/15), corresponds to a light multiplet in the dimensionally reduced theory. That is because the real mass of the 3d multiplet would be ∝ x-0.2, which is small near x=0.2. In mathematical terms, application of (<ref>) to this elliptic gamma function is not justified because the uniform validity of (<ref>) breaks down at u=0, corresponding to u=x-0.2=0 when applied to Γ_e(z e^-2π i/5(pq)^4/15). We have to use instead (see Eq. (2.31) of <cit.>)
Γ_e((pq)^r/2e^2π i u) = exp(-2π i ( 1/στ K_3(u)/3!
+ 1/στ(σ+τ/2) (r -1) K_2(u)/2!
+3(r-1)^2 (σ+τ)^2-(σ^2+τ^2)/24στ K_1(u)+𝒪(β)) )
×Γ_h(2π/β u^_ℤ+(ω_1+ω_2/2) r ;ω_1,ω_2),
valid for any r∈ℝ, and point-wise for any u∈ℝ. The functions K_j, which we call modified periodic Bernoulli polynomials, are defined as
K_j(u):=B_j(u)+j/2sign(u^_ℤ) (u^_ℤ )^j-1.
Here u^_ℤ :=u-nint(u), with nint(·) the nearest integer function. The parameters ω_1,2 above are defined as ω_1:=2πσ/β, ω_2:=2πτ/β. For simplicity we assume below that Ω_1,2=0, so that ω_1=ib, ω_2=ib^-1.
Applying the above estimates, we get (see Eq. (2.47) of <cit.>):
ℐ^γ=1(p,q)≈ e^-V^out(x^∗) Z^in_3d(b),
where -V^out(x^∗)=2π i/90σ+τ/2τσ, up to a constant shift (related to the induced k_RR and k_grav) that we do not consider here, and (see Eq. (2.48) of <cit.>):
Z^in_3d(b) =∫_-∞^∞dx̃ e^-2π i k_gg x̃^2/2 -2π iω k̃^_gR x̃ Γ_h(8/15ω+x̃),
where k_gg=-3/2, k̃_gR=1/30, and ω:=(ω_1+ω_2)/2=i(b+b^-1)/2. The new variable x̃ above is defined via
x̃:=2π/β(x-x^∗),
where x^∗=0.2.
§.§ R-current mixing and S^3 partition function comparisons
We would like to compare our result Z^in_3d(b) above with the S_b^3 partition function of the GY theory at the superconformal point k_gR=0 and r_χ=1/3. With a real-mass m_J turned on for the topological U(1)_J symmetry, the said partition function is <cit.> (see Appendix <ref>):
Z(b,m^_J)=∫_-∞^∞dσ e^-2π i k_ggσ^2/2 e^2π i m^_J σ Γ_h(r_χω+σ),
where k_gg=-3/2.
We first perform a change of variables in (<ref>) from σ to x̃ via
σ=x̃-ω(r_χ-r_0),
where we have denoted the R-charge 8/15 appearing in (<ref>) by r_0.
The necessity of this transformation for a successful comparison of Z^in_3d(b) and Z(b,m^_J) as in Eq. (<ref>) signals that the R-charges used in the two expressions differ by mixing with the U(1) gauge (besides a mixing with the U(1)_J that will be discussed momentarily). In terms of the new variable, we get:
Z(b,m^_J)=∫_-∞^∞dx̃ e^-2π i k_ggx̃^2/2 e^2π i (m^_J-k_ggω(r_0-r_χ)) x̃ Γ_h(r_0ω+x̃),
up to an overall phase (related to the background fields) that we do not consider here. It is now clear that we have a match with (<ref>) if we take:
m^_J=-ωk̃_gR+ k_gg(r_0-r_χ)ω.
We now explain why this sort of relation signals the fact that the R-charges used in (<ref>) and (<ref>) differ by a mixing with the topological U(1)_J. See <cit.> for earlier related discussions in the context of reduction on the first sheet (γ=0).
Consider a 3d 𝒩=2 U(1)_k_gg gauge theory with a chiral multiplet of gauge charge g_χ and a mixed gauge-R CS coupling k̃_gR. Assume the U(1)_R mixes with the topological U(1)_J and the gauge U(1) as follows
R_new=R+c_1· J+c_2· g,
where J,g stand for the topological U(1)_J and gauge U(1) charges, respectively.
The 3d 𝒩=2 chiral multiplet is not charged under U(1)_J. Therefore c_2 is fixed by the shift in its R-charge as
c_2=r_χ-r_0/g_χ.
Note that we have taken its R_new to be r_χ, while its R is r_0, and its g is g_χ.
To fix c_1, we find the shift in the R-charge of the J=1 monopole. The old R-charge of the J=1 monopole is -k̃_gR-|g_χ|/2(r_0-1). Assume the new R-current has a mixed CS level k_gR with the gauge U(1). The new R-charge of the J=1 monopole is then -k_gR-|g_χ|/2(r_χ-1). Since the gauge charge of the J=1 monopole is g_m=-k_gg-1/2g_χ|g_χ|, we get an equation
-k_gR-|g_χ|/2(r_χ-1)_R_new=-k̃_gR-|g_χ|/2(r_0-1)_R+ c_1·1+c_2·(-k_gg-1/2g_χ|g_χ|).
Plugging in c_2 from (<ref>), we obtain:
c_1=-k_gg/g_χ(r_0-r_χ)-k_gR+k̃_gR.
If c_1 were zero, this would be a special case of the formula <cit.>:
k^new_gR=k^old_gR+k_gg c_2.
We see from (<ref>) that the nonzero mixing with U(1)_J via c_1 has the effect of inducing an additional Δ k_gR:
k^new_gR=k^old_gR+k_gg c_2-c_1.
A comparison of the partition functions:
Z_3d(b)=∫_-∞^∞dx̃ e^-2π i k_ggx̃^2/2 e^-2π i ω k̃^_gRx̃ Γ_h(r_0ω+g_χx̃),
and
Z_new(b,m^_J)=∫_-∞^∞dσ e^-2π i k_ggσ^2/2 e^2π i (m^_J-ω k^_gR) σ Γ_h(r_χω+g_χσ),
now establishes equivalence (possibly up to an overall constant related to the background fields) upon identifying:
σ=x̃-c_2ω, m^_J=-c_1ω.
The former relation implements an (“unphysical”) change of gauge-R mixing scheme, while the latter compensates for the (“physical”) difference k_gR-k̃_gR due to the U(1)_J-R mixing.
Note that the change of integration contour that σ=x̃-c_2ω yields can be undone via contour deformation assuming that r_0,r_χ∈(0,2). This follows from the fact that for generic b∈ℝ_>0, the function
Γ_h(x) has simple zeros at
x=ibℤ^≥1+ib^-1ℤ^≥1 and simple
poles at x=ibℤ^≤0+ib^-1ℤ^≤0.
§.§ Turning on flavor fugacities/real-masses
Let us set t=(pq)^2/3ξ in (<ref>). The fugacity ξ corresponds to the part of the Cartan of the 4d 𝒩=2 SU(2)_R× U(1)_r R-symmetry that is flavor from an 𝒩=1 perspective. We denote this flavor by U(1)_f.
Introducing m_f via
ξ=e^iβ m^_f,
and performing the reduction similarly to how it was done above, we get:
Z^in_3d(b,m^_f) =∫_-∞^∞dx̃ e^-2π i k_gg x̃^2/2 -2π i(ωk̃^_gR+m^_fk̃^_gf) x̃ Γ_h(8/15ω+x̃+7/10m^_f),
instead of (<ref>). The effective mixed gauge-flavor CS coupling k̃^_gf can be obtained similarly to (<ref>) from:
k^∗_j f=-∑_χ∑_ρ^χ∈ H_∗B_1 (ρ^χ·x^∗ +q^χ·ξ) ρ^χ_j q^_f .
In the present case this gives k̃^_gf=-1/20.
We then shift
x̃→x̃-q^_f m^_f ,
with q^_f=7/10, which amounts to adding a multiple of the gauge charge to the flavor charge (in effect, going to a different gauge-flavor mixing scheme). This allows us to rewrite the above integral as
Z^in_3d(b,m^_f) =∫_-∞^∞dx̃ e^-2π i k_gg x̃^2/2 +2π i(-ωk̃^_gR+ζ^_f) x̃ Γ_h(8/15ω+x̃),
where
ζ^_f:=-(k̃^_gf-k_gg q_f)m^_f .
Since ζ^_f can be considered as the real mass associated with the U(1)_J, we see that the four-dimensional U(1)_f descends effectively to the U(1)_J. (One can think of ζ^_f as an effective three-dimensional FI parameter as well.)
Moreover, the dependence of the dynamical part of Z^in_3d(b,m^_f) on the real mass m^_f descending from 4d is entirely through ζ^_f, with a proportionality factor k^_gf:=k̃^_gf-k_gg q_f=1.
Our emphasis on the word dynamical is because there are background CS actions involving k_fR and k_ff that we have suppressed above for simplicity. Including them gives extra dependence on m_f through the multiplicative factors:
e^-2π i ωk̃^_fRm^_f-2π i k̃^_ffm^2_f/2e^-2π i ωk̃'_fRm^_f-2π i k̃'_ffm^2_f/2,
where k̃_fR=17/300 and k̃_ff=73/200, while k̃'_fR=k̃_fR-q^_fk̃_gR=1/30 and k̃'_ff=k̃_ff-2q^_fk̃_gf+q^2_fk_gg=21/50.
§.§ When the 4d flavor symmetry disappears in 3d
Now we consider the case where the 3d EFT is gapped, and the 4d U(1)_f disappears in 3d. More precisely, it acts trivially in the dynamical sector of the EFT below the uv scale ∝ϵ/β.
We skip the details of the reduction, but it should be clear that for the third sheet of (A_1,A_2), the analog of (<ref>) becomes:
Z^in_3d(b,m^_f)=∫_-∞^∞dx̃ e^-2π i k_gg x̃^2/2 -2π i(ωk̃^_gR+m^_fk̃^_gf) x̃
×Γ_h(14/15ω+x̃+1/10m^_f) Γ_h(2/15ω-2x̃-1/5m^_f).
The flavor charges 1/10, -1/5 can be found by examining (<ref>). The mixed gauge-flavor CS coupling is found in this case to be k̃^_gf=-3/20. Combined with the flavor analog of (<ref>):
f(m)=-∑_j k^_jf m_j-1/2∑_χf_χ∑_ρ^χ∈ L_∗|ρ^χ(m)|,
the value k̃^_gf=-3/20 implies that the monopole V_- has zero flavor charge:
f(-1)=3/20(-1)-1/21/10|1|-1/2(-1/5)|-2|=0.
A quick examination of (<ref>) shows that a shift x̃→x̃-q^_f m^_f, with q^_f=1/10, removes m_f from the arguments of the hyperbolic gamma functions. Therefore, as in the second-sheet case discussed above, the dependence of Z^in_3d(b,m^_f) on the real mass m^_f descending from 4d is entirely through the effective FI parameter ζ^_f. This time, however, regardless of m^_f, the FI parameter ζ^_f vanishes! The reason is that the proportionality constant becomes
k_gf=k̃^_gf-k_gg q_f=-3/20+3/2·1/10=0.
In other words, despite the initial appearances in (<ref>), the dynamical part of the S_b^3 partition function of the third-sheet EFT is completely independent of m^_f. This suggests that the 4d U(1)_f acts trivially in the dynamical sector of the 3d EFT.
We emphasize, though, that there are background CS terms in the 3d EFT that do depend on m^_f, but have been suppressed for simplicity. These are seen if we do not suppress the contributions from flavor-R and flavor-flavor CS actions:
Z^in_3d(b,m^_f)=∫_-∞^∞dx̃ e^-2π i k_gg x̃^2/2 -2π i(ωk̃^_gR+m^_fk̃^_gf) x̃ -2π i ωk̃^_fRm^_f-2π i k̃^_ffm^2_f/2
×Γ_h(14/15ω+x̃+1/10m^_f) Γ_h(2/15ω-2x̃-1/5m^_f)
=∫_-∞^∞dx̃ e^-2π i k_gg x̃^2/2 -2π iωk̃^_gRx̃ -2π i ωk̃'_fRm^_f-2π i k̃'_ffm^2_f/2
×Γ_h(14/15ω+x̃) Γ_h(2/15ω-2x̃).
Here k̃_fR=43/300 and k̃_ff=17/200, while k̃'_fR=k̃_fR-q^_fk̃_gR=1/30 and k̃'_ff=k̃_ff-2q^_fk̃_gf+q^2_fk_gg=1/10.
§ EFFECTIVE CS COUPLING CALCULATIONS
For rank-one theories a practical way of computing k_gg and k_gR is via ∂^2 Q and ∂ L as in (<ref>) for outer patches, and then finding the inner-patch values by averaging the two sides (see footnote <ref>). Below we present the more direct calculations via the formulas (<ref>).
§.§ Second sheet of (A_1,A_2)
The formulas (<ref>) applied to the saddle at x^∗=.2 on the 2nd sheet of (A_1,A_2) give:
-k_gg =B_1(x^∗+2/5)+B_1(-x^∗+2/5)+B_1(-x^∗-1/5)+4B_1(2x^∗+1/5)+4B_1(-2x^∗+1/5)
=3/2,
-k_gR =-1/15(B_1(x^∗+2/5)-B_1(-x^∗+2/5))+7/15B_1(-x^∗-1/5)+2-13/15[ B_1(2x^∗+1/5)
-B_1(-2x^∗+1/5)]+1(2B_1(2x^∗)-2B_1(-2x^∗))=-1/30,
-k_RR =(-1/15)^2(B_1(x^∗+2/5)+B_1(-x^∗+2/5))+(-7/15)^2B_1(-x^∗-1/5)+(-13/15)^2[ B_1(2x^∗+1/5)
+B_1(-2x^∗+1/5)]+(-1/5)^2B_1(6/5)+(-13/15)^2B_1(1/5)-(-11/15)^2B_1(2/5))=31/225,
-k_grav =2(B_1(x^∗+2/5)+B_1(-x^∗+2/5)+B_1(-x^∗-1/5)+B_1(2x^∗+1/5)+B_1(-2x^∗+1/5)
+B_1(6/5)+B_1(1/5)-B_1(2/5))=-2/5.
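Since the right-hand sides above are finite sums of the periodic Bernoulli polynomial B_1 evaluated at rational points, they are straightforward to verify numerically. The following minimal Python sketch (our own cross-check using exact rational arithmetic, not part of the original derivation) reproduces -k_gg = 3/2 at the saddle x^∗ = 0.2 on the second sheet, and the same value for the in_1 patch on the third sheet:

```python
from fractions import Fraction as F
import math

def B1(u):
    # periodic Bernoulli polynomial: B_1(u) = {u} - 1/2 for u not integer, 0 for u integer
    if u == math.floor(u):
        return F(0)
    return u - math.floor(u) - F(1, 2)

x = F(1, 5)   # saddle x* = 0.2 on the 2nd sheet of (A_1, A_2)
minus_kgg_2nd = (B1(x + F(2, 5)) + B1(-x + F(2, 5)) + B1(-x - F(1, 5))
                 + 4*B1(2*x + F(1, 5)) + 4*B1(-2*x + F(1, 5)))
print(minus_kgg_2nd)   # -> 3/2, i.e. k_gg = -3/2

# third sheet, inner patch around x* = 1/5
minus_kgg_3rd = (B1(-x + F(4, 5)) + B1(x - F(2, 5)) + B1(-x - F(2, 5))
                 + 4*B1(2*x + F(2, 5)))
print(minus_kgg_3rd)   # -> 3/2
```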
§.§ Third sheet of (A_1,A_2)
For the patch in_1 around x^∗=.2 on the 3rd sheet of (A_1,A_2), the equations (<ref>) give:
-k_gg =B_1(-1/5+4/5)+B_1(1/5-2/5)+B_1(-1/5-2/5)+4B_1(2·1/5+2/5)=3/2,
-k_gR =-1/15(-B_1(-1/5+4/5))+-7/15(B_1(1/5-2/5)-B_1(-1/5-2/5))+-13/15(2B_1(2·1/5+2/5))
+1(2B_1(2·1/5)-B_1(-2·1/5))=-11/10.
We also get k_RR=-13/450 and k_grav=-1/5.
§.§ Fourth and fifth sheets of (A_1,A_2)
The fourth sheet of (A_1,A_2) is conjugate to its third sheet. It thus has the opposite k_gg, k_RR, and k_grav, while having the same k_gR.
Similarly, the fifth sheet has the opposite k_gg, k_RR, and k_grav compared to the second sheet, while having the same k_gR.
§.§ Second sheet of (A_1,A_4)
For the saddle at (x_1^∗,x_2^∗)=(1/7,2/7) on the 2nd sheet of (A_1,A_4), we get from (<ref>):
-k_11 =B_1(x^∗_1 + 3/7) +
B_1(-x^∗_1 + 3/7) + B_1(x^∗_1 - 2/7) +
B_1(-x^∗_1 - 2/7) + 4 B_1(2 x^∗_1 + 1/7)
+
4 B_1(-2 x^∗_1 + 1/7) + B_1(x^∗_1 + x^∗_2 + 1/7) +
B_1(-x^∗_1 + x^∗_2 + 1/7) + B_1(-x^∗_1 - x^∗_2 + 1/7)
=3/2,
-k_22 =B_1(x^∗_2 + 3/7) +
B_1(-x^∗_2 + 3/7) +B_1(-x^∗_2 -
2/7) + 4 B_1(2 x^∗_2 + 1/7)+
4 B_1(-2 x^∗_2 + 1/7)
+ B_1(x^∗_1 + x^∗_2 + 1/7) +
B_1(-x^∗_1 + x^∗_2 + 1/7) + B_1(-x^∗_1 - x^∗_2 + 1/7)=1,
-k_12 =k_21=B_1(x^∗_1 + x^∗_2 + 1/7) - B_1(-x^∗_1 + x^∗_2 + 1/7) +
B_1(-x^∗_1 - x^∗_2 + 1/7)=1/2.
We also get k_1R=-11/42 and k_2R=4/7.
§ ANALYTIC TOOLKIT FOR 3D 𝒩=2 GAUGE THEORIES
§.§ Superconformal index
The 3d superconformal index is defined for any 3d 𝒩=2 theory with a U(1)_R symmetry as <cit.>
I^(q):=Tr^_S^2(-1)^2j_3 q^R/2+j_3,
where R is the U(1)_R charge, while j_3 is the spin quantum number associated with the SO(3) rotations of the Euclidean three-space, and the trace is over the Hilbert space on S^2 (or alternatively, the space of local operators in case the 3d 𝒩=2 theory is an SCFT).
We refer to I(q) as the first-sheet 3d index. For a gauge theory it can be computed from the formula <cit.>
I(q) =∑_m(-1)^c_j(m)m_j/|W_m| q^r(m)/2∮∏_j=1^r_Gdz_j/2π i z_j z_j^c_j(m)∏_α_+(1-z^±α_+q^|α_+(m)|/2)
∏_Φ∏_ρ∈ R_Φ(z^-ρq^|ρ(m)|/2+1-r_Φ/2;q)/(z^ρq^|ρ(m)|/2+r_Φ/2;q),
with c(m),r(m) as in (<ref>),(<ref>). The contour is on the unit circles, assuming that via suitable gauge-R mixing one has ensured that the R-charges of all chiral multiplets are strictly between 0 and 2.
The 2nd sheet index can be found via sending q→ q e^2π i, and simplifying via a_j→ a_j+m_j π (that is z_j→ z_j (-1)^m_j):
Ĩ(q)=I(q e^2π i) =∑_me^iπ r(m)/|W_m|q^r(m)/2∮∏_j=1^r_Gdz_j/2π i z_j z_j^c^_j(m)∏_α_+ (1-q^|α_+(m)|/2z^±α_+)
∏_Φ∏_ρ∈ R_Φ(z^-ρq^|ρ(m)|/2+1-r_Φ/2 e^-iπr_Φ;q)/(z^ρq^|ρ(m)|/2+r_Φ/2 e^iπr_Φ;q).
We use the 3d superconformal index in this work mainly as a tool to diagnose whether the 3d 𝒩=2 gauge theories that we obtain are in the topological phase, and without local operators, in which case we should find:
I(q)=Ĩ(q)=1.
§.§ Squashed three-sphere partition function
Consider a 3d 𝒩=2 gauge theory with a U(1)_R as well as a U(1)_f flavor symmetry.
The SUSY partition function on the squashed three-sphere S_b^3, with unit radius and squashing parameter b, can be found from the following formula[Our orientation appears to be opposite to the one in <cit.>. Alternatively, our THF modulus is complex conjugate to that of <cit.>. This complex conjugation should be taken into account when comparing with Eqs. (5.22)–(5.24) in that work.] <cit.>:
Z(b,m_f) =∫_-∞^∞d^r_Gσ/|W| e^-2π i k_ij σ_iσ_j/2 -2π iω k_jR σ_j -2π im_f k_jf σ_j
×∏_χ∏_ρ^χΓ_h(r_χ ω+ρ^χ·σ+q_χ m_f)/∏_α_+Γ_h(α_+·σ)Γ_h(-α_+·σ) ,
where ω=i(b+b^-1)/2. For simplicity, we have suppressed the contributions from k_RR, k_grav, k_fR, and k_ff <cit.>.
Above, we have suppressed the dependence of the hyperbolic gamma function Γ_h( · ;ω_1,ω_2) on ω_1=ib and ω_2=ib^-1. For b=1, we have
Γ_h(x)=(1-e^-2π x)^-i x-1 e^i/2πLi_2(e^-2π x)+iπ/2(-i x-1)^2-iπ/12.
We denote the partition function Z(b=1,m^_f=0) simply as Z^_S^3.
§.§ Bethe roots and BPS surgery
The effective twisted superpotential and the effective dilaton of a 3d 𝒩=2 gauge theory on S^1 are given by <cit.>:
W(u) =∑_j,l1/2k^+_jlu_j u_l+∑_j1/2k^+_jju_j+∑_Φ∑_ρ∈ R_Φ1/-4π^2Li_2(z^ρ)
,
Ω(u) =∑_j k^+_jRu_j-∑_Φ∑_ρ∈ R_Φr_Φ-1/2π ilog(1-z^ρ)-∑_α_+1/2π ilog(1-z^±α_+)
,
where z=e^2π i u, and the ambiguous sign in the exponent of z^±α means that every α_+ contributes two terms to ∑_α_+, once with each sign. We have dropped the contributions from k_RR and k_grav for simplicity, and used:
k^+_jl :=k_jl+1/2∑_Φ∑_ρ∈ R_Φρ_jρ^_l ,
k^+_jR :=k_jR+1/2∑_Φ∑_ρ∈ R_Φρ_j(r_Φ-1) .
To compare with <cit.>, note that we are not using an asymmetric quantization scheme (such as U(1)_-1/2): we do not implicitly augment our chiral multiplets with the uv CS couplings.
To simplify the expressions, we often work with (Z:=2π i u):
W(Z) :=-4π^2 W(u)=1/2k^+_jlZ_j Z_l+1/2k^+_jj·2π i Z_j+∑_Φ∑_ρ∈ R_ΦLi_2(e^ρ Z),
Ω(Z) :=2π i Ω(u)=k^+_jRZ_j+∑_Φ(1-r_Φ)∑_ρ∈ R_Φlog(1-e^ρ Z)-∑_α_+log(1-e^±α_+Z).
The Bethe roots are at
exp(∂_Z_i W(Z_α^∗))=1,
with each expected to map to a module of the boundary VOA as in <cit.>.
The handle-gluing and fibering operators are given by (see e.g. <cit.>):
ℋ(Z)=e^Ω(Z) ∂^_Z_i∂^_Z_j W(Z),
ℱ(Z)=e^[W(Z)-Z_i ∂^_Z_i W(Z)]/(2π i),
and are identified with the components of modular S and T matrices via the map (cf. <cit.>):
{S^2_0α,T^2_αα}⟷{ℋ(Z^∗_α)^-1,ℱ(Z^∗_α)^2}.
Since we have dropped the contributions from k_RR and k_grav in (<ref>), these identifications are accurate up to overall phases. In the case of S, imposing the SL(2,ℤ) relations allow reducing the ambiguity down to an overall sign.
The S^3 partition function can be found via the BPS surgery (see e.g. <cit.>):
Z_S^3=∑_αℋ(Z^∗_α)^-1 ℱ(Z^∗_α) ,
and for a TQFT it should match with S_00=ℋ(Z^∗_0)^-1/2. In this work we verify such matchings up to an overall phase.
§.§ Half-index calculations
The 3d half-indices used in this work <cit.> are defined as:
II_ℬ:=Tr_Ops_ℬ(-1)^R q^R/2+j_3,
with the trace taken over the hemisphere Hilbert space with boundary conditions ℬ.
§.§.§ Neumann
With the 𝒩=(0,2) Neumann boundary conditions on all fields, the various contributions to the boundary anomaly <cit.> are:
-1/2ρ_iρ_j 𝐟_i·𝐟_j- ρ_j(r_Φ-1) 𝐟_j·𝐫+… ,
from any 3d 𝒩=2 chiral multiplet of gauge charge ρ_j and R-charge r_Φ,
and
hTr(𝐟^2)+…,
with h the dual Coxeter number, from any 3d 𝒩=2 vector multiplet. The CS levels k_ij and k_jR contribute
±(k_ij 𝐟_i·𝐟_j+ 2k_jR 𝐟_j·𝐫) ,
with the positive sign for the left and negative sign for the right boundary in our conventions.
Moreover, if there is a U(1)_J in the problem associated with an abelian gauge field A, we see from
J^top_μ=i/2πε_μνρ∂^ν A^ρ⟹∫ J^top_μ A^μ_top=2×i/4π∫ A_top∧dA,
that there will be a term
2 𝐟_x·𝐟,
in the boundary anomaly, where 𝐟 is the curvature of the U(1) gauge field and 𝐟_x that of the background connection for the U(1)_J.
To cancel the bulk-induced boundary anomalies, we often add boundary Fermi and/or chiral multiplets. A 2d Fermi/chiral multiplet of gauge charge g and fermion R-charge r_fermion would contribute to the boundary anomaly via
±(g^2𝐟^2+2g r_fermion𝐟·𝐫+…) ,
where in the notation of <cit.>, in the Fermi case, r_fermion is the R-charge of γ_- and we take the plus sign, while in the chiral case, r_fermion is that of ψ_+ (hence r_ϕ-1), and we take the minus sign.
Recalling
θ_0(z;q):=(z;q)(z^-1q;q),
the boundary degrees of freedom contribute to the Neumann half-index via:
Z_∂ fermi_r =θ_0((-q^1/2)^1-rz^-ρ;q),
Z_∂ chiral_r =1/θ_0((-q^1/2)^rz^ρ;q),
where both are assumed to have charge ρ under the U(1) symmetry associated with the fugacity z. In the Fermi case, r stands for r_γ_-, and in the chiral case, – for r_ϕ .
The bulk vector and chirals contribute via:
Z^N_vector = (q;q)^rk(G) ∏_α_+ ( z^±α;q),
Z_chiral^N = ∏_ρ∈ R_Φ(z^ρ (-q^1/2)^r_Φ ;q)^-1 .
The full Neumann half-index is given by:
II_𝒩,N,…(q)=1/|W_G|∮∏_j=1^rk(G)dz_j/2π i z_j Z^N_vector Z^N_chirals Z_∂ fermis Z_∂ chirals .
Difficulty in presence of boundary chirals. In absence of boundary chiral multiplets, the integration contour in (<ref>) can be taken to be the unit circle, possibly after suitable gauge-R mixing such that all chiral multiplet R-charges are strictly between 0 and 2.
With boundary chirals present, however, the correct integration contour in (<ref>) is not clear <cit.>. It was suggested in <cit.> that in such cases, one takes a two-step approach: first compute the half-index for the boundary condition 𝒟,N,N,…,N (namely Dirichlet on the vector multiplet and Neumann on the chirals), and then gauge the boundary global symmetry descending from the bulk gauge symmetry via a boundary gauge multiplet, together with various anomaly canceling boundary chiral and Fermi multiplets. The boundary gauging then follows a well-understood JK-residue prescription <cit.>.
When considering the left boundary, the first step gives <cit.>:
II_𝒟,N,…(q,z) =1/(q;q)^r_G∑_mq^∑_i,jk^-_ijm_i m_j/2 (-q^1/2)^k^-_jRm_j ∏_j=1^r_G z_j^∑_l k^-_jlm_l∏_α_+1/(q^1±α_+(m) z^±α_+;q)
∏_Φ∏_ρ∈ R_Φ(z^ρ q^ρ(m)(-q^1/2)^r_Φ ;q)^-1,
where k^-_ij and k^-_jR are defined as
k^-_jl :=k_jl-1/2∑_Φ∑_ρ∈ R_Φρ_jρ^_l,
k^-_jR :=k_jR-1/2∑_Φ∑_ρ∈ R_Φρ_j(r_Φ-1).
On the right boundary, instead of k^-_ij and k^-_jR in (<ref>), one has to use
k^-_jl→ -k^+_jl ,
k^-_jR→-k^+_jR ,
which are the coefficients arising from the gauge anomaly on the right boundary.
The second step then yields
II_𝒩,N,…(q)=(q;q)^2r_G/|W_G|∮_JK ∏_j=1^r_Gdz_j/2π i z_j ∏_α_+θ_0(z^±α_+;q) II_𝒟,N,…(q,z) Z_∂ fermis Z_∂ chirals .
§.§.§ Dirichlet
The general formula for the 3d half-index with 𝒩=(0,2) Dirichlet boundary conditions on all fields reads <cit.>:
II_𝒟,D,…(q,z) =1/(q;q)^r_G∑_mq^∑_i,jk^+_ijm_i m_j/2 (-q^1/2)^k^+_jRm_j ∏_j=1^r_G z_j^∑_l k^+_jlm_l∏_α_+1/(q^1±α_+(m) z^±α_+;q)
∏_Φ∏_ρ∈ R_Φ(z^-ρ q^1-ρ(m) (-q^1/2)^-r_Φ;q),
for the left boundary, in our conventions.
See (<ref>) for the definition of k^+_ij and k^+_jR.
On the right Dirichlet boundary, instead of k^+_ij and k^+_jR in the above formula, one has to use
k^+_jl→ -k^-_jl :=-k_jl+1/2∑_Φ∑_ρ∈ R_Φρ_jρ^_l ,
k^+_jR→-k^-_jR :=-k_jR+1/2∑_Φ∑_ρ∈ R_Φρ_j(r_Φ-1) ,
which are the coefficients arising from the gauge anomaly on the right boundary.
| null | null | null | null | null | null |
http://arxiv.org/abs/2409.17438v1 | 20240926001803 | Simulated annealing of reduced magnetohydrodynamic systems | ["M. Furukawa", "P. J. Morrison"] | physics.plasm-ph | ["physics.plasm-ph"] |
Simulated annealing of reduced magnetohydrodynamic systems

M. Furukawa^1 ([email protected]) and P. J. Morrison^2 ([email protected])

^1 Faculty of Engineering, Tottori University, Minami 4-101, Koyama-cho, Tottori-shi, 680-8552, Japan
^2 Department of Physics and Institute for Fusion Studies, University of Texas at Austin, Texas, 78712, USA
The theory of simulated annealing (SA), a method for equilibrium and
stability analyses of Hamiltonian systems, is reviewed. The SA
explained in this review is based on a double bracket formulation that derives from Hamiltonian structure.
In addition to general theoretical aspects, the
explicit formulation as well as numerical applications are presented. Both finite and infinite degree-of-freedom systems are treated, in particular, the heavy top, a toy model mimicking low-beta reduced magnetohydrodynamics (MHD) and low- and high-beta reduced MHD. Numerical results successfully demonstrate the usefulness of SA for
equilibrium and stability analyses. At the same time, the results raise some future issues that are
discussed in the paper.
September 28, 2024
======================
§ INTRODUCTION
Simulated annealing (SA) is a type of relaxation method for Hamiltonian
systems based on an artificial dynamics that uses the
Hamiltonian structure. In usual Hamiltonian dynamics, the energy
(Hamiltonian) is conserved because of the antisymmetry of the
Poisson bracket, while the artificial dynamics of SA is constructed in such a way
that the time evolution changes the energy (Hamiltonian) monotonically. It does this by acting twice with the Poisson bracket and, consequently, SA relaxes to a stationary state of the energy as time progresses.
If the Hamiltonian system is noncanonical, the Poisson bracket possesses
a null space and the null space leads to Casimir invariants that are
conserved during the time evolution for any Hamiltonian.
Because the artificial dynamics of SA is constructed by acting twice with the Poisson
bracket, the Casimir invariants are preserved during the time evolution.
Because SA extremizes the energy on a constant Casimir leaf, which is a
subspace of the phase space of the system defined by the level sets of the
Casimir invariants, it in effect finds a solution of the energy-Casimir variational principle, a variational principle that made its way into the plasma and fluid literature in the early work of <cit.> and <cit.> <cit.>. The equilibria obtained by SA of noncanonical Hamiltonian systems can
have a variety of structure because of the possible variety of Casimir invariants.
The ideal fluid and MHD were shown to be noncanonical Hamiltonian systems by <cit.> <cit.>. Therefore, SA can be used for equilibrium calculations of such systems. Reduced MHD systems are also Hamiltonian systems, as was shown by <cit.>; these will be treated
in this paper explicitly.
Originally, equilibrium calculations by such artificial dynamics were
developed for two-dimensional vortical motion of neutral
fluids in <cit.> and placed in a general
Hamiltonian systems setting in <cit.>.
However, the method of these references is limited and is now known to only work for a small class of equilibria. To correct for this the method was generalized by <cit.>, where the term “simulated annealing” was introduced, and where it was shown to work for a variety of equilibria.
They developed a double bracket that is constructed from
the Poisson bracket and a definite symmetric kernel.
Dirac SA (DSA) dynamics, which utilizes a Dirac bracket instead of the Poisson bracket
in the construction of the double bracket, was also introduced.
They presented numerically a variety of non-trivial equilibria of
two-dimensional neutral fluids and two-layer quasigeostrophic flows.
The first application of SA to MHD systems <cit.> was on low-beta reduced
MHD <cit.> in a two-dimensional rectangular domain with doubly
periodic boundary conditions.
Numerical results with several ratios of kinetic energy to the magnetic
energy were presented. It was shown that upon relaxation to
stationary states, fine structure remained when the kinetic energy is comparable to or greater
than the magnetic energy. It was also pointed out that
the relaxation path, i.e., which of kinetic or magnetic energies
decreases earlier, can affect the resultant stationary state.
This subtlety arises because the low-beta reduced MHD has multiple
fields to be relaxed. As explained in the discussion section of the present paper,
each Casimir invariant should be adjusted to have a desired value prior to the time evolution of SA, since the value
does not change during the time evolution. A method for the adjustment was developed in <cit.>.
Next, SA was applied to low-beta reduced MHD in a cylindrical plasma in <cit.>.
By performing SA with an initial condition that is
a sum of a cylindrically symmetric equilibrium and a
small-amplitude helical perturbation accompanying magnetic islands,
an equilibrium with magnetic islands was obtained. In further work, toroidal equilibria were calculated by SA in
<cit.> by using the high-beta reduced MHD model
<cit.>. An example described therein was that of an axisymmetric tokamak equilibria with a large aspect ratio and a circular cross section. The Shafranov shift was shown to increase
as beta was increased, although the magnitude of the shift did not
fully agree with the analytic theory based on the
large-aspect-ratio expansion. This was because the toroidicity
completely disappears in high-beta reduced MHD, while it remains in
the analytic theory. Some equilibria with poloidal rotation were also calculated by SA, and
examined based on a mapping between such equilibria with poloidal rotation
and static equilibria. Toroidally-averaged stellarator equilibria were also calculated.
Simulated annealing can be used not only for equilibrium calculations
but also for stability analyses <cit.>.
We know that equilibria obtained by SA, which decreases the total energy
of the system, are at least linearly stable since they are located at energy
minima.
such as solving the Grad-Shafranov equation
<cit.>,
are not necessarily stable.
Suppose we know such an equilibrium, and we perform SA starting from
an initial condition that is a sum of the known equilibrium and a
small-amplitude perturbation.
If SA recovers the original equilibrium, it is linearly stable.
However, if the perturbation grows during the time evolution of SA, the
equilibrium is not at an energy minimum.
In the numerical demonstration of the stability analyses,
it was shown that the perturbation grows in a short time if the
equilibrium is unstable. On the other hand, SA required a long time to
recover the original equilibrium if it is stable. Therefore,
accelerated
relaxation is indispensable for SA to be practically useful.
In <cit.>, a method for accelerating the relaxation was
developed by introducing time dependence in the symmetric kernel of the
double bracket.
Another kind of SA based on a metriplectic bracket introduced in <cit.> <cit.> has also been studied extensively in <cit.>.
Metriplectic dynamics is a
combination of Hamiltonian dynamics and dissipative dynamics.
The dissipative mechanism is realized by a metric bracket.
Metriplectic dynamics was shown to successfully obtain
equilibria of two-dimensional Euler flow, axisymmetric toroidal
equilibria that are solutions of the Grad–Shafranov equation,
and force-free MHD equilibria.
Metriplectic dynamics is also explained in
<cit.>; this paper covers wider topics on geometric
aspects of plasma physics and numerical algorithms for them.
The rest of present paper is organized as follows.
In Sec. <ref>,
Hamiltonian theory is reviewed for systems of both finite and
infinite degrees of freedom.
It starts from general theory, then proceeds to some examples
such as a free rigid body and the heavy top.
A toy model mimicking aspects of low-beta reduced MHD is also introduced.
For systems with infinite degrees of freedom,
two-dimensional Euler flow, low-beta reduced MHD in both a two-dimensional
rectangular domain and in a cylindrical geometry, and high-beta reduced
MHD are considered.
Then, the theory of SA is explained in Sec. <ref>.
It reviews the double bracket formulation of SA for systems with both
finite and infinite degrees of freedom. SA by metriplectic brackets is also described briefly.
Section <ref> is devoted to some numerical examples of
SA for the heavy top and a toy model mimicking low-beta reduced MHD.
Analyses of equilibrium and stability are also presented.
Sections <ref> to <ref> cover numerical studies of SA for low-beta and high-beta reduced MHD. Section <ref> is on linear stability
analyses using SA, while Sec. <ref> is on the equilibrium
calculations in toroidal geometry.
Section <ref> shows that helically-deformed equilibria can
be obtained by SA of low-beta reduced MHD in cylindrical geometry.
Section <ref> describes our numerical studies of flowing
equilibria in a two-dimensional rectangular domain.
An equilibrium with magnetic islands is introduced in
Sec. <ref>.
Two methods for accelerated relaxation are described in
Sec. <ref>.
Section <ref> contains discussion on several issues that remain to
be solved. Finally, our summary and conclusions are given in
Sec. <ref>.
§ HAMILTONIAN SYSTEMS
In Sec. <ref>,
the theory of Hamiltonian systems of finite and infinite degrees of
freedom is reviewed.
Section <ref> is on systems with finite degrees
of freedom. Starting from a canonical case, a noncanonical case is
briefly introduced. Explicit examples are the free rigid body, the heavy
top <cit.> and a toy model mimicking low-beta reduced MHD.
Section <ref> describes Hamiltonian theory of
infinite-dimensional systems such as two-dimensional Euler flow and low-
and high-beta reduced MHD.
§.§ System with finite degrees of freedom
§.§.§ General theory
A canonical Hamiltonian system is governed by
Hamilton's equations
q̇^i = ∂ H(q, p)/∂ p_i and ṗ_i = - ∂ H(q, p)/∂ q^i,
with i = 1, 2, ⋯, N, where
q = ( q^1, q^2, ⋯, q^N )^𝖳
and
p = ( p_1, p_2, ⋯, p_N )^𝖳
are canonical coordinates and canonical momenta of a system with N
degrees of freedom, respectively,
H(q, p) is a Hamiltonian,
and a dot denotes time derivative.
Defining a Poisson bracket as
[ f , g ] := ∂ f/∂ q^i ∂ g/∂ p_i - ∂ f/∂ p_i ∂ g/∂ q^i,
where f(q, p) and g(q, p) are arbitrary functions,
the canonical equations are written as
q̇^i = [ q^i , H ],
ṗ_i = [ p_i , H ].
These equations are rewritten by introducing
phase space coordinates
z := ( z^1, z^2, ⋯, z^2N )
with
z^i = q^i for i = 1, 2, ⋯ , N
and
z^i = p_{i-N} for i = N+1, N+2, ⋯ , 2N
as
ż^i = [ z^i , H ].
Further, by introducing a canonical Poisson tensor as
J_c := [ 0_N I_N; -I_N 0_N ],
where 0_N and I_N are N × N zero and unit matrices,
respectively,
the Poisson bracket is expressed as
[ f , g ] = ∂ f/∂ z^i J_c^ij ∂ g/∂ z^j,
and the canonical equations are rewritten as
ż^i = J_c^ij ∂ H(z)/∂ z^j.
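As an elementary illustration of this matrix form (our own sketch, not part of the original text), one can integrate ż^i = J_c^ij ∂H/∂z^j for a single harmonic oscillator with H = (q^2 + p^2)/2; the antisymmetry of J_c guarantees conservation of H up to the accuracy of the time stepper:

```python
import numpy as np

J_c = np.array([[0.0, 1.0],
                [-1.0, 0.0]])        # canonical Poisson tensor for N = 1, z = (q, p)

def gradH(z):                         # H = (q^2 + p^2)/2, so dH/dz = z
    return z

def rk4_step(z, dt):
    f = lambda w: J_c @ gradH(w)      # Hamilton's equations: dz/dt = J_c dH/dz
    k1 = f(z); k2 = f(z + 0.5*dt*k1); k3 = f(z + 0.5*dt*k2); k4 = f(z + dt*k3)
    return z + dt*(k1 + 2.0*k2 + 2.0*k3 + k4)/6.0

z = np.array([1.0, 0.0])
for _ in range(1000):
    z = rk4_step(z, 0.01)
print(0.5*np.sum(z**2))               # remains close to the initial energy H = 0.5
```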
By changing the phase space variables from z to z̅ as
z̅^i = z̅^i(z),
Hamilton's equations (<ref>)
become
ż̅^i = J^ij(z̅) ∂H̅(z̅)/∂z̅^j,
where the Hamiltonian is transformed to H̅(z̅),
and the Poisson tensor is transformed to
J(z̅) = ( J^ij(z̅) ) = ( ∂z̅^i/∂ z^k J_c^kℓ ∂z̅^j/∂ z^ℓ ).
The Poisson tensor (<ref>) is antisymmetric
by definition, but it does not have
canonical form when z̅ are noncanonical coordinates.
Equation (<ref>) can also be written as
ż̅^i = [ z̅^i , H̅ ],
where the Poisson bracket is given by
[ f, g ] = ∂ f/∂z̅^i J^ij ∂ g/∂z̅^j.
Let us now consider a dynamical system that need not be generated by a transformation such as that above. This system is governed by
u̇^i = J^ij(u) ∂ H(u)/∂ u^j = [ u^i, H(u) ],
[ f, g ] := ∂ f/∂ u^i J^ij ∂ g/∂ u^j,
where u := ( u^1, u^2, ⋯, u^M )^𝖳
is a vector of noncanonical variables of an M-dimensional phase space,
H(u) is a Hamiltonian,
J(u) := ( J^ij(u) ) is an antisymmetric
Poisson tensor,
and [ f , g ] is the Poisson bracket for arbitrary functions
f(u) and g(u).
The dimension M of the phase space can be odd.
If rank J = 2 N < M,
the Poisson tensor J has a (M - 2N)-dimensional null space.
The eigenvectors of the zero eigenvalues determine directions in which
the system cannot evolve.
Surfaces perpendicular to the eigenvectors define Casimir invariants.
The Casimir invariants C_k(u) (k = 1, 2, ⋯, M - 2N) satisfy
J^ij ∂ C_k/∂ u^j = 0.
The gradient of C_k points in the direction that the system is prohibited
from evolving.
The dynamics are not affected even if we plug in an energy-Casimir
function
F(u) := H(u) + λ_k C_k(u)
into the evolution Eq. (<ref>).
Here λ_k are Lagrange multipliers.
The evolution equation reads
u̇^i = J^ij(u) ∂ F(u)/∂ u^j.
Equilibria of this system are given by
∂ F(u)/∂ u^j = 0.
For an equilibrium u_e given by
Eq. (<ref>), the
linearized equations are given by
δu̇^i = J^ij(u_e) ∂^2 F/∂ u^j∂ u^k (u_e) δ u^k,
where δ u^i is a perturbation away from equilibrium.
By assuming the time dependence of the perturbation is
δ u^i = δũ^i e^{-iω t}
with δũ^i being a constant,
linear stability can be analyzed by
solving the following eigenvalue problem:
-iωδũ^i = J^ij(u_e) ∂^2 F/∂ u^j∂ u^k (u_e) δũ^k.
Lastly in the present
Sec. <ref>,
we define the energy of a linearized mode as
δ^2 H
1/2^2 F/ u^j u^k (_e)
δ u^jδ u^k,
where δ u^j is an eigenvector
of the eigenvalue problem (<ref>).
The time derivative of δ^2 H is easily seen to be zero,
δ^2 H/ t =
1/2^2 F/ u^j u^k (_e)
( δu̇^jδ u^k + δ u^jδu̇^k )
=
^2 F/ u^j u^k (_e)
δ u^jδu̇^k
=
^2 F/ u^j u^k (_e)
δ u^j
J^k ℓ (_e)
^2 F/ u^ℓ u^i (_e)
δ u^i
=
J^k ℓ (_e)
(
^2 F/ u^k u^j (_e)
δ u^j)
(
^2 F/ u^ℓ u^i (_e)
δ u^i)= 0,
where the symmetry of the Hessian matrix
( ^2 F / ( u^k u^j ) )
and the antisymmetry of J
were used.
Therefore, as a measure of the mode energy for systems with finite
degrees of freedom,
we adopt
H̃δ^2 H /1/2 | δ |^2.
§.§.§ Free rigid body
We sometimes find a set of variables that forms a closed
subset with a proper Poisson bracket in the 2N-dimensional phase
space. This is called reduction.
An example is rotational dynamics of the free rigid body.
If we choose angular momenta as the dynamical variables, we obtain a
three-dimensional phase space, where
the Hamiltonian, the Poisson bracket, and the evolution equations are
given by
H(L) := 1/2∑_i = 1^3 L_i^2/ I_i,
[ f(L) , g(L) ] := -ε_ijk L_k ∂ f/∂ L_i ∂ g/∂ L_j,
L̇_i = [ L_i , H ],
respectively.
Here, L := ( L_1, L_2, L_3 )^𝖳 is the vector of angular momenta,
I_i (i = 1, 2, 3) are the principal moments of inertia in a frame
fixed to the
rigid body, f and g are arbitrary functions of L,
and ε_ijk is the Levi-Civita symbol.
The Poisson tensor is given by J = ( J_ij ) with J_ij = [ L_i , L_j ] = -ε_ijk L_k, or
J =
[ 0 -L_3 L_2; L_3 0 -L_1; -L_2 L_1 0 ].
Using the Poisson tensor, the evolution equations can be written as
L̇_i = J_ij ∂ H/∂ L_j.
Since the phase space is odd dimensional, the determinant of the Poisson
tensor is zero and consequently there must be a Casimir invariant. The system cannot evolve in the direction of the null
space of the Poisson tensor. This degeneracy of the Poisson tensor defines
a Casimir invariant, which is | | in the present case.
Therefore, C( || ) is conserved by the dynamics where C is an
arbitrary function.
§.§.§ Heavy top
Another example of noncanonical dynamics is that of the heavy top.
A unit vector in the direction opposite to the gravitational
acceleration is taken to be ρ = ( ρ_1, ρ_2, ρ_3 ),
where the components are taken in a frame fixed to the top.
Then the phase space variables are
u := ( u_1, u_2, ⋯, u_6 )^𝖳
= ( L_1, L_2, L_3, ρ_1, ρ_2, ρ_3 )^𝖳.
The Hamiltonian, the Poisson bracket, the Poisson tensor, and the
evolution equations are given by
H(u) := 1/2∑_i = 1^3 L_i^2/ I_i + G ρ_3,
[ f(u) , g(u) ] := -ε_ijk L_k ∂ f/∂ L_i ∂ g/∂ L_j
- ε_ijkρ_k(
∂ f/∂ L_i ∂ g/∂ρ_j
- ∂ g/∂ L_i ∂ f/∂ρ_j),
J = ( [ u_i , u_j ] ) =
[ 0 -L_3 L_2 0 -ρ_3 ρ_2; L_3 0 -L_1 ρ_3 0 -ρ_1; -L_2 L_1 0 -ρ_2 ρ_1 0; 0 -ρ_3 ρ_2 0 0 0; ρ_3 0 -ρ_1 0 0 0; -ρ_2 ρ_1 0 0 0 0 ],
u̇_i = [ u_i , H ] = J_ij ∂ H/∂ u_j,
respectively. A measure of the effect of gravity is expressed by the parameter
G = mgℓ,
where m is the mass, g is the magnitude of the gravitational
acceleration, and ℓ is the distance of the center of mass
from the fixed point of the top.
The Casimir invariants are given by
C_1 := C_1( | ρ |^2 / 2 ),
C_2 := C_2( L·ρ ).
The phase space is depicted in Fig. <ref>, which shows
the dynamics restricted, because of the constancy of C_1,2, to a four-dimensional subspace in the
six-dimensional phase space.
Moreover, the system follows a trajectory that conserves the energy in the
four-dimensional subspace. Two dimensions are drawn as the plane
L·ρ = const. in the L-space.
The other two dimensions are the surface of the sphere
| ρ | = const. drawn in the ρ-space.
In the ρ-space, the direction of ρ changes, while
| ρ | does not change. Therefore, the distance of the
plane L·ρ = const. from the origin does not
change. Similarly, L drawn in the L-space can change both
in direction and in magnitude. The intersection of the plane
L·ρ = const. and the sphere
| ρ | = const. changes in time.
However, since the distance of the plane L·ρ =
const. from the origin in the ρ-space is smaller than
or equal to | ρ | according to
L·ρ/| L |≤ | L | | ρ |/| L | = | ρ |,
the intersection always exists.
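These conservation properties are easy to verify numerically. The following minimal Python sketch (our own illustration; the moments of inertia, the value of G, the initial condition, and the time step are arbitrary assumptions) integrates u̇_i = J_ij ∂H/∂u_j with a fourth-order Runge–Kutta step and monitors H, |ρ|^2, and L·ρ:

```python
import numpy as np

I = np.array([1.0, 2.0, 3.0])          # principal moments of inertia (arbitrary)
G = 1.5                                 # G = m g l (arbitrary)

def hat(v):                             # hat(v)_ij = -eps_ijk v_k
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def J(u):                               # Poisson tensor of the heavy top
    L, rho = u[:3], u[3:]
    return np.block([[hat(L), hat(rho)],
                     [hat(rho), np.zeros((3, 3))]])

def gradH(u):                           # dH/dL_i = L_i/I_i, dH/drho = (0, 0, G)
    return np.concatenate([u[:3]/I, np.array([0.0, 0.0, G])])

def f(u):
    return J(u) @ gradH(u)

def rk4_step(u, dt):
    k1 = f(u); k2 = f(u + 0.5*dt*k1); k3 = f(u + 0.5*dt*k2); k4 = f(u + dt*k3)
    return u + dt*(k1 + 2.0*k2 + 2.0*k3 + k4)/6.0

u = np.array([1.0, 0.2, 0.5, 0.0, 0.0, 1.0])    # initial (L, rho)
for _ in range(2000):
    u = rk4_step(u, 1.0e-3)

L, rho = u[:3], u[3:]
print(0.5*np.sum(L**2/I) + G*rho[2])    # Hamiltonian H    : conserved
print(np.dot(rho, rho))                 # Casimir |rho|^2  : conserved
print(np.dot(L, rho))                   # Casimir L . rho  : conserved
```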
For later use, equilibria and stability of the heavy top
are briefly summarized in the remainder of this
Sec. <ref>.
To this end, let us define an energy-Casimir function F as
F := H + λ_1 C_1 + λ_2 C_2 ,
where the Hamiltonian H is given by
Eq. (<ref>), the
two Casimir invariants C_i (i = 1, 2) by
Eqs. (<ref>) and (<ref>),
and the λ_i are the Lagrange multipliers.
The first partial derivatives of F are given by
F/ L_i =
L_i/I_i
+ λ_2 C_2^( · ) ρ_i,
F/ρ_i =
G δ_i3
+ λ_1 C_1^( | |^2 / 2 ) ρ_i
+ λ_2 C_2^( · ) L_i,
where the
L_i / I_i term is not summed over i,
and δ_i3 is used only for the index of ρ_i.
The prime denotes the derivative with respect to the argument, which
will not be written explicitly hereafter.
Equilibria are given by setting the first derivatives of F to zero.
Since the parameter G only appears when i = 3, equilibria
may be classified into two categories.
One is equilibria with ρ_3≠ 0,
and the other is equilibria with ρ_1≠ 0 or
ρ_2≠ 0.
Let us explain them one by one.
When ρ_3≠ 0,
Eqs. (<ref>)
and (<ref>)
with i = 3 can be solved to obtain
λ_1 =
-1/C_1^ρ_3(
G - L_3^2/I_3ρ_3),
λ_2 =
-L_3/C_2^ I_3ρ_3.
Here, we assumed that
C_1^≠ 0
and
C_2^≠ 0.
From
Eqs. (<ref>),
(<ref>),
and λ_i in
Eqs. (<ref>)
and (<ref>),
we obtain
[ 1/I_1 -L_3/I_3ρ_3; - L_3/I_3 - G + L_3^2/I_3ρ_3 ][ L_1; ρ_1 ]
=
[ 0; 0 ]
for i = 1.
The determinant of the 2 × 2 matrix on the left-hand side is
-G/I_1
+ L_3^2/I_3ρ_3(
1/I_1 - 1/I_3),
which is generally not zero.
Therefore we obtain L_1 = ρ_1 = 0.
Similarly, we obtain L_2 = ρ_2 = 0
from F / L_2 = 0 and F / ρ_2 = 0.
When ρ_1≠ 0, on the other hand,
λ_i are obtained from
Eqs. (<ref>)
and (<ref>)
with i = 1 as
λ_1 =
L_1^2/C_1^ I_1ρ_1^2,
λ_2
=
-L_1/C_2^ I_1ρ_1.
Then
Eqs. (<ref>)
and (<ref>)
with i = 2
yield
[ 1/I_2 -L_1/I_1ρ_1; - L_1/I_1 L_1^2/I_1ρ_1 ][ L_2; ρ_2 ]
=
[ 0; 0 ].
The determinant of the 2 × 2 matrix is
L_1^2/I_1ρ_1(
1/I_2 - 1/I_1),
which is generally not zero.
Therefore, we obtain L_2 = ρ_2 = 0.
From
Eqs. (<ref>)
and (<ref>)
with i = 3,
we obtain
[ 1/I_3 -L_1/I_1ρ_1; - L_1/I_1 L_1^2/I_1ρ_1 ][ L_3; ρ_3 ]
=
[ 0; -Gρ_1 ].
Except for cases where the determinant of this 2 × 2 matrix
vanishes,
we obtain
(
[ L_3; ρ_3 ])
=
-G ρ_1/L_1^2/I_1(
1/I_3 - 1/I_1)
(
[ L_1/I_1; ρ_1/I_3 ]).
Therefore L_3≠ 0 and ρ_3≠ 0 for
the equilibrium with ρ_1≠ 0.
Similarly, when ρ_2≠ 0, we obtain
λ_1 =
L_2^2/C_1^ I_2ρ_2^2,
λ_2
=
-L_2/C_2^ I_2ρ_2,
L_1 =
0,
ρ_1
=
0,
(
[ L_3; ρ_3 ])
=
-G ρ_2/L_2^2/I_2(
1/I_3 - 1/I_2)
(
[ L_2/I_2; ρ_2/I_3 ]).
Linear stability of these equilibria can be examined
by solving
Eq. (<ref>)
or
Eq. (<ref>).
§.§.§ A toy model mimicking low-beta reduced MHD
We propose a toy model that tries to mimic features of low-beta reduced MHD (see
Sec. <ref>). The toy model is based on the heavy top
presented in Sec. <ref>, but with a new Hamiltonian taken to be
H()
=
1/2(
L_1^2/I_1
+ L_2^2/I_2
+ L_3^2/I_3)
+ 1/2(
M_1ρ_1^2
+ M_2ρ_2^2
+ M_3ρ_3^2).
As presented in
Sec. <ref>,
the Hamiltonian of low-beta reduced MHD in two dimensions
is composed of kinetic and magnetic energy terms.
In the Hamiltonian (<ref>),
the terms quadratic in L_i, being kinetic in origin, mimic the corresponding kinetic energy of reduced MHD, while the terms quadratic in ρ_i mimic magnetic energy.
Because the Poisson bracket is assumed to be the same, the Casimir invariants are the same as those of the original heavy top, i.e., those of (<ref>). With these ingredients, the energy-Casimir function F is thus given by
F
H + λ_1 C_1 + λ_2 C_2,
the evolution equation for this system is given by
u̇^i
=
J^ij F/ u^j,
where
λ_1 and λ_2 are Lagrange multipliers,
:= ( L_1, L_2, L_3, ρ_1, ρ_2, ρ_3 )^𝖳,
and the Poisson tensor is given by
Eq. (<ref>).
Equilibria are obtained by setting to zero the gradient of the energy-Casimir
function (<ref>),
F/ L_i =
L_i/I_i
+ λ_2 C_2^(·) ρ_i =0,
F/ρ_i =
M_iρ_i
+ λ_1 C_1^( | |^2 / 2 ) ρ_i
+ λ_2 C_2^(·) L_i=0.
Note that L_i / I_i term and M_iρ_i term are not summed over i.
The prime denotes derivative with respect to the argument, which
will not be written explicitly hereafter.
Equations (<ref>) and (<ref>)
for any of i = 1, 2, or 3
are written in a matrix form as
[ 1/I_i λ_2 C_2^; λ_2 C_2^ M_i + λ_1 C_1^ ][ L_i; ρ_i ]
=
[ 0; 0 ].
This equation has non-zero solution (L_i, ρ_i)^𝖳
when the determinant of the 2 × 2 matrix on the left-hand side
vanishes, which leads to
λ_2^2
=
M_i + λ_1 C_1^/ I_i (C_2^)^2.
For some j ≠ i, Eqs. (<ref>) and
(<ref>) are
[ 1/I_j λ_2 C_2^; λ_2 C_2^ M_j + λ_1 C_1^ ][ L_j; ρ_j ]
=
[ 0; 0 ].
If I_j≠ I_i and/or M_j≠ M_i,
the determinant of the 2 × 2 matrix in
Eq. (<ref>) does not vanish.
Therefore, we obtain ( L_j, ρ_j )^𝖳 = 0.
Now, we have four unknowns L_i, ρ_i, λ_1 and
λ_2.
First, we give values of C_1 and C_2, and then solve
Eqs. (<ref>) and (<ref>)
for L_i and ρ_i.
Then, we solve Eqs. (<ref>) for
λ_1 and λ_2 to obtain
λ_1
=
- 1/C_1^( M_i - L_i^2/ I_iρ_i^2) and λ_2
=
- L_i/ C_2^ I_iρ_i .
Here, we assumed C_1^≠ 0 and C_2^≠ 0.
These λ_1 and λ_2 satisfy
Eq. (<ref>).
Note that ρ_i must not be zero.
Linear stability of these equilibria can be analyzed by studying
Eq. (<ref>) with the present F.
The 6 × 6 Hessian matrix
^2 F / ( u^j u^k )
is explicitly obtained upon differentiating Eqs. (<ref>) and
(<ref>), yielding
^2 F/ L_i L_j =
1/I_iδ_ij
+ λ_2 C_2^ρ_iρ_j,
^2 F/ L_iρ_j =
λ_2(
C_2^δ_ij
+ C_2^ρ_i L_j),
^2 F/ρ_iρ_j =
M_iδ_ij
+ λ_1(
C_1^δ_ij
+ C_1^ρ_iρ_j)
+ λ_2 C_2^ L_i L_j.
Again, δ_ij / I_i in Eq. (<ref>)
and M_iδ_ij in Eq. (<ref>) are not summed over
i.
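The eigenvalue problem (<ref>) for this toy model can be assembled explicitly. The sketch below (our own illustration) assumes the simplest choice of identity functions, C_1 = |ρ|^2/2 and C_2 = L·ρ with unit derivatives, together with arbitrary parameter values; it constructs the equilibrium from prescribed Casimir values, builds the Hessian of F and the Poisson tensor, and computes the spectrum of J ∂^2F/∂u∂u:

```python
import numpy as np

I = np.array([1.0, 2.0, 3.0])     # "kinetic" coefficients (arbitrary)
M = np.array([0.5, 1.0, 2.0])     # "magnetic" coefficients (arbitrary)
i = 2                             # equilibrium with only the i = 3 components nonzero

# prescribe the Casimir values (identity functions C_1, C_2 assumed)
C1, C2 = 0.5, 0.8                 # C_1 = |rho|^2/2, C_2 = L . rho
rho_i = np.sqrt(2.0*C1)
L_i = C2/rho_i
lam1 = -(M[i] - L_i**2/(I[i]*rho_i**2))   # lambda_1 (with C_1' = 1)
lam2 = -L_i/(I[i]*rho_i)                  # lambda_2 (with C_2' = 1)

L = np.zeros(3); rho = np.zeros(3)
L[i], rho[i] = L_i, rho_i

# Hessian of F = H + lam1 C_1 + lam2 C_2 (C_1'' = C_2'' = 0 for identity functions)
Hess = np.block([[np.diag(1.0/I), lam2*np.eye(3)],
                 [lam2*np.eye(3), np.diag(M + lam1)]])

def hat(v):                               # hat(v)_ij = -eps_ijk v_k
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

J = np.block([[hat(L), hat(rho)],
              [hat(rho), np.zeros((3, 3))]])

# -i omega du = J Hess du: eigenvalues with a positive real part signal instability;
# two zero eigenvalues correspond to the Casimir directions
print(np.sort_complex(np.linalg.eigvals(J @ Hess)))
```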
§.§ System with Infinite degrees of freedom
§.§.§ Two-dimensional Euler flow
One of the simplest examples of a noncanonical Hamiltonian system with
infinite dimensions is two-dimensional Euler fluid flow <cit.>.
Suppose the two-dimensional velocity field v(x,y,t) is given by
v = ẑ×∇_⊥φ,
where φ(x,y,t) is the stream function,
∇_⊥ is the gradient operator in the x–y plane,
and ẑ is the unit vector perpendicular to the x–y plane.
The vorticity in the z direction is given by
U := ẑ·∇×v = △_⊥φ,
where △_⊥ is the Laplacian in the x–y plane.
The governing equation of U is
∂ U/∂ t = [ U , φ ],
where the `inner' Poisson bracket or Jacobian is defined by
[ f , g ] := ẑ·∇ f ×∇ g
= ∂ f/∂ x ∂ g/∂ y - ∂ f/∂ y ∂ g/∂ x.
A Hamiltonian and a Lie–Poisson bracket for functionals
are defined
as
H [U]
1/2∫_𝒟^2 x
| _⊥ ( △_⊥^-1 U ) |^2,
{ F, G } ∫_𝒟^2 x
U [ δ F/δ U , δ G/δ U],
respectively, where F[U] and G[U] are arbitrary functionals of U,
and 𝒟 is a two-dimensional domain in the x–y plane.
Functional derivatives such as δ F / δ U are defined through
a variation of F as
δ F
=
lim_→ 01/∫_𝒟^2 x (
F[ U + δ U ] - F[U]
)
=
. / F[ U + δ U ]
|_ = 0
=:
∫_𝒟^2 x δ U δ F/δ U.
By using the Poisson bracket for functionals,
the vorticity equation (<ref>) can be written as
U/ t
= { U , H }.
Understanding Eq. (<ref>)
may need some care, so details are explained in Appendix <ref>.
The antisymmetric Poisson operator 𝒥 associated with the Poisson bracket of (<ref>) is
𝒥 [ ∘ , U ],
in terms of which the Poisson bracket can be expressed as
{ F , G }
=
∫_𝒟^2 x δ F/δ U𝒥δ G/δ U.
Note that 𝒥 takes the argument ∘ from its right.
Then the evolution equation
(<ref>)
can also be written as
U/ t
=
𝒥δ H/δ U.
There exists an infinite number of Casimir invariants for this system, viz.
C
∫_𝒟^2 x
f(U) ,
where f(U) is an arbitrary function. It can easily be shown that {F,C}=0 for all functionals F.
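For a doubly periodic box, the bracket and the inverse Laplacian appearing in the vorticity equation are conveniently evaluated pseudo-spectrally. The following minimal Python sketch (our own illustration; it is not the numerical scheme used later in this paper, and the test field is arbitrary) evaluates ∂U/∂t = [U, φ] and checks that the Casimir with f(U) = U^2 is instantaneously conserved:

```python
import numpy as np

# doubly periodic box [0, 2*pi)^2 with pseudo-spectral derivatives
N, Lbox = 64, 2.0*np.pi
k = 2.0*np.pi*np.fft.fftfreq(N, d=Lbox/N)
kx, ky = np.meshgrid(k, k, indexing='ij')
k2 = kx**2 + ky**2
k2_safe = k2.copy(); k2_safe[0, 0] = 1.0            # avoid division by zero for the mean mode

def ddx(f): return np.real(np.fft.ifft2(1j*kx*np.fft.fft2(f)))
def ddy(f): return np.real(np.fft.ifft2(1j*ky*np.fft.fft2(f)))
def laplacian(f): return np.real(np.fft.ifft2(-k2*np.fft.fft2(f)))
def laplacian_inv(f):                                # zero-mean inverse Laplacian
    f_hat = -np.fft.fft2(f)/k2_safe; f_hat[0, 0] = 0.0
    return np.real(np.fft.ifft2(f_hat))

def bracket(f, g):                                   # inner bracket [f, g]
    return ddx(f)*ddy(g) - ddy(f)*ddx(g)

def rhs_euler(U):                                    # dU/dt = [U, phi], phi = inverse Laplacian of U
    return bracket(U, laplacian_inv(U))

x = np.arange(N)*Lbox/N
X, Y = np.meshgrid(x, x, indexing='ij')
U = np.sin(X)*np.cos(2.0*Y) + 0.3*np.cos(3.0*X + Y)  # smooth test vorticity
print(np.sum(2.0*U*rhs_euler(U))*(Lbox/N)**2)        # d/dt of the Casimir with f(U)=U^2: ~ 0 (round-off)
```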
§.§.§ Low-beta reduced MHD in a two-dimensional rectangular domain
Another noncanonical Hamiltonian system is that of the low-beta reduced MHD system of <cit.> whose Hamiltonian structure was given by <cit.>. This system describes two-dimensional dynamics in the plane perpendicular to
a strong ambient magnetic field.
The velocity and magnetic fields are expressed as
=
×_⊥,
=
+ ×_⊥ψ,
where the magnetic field is normalized by the strong magnetic field in
the z direction, and the velocity field is by the Alfvén velocity.
If we assume translational symmetry in the z direction,
the governing equations of the low-beta reduced MHD are given by
∂ U/∂ t =
[ U , φ ] + [ ψ , J ],
∂ ψ/∂ t =
[ ψ , φ ],
where
U := ẑ · ∇ × v = △_⊥ φ
is the same definition used for two-dimensional Euler flow,
J := △_⊥ ψ,
and the Poisson bracket [ , ] is the same as Eq. (<ref>).
The noncanonical variables are
u = ( u^1, u^2 )^𝖳 = ( U , ψ )^𝖳.
The Hamiltonian, the Lie-Poisson bracket, and the evolution equations
are, respectively, given by
H [u]
:= (1/2) ∫_𝒟 d^2 x (
| ∇_⊥ ( △_⊥^-1 U ) |^2
+ | ∇_⊥ ψ |^2 ),
{ F, G } := ∫_𝒟 d^2 x (
U [ δ F/δ U , δ G/δ U ]
+ ψ (
[ δ F/δ U , δ G/δ ψ ]
+ [ δ F/δ ψ , δ G/δ U ]
)
),
∂ u^i/∂ t
= { u^i , H },
where F[u] and G[u] are arbitrary functionals of u.
For low-beta reduced MHD, the antisymmetric Poisson operator 𝒥 = ( 𝒥_ij )
can be defined
as
𝒥[ [ ∘ , U ] [ ∘ , ψ ]; [ ∘ , ψ ] 0 ],
and the Poisson bracket reads
{ F , G }
=
∫_𝒟^2 x δ F/δ u^i𝒥^ijδ G/δ u^j.
Note, as before, 𝒥 takes the arguments ∘ from its right.
Then the evolution equation
(<ref>)
can also be written as
u^i/ t
=
𝒥^ijδ H/δ u^j.
The Casimir invariants are given by
C_1[]
∫_𝒟^2 x
f(ψ),
and
C_2[]
∫_𝒟^2 x
U g(ψ),
where f(ψ) and g(ψ) are arbitrary functions.
§.§.§ Low-beta reduced MHD in cylindrical geometry
In cylindrical geometry
under periodic boundary condition in the axial direction,
Eqs. (<ref>) and
(<ref>) become
∂ U/∂ t =
[ U , φ ] + [ ψ , J ] - ε ∂ J/∂ ζ,
∂ ψ/∂ t =
[ ψ , φ ] - ε ∂ φ/∂ ζ,
where ε := a / R_0 is the inverse aspect ratio, with the length
of the cylinder and the minor radius being 2 π R_0 and a,
respectively. The toroidal angle is ζ := z / R_0.
Using the cylindrical coordinates (r, θ, z),
the Poisson bracket (<ref>) becomes
[ f, g ]
=
1/r(
f/ r g/θ
- f/θ g/ r).
The Hamiltonian is the same as that of Eq. (<ref>).
The Poisson bracket for arbitrary functionals F[] and G[]
and the Poisson tensor are given, respectively, by
{ F, G }
:= ∫_𝒟 d^3 x (
U [ δ F/δ U , δ G/δ U ]
+ ψ (
[ δ F/δ U , δ G/δ ψ ]
+ [ δ F/δ ψ , δ G/δ U ]
)
+ ε (
(δ F/δ U) ∂/∂ ζ (δ G/δ ψ)
- (δ G/δ U) ∂/∂ ζ (δ F/δ ψ) )
),
𝒥
:= [ [ ∘ , U ]    [ ∘ , ψ ] + ε ∂/∂ ζ;  [ ∘ , ψ ] + ε ∂/∂ ζ    0 ].
Note that again 𝒥 takes the arguments ∘ from its right.
Using the Poisson bracket (<ref>)
and the Poisson tensor (<ref>),
the evolution equations (<ref>)
and (<ref>) can be rewritten as
u^i/ t
=
{ u^i , H }=
𝒥^ijδ H/δ u^j ,
and the Casimir invariants are
C_v[]
∫_𝒟^3 x U and
C_m[]
∫_𝒟^3 x ψ .
If we focus on single-helicity dynamics that includes
only a family of Fourier modes with mode numbers
ℓ (m, n), where ℓ is an integer and m and n are specified poloidal and toroidal
mode numbers, respectively,
the ζ-derivative terms can be absorbed in the bracket
terms.
By adopting a helical flux
ψ_h
:=
ψ + (n ε/2 m) r^2
as a state variable, with
u = ( u^1, u^2 )^𝖳 = ( U, ψ_h )^𝖳,
the Hamiltonian, the Lie-Poisson bracket, the Poisson tensor, and the
evolution equations become, respectively,
H [u]
:= (1/2) ∫_𝒟 d^3 x (
| ∇_⊥ ( △_⊥^-1 U ) |^2
+ | ∇_⊥ ( ψ_h - (n ε/2 m) r^2 )
|^2 ),
{ F, G } ∫_𝒟^3 x (
U [ δ F/δ U , δ G/δ U]
+ψ_h(
[ δ F/δ U , δ G/δψ_h]
+[ δ F/δψ_h , δ G/δ U]
)
),
𝒥 [ [ ∘ , U ] [ ∘ , ψ_h ]; [ ∘ , ψ_h ] 0 ],
u^i/ t
= { u^i , H }
=
𝒥^ijδ H/δ u^j.
Again, note that 𝒥 takes the arguments ∘ from its right.
For this case, the Casimir invariants are given by
C_1[]
∫_𝒟^3 x
f(ψ_h)
and
C_2[]
∫_𝒟^3 x
U g(ψ_h),
where f and g are arbitrary functions.
§.§.§ High-beta reduced MHD in toroidal geometry
Lastly, the evolution equations for high-beta reduced MHD <cit.>
are given by
∂ U/∂ t =
[ U , φ ] + [ ψ , J ] - ε ∂ J/∂ ζ
+ [ P, h ],
∂ ψ/∂ t =
[ ψ , φ ] - ε ∂ φ/∂ ζ,
∂ P/∂ t =
[ P, φ ],
where U, ψ, and J are the same as those of low-beta
reduced MHD in cylindrical geometry,
P is the normalized pressure,
and h := r cosθ expresses the toroidicity.
The pressure is normalized by the typical magnetic pressure, the brackets [ , ] are the same as those of Eq. (<ref>), and the state vector is
u = ( u^1 , u^2 , u^3 )^𝖳 := ( U , ψ , P )^𝖳.
The Hamiltonian, the Poisson bracket for functionals,
the Poisson tensor, and the evolution equations <cit.> are given, respectively, by
H [u]
:= ∫_𝒟 d^3 x (
(1/2)
| ∇_⊥ ( △_⊥^-1 U ) |^2
+ (1/2)
| ∇_⊥ ψ |^2
- h P
),
{ F, G } ∫_𝒟^3 x (
U [ δ F/δ U , δ G/δ U]
+ψ(
[ δ F/δ U , δ G/δψ]
+[ δ F/δψ , δ G/δ U]
)
+ P (
[ δ F/δ U , δ G/δ P ]
+ [ δ F/δ P , δ G/δ U ]
)
+ ε (
(δ F/δ U) ∂/∂ ζ (δ G/δ ψ)
- (δ G/δ U) ∂/∂ ζ (δ F/δ ψ) )
),
𝒥 := [ [ ∘ , U ]    [ ∘ , ψ ] + ε ∂/∂ ζ    [ ∘, P ];  [ ∘ , ψ ] + ε ∂/∂ ζ    0    0;  [ ∘, P ]    0    0 ],
u^i/ t
= { u^i , H } =
𝒥^ijδ H/δ u^j.
Again, note that 𝒥 takes the arguments ∘ from its right.
The Casimir invariants are
C_v[]
∫_𝒟^3 x U ,
C_m[]
∫_𝒟^3 x ψ and
C_p[]
∫_𝒟^3 x f(P) ,
where f is an arbitrary function.
§ SIMULATED ANNEALING
Let us now turn to the theory of SA, which is explained in Section <ref>. This theory will be used for the computation of equilibrium states in Secs. <ref>–<ref>.
In Sec. <ref>, the double bracket of SA is
presented and its properties are discussed, both for finite and infinite-dimensional systems. Then, in Sec. <ref> we briefly introduce
another kind of SA by means of a metriplectic bracket.
§.§ Simulated annealing by double bracket
§.§.§ Finite degrees of freedom
In Sec. <ref>,
it was explained that
Hamiltonian systems are governed by equations of the following type:
u̇^i = J^ij H()/ u^j,
where = ( u^1, ⋯ , u^M )^𝖳 are the phase space
variables,
H() is the Hamiltonian,
and J = ( J^ij ) is the Poisson tensor.
The antisymmetry of J guarantees that the energy is conserved, as is easily shown:
d H(u)/d t =
(∂ H/∂ u^i) u̇^i
=
(∂ H/∂ u^i)
J^ij (∂ H/∂ u^j)
=
- (∂ H/∂ u^i)
J^ij (∂ H/∂ u^j)
= 0.
The Casimir invariants are also conserved during the time evolution, but this is because of the null space of J.
Consider an artificial dynamics governed by equations of the form
u̇^i
=
J^ij K_jk J^k ℓ ∂ H(u)/∂ u^ℓ
=
[ u^i , u^j ] K_jk [ u^k , H ],
where K_jk is a matrix with a definite sign.
We assume here that this matrix is positive definite.
The time evolution of H() according to
Eq. (<ref>)
is then
d H(u)/d t =
(∂ H/∂ u^i) u̇^i
=
(∂ H/∂ u^i)
J^ij K_jk J^k ℓ (∂ H/∂ u^ℓ)
=
(
- J^ji ∂ H/∂ u^i)
K_jk J^k ℓ (∂ H/∂ u^ℓ) ≤ 0.
Therefore, the energy of the system monotonically decreases, and
the system approaches a minimum energy state until H / t = 0
or J^ij ( H / u^j ) = 0,
which corresponds to
an equilibrium of the original system
(<ref>).
If we take K as negative definite, the energy monotonically increases
to approach an energy maximum.
Of note is that SA dynamics preserves the Casimir invariants of the original system.
In fact, it is easily shown that d C_k/d t = 0 because
J^ij ( ∂ C_k/∂ u^j ) ≡ 0.
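As an illustration of how this double-bracket dynamics can be integrated in practice, the following minimal Python sketch applies it to the free rigid body (u = L, J(L) the cross-product matrix, K the identity), a standard Lie–Poisson example that is essentially the heavy top of the next section with the gravity term dropped. The RK4 integrator, step size, and initial condition are our own choices, not anything prescribed by SA; the energy should decrease monotonically while the Casimir |L|^2 is conserved up to discretization error, so L relaxes toward the axis of largest moment of inertia.

import numpy as np

def hat(a):                      # cross-product matrix: hat(a) @ b = a x b
    return np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])

I = np.array([1.0, 2.0, 3.0])    # principal moments of inertia
grad_H = lambda L: L / I         # H = sum_i L_i^2 / (2 I_i)

def sa_rhs(L):                   # du/dt = J K J grad(H), with J = hat(L), K = identity
    J = hat(L)
    return J @ (J @ grad_H(L))

L = np.array([1.0, 1.0, 1.0])
dt = 1e-2
for _ in range(5000):            # simple RK4 time stepping
    k1 = sa_rhs(L); k2 = sa_rhs(L + 0.5*dt*k1)
    k3 = sa_rhs(L + 0.5*dt*k2); k4 = sa_rhs(L + dt*k3)
    L = L + dt*(k1 + 2*k2 + 2*k3 + k4)/6

print(L, np.sum(L**2/(2*I)), np.dot(L, L))   # final state, energy, Casimir |L|^2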
To obtain a wider class of equilibria it is necessary to constrain the dynamics. To do this, <cit.> used Dirac constraint theory, constructing a Dirac bracket that imposes
additional constraints C_ℓ differing from the original Casimir
invariants. As part of the construction, each C_ℓ must possess a counterpart, i.e., the set of C_ℓ's must be evenly split into pairs with [C_ℓ, C_ℓ'] ≠ 0. If such a split is not possible, a paired constraint for any C_ℓ can be manufactured according to
C_ℓ + 1
:= [ C_ℓ, H ] .
If C_ℓ does not change during the course of the time evolution, then
C_ℓ + 1 must be always zero.
By using this pair of constraints, a Dirac bracket can be constructed as
[ f, g ]_D
=
[ f, g ]
- [ [ f, C_ℓ ]; [ f, C_ℓ + 1 ] ]^𝖳[ [ C_ℓ , C_ℓ ] [ C_ℓ, C_ℓ + 1 ]; [ C_ℓ + 1, C_ℓ ] [ C_ℓ + 1, C_ℓ + 1 ] ]^-1[ [ C_ℓ, g ]; [ C_ℓ + 1, g ] ].
Note that this definition is valid when the inverse matrix
on the right hand side exists, or when [C_ℓ, C_ℓ + 1] ≠ 0.
The number of additional constraints can be increased in a similar
manner. Suppose we have ( M - 2N ) Casimir invariants originally, and
we impose L constraints additionally.
Then C_M-2N+1 and C_M-2N+2 := [ C_M-2N+1, H ] is the first
pair,
C_M-2N+3 and C_M-2N+4 := [ C_M-2N+3, H ]
is the second pair, and the last pair is
C_M-2N+2L-1 and C_M-2N+2L := [ C_M-2N+2L-1, H ].
By defining a matrix
𝒞 = ( 𝒞^ij ) := ( [ C_i, C_j ] )^-1,
the Dirac bracket is given by
[ f, g ]_D
:=
[ f, g ]
- [ f, C_i ] 𝒞^ij [ C_j, g ],
where i and j take on integer values from M-2N+1 to M-2N+2L.
Here 𝒞 must exist for this Dirac bracket to be valid.
In terms of a Dirac bracket of the form of (<ref>), the evolution equation for DSA is given by
u̇^i
=
[ u^i , u^j ]_D K_jk [ u^k , H ]_D.
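In a numerical implementation, one convenient way to realize this evolution is to assemble the Dirac-modified Poisson tensor J_D = J − J ∇C (∇C^T J ∇C)^{-1} ∇C^T J, where the columns of ∇C are the gradients of the imposed constraints, and then reuse the ordinary double-bracket step with J_D in place of J. The Python sketch below is our own matrix-form phrasing of this idea, not code from the works reviewed here, and it assumes, as in the text, that the matrix of constraint brackets is invertible.

import numpy as np

def dirac_tensor(Jmat, grad_Cs):
    """Dirac-modified Poisson tensor J_D = J - J G (G^T J G)^{-1} G^T J,
    where the columns of G are the gradients of the imposed constraints."""
    Gc = np.column_stack(grad_Cs)                # shape (M, 2L)
    Cmat = Gc.T @ Jmat @ Gc                      # matrix of brackets [C_i, C_j]
    return Jmat - Jmat @ Gc @ np.linalg.solve(Cmat, Gc.T @ Jmat)

def dsa_rhs(u, J, grad_H, grad_Cs, K):
    """Right-hand side of Dirac simulated annealing, du/dt = J_D K J_D grad(H)."""
    JD = dirac_tensor(J(u), [g(u) for g in grad_Cs])
    f = JD @ grad_H(u)                           # constrained Hamiltonian vector field
    return JD @ (K @ f)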
§.§.§ Infinite degrees of freedom
The governing equations of
systems with infinite degrees of freedom have the following form
u^i/ t
= { u^i , H }
= 𝒥^ijδ H/δ u^j,
where, as described in Sec. <ref>, 𝒥^ij is now an operator. On the basis of this form, <cit.> defined an artificial dynamics generated by a double bracket according to
∂ u^i/∂ t = (( u^i , H )),
(( F, G ))
=
∫_𝒟 d^N x' ∫_𝒟 d^N x'' { F , u^i(x') } 𝒦_ij( x' , x'' )
{ u^j(x'') , G } ,
where N is the spatial dimension
and 𝒦 = ( 𝒦_ij ) is a symmetric kernel with a definite
sign.
Double bracket SA dynamics for infinite degree-of-freedom systems
can be understood as a replacement of the advection fields
for the dynamical variables u^i.
This will be shown explicitly case-by-case in
Secs. <ref> and <ref>.
According to the dynamics generated by Eq. (<ref>),
time evolution of any arbitrary functional F[] is governed by
F[]/ t
=
(( F, H )).
Thus time derivative of the Hamiltonian becomes
d H[u]/d t =
(( H, H ))
=
∫_𝒟 d^N x' ∫_𝒟 d^N x'' { H , u^i(x') } 𝒦_ij { u^j(x'') , H }
=
- ∫_𝒟 d^N x' ∫_𝒟 d^N x'' { u^i(x') , H } 𝒦_ij { u^j(x'') , H } ≤ 0,
for a positive definite symmetric kernel 𝒦.
Therefore, H decreases monotonically and approaches a minimum value
where { u^i , H } = 0, which is a stationary state of the
original system (<ref>).
On the other hand,
the time evolution of a Casimir invariant C[] is given by
d C[u]/d t =
(( C, H ))
=
∫_𝒟 d^N x' ∫_𝒟 d^N x'' { C , u^i(x') } 𝒦_ij { u^j(x'') , H } ≡ 0.
Therefore, all Casimir invariants of the original system are preserved in
the SA dynamics.
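As a concrete illustration of the infinite-dimensional case, the following pseudo-spectral sketch applies double-bracket SA to two-dimensional Euler flow in a doubly periodic box, with the smoothing kernel taken proportional to the Green's function of the Laplacian (the same choice made later for reduced MHD). The grid size, time step, value of α, initial vorticity, and sign conventions are our own demonstration choices (no dealiasing is applied); Casimirs such as ∫U^2 d^2x are conserved only to discretization accuracy, and flipping the sign of α reverses the direction in which H is driven.

import numpy as np

# Pseudo-spectral double-bracket SA for 2-D Euler flow on a doubly periodic box.
N, alpha, dt = 64, 1.0, 5e-3
k = np.fft.fftfreq(N, d=1.0 / N)                  # integer wavenumbers on [0, 2*pi)
kx, ky = np.meshgrid(k, k, indexing="ij")
ksq = kx**2 + ky**2
ksq[0, 0] = 1.0                                   # avoid 0/0; the mean mode is zeroed below

def ddx(f): return np.real(np.fft.ifft2(1j * kx * np.fft.fft2(f)))
def ddy(f): return np.real(np.fft.ifft2(1j * ky * np.fft.fft2(f)))
def bracket(f, g):                                # [f, g] = f_x g_y - f_y g_x
    return ddx(f) * ddy(g) - ddy(f) * ddx(g)
def inv_lap(f):                                   # solve  Laplacian(u) = f  (zero mean)
    fh = np.fft.fft2(f); fh[0, 0] = 0.0
    return np.real(np.fft.ifft2(-fh / ksq))

def rhs(U):
    phi = inv_lap(U)                              # stream function, U = Laplacian(phi)
    f1 = bracket(U, phi)                          # right-hand side of the Euler equation
    phi_art = -alpha * inv_lap(f1)                # smoothed artificial stream function
    return bracket(U, phi_art)                    # SA: advect U by the artificial field

x = 2 * np.pi * np.arange(N) / N
X, Y = np.meshgrid(x, x, indexing="ij")
U = np.sin(X) * np.cos(Y) + 0.3 * np.sin(2 * X + Y)

for step in range(1000):
    k1 = rhs(U); k2 = rhs(U + 0.5*dt*k1); k3 = rhs(U + 0.5*dt*k2); k4 = rhs(U + dt*k3)
    U = U + dt * (k1 + 2*k2 + 2*k3 + k4) / 6
    if step % 200 == 0:
        phi = inv_lap(U)
        H = 0.5 * np.mean(ddx(phi)**2 + ddy(phi)**2)   # energy per unit area
        print(step, H, np.mean(U**2))                  # H changes monotonically; <U^2> ~ const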
As with finite-dimensional systems, <cit.> made SA more useful by using a Dirac bracket, giving DSA akin to that in finite dimensions,
{ F , G }_D
:=
{ F , G }
- { F , C_i}𝒞^ij{ C_j, G },
where
𝒞 = ( 𝒞^ij ) ( { C_i, C_j} )^-1
is an even dimensional matrix,
where C_i are additional constraints to be incorporated.
Then, DSA is defined as
∂ u^i/∂ t = (( u^i , H ))_D,
(( F, G ))_D =
∫_𝒟 d^N x' ∫_𝒟 d^N x'' { F , u^i( x' ) }_D 𝒦_ij( x' , x'' )
{ u^j( x'' ) , G }_D.
§.§ Simulated annealing by metriplectic brackets
The SA changes the energy (Hamiltonian) of the system monotonically,
while the Casimir invariants are preserved.
On the other hand, the metriplectic dynamics changes the entropy monotonically, while the energy
is conserved. See <cit.> for original papers and see <cit.> for a summary and recent results.
For finite-dimensional systems,
let us define a symmetric bracket according to
( f , h )
:= (∂ f/∂ u^i) G^ij (∂ h/∂ u^j),
where
f and h are arbitrary functions,
and G = (G^ij ) is a symmetric metric-like matrix that ensures
( f, h ) = ( h, f ).
One more important feature imposed on a metriplectic bracket is
( f, H ) = 0
for any f. Such a choice can be realized by a projection, for example.
Then, metriplectic dynamics is generated by a free-energy-like quantity,
F := H - 𝒯 S, where 𝒯 is a global constant temperature and S is an entropy, according to
u̇^i
=
[ u^i , F ] + ( u^i , F ).
Here, for convenience we have scaled away 𝒯.
The entropy S is selected from the set of Casimir
invariants of the Poisson bracket; i.e., Casimirs are candidate entropies that determine one's choice of `thermal equilibrium.'
Then the entropy evolves as
S/ t
=
[ S , H ] + [ S , S ] + ( S , H ) + ( S , S )
≥ 0
for a positive semi-definite (G^ij ).
On the other hand,
the Hamiltonian is conserved as
H/ t
=
[ H , H ] + [ H , S ] + ( H , H ) + ( H , S )
= 0.
For infinite-dimensional systems,
a symmetric bracket is defined similarly as
( F, G )
:=
∫_𝒟 d^N x' ∫_𝒟 d^N x'' (δ F/δ u^i( x' )) 𝒢^ij( x' , x'' )
(δ G/δ u^j( x'' )),
where 𝒢 := ( 𝒢^ij ) is a symmetric kernel,
and F[] and G[] are arbitrary functionals of .
The kernel is chosen to satisfy ( H, F ) ≡ 0 for any F
and ( S, S) ≥ 0.
The evolution equations of the metriplectic dynamics are given by
u^i/ t
= { u^i , F } + ( u^i , F ),
where F[] := H[] + S[].
This dynamics increases the entropy functional S monotonically,
while conserving H.
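A minimal finite-dimensional sketch of this construction is given below; it realizes the projection mentioned above by building G = P ḡ P, with P the orthogonal projector onto the complement of ∇H, so that (f, H) = 0 holds automatically for any f. The choice ḡ = identity, the sign convention F = H + S of the infinite-dimensional form above, and the function interface are our own; any symmetric positive semi-definite ḡ would do.

import numpy as np

def metriplectic_rhs(u, J, grad_H, grad_S, gbar=None):
    """du/dt = J grad(F) + G grad(F) with F = H + S, where
    G = P gbar P and P projects out grad(H), so (f, H) = 0 for all f."""
    gH = grad_H(u)
    P = np.eye(len(u)) - np.outer(gH, gH) / (gH @ gH)   # projector, P @ gH = 0
    if gbar is None:
        gbar = np.eye(len(u))                           # any symmetric psd matrix
    G = P @ gbar @ P                                    # symmetric, G @ gH = 0
    gF = gH + grad_S(u)                                 # gradient of F = H + S
    return J(u) @ gF + G @ gF                           # H conserved, S non-decreasing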
§ SIMULATED ANNEALING OF SYSTEM WITH FINITE DEGREES OF FREEDOM
Section <ref> presents
some analyses of equilibria and stability of
Hamiltonian systems with finite degrees of freedom.
Numerical results of SA are also shown.
Section <ref> treats the heavy top,
while Sec. <ref> presents results on a toy model
designed to mimic reduced MHD.
§.§ Heavy top
Some numerical results of SA for the heavy top will be shown in
Sec. <ref>. Our first example consists of a stable equilibrium with two positive energy modes, while a second example is for an unstable equilibrium with a positive energy mode and a saddle. These cases have the same equilibrium point, but different values of the gravity parameter. A third example consists of a stable equilibrium with a positive and a negative energy mode. Recall, negative energy modes are stable oscillations with negative energy <cit.>. Linear spectral stability analyses are described for these cases, along with SA results.
Our final example employs DSA.
In all cases, the principal moments of inertia were chosen to be
I_1 = 1, I_2 = 2, and I_3 = 3.
The first and the second examples are for an equilibrium with
ρ_3 = 1 and L_3 = 3, and L_1 = L_2 = ρ_1 = ρ_2 = 0.
As shown in Sec. <ref>, this is an equilibrium point.
Figure <ref>
shows the real and the imaginary parts of ω, as determined by
Eq. (<ref>),
for the heavy top as functions of the gravity parameter G.
Note that two of the six eigenvalues are zero, which is expected because of the existence of two Casimirs <cit.>, and these two are not plotted. The equilibrium is linearly stable when G ≤ 1, and is unstable when
G > 1. The bifurcations at G=1 and G=2 are steady state bifurcations, i.e., they happen at ω=0 and the two neutrally stable modes yield one purely growing unstable mode with a corresponding purely damped mode.
In Fig. <ref>,
eigenvalues of the Hessian matrix ( ^2 F / ( u^i u^j ) )
are plotted as functions of G.
Here, the original Hessian matrix is a 6 × 6 matrix.
However, it includes two directions that are not allowed for
the system to evolve because of the two Casimir invariants. Therefore, two
dimensions were removed by using the linearized equations
ρ_iδρ_i
=
0 and
L_iδρ_i
+ ρ_iδ L_i =
0.
For the equilibrium under consideration,
we obtain δ L_3 = 0 and δρ_3 = 0.
Therefore, by using the four-dimensional vector of perturbations
δ u_r :=
( δ L_1, δ L_2, δρ_1, δρ_2 ),
the second variation of the energy-Casimir function F can be written
as
δ^2 F
=
δ u_r^i A_rHM ij δ u_r^j,
where the subscripts “r” and “HM” stand for “reduced”
and “Hessian Matrix”, respectively.
Note that the eigenvalues of A_rHM ij are always real.
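The inertia (the signs of the eigenvalues) of the reduced Hessian can be reproduced with a few lines of Python. The sketch below assumes the energy–Casimir function F = H + λ_1 C_1 + λ_2 C_2 with H = Σ_i L_i^2/(2 I_i) + G ρ_3, C_1 = |ρ|^2 and C_2 = L · ρ (our reading of the setup, not a formula restated here), fixes λ_1 and λ_2 from the equilibrium conditions, and projects the 6×6 Hessian onto the tangent space of the Casimir leaf; only the signs of the resulting eigenvalues are meaningful for the stability discussion.

import numpy as np

# Heavy-top equilibrium L = (0, 0, 3), rho = (0, 0, 1) with I = (1, 2, 3).
I = np.array([1.0, 2.0, 3.0])
L0, rho0 = np.array([0.0, 0.0, 3.0]), np.array([0.0, 0.0, 1.0])

def leaf_hessian_eigs(G):
    lam2 = -L0[2] / (I[2] * rho0[2])                  # from dF/dL_3 = 0
    lam1 = -(G + lam2 * L0[2]) / (2 * rho0[2])        # from dF/drho_3 = 0
    # 6x6 Hessian of F = H + lam1*C_1 + lam2*C_2 in the variables (L, rho)
    A = np.zeros((6, 6))
    A[:3, :3] = np.diag(1.0 / I)
    A[:3, 3:] = A[3:, :3] = lam2 * np.eye(3)
    A[3:, 3:] = 2.0 * lam1 * np.eye(3)
    # tangent space of the Casimir leaf: orthogonal complement of grad C_1, grad C_2
    gradC = np.array([np.r_[np.zeros(3), 2 * rho0],   # grad C_1
                      np.r_[rho0, L0]])               # grad C_2
    _, _, Vt = np.linalg.svd(gradC)
    T = Vt[2:].T                                      # orthonormal basis of the null space
    return np.linalg.eigvalsh(T.T @ A @ T)

for G in (0.5, 1.5, 2.5):
    print(G, leaf_hessian_eigs(G))    # e.g. four positive eigenvalues for G < 1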
When G < 1, there exist four positive eigenvalues,
which means that the system is an energy minimum on the Casimir leaf.
When 1 < G < 2, there exist three positive and one negative
eigenvalues, meaning that a saddle exists, i.e., there is one neutral (stable) degree of freedom, and one unstable mode with its damped counterpart, in this range of G. In fact,
we observe that Im ω > 0 in
Fig. <ref>subfig:htLinearStability-G-omgi-ieq3,
showing linear
instability.
When G > 2, there exists two positive and two negative eigenvalues.
In this case, another saddle appeared and we have two purely growing modes with their damped counterparts.
This can also be confirmed in
Fig. <ref>subfig:htLinearStability-G-omgi-ieq3,
where two eigenvalues with Im ω > 0 and two with Im ω < 0 exist for G > 2.
Note that similar information can be obtained from the linearized
equations of SA.
If we assume time dependence of the perturbation as
e^{-i ω t}, we obtain an eigenvalue problem from the linearized
equations of SA.
The imaginary part of ω corresponds to the eigenvalue of
the reduced Hessian matrix A_rHM.
If the equilibrium under consideration is at an energy minimum,
all Im ω should be negative. On the other hand,
if the equilibrium is not at an energy minimum, there should be at least
one positive Im ω.
Figure <ref> shows ω
of the linearized SA equation.
Moreover, the mode energy H̃ was calculated
according to
Eq. (<ref>)
by using the eigenmodes corresponding to the eigenvalue problem of the
original dynamics
(<ref>).
Figure <ref> shows H̃ as functions
of G. As is to be expected,
two pairs of oscillatory modes have positive energies
in 0 ≤ G < 1, and a pair has a positive energy
in 1 ≤ G < 2.
The pair of modes with ω≠ 0 has H̃ = 0.
No negative energy mode exists in this equilibrium.
Now, let us show SA results.
The numerical results shown here use the unit matrix as the symmetric
kernel K.
First, the time evolution for G = 0.5 is shown in
Fig. <ref>.
The initial perturbation was given so that the perturbed state has the
same values of C_1 and C_2.
Explicitly, L_1 = 0.1, L_2 = 0.1, L_3 = 3.010,
ρ_1 = 0.1, ρ_2 = 0.1, and ρ_3 = 0.990.
As the time proceeds, the energy H decreased
as seen in
Fig. <ref>subfig:htSA-t-H-C-ieq3-G0_5,
and the system approaches the equilibrium.
During the time evolution C_1 = |ρ|^2 and
C_2 = L · ρ were conserved.
This result was to be expected since the equilibrium has two
positive energy modes for G = 0.5.
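For completeness, a self-contained sketch of this G = 0.5 run is given below. It assumes the standard heavy-top Lie–Poisson tensor in the variables u = (L, ρ) and the Hamiltonian H = Σ_i L_i^2/(2 I_i) + G ρ_3 (the same assumption as in the Hessian sketch above); the kernel K is the identity, as in the run just described, while the RK4 integrator and step size are arbitrary choices of ours. The printed diagnostics should show H decreasing while |ρ|^2 and L · ρ remain constant to discretization accuracy.

import numpy as np

# Double-bracket SA for the heavy top, u = (L, rho), K = identity.
I = np.array([1.0, 2.0, 3.0])
G = 0.5

def hat(a):                       # cross-product matrix, hat(a) @ b = a x b
    return np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])

def J(u):                         # heavy-top Lie-Poisson tensor (6 x 6)
    L, rho = u[:3], u[3:]
    Z = np.zeros((3, 3))
    return np.block([[hat(L), hat(rho)], [hat(rho), Z]])

def grad_H(u):                    # assumed H = sum_i L_i^2/(2 I_i) + G rho_3
    return np.r_[u[:3] / I, 0.0, 0.0, G]

def sa_rhs(u):                    # du/dt = J K J grad(H), with K = identity
    Ju = J(u)
    return Ju @ (Ju @ grad_H(u))

u = np.array([0.1, 0.1, 3.010, 0.1, 0.1, 0.990])   # perturbed initial state from the text
dt = 1e-2
for _ in range(20000):
    k1 = sa_rhs(u); k2 = sa_rhs(u + 0.5*dt*k1)
    k3 = sa_rhs(u + 0.5*dt*k2); k4 = sa_rhs(u + dt*k3)
    u = u + dt * (k1 + 2*k2 + 2*k3 + k4) / 6

L, rho = u[:3], u[3:]
H = np.sum(L**2 / (2*I)) + G * rho[2]
print(H, rho @ rho, L @ rho)      # H decreased; |rho|^2 and L.rho approximately conserved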
Next, the time evolution for G = 1.5 is shown in
Fig. <ref>.
The initial perturbation was given
similarly as in the case of
Fig. <ref>
so that the perturbed state has the
same values of C_1 and C_2.
As the time proceeds, the energy H decreased
as seen in
Fig. <ref>subfig:htSA-t-H-C-ieq3-G1_5,
and C_1 = |ρ|^2 and C_2 = L · ρ were
conserved.
In this case, another equilibrium
L_3 = -3 and ρ_3 = -1,
L_1 = L_2 = ρ_1 = ρ_2 = 0 was reached by SA.
This is because the original equilibrium is unstable for G = 1.5.
Similar time evolution of SA was obtained for G=2.5
since the equilibrium is unstable.
Another numerical example is for an equilibrium with a pair of negative
energy modes.
The principal moments of inertia were chosen to be
I_1 = 1, I_2 = 2, and I_3 = 3, which were same as in the
previous cases.
The equilibrium was chosen to have
L_1 = 0.968, L_2 = 0, L_3 = 0.75,
ρ_1 = 0.968, ρ_2 = 0, ρ_3 = 0.25.
Figures <ref>subfig:htLinearStability-G-omgr-ieq1
and
<ref>subfig:htLinearStability-G-omgi-ieq1
show the real and the imaginary parts of ω
for the linearized equations of the original dynamics
Eq. (<ref>),
respectively.
Note that this equilibrium exists only for 0 ≤ G ≤ 2,
and becomes linearly unstable for G ≳ 0.8.
Figure <ref>subfig:ht-G-evHssr-ieq1 shows eigenvalues of the
reduced Hessian matrix A_rHM.
When 0 ≤ G ≲ 0.8, two positive and two negative eigenvalues
exist. Given only the information shown in
Fig. <ref>subfig:ht-G-evHssr-ieq1,
the situation cannot be entirely identified: either there is
(i) a pair of positive energy modes and a pair of negative energy modes
or
(ii) there are two saddles.
For G ≳ 0.8, we can identify that
there exists a saddle and a pair of positive energy modes.
Figure <ref>subfig:htlinearSAp-G-omgi-4D-ieq1
shows eigenvalues of the linearized SA equation, where the kernel K was
chosen to be the unit matrix. Then the energy of the system
monotonically decreases as time proceeds.
There exist two positive and two negative eigenvalues
for 0 ≤ G ≲ 0.8, while one positive and three negative eigenvalues
for 0.8 ≲ G ≤ 2.
The existence of the positive eigenvalues of the linearized SA equation
indicates that the dynamics in the direction corresponding to these
eigenvectors is unstable. This is true even for 0 ≤ G ≲ 0.8.
However, the original dynamics shows linear stability for
0 ≤ G ≲ 0.8.
This indicates an existence of a pair of negative energy modes.
Figure <ref>subfig:ht-G-mdener-ieq1 shows the mode energy
H̃. Whence, it is clear that there exists negative energy modes
for 0 ≤ G ≲ 0.8. Thus we see that SA can be used to identify negative energy modes.
Now, time evolution of SA is shown in
Fig. <ref>.
The gravity parameter was chosen to be G = 0.5,
where the negative energy modes exist.
The initial condition was
L_1 = 0.878, L_2 = 0.1, L_3 =0.85,
ρ_1 =0.93, ρ_2 = 0.1, and ρ_3 = 0.35,
which has the same values for the Casimir invariants as those for the equilibrium.
The energy of the system monotonically decreases as time proceeds,
while the Casimir invariants, C_1 = |ρ|^2 and
C_2 = L · ρ in this case, were conserved
as shown in
Fig. <ref>subfig:htSA-t-H-C-ieq1-G0_5.
We also observe
in Figs. <ref>subfig:htSA-t-L-ieq1-G0_5
and <ref>subfig:htSA-t-rho-ieq1-G0_5
that the system did not recover the original equilibrium and reached
another equilibrium.
The existence of the negative energy modes explains this behavior.
The last case of this subsection is a DSA result.
Let us introduce a new constant
C_3 := ρ_3.
Then, the Dirac bracket is constructed according to
Eq. (<ref>).
The counterpart of C_3 is given by
C_4
:= [ C_3, H ]
=
L_2 ρ_1/I_2
- L_1 ρ_2/I_1.
If C_3 is kept unchanged during the time evolution,
C_4 must be always zero since Ċ_3 = [ C_3 , H ] ≡ 0.
The Dirac bracket is properly defined when
either of ρ_1 or ρ_2 is not zero
since
[ C_3 , C_4 ]
=
ρ_1^2/I_2
+ ρ_2^2/I_1.
In other words, this formulation breaks down when ρ_1 = ρ_2 =
0.
The initial condition for the DSA run is chosen to be
L_1 = 0.878, L_2 = 0.1, L_3 =0.85,
ρ_1 =0.93, ρ_2 = 0.1, and ρ_3 = 0.35,
which is a perturbed state of an equilibrium with
L_1 = 0.968, L_2 = 0, L_3 = 0.75,
ρ_1 = 0.968, ρ_2 = 0, ρ_3 = 0.25.
This initial condition is the same as the one for the case of
Fig. <ref>.
The gravity parameter G=0.5 was also chosen to be the same as that
for
Fig. <ref>.
The kernel K for the double bracket was again chosen to be the unit matrix
so that the energy of the system monotonically decreases by DSA.
If we use the ordinary Poisson bracket for constructing the double
bracket, the SA lead to an equilibrium with ρ_3 = -1
that is different from the original
equilibrium without a perturbation as shown in
Fig. <ref>.
Figure <ref> shows
the time evolution of L, ρ, H, C_1 = |ρ|^2
and C_2 = L · ρ.
As observed in
Fig. <ref>subfig:htDSA-t-rho-ieq1-G0_5,
C_3 = ρ_3 was successfully conserved, it remaining at its initial value.
Note that the final state is not an equilibrium of the original system.
It is a stationary state where the top is somehow supported at a tilted
angle. Without such a support, the top will flip over to get
ρ_3 = -1 as in the case of
Fig. <ref>.
The number of Dirac constraints can be increased further.
We have confirmed that ρ_1 in addition to ρ_3 can be fixed
at the initial value by adding C_5 = ρ_1
and C_6 = [ C_5 , H ].
In this case, ρ_2 is also fixed at its initial value since
C_1 = |ρ|^2 = 1 is conserved.
On the other hand, L can change in time while keeping
C_2 = L · ρ fixed.
§.§ A toy model mimicking low-beta reduced MHD
Equilibrium and stability analyses similar to the heavy top presented in
Sec. <ref> can be performed for the toy model of Sec. <ref> that mimics an aspect of low-beta reduced MHD. In the present subsection, numerical results examining effects of addition of
Hamiltonian dynamics to SA dynamics are presented. This toy model was created to answer whether the addition of the
Hamiltonian dynamics to SA dynamics can accelerate relaxation to an
equilibrium. We have tried some numerical tests, and the results show that the
relaxation was not affected significantly. On the other hand, the relaxation can be either accelerated or decelerated by the addition
of the Hamiltonian dynamics to SA for low-beta reduced MHD
as shown in Sec. <ref>.
Therefore, we need to further investigate what determines the fastest
path to the equilibrium both analytically and numerically. Examination of toy models like the present one, however, may shed light on this important issue.
Here, we solve
u̇^i = f̃^i + c f^i,
where
f^i
:= [ u^i , H ]
and f̃^i
:= [ u^i, u^j ] K_jk [ u^k , H ].
Note that the kernel K in Eq. (<ref>)
is taken to be the unit matrix. The parameters are chosen to be
I_1 = I_2 = I_3 = 1,
M_1 = M_2 = 2, and M_3 = 1, while the equilibrium considered is
L_1 = L_2 = 0, L_3 = 1/2,
ρ_1 = ρ_2 = 0, and ρ_3 = 1.
The Casimir invariants are chosen to be
C_1 = |ρ|^2 = 1
and
C_2 = L · ρ = 1/2.
This equilibrium is linearly stable with positive energy modes only.
The initial condition for SA was chosen to be a perturbation away from the equilibrium
of the previous paragraph, with
L_1 = L_2 = -0.0649, L_3 = 0.6,
ρ_1 = ρ_2 = 0.308 and ρ_3 = 0.9.
Figure <ref> shows a comparison of the
time evolution of the variables for c = 0 and c = ± 10.
A negative c means that the time-reversed Hamiltonian dynamics is
added to the SA dynamics.
Figures <ref>subfig:mimicLBRMHD-t-L1-c-case1,
subfig:mimicLBRMHD-t-L2-c-case1
and subfig:mimicLBRMHD-t-L3-c-case1
show time evolution of L_1, L_2 and L_3, respectively.
Similarly,
Figs <ref>subfig:mimicLBRMHD-t-rho1-c-case1,
subfig:mimicLBRMHD-t-rho2-c-case1
and subfig:mimicLBRMHD-t-rho3-c-case1
show time evolution of ρ_1, ρ_2 and ρ_3, respectively.
Figure <ref>subfig:mimicLBRMHD-t-H-c-case1
shows the time evolution of the energy.
In
Fig. <ref>subfig:mimicLBRMHD-t-Ek-c-case1,
E_k denotes the kinetic-energy-like term in the Hamiltonian
(<ref>),
while E_m in
Fig. <ref>subfig:mimicLBRMHD-t-Em-c-case1
denotes the magnetic-energy-like term.
As seen in
Fig. <ref>,
the relaxation to a stationary value of H did not differ by much for the different values of c, although each variable showed different time
evolution except for L_3.
Note that we have also tried c = ± 100, and observed that
the time evolution of H did not differ much.
Figure <ref> shows
snapshots of the phase space at t = 1.
Figures <ref>subfig:mimicLBRMHD-L-cm10-t1-case1
and
<ref>subfig:mimicLBRMHD-rho-cm10-t1-case1
are for c = -10,
Figs. <ref>subfig:mimicLBRMHD-L-c0-t1-case1
and
<ref>subfig:mimicLBRMHD-rho-c0-t1-case1
are for c = 0,
and
Figs. <ref>subfig:mimicLBRMHD-L-c10-t1-case1
and
<ref>subfig:mimicLBRMHD-rho-c10-t1-case1
are for c = 10.
In the space,
Figs. <ref>subfig:mimicLBRMHD-L-cm10-t1-case1,
subfig:mimicLBRMHD-L-c0-t1-case1 and subfig:mimicLBRMHD-L-c10-t1-case1,
the spherical surface in light blue represents the constant-E_k surface,
the plane in light yellow represents L · ρ = 1/2,
and the green circle represents the intersection of
the constant-E_k surface and the constant
L · ρ surface.
On the other hand,
in the space,
Figs. <ref>subfig:mimicLBRMHD-rho-cm10-t1-case1,
subfig:mimicLBRMHD-rho-c0-t1-case1 and subfig:mimicLBRMHD-rho-c10-t1-case1,
the ellipsoidal surface in light blue represents the constant-E_m surface,
the spherical surface in light yellow represents |ρ|^2 = 1,
and the green circle represents the intersection of
the constant-E_m surface and the constant
|ρ|^2 surface.
In each subfigure of Fig. <ref>,
the red point and the pink curve
represent the current position of the system and the trajectory in the
phase space, respectively.
Without the Hamiltonian dynamics, the trajectories follow straight
relaxation to the final state as seen in
Figs. <ref>subfig:mimicLBRMHD-L-c0-t1-case1
and subfig:mimicLBRMHD-rho-c0-t1-case1.
On the other hand, the trajectories for c = ± 10 turn around.
However, as explained above, the time evolution of energy did not differ
much from that of c = 0.
§ USING SA FOR LINEAR STABILITY ANALYSIS
In addition to being useful for equilibrium calculations, SA can be used to assess linear stability, as seen in Sec. <ref>.
Here we show how this can work for reduced MHD.
Suppose an equilibrium of an MHD system has been obtained somehow by a
method other than SA. For example, any cylindrically symmetric state of
the vorticity U and the magnetic flux function ψ is
an equilibrium of low-beta reduced MHD in cylindrical geometry.
Then, let us perform SA starting from an initial condition that is a small amplitude perturbation away from the equilibrium.
If the perturbation relaxes back to the original equilibrium under SA dynamics that monotonically decreases the energy of the system,
then the equilibrium is linearly stable. On the other hand,
if the perturbation grows, there are two possibilities: the equilibrium is linearly unstable, or the system is linearly stable with a combination of positive and negative energy modes. To distinguish these two cases, a spectral stability analysis would need to be done. The case with all negative energy modes corresponds to an equilibrium located at an energy maximum, which is stable. Upon reversing the direction of time, this case will be detected if relaxation occurs.
Section <ref> introduces the evolution equations of SA used here, while Sec. <ref> shows some numerical results of the linear
stability analyses for low-beta reduced MHD in cylindrical geometry.
§.§ Formulation
Consider now the SA double bracket evolution equations
for low-beta reduced MHD in cylindrical geometry. We begin by defining the symmetric kernel 𝒦 that will be used, i.e.,
𝒦_ij ( x', x'' )
=
α_ij g( x' - x'' ),
i, j = 1, 2,
where the Green's function is defined by
△ g( x ) = - δ^3 ( x ),
with △ being the Laplacian in three dimensions.
We assumed that 𝒦 is diagonal with
positive constants α_ii and
α_ij = 0 for i ≠ j.
For simplicity of notation, let us define
the right-hand sides of the original low-beta reduced MHD as
{ u^i , H }
f^i, i=1,2.
By using the symmetric kernel (<ref>),
the double bracket can be calculated explicitly, resulting in the following SA equations:
∂ U/∂ t =
[ U, φ̃ ] + [ ψ, J̃ ]
- ε ∂ J̃/∂ ζ,
∂ ψ/∂ t =
[ ψ, φ̃ ] - ε ∂ φ̃/∂ ζ,
where the artificial advection fields are defined by
φ̃( x )
:= α_11 ∫_𝒟 d^3 x'
g( x, x' ) f^1( x' ),
J̃( x )
:= α_22 ∫_𝒟 d^3 x'
g( x, x' ) f^2( x' ).
As we observe,
the advection fields in Eqs. (<ref>)
and (<ref>)
are the artificial fields φ̃ and J̃,
which replace the original advection fields φ
and J of low-beta reduced MHD.
Because of the property of the Poisson tensor,
the Casimir invariants are automatically preserved.
Note that the formulation becomes simpler if we choose Dirac's delta
function instead of the Green's function in the kernel 𝒦.
However, in our experience, it is less stable numerically.
The kernel with the Green's function can suppress growth of
fine-scale structure.
§.§ m/n = 2/1 perturbation
We take as a given equilibrium a cylindrically symmetric state
with a safety factor profile
q(r) = q_0 / ( 1 - r^2/2 ) with q_0 = 1.75
<cit.>.
This equilibrium has a q=2 surface at r=1/2 and no plasma rotation.
This equilibrium is known to be linearly stable against m=2 and n=1
ideal MHD modes.
A series of m = 2 and n = 1 perturbations was generated in a
dynamically accessible (Casimir preserving) manner <cit.>.
Even if we substitute arbitrarily chosen advection fields
into Eqs. (<ref>)
and (<ref>), ones that
are different from and J̃
defined in Eqs. (<ref>)
and (<ref>),
the Casimir invariants are still preserved because of the property of the
Poisson tensor.
Therefore, we use the following advection fields to generate the dynamically
accessible perturbations:
φ̃(r, θ, ζ)
=
A_φ r (1-r)
e^{-( (r-r_0)/L )^2} sin(m θ - n ζ),
J̃(r, θ, ζ)
=
A_J r (1-r)
e^{-( (r-r_0)/L )^2} cos(m θ - n ζ),
where A_φ, A_J, r_0 and L are constants.
The poloidal and toroidal mode numbers are m = 2 and n = 1, respectively.
A case with A_φ = A_J = 10^-3, r_0 = 0.5 and L = 0.1
was shown in Fig. 2 of <cit.>.
The initial condition chosen for generating the series of dynamically accessible
perturbations was the cylindrically symmetric equilibrium.
The time evolution generates the series of helically perturbed states that
are on the same Casimir leaf as the equilibrium.
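For reference, these advection fields are straightforward to evaluate numerically; the short sketch below does so for the m/n = 2/1 case with A_φ = A_J = 10^-3, r_0 = 0.5 and L = 0.1 quoted above. The grid resolution and the variable name Lw (used for the width L) are our own choices.

import numpy as np

A_phi, A_J, r0, Lw, m, n = 1e-3, 1e-3, 0.5, 0.1, 2, 1

def phi_tilde(r, theta, zeta):    # advection field for the vorticity equation
    return A_phi * r * (1 - r) * np.exp(-((r - r0) / Lw)**2) * np.sin(m * theta - n * zeta)

def J_tilde(r, theta, zeta):      # advection field for the flux equation
    return A_J * r * (1 - r) * np.exp(-((r - r0) / Lw)**2) * np.cos(m * theta - n * zeta)

# poloidal cross-section at zeta = 0
r, theta = np.meshgrid(np.linspace(0, 1, 101), np.linspace(0, 2 * np.pi, 65), indexing="ij")
p = phi_tilde(r, theta, 0.0)
print(p.min(), p.max())           # perturbation localized around r = r0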
As noted, the equilibrium is linearly stable. Therefore, we expected
that SA would
recover the cylindrically symmetric equilibrium at least if the given
perturbation is small enough.
In fact, we observed that the perturbation amplitude became smaller as
the total energy of the system was decreased by SA <cit.>.
We tried some initial perturbations with different ratios of kinetic to
magnetic energies, and we observed that the perturbation tended to disappear in
all cases, i.e., the dynamics relaxes to the equilibrium. However, the disappearance of the velocity perturbation took
long simulation time, even if we applied an acceleration method which
will be explained in Sec. <ref>.
For an equilibrium without the q = 2 surface, when q_0 = 2.5 for
example, the perturbation also tended to disappear.
However, since the simulation was performed without the to be explained acceleration technique,
the damping of the velocity part was very slow.
The magnetic part, on the other hand, disappeared quickly.
We have also tried SA for an unstable equilibrium.
The safety factor was the same as the equilibrium introduced
above, but equilibrium poloidal rotation was introduced, according to
v_θ(r)
=
v_θmax ( (α + 1)^{α + 1}/α^α )
r ( 1 - r )^α,
where α is a positive parameter.
A radial profile with v_θ max = 0.01 and α = 3
were shown in Fig. 12 of <cit.>.
This equilibrium is linearly unstable against centrifugal instability.
We performed SA, which monotonically decreased the total energy of the
system. In the course of this evolution, the amplitude of the perturbation grew as time
proceeded. The time evolution of the total energy
and the radial profiles of the perturbation were shown in Figs. 15 and
16 of <cit.>, respectively.
§ TOROIDAL EQUILIBRIA
We have applied SA for high-beta reduced MHD in axisymmetric
toroidal geometry <cit.>.
Section <ref> introduces the evolution
equations of SA, while Sec. <ref>
describes some numerical results.
§.§ Formulation
The symmetric kernel for the double bracket was assumed to be the same
as in Sec. <ref>;
it was diagonal with positive coefficients.
The advection fields were φ̃
in Eq. (<ref>),
J̃ in Eq. (<ref>),
and
h̃( x )
:= α_33 ∫_𝒟 d^3 x'
g( x, x' ) f^3( x' ),
where
f^3 := { P, H }.
Then the evolution equations of SA read
∂ U/∂ t =
[ U , φ̃ ]
+ [ ψ , J̃ ] - ε ∂ J̃/∂ ζ
+ [ P, h̃ ],
∂ ψ/∂ t =
[ ψ , φ̃ ] - ε ∂ φ̃/∂ ζ,
∂ P/∂ t =
[ P, φ̃ ].
Again, the form of the equations are the same as
the original high-beta reduced MHD
Eqs. (<ref>)–(<ref>), but with the
advection fields replaced by the artificial ones.
And, the Casimir invariants are preserved, while the energy of the
system monotonically decreases by the time evolution.
§.§ Large-aspect-ratio, circular-cross-section tokamak equilibrium
For calculating axisymmetric equilibria,
Fourier components with the toroidal mode number n = 0 only were
retained in the simulation.
The initial condition had concentric magnetic surfaces.
The safety factor profile was
q(r) = q_0 / ( 1 - r^2/2 ) with q_0 = 1.75.
The pressure profile was assumed to be P(r) = β_0 ( 1 - r^2).
Here, the central beta was defined by
β_0 := 2 μ_0 p_0 / B_0^2,
where μ_0 is the vacuum permeability, p_0 is the pressure at the
magnetic axis, and B_0 is the typical magnitude of the magnetic field.
These profiles were plotted in Fig. 1 of <cit.>.
The central beta was taken to be β_0 = 0.1%, 0.5%,
and 1%. Zero poloidal velocity was assumed. As the time proceeded, the total energy of the system successfully
decreased, and the stationary states were obtained.
The time evolution of the energy was shown in Fig. 2 of
<cit.>.
The flux surfaces of the obtained equilibria showed
the Shafranov shift as seen in Figs. 3 and 4 of
<cit.>.
The distance of the magnetic axis shift was compared with the analytic
theory based on the large-aspect-ratio expansion.
Since the analytic theory includes the toroidicity even if beta is
zero, the finite Shafranov shift remains even at zero beta.
On the other hand, since the toroidicity drops out completely in
high-beta reduced MHD, the Shafranov shift was smaller than that
of the analytic theory for all three beta values examined.
However, the results showed reasonable agreement in the increment of the
shift as the beta was increased.
In <cit.>,
some equilibria with poloidal plasma rotation were also calculated by
SA.
This is an advantage of SA; we just need to solve an initial-value
problem for a given initial condition.
The resultant stationary states can have plasma rotation.
The initial poloidal velocity was assumed to have a profile
v_θ(r) = 4 v_θmax r ( 1 - r ) with
a constant v_θmax.
The Shafranov shift was shown to increase quadratically in the rotation
velocity <cit.>. The quadratic dependence was
explained by a mapping
between an equilibrium without plasma rotation and poloidally rotating
equilibrium.
§.§ Toroidally-averaged stellarator equilibrium
Dynamics of toroidally-averaged stellarator plasmas are governed by
equations of the same form as the high-beta reduced MHD.
Numerical results of the obtained equilibria were compared with the
results of a previous study on Heliotron E <cit.>.
We obtained reasonable agreement, although our results did not
completely overlap the previous results.
The difference has several reasons, e.g., our SA calculation could not impose the net
toroidal current free condition on each magnetic surface, which was
imposed in the previous study. This may be overcome by using DSA.
§ HELICALLY DEFORMED EQUILIBRIA
In the present section we show some numerical results where SA leads
to helically deformed equilibria in cylindrical geometry.
Section <ref> shows
a case of internal kink mode like deformation with m = 1 and n = 1,
and Sec. <ref> shows
a case with m = 2 and n = 1, where a sheared poloidal rotation was
assumed in the equilibrium.
§.§ m/n = 1/1 deformation
For this case we performed SA with a safety factor profile
q(r) = q_0 / ( 1 - r^2/2 ) with q_0 = 0.75.
A q=1 surface exists at r=1/2 in this case.
The equilibrium plasma rotation was assumed to be zero.
This equilibrium is neutrally stable against ideal internal kink modes.
Dynamically accessible perturbations were generated
as in Sec. <ref>.
The advection fields were given by
Eqs. (<ref>)
and
(<ref>)
with m = 1 and n = 1.
In the numerical results shown in
the present section,
r_0 = 0.5 and L = 0.1 were used.
The other parameters A_φ and A_J
were given so as to control the ratio of the perturbed kinetic and
magnetic energies.
The initial condition for generating the dynamically accessible
perturbation was the cylindrically symmetric equilibrium
introduced in the previous paragraph.
A numerical example is presented below, where the initial condition for SA
is shown in
Fig. <ref>.
This initial condition corresponds to A_φ = 10^-3
and A_J = 2 × 10^-2.
The perturbed kinetic energy is about 0.01 times the perturbed magnetic
energy at t = 0.
Time evolution of the energy by SA is shown in
Fig. <ref>.
Kinetic and magnetic energies decrease monotonically and reach their
stationary values. Note that the horizontal axis is a log scale in
each figure.
Figure <ref>
shows radial profiles of
U_-1/1, φ_-1/1, ψ_-1/1 and
J_-1/1
at t = 0, 10000, 30000 and 50000.
The other phase components of
U_-1/1, φ_-1/1, ψ_-1/1 and
J_-1/1, as well as higher (m,n) modes, were
almost zero.
The damping of the velocity part was slow, as was the case of
Sec. <ref>,
even though the acceleration method, to be explained in
Sec. <ref>,
was used. Although the vorticity U_-1/1 still remains finite,
the stream function φ_-1/1 almost disappears.
The magnetic part remains almost unchanged after t > 10^4, and
is finite at the stationary state.
The final state has a structure similar to that of an internal kink mode,
although it may be difficult to observe since the amplitudes at the
stationary state are much smaller than the initial amplitudes.
The magnetic flux function ψ_-1/1 has a finite amplitude
at r < 1/2, and zero at r > 1/2.
Also, the current density J_-1/1 has a spiky structure around
r = 1/2. This is typical of the internal kink mode.
We have performed SA with different initial conditions where
(i) the perturbed kinetic energy is 100 times the perturbed
magnetic energy, and
(ii) the perturbed kinetic and magnetic energies are almost the same.
In all cases examined, we obtained helically deformed equilibria.
The spatial structures were similar to the internal kink mode.
We have also performed SA with a different equilibrium with q_0 = 1.1, which has
no q = 1 surface inside the plasma.
The initial conditions for SA were generated by using the
advection fields
Eqs. (<ref>)
and
(<ref>)
with m = 1 and n = 1.
We generated three initial conditions for SA:
(i) the perturbed kinetic energy was 100 times the perturbed
magnetic energy,
(ii) they were almost same,
(iii) perturbed kinetic energy was 0.01 times the perturbed
magnetic energy.
In all these cases, the perturbation went away as the total energy of
the system decreased monotonically by SA.
§.§ m/n = 2/1 deformation
Another example of helically deformed equilibrium
with m = 2 and n = 1 structure is shown in this
section.
Here, we consider the cylindrically symmetric equilibrium with the same
q profile as in Sec. <ref>, where the q = 2 resonant surface exists at r = 1/2.
We assumed a sheared poloidal rotation velocity
v_θ (r) = 8 v_θs r^3.
The poloidal rotation velocity at the resonant surface
is v_θs.
A poloidal rotation velocity profile with v_θs = 0.003 is
shown in Fig. <ref>,
together with the q profile.
Figure <ref>
shows the spectral stability of the equilibria with the sheared poloidal
rotation.
The horizontal axis is v_θs, and the vertical axis is
the linear growth rate.
In the figure, “RMHD(ideal)” denotes the linear growth rates obtained by
the spectral analyses of the linearized ideal low-beta reduced MHD.
We observed that the equilibria are stable even with a finite rotation
velocity with 0 ≤ v_θs≤ 0.003.
Also in Fig <ref>,
“SA” denotes the linear growth rates by the spectral analyses of
the linearized SA equations
(<ref>)
and (<ref>),
although the symmetric kernel was taken to be diagonal and
𝒦_ii ( x', x'' )
=
δ^3( x' - x'' ),
i = 1, 2
for simplicity.
The linearized SA equation shows instability at finite
v_θs.
This indicates that the equilibria with the sheared poloidal rotation
are not energy minima.
We have performed SA with an initial condition that is a summation of
the cylindrically symmetric equilibrium and a dynamically accessible
perturbation with m = 2 and n = 1.
For generating the dynamically accessible perturbation,
we used the advection fields of
Eqs. (<ref>)
and
(<ref>)
with m = 2, n = 1, r_0 = 0.8 and L = 0.1.
Then the initial perturbation has larger amplitudes around
r = 0.8. This was because the eigenmode structure of the
linearized SA equation has larger amplitudes at larger radii.
Then the relaxation by SA to a stationary state can occur in a shorter
simulation time.
Figure <ref> shows time evolution of energy.
Both kinetic and magnetic energies decreased monotonically and reached their
stationary values. Note that the horizontal axis is a log scale in
each figure.
Figure <ref>
shows time evolution of the radial profiles of
U_-2/1,
φ_-2/1,
ψ_-2/1,
J_-2/1,
at t = 0, 100, 1000 and 10000.
The real parts of both the velocity and the magnetic parts were
initially finite.
However, they almost disappeared.
On the other hand, the imaginary parts of the velocity and the magnetic
parts appear to be generating some structure with finite amplitudes.
The structure did not change significantly after t > 100,
although the amplitudes were still getting larger slowly on the long
time scale.
§ SUPER-ALFVÉNIC EQUILIBRIA
When applying SA to low- or high-beta reduced MHD,
the total energy of the system is minimized in order to reach a stationary state
with a smooth spatial structure.
On the other hand, when applying SA to two-dimensional Euler flow,
the total energy of the system is maximized to reach a stationary
state with a smooth spatial structure <cit.>.
If the energy is minimized in the two-dimensional Euler flow by SA,
the system approaches a state called Kelvin's sponge
<cit.>.
The numerical results shown in
Secs. <ref>, <ref>, and <ref>
were obtained by minimizing the total energy of the system
by SA for reduced MHD.
In most of these numerical cases, plasma flow was absent.
Even in the case with finite plasma flow,
the flow velocity was small compared to the Alfvén velocity, so that the
system was dominated by magnetic energy. Thus, the question arises of what will happen if we perform SA to minimize
the total energy when the kinetic energy is
comparable to or even larger than the magnetic energy.
Some examples of this case were shown in <cit.>,
where SA was performed for low-beta reduced MHD in a doubly-periodic
rectangular domain.
Figure 5 of <cit.> shows the time evolution
of U, , ψ and J for a case with comparable kinetic and
magnetic energies. We observed that fine spatial structures in U and
J remained, although the system seemed to be trying to generate a smooth and symmetric
circular spatial structure, such as that reached in the sub-Alfvénic case
of Fig. 8 of <cit.>. Figures 11 and 14 of <cit.> show the time evolution
of U, φ, ψ and J for super-Alfvénic cases.
The case of Fig. 14 had larger ratio of the kinetic energy to the
magnetic energy. In both cases, we observed fine spatial structures.
This indicates that the system behaved more like a two-dimensional
neutral fluid.
§ EQUILIBRIUM WITH MAGNETIC ISLANDS
As explained in Sec. <ref>,
the double bracket dynamics of SA preserves all the Casimir invariants.
Therefore, the magnetic field topology is also preserved.
If there is no magnetic island in the initial condition for SA,
then theoretically magnetic islands should never appear.
We tried an initial condition with magnetic islands, and
obtained an equilibrium with magnetic islands by SA of low-beta
reduced MHD in cylindrical geometry <cit.>.
The initial condition was a sum of a cylindrically symmetric
equilibrium and a small-amplitude helical perturbation.
The safety factor q profile of the equilibrium was the
same as the one in Sec. <ref>;
the q profile is monotonic and
there exists a q=2 surface at r = 1/2.
Equilibrium flow was absent and the helical perturbation had Fourier mode numbers m=2 and n=1.
The radial profiles of ψ_mn and the Poincaré plots of
magnetic field lines on a poloidal cross section
at the stationary state are shown in Fig. <ref>.
The initial ψ_-2/1 is also plotted in
Fig. <ref>subfig:LBRMHDSA-island-r-psi.
The value of ψ_mn at the q = 2 resonant surface at r = 1/2 did
not change during the time evolution of SA because of preservation of
Casimir invariants. Therefore, the island width did not change from the
initial condition.
§ ACCELERATED RELAXATION
For both equilibrium and stability calculations, SA solves an initial-value problem.
Generally, the computations are time consuming, especially as one gets near the energy
minimum. Therefore, accelerated relaxation to the stationary state is
indispensable for SA to be practically useful. We have examined two methods for acceleration, which are explained in
this section. Section <ref> explains
the first method, where time dependence was introduced in the kernel
when defining the double bracket in Eq. (<ref>).
This certainly had an acceleration effect.
The other method is explained in
Sec. <ref>,
where the original Hamiltonian dynamics was added to the SA dynamics.
We observed both acceleration and deceleration of relaxation using this method.
Further examination of methods for acceleration are under investigation.
§.§ Time dependent kernel
In <cit.>,
we found that the magnetic energy decreases quickly, while the kinetic
energy changes over a significantly longer time scale. This is the reason why the
system requires a long time to approach a stationary state.
Therefore, it seemed better to find a relaxation path such that
the kinetic and magnetic energies decrease at comparable rates.
The idea was to introduce time dependence in the symmetric kernel.
Explicitly, we controlled the magnitudes of
α_ii in Eqs. (<ref>)
and (<ref>)
so that φ̃ and J̃ become comparable.
If f^i, the right-hand side of the original low-beta
reduced MHD equations, is small (large), then
α_ii is changed to a larger (smaller) value
at each time step. This method successfully accelerated the relaxation to the stationary
state, although it may still be possible to improve how the time
dependence is implemented.
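The published prescription is not reproduced here, but a rule with the stated qualitative behaviour can be as simple as the following sketch, in which each α_ii is rescaled at every step inversely to the current magnitude of the corresponding f^i. The reference amplitude, the rms norm, and the function name are our own assumptions, not the prescription actually used in the reviewed papers.

import numpy as np

def update_alphas(f_fields, alpha_ref=1.0, eps=1e-12):
    """One plausible rescaling rule (an assumption, not the published one):
    take alpha_ii inversely proportional to the rms of the corresponding f^i,
    so the artificial advection fields stay comparable in magnitude."""
    norms = np.array([np.sqrt(np.mean(f**2)) for f in f_fields])
    return alpha_ref / (norms + eps)

# usage at each time step, with f1 = {U, H} and f2 = {psi, H} evaluated on the grid:
# alphas = update_alphas([f1, f2]); phi_tilde and J_tilde are then built with these alphas.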
§.§ Addition of Hamiltonian dynamics
We have also examined whether the relaxation can be accelerated if we add the
original Hamiltonian dynamics to the SA dynamics, which uses the double
bracket. In Sec. <ref>,
we observed that the time required to approach the stationary state did not
differ significantly with the inclusion of the Hamiltonian dynamics in the
case of the toy model mimicking low-beta reduced MHD.
In the low-beta reduced MHD case, on the other hand,
we found that the relaxation could be either accelerated or decelerated
<cit.>.
Recall
f^i := { u^i , H }
in
Eq. (<ref>),
and define
f̃^i := (( u^i , H )),
which gives the right-hand sides of the evolution equations of SA with the
double bracket.
In the simulation results shown here,
the symmetric kernel was chosen to be diagonal, and the coefficients
α_ii were taken to be constant during the simulations.
Then the mixed dynamics was generated by
u^i/ t
=
f̃^i + c f^i.
The parameter c is a constant
representing the ratio of the Hamilton dynamics to the SA dynamics.
When c < 0, the time-reversed Hamiltonian dynamics is added to the SA
dynamics. Pure SA dynamics corresponds to c = 0.
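In code, the modification relative to pure SA is a one-line change of the right-hand side, sketched below with f and f_tilde standing for user-supplied evaluations of the original Hamiltonian dynamics and of the double-bracket dynamics, respectively (hypothetical function names).

def mixed_rhs(u, c, f, f_tilde):
    """du/dt = f_tilde(u) + c*f(u): double-bracket SA plus c times the original
    Hamiltonian dynamics; c = 0 recovers pure SA, c < 0 adds time-reversed dynamics."""
    return f_tilde(u) + c * f(u)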
Figure <ref> shows the time
evolution of SA for the same equilibrium presented in
Sec. <ref>;
the equilibrium with the monotonic q profile with the q = 2 surface at
r = 0.5 and without plasma rotation.
The initial perturbations were dynamically
accessible; i.e., they were generated by the advection fields
(<ref>)
and
(<ref>),
where m = 2, n = 1, r_0 = 0.5, and L = 0.1.
In Fig. <ref>subfig:LBRMHDSA-additionHamiltonianDynamics-t-E-1,
the initial condition for SA were generated with
A_ = A_J = 10^-3,
for which the perturbed kinetic energy was much smaller than the perturbed
magnetic energy at the initial time.
On the other hand,
in Fig. <ref>subfig:LBRMHDSA-additionHamiltonianDynamics-t-E-2,
the initial condition for SA were generated with
A_ = 10^-4 and A_J = 2 × 10^-1,
for which the perturbed kinetic energy was much larger than the perturbed
magnetic energy at the initial time.
In Fig. <ref>subfig:LBRMHDSA-additionHamiltonianDynamics-t-E-1,
we observe that the relaxation was fastest when c = 0.
The addition of the Hamiltonian dynamics decelerated the relaxation.
Moreover, the sign of c did not generate a visible difference in the
time evolution of the total energy. On the other hand,
in Fig. <ref>subfig:LBRMHDSA-additionHamiltonianDynamics-t-E-2,
we observe that the relaxation was slowest when c = 0.
As explained in Sec. <ref>, the relaxation to a
stationary state becomes considerably slow when the initial perturbation
has a large kinetic energy.
Addition of the Hamiltonian dynamics significantly accelerated the
relaxation.
The time evolution of the total energy differed slightly
depending on the sign of c for the same magnitude; however, the
difference was not significant.
We observed that the time evolution of the kinetic energy
may be a key to understanding what causes the relaxation to be accelerated or
decelerated. This issue is still under investigation.
§ DISCUSSION
An issue to be clarified was raised in Sec. <ref>
regarding the equilibria with large plasma flow velocities.
In the context of magnetically confined fusion plasmas,
it may be unusual to have a super-Alfvénic flow velocity.
However, if we do need to calculate an equilibrium with a super-Alfvénic
flow velocity, we may perform SA maximizing the total energy of the
system to obtain an equilibrium with smooth spatial structure,
according to the results in <cit.>.
On the contrary, when the kinetic and magnetic energies are
comparable, we do not know whether the total energy should be minimized
or maximized to obtain a stationary state by SA.
As explained in Sec. <ref>,
accelerated relaxation is especially important if SA is utilized to
obtain a stationary state of a Hamiltonian system with infinite degrees
of freedom because SA requires solving an initial value problem.
In Sec. <ref>,
we explained that the relaxation can be accelerated by introducing
time dependence in the kernel of the double bracket.
The key was to control the advection fields to have comparable
magnitudes. However, we have to determine what magnitudes are appropriate.
If the magnitudes are too large, the time evolution likely becomes
numerically unstable. If we have a numerically more stable algorithm
for the time evolution, the magnitudes can be larger.
Normally, such algorithms use implicit methods, which require iteration
to solve nonlinear equations. For these, an efficient preconditioning is required to realize a
large time step, which is an advantage of implicit methods.
SA can be applied to any Hamiltonian system; hence, a natural future step might be to apply it to the full MHD system in toroidal
geometry. Then we may be able to calculate an MHD equilibrium with
magnetic islands and/or even magnetic chaos.
In such a case, it may be important to recognize on which Casimir leaf the
equilibrium exists. Since the Casimir invariants do not change during
the time evolution of SA, we need to adjust the values of the Casimir
invariants of the initial condition for SA.
It was demonstrated that we could adjust the values of the Casimir
invariants of the initial condition for two-dimensional Euler flow
and the low-beta reduced MHD in <cit.>.
This adjustment method will be useful when applying SA to full MHD, or
even kinetic models that are Hamiltonian.
Regarding numerical stability,
spatial discretization methods should be also important in addition to
the time integration methods.
The numerical results introduced in this paper on the reduced MHD systems
in cylindrical and toroidal geometry
used second-order central differences in the radial direction
and Fourier decomposition in the poloidal and the toroidal directions
for all variables equally. It should be advantageous to implement the
discretization based on finite element exterior calculus
<cit.> for improving numerical stability.
Such improved numerical stability may enable us to obtain another
equilibrium by SA when an equilibrium is unstable.
As explained in Sec. <ref>,
SA succeeded in identifying a linearly unstable equilibrium.
However, after the initial growth of the helical perturbation,
a spiky behavior appeared in the radial profile of the variables.
Therefore, the time evolution of SA was stopped.
Although it is unclear whether such spiky behavior is because of
physics or is a numerical artifact, it is anyway better to adopt numerically
stable algorithms.
Another future possibility is to explore the calculation of free boundary equilibria. The numerical results explained in the paper were all
obtained under fixed boundary conditions, except for the doubly-periodic
boundary condition in the two-dimensional rectangular domain in
Sec. <ref>. This is certainly possible theoretically.
§ SUMMARY AND CONCLUSIONS
Simulated annealing (SA) is a method for obtaining equilibria and analyzing stability
of Hamiltonian systems.
Starting from any Hamiltonian system,
an artificial dynamics is derived that monotonically changes the total
energy of the system, while preserving all the Casimir invariants.
This is accomplished by using the double bracket obtained from the Poisson bracket.
By solving an initial-value problem of the artificial dynamics, the
system may reach a state with a stationary energy that is an equilibrium.
If the energy is minimized or maximized, the equilibrium is stable from an energy
standpoint.
This paper reviewed Hamiltonian structure, formulation of SA, and
described numerical demonstrations of SA for
some Hamiltonian systems of both finite and infinite degrees of freedom.
The numerical results for reduced MHD systems, obtained by double bracket SA, included cylindrical as well as axisymmetric
toroidal equilibria, linear stability, helically deformed equilibria,
flowing equilibria, and equilibria with magnetic islands.
We also explained the importance of accelerated relaxation,
and introduced two methods for doing so, although one of the methods is
still under investigation. Some issues for future work were also discussed.
We hope that this paper succeeded in sharing interesting aspects of SA,
and revealing how SA can be applied to many other Hamiltonian systems. Then, the theoretical and practical use of SA might be further developed in the future.
Acknowledgements
M.F. was supported by JSPS KAKENHI (Grants No. JP15K06647, No. JP21K03507 and
No. JP24K06993), while P.J.M. was supported by the United States Department of Energy (Grant No. DE-FG02-04ER54742).
Both authors would like to acknowledge the JIFT program for the
support of M.F. to visit IFS in the Spring of 2019 when a portion of this
work was carried out.
M.F. is grateful for discussions with Y. Chikasue, Takahiro Watanabe,
K. Goto, and K. Ichiguchi. Lastly, we sincerely appreciate the late Emeritus Professor Robert
L. Dewar for encouraging us to write this RMPP paper.
Data availability
The data that support the findings of this study are available from the
corresponding author upon reasonable request.
§ DECLARATIONS
Conflict of interest
Authors state that there is no conflict of interest.
§ DETAILED EXPLANATION OF EQ. (<REF>)
First, we recognize that the arguments of the right-hand side of
Eq. (<ref>)
are functionals, i.e., mappings from functions to real numbers.
The Hamiltonian is the functional defined by Eq. (<ref>), which yields
a number as a result of spatial integration.
The vorticity U can also be interpreted as a functional given by
U( x_0 , t )
=
∫_𝒟 d^2 x δ^2 ( x - x_0 ) U ( x , t ),
where δ^2( x ) is the two-dimensional Dirac delta function.
The spatial integration gives us the value of U at x = x_0
and time t.
In order to evaluate
Eq. (<ref>)
or
Eq. (<ref>) when Eq. (<ref>) is inserted,
we need \delta U( \bm{x}_0 , t ) / \delta U( \bm{x} , t ).
This is obtained through
\delta U( \bm{x}_0 , t )
= \lim_{\epsilon \to 0} \frac{1}{\epsilon} \int_{\mathcal{D}} \mathrm{d}^{2}x \, \delta^{2}( \bm{x} - \bm{x}_0 ) \Bigl( \bigl( U( \bm{x} , t ) + \epsilon \, \delta U( \bm{x} , t ) \bigr) - U( \bm{x} , t ) \Bigr)
= \int_{\mathcal{D}} \mathrm{d}^{2}x \, \delta U( \bm{x} , t ) \, \delta^{2}( \bm{x} - \bm{x}_0 ),
which implies
\frac{\delta U( \bm{x}_0 , t )}{\delta U( \bm{x} , t )} = \delta^{2}( \bm{x} - \bm{x}_0 ).
On the other hand,
\delta H[ U ]
= \int_{\mathcal{D}} \mathrm{d}^{2}x \, \bm{v}( \bm{x} , t ) \cdot \delta \bm{v}( \bm{x} , t )
= \int_{\mathcal{D}} \mathrm{d}^{2}x \, \delta U( \bm{x} , t ) \bigl( - \varphi( \bm{x} , t ) \bigr),
where \delta U( \bm{x} , t ) = \triangle_{\perp} \delta \varphi( \bm{x} , t )
and an integration by parts were used.
Therefore
\frac{\delta H[ U ]}{\delta U( \bm{x} , t )} = - \varphi( \bm{x} , t ).
Then, Eq. (<ref>) reads
\{ U( \bm{x}_0 , t ) , H \}
= \int_{\mathcal{D}} \mathrm{d}^{2}x \, \delta^{2}( \bm{x} - \bm{x}_0 ) \bigl[ - \varphi( \bm{x} , t ) , U( \bm{x} , t ) \bigr]
= \bigl[ U( \bm{x} , t ) , \varphi( \bm{x} , t ) \bigr] \Bigr|_{\bm{x}_0} ,
and Eq. (<ref>)
gives
\frac{\partial U( \bm{x}_0 , t )}{\partial t}
= \{ U( \bm{x}_0 , t ) , H[ U ] \}
= \bigl[ U( \bm{x} , t ) , \varphi( \bm{x} , t ) \bigr] \Bigr|_{\bm{x}_0} .
The evolution equations of low-beta RMHD in two dimensions
(<ref>),
those in cylindrical geometry
(<ref>)
as well as
(<ref>)
for single helicity dynamics,
and those in toroidal geometry
(<ref>)
are understood similarly.
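As a small, self-contained illustration (not part of the paper), the right-hand side [U, φ] obtained above can be evaluated pseudospectrally on a doubly periodic domain. The relation U = Δ_⊥ φ follows the text, while the grid size, the sample vorticity field, and the bracket sign convention are illustrative choices.

import numpy as np

n = 128
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")

kx = np.fft.fftfreq(n, d=1.0 / n)           # integer wavenumbers on a 2*pi-periodic box
KX, KY = np.meshgrid(kx, kx, indexing="ij")
K2 = KX**2 + KY**2
K2[0, 0] = 1.0                               # avoid division by zero for the mean mode

U = np.cos(X) * np.sin(2.0 * Y) + 0.3 * np.sin(3.0 * X)   # sample vorticity field

U_hat = np.fft.fft2(U)
phi_hat = -U_hat / K2                        # invert U = Laplacian(phi)
phi_hat[0, 0] = 0.0

def deriv(f_hat, K):
    # spectral derivative along one direction
    return np.real(np.fft.ifft2(1j * K * f_hat))

# [U, phi] = dU/dx dphi/dy - dU/dy dphi/dx, the advection term appearing in dU/dt
bracket = deriv(U_hat, KX) * deriv(phi_hat, KY) - deriv(U_hat, KY) * deriv(phi_hat, KX)
print("max |[U, phi]| =", float(np.abs(bracket).max()))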
Equation (<ref>) may also need some
explanation. Writing the arguments of the variables and functionals explicitly as
\frac{\partial u^{i}( \bm{x}_0 , t )}{\partial t} = (( u^{i}( \bm{x}_0 , t ) , H[ \bm{u} ] )),
we see that its evaluation, leading to the evolution equations (<ref>), is similar to that for the Poisson bracket above.
The same applies to the Dirac bracket on functionals
(<ref>), leading to (<ref>), and to metriplectic dynamics as given by
(<ref>).
[Arnold et al.2006]Arnold-2006
Arnold, D.N.,
Falk, R.S.,
Winther, R.:
Finite element exterior calculus, homological techniques, and
applications.
Acta Numerica
15,
1–155
(2006)
10.1017/S0962492906210018
[Arnol'd1965]Arnold-1965-1
Arnol'd, V.I.:
Variational principle for three-dimensional steady-state flows of an
ideal fluid.
Journal of Applied Mathematics and Mechanics
29(5),
1002–1008
(1965)
10.1016/0021-8928(65)90119-X
[Bressan et al.2018]Bressan-2018
Bressan, C.,
Kraus, M.,
Morrison, P.J.,
Maj, O.:
Relaxation to magnetohydrodynamics equilibria via collision brackets.
Journal of Physics: Conference Series
1125,
012002
(2018)
10.1088/1742-6596/1125/1/012002
[Bressan2023]Bressan-2023
Bressan, C.:
Metriplectic relaxation for calculating equilibria: theory and
structure-preserving discretization.
PhD thesis,
Technische Universität München
(2023).
<https://mediatum.ub.tum.de/1686142>
[Chikasue and
Furukawa2015a]Chikasue-2015-JFM
Chikasue, Y.,
Furukawa, M.:
Adjustment of vorticity fields with specified values of Casimir
invariants as initial condition for simulated annealing of an incompressible,
ideal neutral fluid and its MHD in two dimensions.
Journal of Fluid Mechanics
774,
443–459
(2015)
10.1017/jfm.2015.263
[Chikasue and
Furukawa2015b]Chikasue-2015-PoP
Chikasue, Y.,
Furukawa, M.:
Simulated annealing applied to two-dimensional low-beta reduced
magnetohydrodynamics.
Physics of Plasmas
22(2),
022511
(2015)
10.1063/1.4913234
[Carnevale and Vallis1990]Carnevale-1990
Carnevale, G.F.,
Vallis, G.K.:
Pseudo-advective relaxation to stable states of inviscid
two-dimensional fluids.
Journal of Fluid Mechanics
213,
549–571
(1990)
10.1017/S0022112090002440
[Flierl and
Morrison2011]Flierl-Morrison-2011
Flierl, G.R.,
Morrison, P.J.:
Hamiltonian–Dirac simulated annealing: Application to the
calculation of vortex states.
Physica D: Nonlinear Phenomena
240(2),
212–232
(2011)
10.1016/j.physd.2010.08.011
[Furukawa and Morrison2017]Furukawa-2017
Furukawa, M.,
Morrison, P.J.:
Simulated annealing for three-dimensional low-beta reduced MHD
equilibria in cylindrical geometry.
Plasma Physics and Controlled Fusion
59(5),
054001
(2017)
10.1088/1361-6587/aa5863
[Furukawa and Morrison2022]Furukawa-2022
Furukawa, M.,
Morrison, P.J.:
Linear stability analysis via simulated annealing and accelerated
relaxation.
Physics of Plasmas
29(10),
102504
(2022)
10.1063/5.0101095
[Furukawa and
Morrison2023a]Furukawa-2023-AAPPS-DPP
Furukawa, M.,
Morrison, P.J.:
Change of relaxation path by inclusion of Hamiltonian dynamics to simulated
annealing of reduced magnetohydrodynamics.
7th Asia-Pacific Conference on Plasma Physics, November 2023, Nagoya, Japan,
F-9-O1
(2023)
[Furukawa and
Morrison2023b]Furukawa-2023-CCP
Furukawa, M.,
Morrison, P.J.:
Effect of inclusion of Hamiltonian dynamics to simulated annealing of reduced
magnetohydrodynamics equilibrium calculations.
Proceeding of CCP2023 - 34th IUPAP Conference on Computational Physics
(accepted on Feb. 12, 2024)
(2023)
[Furukawa et al.2018]Furukawa-2018
Furukawa, M.,
Watanabe, T.,
Morrison, P.J.,
Ichiguchi, K.:
Calculation of large-aspect-ratio tokamak and toroidally-averaged
stellarator equilibria of high-beta reduced magnetohydrodynamics via
simulated annealing.
Physics of Plasmas
25(8),
082506
(2018)
10.1063/1.5038043
[Grad and Rubin1958]Grad-1958
Grad, H.,
Rubin, H.:
Hydromagnetic equilibria and force-free fields.
In: Proceedings of the Second United Nations International Conference
on the Peaceful Uses of Atomic Energy,
vol. 31.
United Nations,
Geneva
(1958)
[Kraus et al.2017]Kraus-2017
Kraus, M.,
Kormann, K.,
Morrison, P.,
Sonnendrücker, E.:
GEMPIC: geometric electromagnetic particle-in-cell methods.
Journal of Plasma Physics
83(4),
905830401
(2017)
10.1017/S002237781700040X
[Kruskal and Oberman1958]KO-1958
Kruskal, M.D.,
Oberman, C.R.:
On the Stability of Plasma in Static Equilibrium.
The Physics of Fluids
1(4),
275–280
(1958)
10.1063/1.1705885
[Lüst and Schlüter1957]Lust-1957
Lüst, R.,
Schlüter, A.:
Axialsymmetrische magnetohydrodynamische
gleichgewichtskonfigurationen.
Zeitschrift für Naturforschung
12A,
850
(1957)
[Morrison and Eliezer1986]pjmE86
Morrison, P.J.,
Eliezer, S.:
Spontaneous symmetry breaking and neutral stability in the
noncanonical Hamiltonian formalism.
Phys. Rev. A
33,
4205–4214
(1986)
10.1103/PhysRevA.33.4205
[Morrison and Greene1980]Morrison-1980
Morrison, P.J.,
Greene, J.M.:
Noncanonical Hamiltonian density formulation of hydrodynamics and
ideal magnetohydrodynamics.
Phys. Rev. Lett.
45,
790–794
(1980)
10.1103/PhysRevLett.45.790
[Morrison and Hazeltine1984]MH84
Morrison, P.J.,
Hazeltine, R.D.:
Hamiltonian formulation of reduced magnetohydrodynamics.
The Physics of Fluids
27(4),
886–897
(1984)
10.1063/1.864718
[Morrison1982]Morrison-1982
Morrison, P.J.:
Poisson brackets for fluids and plasmas.
AIP Conference Proceedings
88(1),
13–46
(1982)
10.1063/1.33633
[Morrison1984]pjm84
Morrison, P.J.:
Bracket formulation for irreversible classical fields.
Physics Letters A
100(8),
423–427
(1984)
10.1016/0375-9601(84)90635-2
[Morrison1986]pjm86
Morrison, P.J.:
A paradigm for joined Hamiltonian and dissipative systems.
Physica D: Nonlinear Phenomena
18(1),
410–419
(1986)
10.1016/0167-2789(86)90209-5
[Morrison1998]Morrison-1998
Morrison, P.J.:
Hamiltonian description of the ideal fluid.
Rev. Mod. Phys.
70,
467–521
(1998)
10.1103/RevModPhys.70.467
[Morrison2017]Morrison-2017
Morrison, P.J.:
Structure and structure-preserving algorithms for plasma physics.
Physics of Plasmas
24(5),
055502
(2017)
10.1063/1.4982054
[Morrison and Updike2024]pjmU24
Morrison, P.J.,
Updike, M.H.:
Inclusive curvaturelike framework for describing dissipation:
Metriplectic 4-bracket dynamics.
Phys. Rev. E
109,
045202
(2024)
10.1103/PhysRevE.109.045202
[Nakamura et al.1993]Nakamura-1993
Nakamura, Y.,
Wakatani, M.,
Ichiguchi, K.:
Low-n Mode Stability Analysis for ℓ=2 Heliotron/Torsatron by
VMEC-STEP Code.
Jounal of Plasma and Fusion Research
69(1),
41–52
(1993)
[Shafranov1958]Shafranov-1958
Shafranov, V.D.:
On magnetohydrodynamical equilibrium configurations.
Soviet Physics JETP
6,
545
(1958)
[Shepherd1990]Shepherd-1990
Shepherd, T.G.:
A general method for finding extremal states of Hamiltonian dynamical
systems, with applications to perfect fluids.
Journal of Fluid Mechanics
213,
573–587
(1990)
10.1017/S0022112090002452
[Sudarshan and Mukunda1974]sudarshan
Sudarshan, E.C.G.,
Mukunda, N.:
Classical Dynamics: A Modern Perspective.
Wiley,
New York
(1974)
[Strauss1976]Strauss-1976
Strauss, H.R.:
Nonlinear, three‐dimensional magnetohydrodynamics of noncircular
tokamaks.
The Physics of Fluids
19(1),
134–140
(1976)
10.1063/1.861310
[Strauss1977]Strauss-1977
Strauss, H.R.:
Dynamics of high β tokamaks.
The Physics of Fluids
20(8),
1354–1360
(1977)
10.1063/1.862018
[Vallis et al.1989]Vallis-1989
Vallis, G.K.,
Carnevale, G.F.,
Young, W.R.:
Extremal energy properties and construction of stable solutions of the
Euler equations.
Journal of Fluid Mechanics
207,
133–152
(1989)
10.1017/S0022112089002533
| Simulated annealing (SA) is a type of relaxation method for Hamiltonian
systems based on an artificial dynamics that uses the
Hamiltonian structure. In usual Hamiltonian dynamics, the energy
(Hamiltonian) is conserved because of the antisymmetry of the
Poisson bracket, while the artificial dynamics of SA is constructed in such a way
that the time evolution changes the energy (Hamiltonian) monotonically. It does this by acting twice with the Poisson bracket and, consequently, SA relaxes to a stationary state of the energy as time progresses.
If the Hamiltonian system is noncanonical, the Poisson bracket possesses
a null space and the null space leads to Casimir invariants that are
conserved during the time evolution for any Hamiltonian.
Because the artificial dynamics of SA is constructed by acting twice with the Poisson
bracket, the Casimir invariants are preserved during the time evolution.
Because SA extremizes the energy on a constant Casimir leaf, which is a
subspace of the phase space of the system defined by the level sets of the
Casimir invariants, it in effect finds a solution of the energy-Casimir variational principle, a variational principle that made its way into the plasma and fluid literature in the early work of <cit.> and <cit.> <cit.>. The equilibria obtained by SA of noncanonical Hamiltonian systems can
have a variety of structure because of the possible variety of Casimir invariants.
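To fix ideas, a schematic form of the double-bracket construction for a finite-dimensional noncanonical system with variables z^i is (the notation is generic, and the sign convention may differ from that adopted later in this paper):
\[
\frac{\mathrm{d}z^{i}}{\mathrm{d}t} = (( z^{i} , H )) , \qquad
(( F , G )) \equiv \{ F , z^{j} \} \, g_{jk} \, \{ z^{k} , G \} ,
\]
with g_{jk} a symmetric, positive-definite kernel. It follows that
\[
\frac{\mathrm{d}H}{\mathrm{d}t} = \{ H , z^{j} \} \, g_{jk} \, \{ z^{k} , H \}
= - \{ z^{j} , H \} \, g_{jk} \, \{ z^{k} , H \} \le 0 ,
\qquad
\frac{\mathrm{d}C}{\mathrm{d}t} = \{ C , z^{j} \} \, g_{jk} \, \{ z^{k} , H \} = 0
\]
for any Casimir invariant C, since \{ C , \cdot \} vanishes identically; choosing the opposite sign of the kernel makes the energy increase instead of decrease, which is why SA can seek either energy minima or maxima on a Casimir leaf.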
The ideal fluid and MHD were shown to be noncanonical Hamiltonian systems by <cit.> <cit.>. Therefore, SA can be used for equilibrium calculations of such systems. Reduced MHD systems are also Hamiltonian systems, as was shown by <cit.>; these will be treated
in this paper explicitly.
Originally, equilibrium calculations by such artificial dynamics were
developed for two-dimensional vortical motion of neutral
fluids in <cit.> and placed in a general
Hamiltonian systems setting in <cit.>.
However, the method of these references is limited and is now known to work only for a small class of equilibria. To correct for this, the method was generalized by <cit.>, where the term “simulated annealing” was introduced, and where it was shown to work for a variety of equilibria.
They developed a double bracket that is constructed from
the Poisson bracket and a definite symmetric kernel.
The Dirac SA (DSA) dynamics was also introduced,
which utilizes a Dirac bracket instead of the Poisson bracket
in the construction of the double bracket.
They presented numerically a variety of non-trivial equilibria of
two-dimensional neutral fluids and two-layer quasigeostrophic flows.
The first application of SA to MHD systems <cit.> was on low-beta reduced
MHD <cit.> in a two-dimensional rectangular domain with doubly
periodic boundary conditions.
Numerical results with several ratios of kinetic energy to the magnetic
energy were presented. It was shown that upon relaxation to
stationary states, fine structure remained when the kinetic energy was comparable to or greater
than the magnetic energy. It was also pointed out that
the relaxation path, i.e., which of kinetic or magnetic energies
decreases earlier, can affect the resultant stationary state.
This subtlety arises because the low-beta reduced MHD has multiple
fields to be relaxed. As explained in the discussion section of the present paper,
each Casimir invariant should be adjusted to have a desired value prior to the time evolution of SA, since the value
does not change during the time evolution. A method for the adjustment was developed in <cit.>.
Next, SA was applied to low-beta reduced MHD in a cylindrical plasma in <cit.>.
By performing SA with an initial condition that is
a sum of a cylindrically symmetric equilibrium and a
small-amplitude helical perturbation accompanying magnetic islands,
an equilibrium with magnetic islands was obtained. In further work, toroidal equilibria were calculated by SA in
<cit.> by using the high-beta reduced MHD model
<cit.>. An example described therein was that of an axisymmetric tokamak equilibrium with a large aspect ratio and a circular cross section. The Shafranov shift was shown to increase
as beta was increased, although the magnitude of the shift did not
fully agree with the analytic theory based on the
large-aspect-ratio expansion. This was because the toroidicity
completely disappears in high-beta reduced MHD, while it remains in
the analytic theory. Some equilibria with poloidal rotation were also calculated by SA, and
examined based on a mapping between such equilibria with poloidal rotation
and static equilibria. Toroidally-averaged stellarator equilibria were also calculated.
Simulated annealing can be used not only for equilibrium calculations
but also for stability analyses <cit.>.
We know that equilibria obtained by SA runs that decrease the total energy
of the system are at least linearly stable, since they are located at energy
minima. However, equilibria that are obtained by other methods,
such as solving the Grad-Shafranov equation
<cit.>,
are not necessarily stable.
Suppose we know such an equilibrium, and we perform SA starting from
an initial condition that is a sum of the known equilibrium and a
small-amplitude perturbation.
If SA recovers the original equilibrium, it is linearly stable.
However, if the perturbation grows during the time evolution of SA, the
equilibrium is not at an energy minimum.
In the numerical demonstration of the stability analyses,
it was shown that the perturbation grows in a short time if the
equilibrium is unstable. On the other hand, SA requires a long time to
recover the original equilibrium if it is stable. Therefore,
accelerated
relaxation is indispensable for SA to be practically useful.
In <cit.>, a method for accelerating the relaxation was
developed by introducing time dependence in the symmetric kernel of the
double bracket.
Another kind of SA based on a metriplectic bracket introduced in <cit.> <cit.> has also been studied extensively in <cit.>.
Metriplectic dynamics is a
combination of the Hamiltonian dynamics and dissipative dynamics.
The dissipative mechanism is realized by a metric bracket.
The metriplectic dynamics was shown to successfully obtain
equilibria of two-dimensional Euler flow, axisymmetric toroidal
equilibria that are a solution to the Grad–Shafranov equation,
and force-free MHD equilibria.
Metriplectic dynamics is also explained in
<cit.>; this paper covers wider topics on geometric
aspects of plasma physics and numerical algorithms for them.
The rest of the present paper is organized as follows.
In Sec. <ref>,
Hamiltonian theory is reviewed for systems of both finite and
infinite degrees of freedom.
It starts from general theory, then proceeds to some examples
such as a free rigid body and the heavy top.
A toy model mimicking aspects of low-beta reduced MHD is also introduced.
For systems with infinite degrees of freedom,
two-dimensional Euler flow, low-beta reduced MHD in both a two-dimensional
rectangular domain and in a cylindrical geometry, and high-beta reduced
MHD are considered.
Then, the theory of SA is explained in Sec. <ref>.
It reviews the double bracket formulation of SA for systems with both
finite and infinite degrees of freedom. SA by metriplectic brackets is also described briefly.
Section <ref> is devoted to some numerical examples of
SA for the heavy top and a toy model mimicking low-beta reduced MHD.
Analyses of equilibrium and stability are also presented.
Sections <ref> to <ref> cover numerical studies of SA for low-beta and high-beta reduced MHD. Section <ref> is on linear stability
analyses using SA, while Sec. <ref> is on the equilibrium
calculations in toroidal geometry.
Section <ref> shows that helically-deformed equilibria can
be obtained by SA of low-beta reduced MHD in cylindrical geometry.
Section <ref> describes our numerical studies of flowing
equilibria in two-dimensional rectangular domain.
An equilibrium with magnetic islands is introduced in
Sec. <ref>.
Two methods for accelerated relaxation are described in
Sec. <ref>.
Section <ref> contains discussion on several issues that remain to
be solved. Finally, our summary and conclusions are given in
Sec. <ref>. | null | null | null | An issue to be clarified was raised in Sec. <ref>
regarding the equilibria with large plasma flow velocities.
In the context of magnetically confined fusion plasmas,
it may be unusual to have a super-Alfvénic flow velocity.
However, if we do need to calculate an equilibrium with a super-Alfvénic
flow velocity, we may perform SA maximizing the total energy of the
system to obtain an equilibrium with smooth spatial structure,
according to the results in <cit.>.
On the contrary, when the kinetic and magnetic energies are
comparable, we do not know whether the total energy should be minimized
or maximized to obtain a stationary state by SA.
As explained in Sec. <ref>,
accelerated relaxation is especially important if SA is utilized to
obtain a stationary state of a Hamiltonian system with infinite degrees
of freedom because SA requires solving an initial value problem.
In Sec. <ref>,
we explained that the relaxation can be accelerated by introducing
time dependence in the kernel of the double bracket.
The key was to control the advection fields to have comparable
magnitudes. However, we have to determine what magnitudes are appropriate.
If the magnitudes are too large, the time evolution likely becomes
numerically unstable. If we have a numerically more stable algorithm
for the time evolution, the magnitudes can be larger.
Normally, such algorithms use implicit methods, which require iteration
to solve nonlinear equations. For these, efficient preconditioning is required to realize a
large time step, which is an advantage of implicit methods.
SA can be applied to any Hamiltonian system; hence, a natural future step might be to apply it to the full MHD system in toroidal
geometry. Then we may be able to calculate an MHD equilibrium with
magnetic islands and/or even magnetic chaos.
In such a case, it may be important to recognize on which Casimir leaf the
equilibrium exists. Since the Casimir invariants do not change during
the time evolution of SA, we need to adjust the values of the Casimir
invariants of the initial condition for SA.
It was demonstrated that we could adjust the values of the Casimir
invariants of the initial condition for two-dimensional Euler flow
and the low-beta reduced MHD in <cit.>.
This adjustment method will be useful when applying SA to full MHD, or
even kinetic models that are Hamiltonian.
Regarding numerical stability,
spatial discretization methods should also be important in addition to
the time integration methods.
The numerical results introduced in this paper on the reduced MHD systems
in cylindrical and toroidal geometry
used second-order central differences in the radial direction
and Fourier decomposition in the poloidal and the toroidal directions
for all variables equally. It should be advantageous to implement the
discretization based on finite element exterior calculus
<cit.> for improving numerical stability.
Such improved numerical stability may enable us to obtain another
equilibrium by SA when an equilibrium is unstable.
As explained in Sec. <ref>,
SA succeeded in identifying a linearly unstable equilibrium.
However, after the initial growth of the helical perturbation,
a spiky behavior appeared in the radial profile of the variables.
Therefore, the time evolution of SA was stopped.
Although it is unclear whether this spiky behavior is physical or a
numerical artifact, it is in any case preferable to adopt numerically
stable algorithms.
Another future possibility is to explore the calculation of free-boundary equilibria. The numerical results presented in this paper were all
obtained under fixed boundary conditions, except for the doubly periodic
boundary condition in the two-dimensional rectangular domain of
Sec. <ref>. Free-boundary calculations are certainly possible theoretically. |
http://arxiv.org/abs/2409.17600v1 | 20240926073220 | Attitudes and perceived effectiveness among first-time online instructors during Covid-19 | [
"Owen Xingjian Zhang"
] | cs.HC | [
"cs.HC"
] |
Senior Thesis
Attitudes and perceived effectiveness among first-time online instructors during Covid-19
Owen Xingjian Zhang
Collaborator: Tiffany Wenting Li
Supervisor: Prof. Karrie Karahalios
< g r a p h i c s >
Department of Computer Science
University of Illinois
Urbana, IL
May 12, 2023
§ ABSTRACT
Researchers have long worked to enhance access to quality education. Compared with traditional face-to-face instruction, online teaching provides pedagogical content to a larger audience in a more flexible education environment. However, while early studies have shed light on online teaching, it is crucial to understand the perspective of instructors who experienced the Covid-19 pandemic and subsequently conducted their first online classes, since the evolving landscape of post-pandemic teaching may bring new emphases and approaches. In addition, previous studies mainly focused on faculty members who volunteered to teach an online course and may have been more open to the idea in the first place. In this study, my colleague Tiffany Wenting Li surveyed university faculty members teaching online for the first time, regardless of whether they volunteered. She first surveyed them when the university began transitioning from in-person to online instruction in April 2020 due to the pandemic. Later, she conducted a follow-up survey as they completed their first online teaching semester.
In the surveys, with my colleague Tiffany, we investigated instructors’ expected level of class success towards online teaching before their first online teaching experience. Using Bayesian modeling, we analyzed how this expectation varied based on instructors’ characteristics (self-efficacy of online teaching, tech-savviness, and technology acceptance) and course features (subject area, class size, and instructional design). We found that instructors' self-efficacy in their first online class had a significant impact on their expectations of success. Furthermore, our findings indicate that smaller class sizes are associated with lower expectations of success. On the other hand, instructional design factors such as physical space and prior use of technology platforms did not show significant contributions to the final expectation of success. Lastly, this study proposes practical recommendations to support online teaching for instructors and school administrators. To increase instructors' self-efficacy, they should collaborate with colleagues and familiarize themselves with online platforms. Universities should organize workshops or training sessions for instructors to enhance their skills. In small-size interactive classes, instructors should utilize nonverbal communication, while universities should establish support teams and implement feedback mechanisms to ensure quality and effectiveness.
§ INTRODUCTION
Online teaching has a long history of growth since 1990 <cit.>. The goal of online teaching is to enhance access to education and address inequality <cit.>. In recent years, online teaching has become increasingly prevalent, particularly during and after the pandemic. Previous studies <cit.> have shown that instructors' attitudes and acceptance significantly impact their approach and performance in online teaching. For instructors who were teaching online for the first time, my colleague Tiffany Wenting Li and I were interested in their attitudes toward online teaching, specifically their expected level of success for their first online class. It is important to note that previous studies may have been affected by self-selection bias, where instructors with more positive attitudes towards online teaching were predominantly included. The transition to online instruction during the Covid-19 pandemic has provided an opportunity to address this gap, as instructors who are teaching online for the first time may not be familiar with or favor online teaching. In this project, with my colleague Tiffany, we aimed to investigate the factors that influence the expected success level of online courses, considering both course features and instructor features. Based on our findings, we will propose recommendations to instructors to ensure quality and effectiveness in online teaching, as well as recommendations to school administrations to provide necessary resources and support to instructors. Since this is a collaborative work, in this thesis, the pronoun "we" collectively refers to both Tiffany Wenting Li and myself.
To study the features, we proposed 2 research questions in this thesis:
RQ1 For instructors without prior online teaching experience, what are their Expected Level of Online Class Success as they transform their in-person course to an online course?
RQ2 How does the Expected Level of Online Class Success vary based on Subject Area (IV1), Class Size (IV2), Course Design (IV3), Instructors' Self-efficacy of Online Teaching(IV4), Tech-savviness(IV5), Technology Acceptance(IV6)?
To answer our research questions, we explored the factors that influence instructors' expectations of success before their first online class. Specifically, we investigated the impact of fields of study, class size, instructional design, self-efficacy of online teaching, tech-savviness, and technology acceptance. By conducting quantitative studies, we found that instructors' self-efficacy plays a crucial role in shaping their expectations of success. Additionally, we investigated the potential influence of class size and instructional design, such as physical space and technology used previously, on success expectations. Through our findings, we provide valuable insights into strategies for school administrations, including enhancing instructors' self-efficacy and clarifying how class size relates to expected success in the context of online teaching.
§ RELATED WORK
§.§ Selection bias and broad studies in online teaching
The majority of related work on instructors' attitudes was conducted on instructors who were already teaching online courses. Such samples could suffer from a selection bias where these instructors were mostly volunteers who already held a more positive attitude than an average instructor. There are a few studies that investigated instructors including those who have not taught an online course before. Ward<cit.> surveyed instructors in four STEM subjects about their attitudes toward online courses, while Vogle<cit.> surveyed instructors in Visual Arts. Corry and O'Quinn<cit.>, Zhen et al. (2008), and Gasaymeh<cit.> investigated factors that impacted an instructor's attitudes toward online courses and the decision of whether to teach an online course or not. The factors included self-efficacy of online teaching technology, perceived usefulness of online teaching, administrative support from the department, time-related challenges, personal experience with online courses, etc. However, these studies looked at their attitudes towards online teaching in general, but not in the context of a specific course they are teaching. Asking them in the context of a specific course 1) allows us to explore how course features (class size, subject area, etc.) impact attitudes, and 2) grounds instructors' responses so that the responses are less imaginary but more realistic.
§.§ Novice professors of online teaching in the post-pandemic era
While the eruption of the pandemic forced the transition to online teaching, it also emphasized the need for professional development and support in effectively delivering online instruction<cit.>. For university instructors, previous studies showed the need for training and support to enhance their online teaching skills, which affects their decisions to teach online or not<cit.>. When the transition is forced or strongly encouraged, it becomes important to understand how to make instructors, especially those who have never taught online classes before, willing to teach online. To support first-time or inexperienced online instructors, many previous studies<cit.> from the early 21st century collected suggestions from authorities and experienced instructors. However, for instructors who are forced to teach online for the first time, suggestions from these prior studies may not be applicable, and they may also be out of date in the 2020s, since post-pandemic teaching may involve different emphases and strategies.
§ METHOD
My colleague Tiffany conducted two surveys for instructors in two universities before and after they taught their first online classes. In April 2020, most universities in the US switched from in-person instruction to online instruction. She recruited instructors from two universities that made the switch and who had no online teaching experience prior to it. The survey was timed such that the instructors had no or very little online teaching experience when they filled in the first survey, and had taught half a semester online by the second survey. In the first survey (the pre-survey), she collected 124 completed responses. After the Spring 2020 semester had ended, which was half a semester after the pre-survey, she sent the second, follow-up survey (the post-survey) to those who completed the pre-survey and agreed to be recruited for a post-survey, and collected 34 completed responses. In this thesis, we focus only on the pre-survey data to answer our research questions.
§.§ Demographics
Participants are instructors from two universities (UIUC and UChicago) across diverse fields of study (Engineering, Business, Language, Natural Science, etc.). There are 124 participants in total. Among them, 59 are women (47.6%), 62 are men (50%), and 3 did not disclose their gender (2.4%). In addition, 76 are from UIUC (61.3%), 46 are from the University of Chicago (37.1%), and 2 refused to answer (1.6%). For the ages of participants, most of them are in the range of 25-75 (98.4%, 122 participants out of 124). Specifically, there is 1 participant aged below 25, 15 participants aged 26-35, 26 participants aged 36-45, 33 participants aged 46-55, 33 participants aged 56-65, 15 participants aged 66-75, and 1 participant aged over 75.
In addition, from these 124 participants, two-thirds of them define "Professor" as their job title (101, 66%), while the rest are Associate Professors(28, 22.6%), Assistant Professors(19, 15.3%), Lecturer(12, 9.7%), Instructor(3, 2.4%), Graduate Student(1, 0.8%), and Others(6, 4.8%). For their years of teaching experience in higher education, participants are concentrated in the 10-30 year interval. In detail, 11 participants(8.9%) have experiences in less than 5 years, 17 participants(13.7%) have experiences in 5 to 9 years, 32 participants(25.8%) have experiences in 10 to 19 years, 31 participants(25%) have experiences in 20 to 29 years, and 33 participants(26.6%) have experiences in over 30 years.
She also asked participants to describe the level of online teaching training they had received prior to their first online teaching experience. The results showed that many participants had no training at all, and the number of participants at each training level generally decreased as the training level increased. From the collected responses, 48 participants (38.7%) described their training as "No training at all", 32 participants (25.8%) as "Episodic informal training (e.g., reading a blog post)", 23 participants (18.5%) as "3 or fewer hours of formal introductory training over time", 12 participants (9.7%) as "More than 3 hours of formal introductory training over time", 2 participants (1.6%) as "3 or fewer hours of formal advanced level training over time", and 7 participants (5.6%) as "More than 3 hours of formal advanced level training over time".
Based on the department they are affiliated with, we grouped them into 6 fields of study. The following table shows the distribution by fields of study:
§.§ Pre-survey Design
Our pre-survey included questions on the following themes:
* Eligibility
For the first two questions, the survey checked the participants' eligibility for this study. In the first question of the survey, the survey informed participants about the purpose of the study, which is to collect first-time online instructors' attitudes towards online teaching, and explained that their participation would involve a 12-18 minute online survey. In the second question, the survey asked if the university's teaching modality switch in April 2020 was their first time teaching a course fully online. Only if they confirmed the first question and selected "yes" in the second question were they directed to the later parts.
* In-person Course Description
First, the survey asked about the details of the course they were teaching in the following dimensions: learning objective, affiliated department, class size, targeted students, and grade options.
* In-person Course Component
Then, the survey asked about the components involved in the in-person version of the course. The survey first asked, in the in-person version of the course, how much time was spent on instructor-student interaction and peer interaction in a typical lecture session, a typical lab or studio session, and a typical fieldwork or field practicum session respectively. Then, the survey asked what assessment methods, such as take-home exams and class participation, counted towards the final grade. It also asked what parts of the in-person course were conducted online and what technology artifacts or platforms were used in the in-person version of the course.
* Initial Attitudes
Next, the survey asked participants to look back and answer questions in this part of the survey as if they were just beginning to plan for the online course. The questions elicited their perceived expected usefulness of their online course, perceived expected ease of use for their online course, intention to teach their course online in the future, and their emotional attitudes towards online teaching.
* Demographic
In this section, the survey asked for their personal information, including gender, age, teaching years, job title, level of training in online teaching, affiliated department, and affiliated university.
* Interest in a Follow-up Study
At the end of the survey, the survey asked if they were interested in continuing to participate in the research study for a post-survey and to provide their emails if they wanted.
§.§ Variables of Interest
Based on our research questions and the survey questions, we first defined variables of interest and their measurements. Then, we cleaned the collected responses and transformed them into the desired variable formats. We have three kinds of variables, Dependent Variables(DV), Independent Variables(IV), and Controlled Variables(CV). Our research questions ask about the effects of IVs on DVs, when controlling the confounds presented by the CVs.
§.§.§ Dependent Variables
Among the several attitude measurements in the survey, in this thesis, we focus on the Expected Level of Online Class Success, which is measured by the section of "the initial expectation of success" in the survey. This section included 16 questions. Among these questions, we asked about their expectations of the following aspects in the online version of the class compared to the in-person version of the same class: students' performance, flexibility, interaction, fairness, and quality.
The survey asked the above questions using a 7-point Likert scale, where 1 means the most successful and 7 means the least successful, compared with the in-person course. We calculated the average value in the question section of "the initial expectation of success" as the value of the Expected Level of Class Success. After transformation, we had 4.99 points as the average value, which shows participants had a slightly lower expectation of course success when it's taught online compared to in-person.
§.§.§ Independent Variables
For independent variables, we defined variables of interest in the following categories:
* Subject area(I1)
The affiliated departments of the course. We deductively coded the answers into 6 subject areas: Arts and Humanities, Business, Health and Medicine, Public and Social Services, STEM, and Social Sciences.
* Absolute class size(I2a)
The number of students in their classes.
* Relative class size(I2b)
This refers to how big their class was relative to all the courses offered in their department. They have three options: small class, average-size class, and large class.
* Instructor-student interaction time(I3a)
The survey asked how much time was spent on interaction between themselves and students throughout the class when the course was in-person. We calculated the average value of the corresponding answers and standardized them from 0 to 1, and then multiplied this value by 7 so that the resulting value is aligned with the other 7-point Likert-scaled variables. After these steps, we obtain a final value that ranges from 1 to 7, where 1 means the least time and 7 means the most time.
* Peer interaction time(I3b)
Same with I3a, the survey asked how much time they used for interaction time among students throughout the class when the course was in-person. Since all the questions in this section are on a 7-point Likert scale, we calculated the average value of the corresponding answers that range from 1 to 7, where 1 means the least time and 7 means the most time.
* Physical space(I3c)
The survey question asked about the necessary physical space (e.g., chemistry lab, dance studio) or equipment (e.g., lab equipment, mirrors) the course relied on when it was in-person. And we deductively coded their answers to physical space demands into the following categories: need special spaces, and no need.
* Technology(I3d)
The survey asked them what technology artifacts (e.g., projector, iClicker) or technology platforms (e.g., Compass, Slack) were used when the course was in-person. And we deductively coded their answers of used technology into the following categories: both software and hardware, hardware only, software only, and no tech.
* Online components in the in-person course(I3e)
The survey asked them what parts of the in-person course were conducted online. And we inductively coded their answers into the following categories: Only regular assignments/emails/forums were online versus more components were online.
* Self-efficacy towards teaching online(I4)
The survey asked them several questions about their self-efficacy in online teaching. Since all the questions in this section are on a 7-point Likert scale, we calculated the average value of the corresponding answers that range from 1 to 7, where 1 means the most efficacious and 7 means the least efficacious.
* Initial proficiency towards technology(I5)
Similar to I4, the survey asked them several questions about their initial proficiency towards technology. And then we calculated the average value that ranges from 1 to 7, where 1 means the most proficient and 7 means the least proficient.
* Open attitudes towards technology(I6)
Similar to I5, the survey asked them several questions about their open attitudes toward technology. And then we calculated the average value that ranges from 1 to 7, where 1 means the most openness and 7 means the least openness.
§.§.§ Controlled Variables
By applying the transformation rules, We transform the data into all Controlled Variables(CV) shown in the Table 2.
§.§ Directed Acyclic Graphs
We built DAGs (Directed Acyclic Graphs) to define causal relationships among these variables. Based on the features of DAGs, there are 3 possible relationships between variable A and variable B: no relation, A → B, and B → A. The arrow denotes the direction of the causal relationship. For example, we believe there is a causal relationship in I1: Subject area → I4: Self-efficacy towards teaching online. If the subject area changes, the instructor's self-efficacy towards teaching online may also change, but not the reverse. We built the DAG for all variables and linked the variables that could have a causal relationship to other variables with directed arrows. The graph is shown in Fig.3.
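As an illustration of how such a DAG can be encoded and sanity-checked programmatically (this is not the analysis code used in the study, and only the example edge mentioned above plus two hypothetical edges to the outcome are included):

import networkx as nx

dag = nx.DiGraph()
dag.add_edges_from([
    ("I1_subject_area", "I4_self_efficacy"),        # example edge discussed above
    ("I4_self_efficacy", "DV_expected_success"),     # hypothetical edge for illustration
    ("I1_subject_area", "DV_expected_success"),      # hypothetical edge for illustration
])
assert nx.is_directed_acyclic_graph(dag)             # a DAG must contain no cycles
print("parents of the outcome:", sorted(dag.predecessors("DV_expected_success")))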
§.§ Bayesian Models
We used Bayesian analysis to study the direct effects of each IV on our target DV, Expected Level of Class Success, under the circumstance that all other confounding variables (based on the DAG) in the model are controlled. We choose Bayesian analysis as the main tool for our quantitative study due to the following reasons<cit.>:
First, since we only have less than 200 data points, Bayesian methods can help to compensate for the limitations of small sample size data<cit.> and improve the analysis accuracy with prior information or beliefs about the data.
Second, Bayesian methods tend to be more flexible and robust than traditional frequentist methods in the context of small sample sizes, as they do not rely on strict assumptions about the distribution of the data.
Third, Bayesian models offer the advantage of allowing for explicit specification and incorporation of all aspects of the model, without requiring the need to check modeling assumptions that are not already included in the model description, thereby foregrounding all relevant assumptions and parameters within the model.
Lastly, in contrast to null hypothesis significance testing (NHST), Bayesian analysis prioritizes quantifying the strength of an effect rather than simply determining its presence, which aligns more closely with the exploratory nature of many studies. As a result, Bayesian analysis can provide more informative and nuanced insights into the data under investigation.
To better explain what a direct effect of an IV on a DV is, consider an example with the following causal relationships: I1 → I3 → D1 and I1 → D1. We included I3 in the model when studying I1 → D1 so that we could block the effect of I3 → D1, and hence we could study the direct effect of I1 → D1. Based on the DAGs we built, to estimate the direct effect of each IV on the DV, we carefully chose which IVs and CVs to include in each model such that all the confounds were taken care of and no new confounds were created. We created a total of 8 different models.
After plotting the histogram of our target outcome variable Expected Level of Class Success, we found that it was well approximated by a Normal distribution. So we modeled the data as a Normal distribution and used linear regression models to estimate the Normal distribution means and standard deviation for different values of the exposure variables.
To calculate how the DV changes per unit change in an IV, we took different approaches for a categorical IV and a Likert-scale continuous IV. We define a unit change of a categorical IV as changing from one category to another, and a unit change in a Likert-scale IV as changing one Likert level. So for categorical IVs, we contrast each pair of categories; for continuous ones, we contrast values that differ by one point on the 7-point Likert scale. By contrasting the posterior distributions of the means for 2 different categorical conditions or a 1-point difference on the 7-point Likert scale, we can see how the exposure variable affects the outcome variable. We also estimated the posterior distribution of Cohen's D for the linear regression model as a measure of relative effect size.
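The following is a minimal, self-contained sketch of the kind of Bayesian linear model and posterior contrast described above, written with PyMC. The data are synthetic stand-ins (the survey responses are not reproduced here), the priors are illustrative rather than those used in the study, and dividing the contrast by the residual standard deviation is only one common way to obtain a Cohen's-d-like effect size.

import numpy as np
import pymc as pm

# synthetic stand-in data: one continuous IV rescaled to [0, 1] and a continuous DV
rng = np.random.default_rng(0)
n = 124
iv = rng.uniform(0.0, 1.0, n)                      # e.g., self-efficacy (I4), rescaled
dv = 0.45 + 0.30 * iv + rng.normal(0.0, 0.12, n)   # e.g., expected success, rescaled

with pm.Model():
    alpha = pm.Normal("alpha", mu=0.5, sigma=0.5)
    beta = pm.Normal("beta", mu=0.0, sigma=0.5)
    sigma = pm.Exponential("sigma", 1.0)
    pm.Normal("y", mu=alpha + beta * iv, sigma=sigma, observed=dv)
    idata = pm.sample(draws=1000, tune=1000, chains=2, random_seed=0)

beta_s = idata.posterior["beta"].values.ravel()
sigma_s = idata.posterior["sigma"].values.ravel()

# contrast for a one-Likert-point change: 1/6 of the rescaled [0, 1] range
diff = beta_s * (1.0 / 6.0)
cohens_d = diff / sigma_s
print("1-point contrast, 89% interval:", np.percentile(diff, [5.5, 94.5]))
print("effect size, 89% interval:    ", np.percentile(cohens_d, [5.5, 94.5]))

For a categorical IV, the same contrast logic applies by comparing the posterior means implied by two different categories.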
§ RESULT
Regarding our RQ1(For instructors without prior online teaching experience, what are their Expected Level of Online Class Success as they transform their in-person course to an online course?), the data showed that participants have slightly lower expectations of success(Mean: 0.665, Standard Deviation: 0.129), compared to the in-person version of the same class. Mean value 0.665 approximates the 5th point on a 7-point Likert scale, corresponding to slightly lower success. In the following subsections, we will dive into how course features and instructors' features impact the level of expected success, in answer to our RQ2.
§.§ No significant direct effects from subject to the expected level of success
For all the responses we received, we grouped all the responses in "subject area" into 6 categories, which are: Arts and Humanities, Business, Health and Medicine, Public and Social Services, STEM, and Social Sciences. We were curious how each subject area differs from the others in the Expected Level of Class Success. Among these six categories, Arts and Humanities (size: 39, Mean: 0.655, Standard Deviation: 0.125), STEM (size: 57, Mean: 0.652, Standard Deviation: 0.140), and Business (size: 3, Mean: 0.633, Standard Deviation: 0.073) have higher expectation levels (Mean: 0.646); while Public and Social Services (size: 6, Mean: 0.725, Standard Deviation: 0.126), Health and Medicine (size: 5, Mean: 0.71, Standard Deviation: 0.099), and Social Sciences (size: 14, Mean: 0.708, Standard Deviation: 0.709) have lower expectation levels (Mean: 0.714): the mean rating of the latter three categories is about 0.5 points higher on the 7-point Likert scale (i.e., about half a point lower in expected success) than that of the former three. After controlling for the other confounds (I2, I3, I4, I5, I6, C1, C2, C3, C4) by holding them at the same values, we modeled the direct effect of I1: Field of study as a categorical variable on the Expected Level of Class Success. We contrasted the posterior distributions of Expected Level of Class Success of each subject area against the average of the other subject areas. For each subject area, the HPDI (highest posterior density interval) overlapped with the value of 0. This indicated that given the same values of the confounds (same personal experience ratings, prior impression about online teaching ratings, teaching philosophy, gender, age, job title, university, training, class size, interaction time, physical space needs, technology used, self-efficacy towards teaching online, initial proficiency towards technology, and openness to technology), simply being in a different field of study did not have any significant direct effect on the initial expectation of success. The mean absolute difference between each category and the average of the other categories is small (less than 0.01).
§.§ Self-efficacy affects success expectations
We find that instructors' Self-efficacy towards Online Teaching(I4) has a significantly positive direct effect on Expected Level of Class Success.
Among all participants, we received overall good self-efficacy ratings (Mean: 0.384, Standard Deviation: 0.180), where 0 means the most efficacious and 1 means the least efficacious. A value of 0.384 corresponds to approximately 3.3 points on a 7-point Likert scale, which corresponds to somewhat agreeing that they are efficacious.
As a continuous variable, we consider how much each unit change in Self-efficacy towards Online Teaching (I4) would affect the target variable. We contrast the posterior distribution of Expected Level of Class Success when I4 is 0.5 and 0.667 (subtracting the 0.5 group from the 0.667 group), where the difference is 1 unit point on a 7-point Likert scale. From the plotted graphs, we found that the HPDI range (Absolute Difference: [0.017, 0.0655], Effect Size: [0.25, 1]) excluded the reference value of 0. The effect size is on the right side of value 0, which shows the effect is positive. To conclude, there is a significant effect of Self-efficacy towards Online Teaching (I4) on Expected Level of Class Success. This means that if participants have more self-efficacy toward online teaching, they would have higher expectations for success in class. The graph is shown in Fig.4.
§.§ Best class size for online teaching?
To study what would be the perfect size for online instruction, in other words, in what size the instructors feel the most successful, we asked their Absolute Class Size(I2a) and Relative Class Size(I2b), where we defined them in Section 3.3.2. Regarding the class size, small-sized classes (number of responses: 33, Mean: 0.703, Standard Deviation: 0.145) are considered to have a lower(0.35 and 0.26 points lower on the 7-point Likert scale) success level than average-sized ones (number of responses: 57, Mean: 0.645, Standard Deviation: 0.115) and large-sized (number of responses: 34, Mean: 0.660, Standard Deviation: 0.131).
For the continuous variable, Absolute Class Size(I2a), though the result HPDI of absolute difference ([-0.043, 0.0116]) and effect size ([-0.654, 0.172]) overlapped with the reference value of 0, a large proportion of the intervals concentrates on the left-hand side of zero (Left: 86.7%, Right: 13.3%). This shows a trend that as the number of students in a course increases, the expected level of class success would be higher. The graph is shown in Fig.5.
Results for the relative class size (I2b) showed similar trends. When comparing large class sizes to small class sizes, the absolute difference in expected class success rating ranges from -0.083 to 0.0239, and the effect size ranges from -0.809 to 0.206. This suggests that, on average, teaching large-sized classes online tends to have slightly higher levels of expected success compared to small-sized classes. Similarly, when comparing average class sizes to small class sizes, the absolute difference in expected class success rating ranges from -0.087 to 0.011, and the effect size ranges from -0.839 to 0.098. This implies that, on average, average-sized classes also tend to have slightly higher levels of expected success than small classes.
However, when comparing average class sizes with large class sizes, the data shows that the mean difference is -0.0588, which is very close to zero. This means that there is no significant difference in the expected levels of class success between large-sized and average-sized classes.
In summary, instructors of smaller classes showed lower expectations of success, while there is no substantial difference in the expected levels of class success between large-sized and average-sized classes. We will dive into this in the discussion section.
§.§ Insignificant direct effects from other IVs
We did not find any significant or near-significant direct effects of the other independent variables on Expected Level of Class Success.
We concluded that instructor-student Interaction(I3a) (HDPI: Absolute Difference: [-0.019, 0.013], Effect Size: [-0.277, 0.21]), Peer interaction(I3b) (HDPI: Absolute Difference: [-0.02, 0.01], Effect Size: [-0.308, 0.146]), Physical space(I3c) (HDPI: Absolute Difference: [-0.074, 0.041], Effect Size: [-0.662, 0.372]), Technology(I3d), online components(I3e), Initial proficiency towards technology(I5) (HDPI: Absolute Difference: [-0.026, 0.012], Effect Size: [-0.391, 0.174]), and Open attitudes towards technology(I6) (HDPI: Absolute Difference: [-0.021, 0.018], Effect Size: [-0.317, 0.269])have no significant effects by checking their HPDI range. We will also discuss how these results compare to our hypotheses in the discussion section.
§ DISCUSSION
§.§ Building confidence to expect success
In Section 4.2, the findings reveal a significant positive causal relationship between higher Self-efficacy toward Online Teaching (I4) and increased expectations of success. This outcome supports our initial hypothesis. The direct effects of initial proficiency towards technology (I5) and open attitudes towards technology (I6) on the outcome measurement did not demonstrate statistical significance, which contradicted our hypotheses and previous studies' results<cit.>. Our best explanation for this contradiction is that I5 and I6 affect success expectations through I4. In support of this explanation, we observed notable correlations between I5 and I4, as well as between I6 and I4, with Pearson correlation coefficients of 0.53 and 0.47<cit.>, respectively. These results suggest a moderate positive association between I4 and both I5 and I6. Therefore, we posit that enhancing instructors' open attitudes towards technology, improving their proficiency in technology, and fostering their self-efficacy are crucial considerations for promoting success. Below, we propose recommendations that focus on strategies to cultivate instructors' open attitudes, enhance their technological proficiency, and bolster their confidence levels.
First, we encourage instructors to collaborate with colleagues with experience teaching online. Experienced colleagues can provide guidance, share best practices, and offer support during their transition. Collaborating with experienced online instructors provides valuable insights and helps build confidence through shared knowledge and experiences. Second, instructors can enhance their online teaching experience by familiarizing themselves with the online platform they will be using. It is highly recommended for instructors allocate time to explore the platform's features and functionality. By doing so, they can become comfortable with the platform's interface, tools, and options. This familiarity allows them to navigate through the platform effortlessly and access the necessary teaching resources with ease.
On the other hand, universities should offer online teaching workshops or training sessions specifically designed to help instructors transition to online teaching, especially for those who have lower self-efficacy in online teaching. These sessions can provide valuable guidance on effective online teaching strategies, technology usage, and engagement techniques. Participating in these workshops can boost instructors' confidence and provide them with the necessary skills to teach effectively online.
§.§ Small size class in online teaching
Prior to the data collection, we believed online teaching brings more flexibility and compatibility in containing larger classes while inhibiting in-person interaction in small group classes, which would make large classes more successful than smaller ones. In Section 4.3, we found our results matched the hypothesis: smaller size classes have lower success expectations, compared with average size and larger size. At the same time, for small-size classes, both instructor-student interaction time and peer interaction time are higher than in average-size and large classes. Even after the transition, small classes still kept the trends of emphasizing in-class interaction. In other words, although small-size classes contained relatively more intensive interaction, online teaching limited their expected success. Hence, we propose recommendations for universities and instructors to practice in small-size interactive classes.
First, instructors can use more visual cues and nonverbal communication during class. Online platforms with video capabilities allow for visual cues and nonverbal communication, helping instructors gauge student understanding, reactions, and engagement. Similarly, students can observe and learn from their peers' nonverbal cues, enriching their learning experience even in an online setting. Second, instructors can manage time for more one-on-one support. Online platforms offer tools like private messaging and quick responses, allowing instructors to give individual attention to students. In smaller online classes, instructors can closely track student involvement, offer personalized assistance, and make sure students grasp the material and make progress.
For departments and universities, first, it is beneficial to establish online learning support teams that include instructional designers, technology specialists, and online learning experts who can assist instructors in designing interactive online activities, troubleshooting technical issues, and providing guidance on best practices. Second, universities can enhance the quality of online teaching by conducting regular evaluations and gathering feedback at both the campus-wide and department-wide levels. By implementing mechanisms for collecting feedback from instructors and students, universities can gain valuable insights into their online teaching and learning experiences. This feedback can then be utilized to identify areas for improvement, address challenges, and refine online teaching strategies. Additionally, conducting evaluations of online courses can ensure ongoing quality assurance and effectiveness in the delivery of online education.
§.§ Instructional design
In Section 4.4 of our study, we examined the impact of various instructional design variables on the expected success level of online teaching. Specifically, we focused on variables related to physical space demand, technology utilization, and the inclusion of online components (I3c, I3d, and I3e, respectively). According to our initial hypothesis, classes that required specific physical spaces, both hardware and software use, and more online components would encounter difficulties in their initial online teaching experience.
However, our analysis revealed that these variables did not have a significant direct effect on the Expected level of class success. While they did show a tendency towards lower success expectations, their effects did not reach statistical significance. But this does not mean these variables do not have a significant total effect on expected success. It is worth noting that we believe these variables causally affect another variable, I4, which significantly affects Expected level of class success. As I4 is already incorporated into the model, it is possible that its inclusion "weakened" the effects of the aforementioned variables, thereby diminishing their direct impact on success expectations.
§ FUTURE WORK
* Other attitude measurements
In this thesis, we only considered one dependent variable: Expected level of class success. Though we have found some interesting results from Expected level of class success, we wonder what other dependent variables would look like, which independent variables would have a significant effect on them, and what differences or similarities they would show compared with Expected level of class success. In the future, we will explore more interesting dependent variables like instructors' perceived expected usefulness of their online course, perceived expected ease of use for their online course, intention to teach their course online in the future, and their emotional attitudes towards online teaching.
* Changes in attitudes after teaching online
For the data source, we did not include the post-data, so we do not know what changes have been made after these participants' first-time online teaching experience. In future work, we would also include post-data and investigate how DVs towards the online version of a course changed by the end of the course, and how DVs in post-data vary based on that DV in pre-data and IVs.
* Nuances in open-ended free-form answers
Lastly, we only considered selected questions (mostly multiple-choice questions) in the pre-survey, which means that we missed some important open-ended free-form answers. We believe these free-form opinions are interesting and may explain more about the reasons and thought processes behind the answers. We will use those answers in future work for further qualitative studies to seek a broader understanding of instructors who teach online for the first time.
§ CONCLUSION
In this thesis, we conducted quantitative studies to investigate the expectations of first-time online teaching instructors before their transition. Our research examined various factors such as fields of study, class size, instructional design, self-efficacy of online teaching, tech-savviness, and technology acceptance. Through our findings, we discovered that instructors' expectations of success in their first online class were significantly influenced by their self-efficacy toward online teaching. Additionally, we found that smaller class sizes were associated with lower expectations of success, while factors such as instructional design elements and prior technology platform usage did not significantly impact the final expectation. Based on our research, we have provided recommendations for instructors and universities to build self-efficacy and ensure quality and effectiveness in online teaching. Additionally, we have provided suggestions specifically tailored for small-size interactive classes in the online teaching context.
§ ACKNOWLEDGEMENT
I would like to express my sincere gratitude to my collaborator Tiffany Wenting Li for her invaluable assistance in the data sharing, planning, tutoring, and revising of my thesis. Additionally, I would like to extend my thanks to my thesis advisor Prof. Karrie Karahalios for her invaluable guidance throughout the process, as well as Prof. Hari Sundaram for his assistance. I am also deeply grateful to all the survey participants for generously sharing their insights. Last but not least, I would like to acknowledge the unwavering support and love of my family and friends, who have been my pillars of strength throughout this journey.
ACM-Reference-Format
| Online teaching has a long and growing history since 1990 <cit.>. The goal of online teaching is to enhance access to education and address inequality <cit.>. In recent years, online teaching has become increasingly prevalent, particularly during and after the pandemic. Previous studies <cit.> have shown that instructors' attitudes and acceptance significantly impact their approach and performance in online teaching. For instructors who were teaching online for the first time, my colleague Tiffany Wenting Li and I are concerned about their attitudes toward online teaching, specifically their expected level of success of their first online class. It is important to note that previous studies may have been affected by self-selection bias, where instructors with more positive attitudes towards online teaching were predominantly included. The transition to online instruction during the Covid-19 pandemic has provided an opportunity to address this gap, as instructors who are teaching online for the first time may not be familiar with or favor online teaching. In this project, with my colleague Tiffany, we aimed to investigate the factors that influence the expected success level of online courses, considering both course features and instructor features. Based on our findings, we will propose recommendations to instructors to ensure quality and effectiveness in online teaching, as well as recommendations to school administrations to provide necessary resources and support to instructors. Since this is a collaborative work, in this thesis, the pronoun "we" collectively refers to both Tiffany Wenting Li and myself.
To study the features, we proposed 2 research questions in this thesis:
RQ1 For instructors without prior online teaching experience, what are their Expected Level of Online Class Success as they transform their in-person course to an online course?
RQ2 How does the Expected Level of Online Class Success vary based on Subject Area (IV1), Class Size (IV2), Course Design (IV3), Instructors' Self-efficacy of Online Teaching(IV4), Tech-savviness(IV5), Technology Acceptance(IV6)?
To answer our research questions, we explored the factors that influence instructors' expectations of success before their first online class. Specifically, we investigated the impact of fields of study, class size, instructional design, self-efficacy of online teaching, tech-savviness, and technology acceptance. By conducting quantitative studies, we found that instructors' self-efficacy plays a crucial role in shaping their expectations of success. Additionally, we investigated the potential influence of class size and instructional design, such as physical space and technology used previously, on success expectations. Through our findings, we provide valuable insights into strategies for school administrations, including enhancing instructors' self-efficacy and explaining why smaller class sizes can be advantageous in the context of online teaching. | §.§ Selection bias and broad studies in online teaching
The majority of related work on instructors' attitudes was conducted on instructors who were already teaching online courses. Such samples could suffer from a selection bias where these instructors were mostly volunteers who already held a more positive attitude than an average instructor. There are a few studies that investigated instructors including those who have not taught an online course before. Ward<cit.> surveyed instructors in four STEM subjects about their attitudes toward online courses, while Vogle<cit.> surveyed instructors in Visual Arts. Corry and O'Quinn<cit.>, Zhen et al. (2008), and Gasaymeh<cit.> investigated factors that impacted an instructor's attitudes toward online courses and the decision of whether to teach an online course or not. The factors included self-efficacy of online teaching technology, perceived usefulness of online teaching, administrative support from the department, time-related challenges, personal experience with online courses, etc. However, these studies looked at their attitudes towards online teaching in general, but not in the context of a specific course they are teaching. Asking them in the context of a specific course 1) allows us to explore how course features (class size, subject area, etc.) impact attitudes, and 2) grounds instructors' responses so that the responses are less imaginary but more realistic.
§.§ Novice professors of online teaching in the post-pandemic era
While the pandemic eruption forced the online teaching transition, it also emphasizes the need for professional development and support in effectively delivering online instruction<cit.>. For university instructors, previous studies showed the need for training and support to enhance their online teaching skills, which affect their decisions to teach online or not<cit.>. When the transition is forced or highly recommended, how to make instructors, especially those who have not taught online classes before, willing to teach online became important. To give suggestions for first-time or inexperienced online teaching instructors, many previous studies<cit.> in the early 21st century have covered suggestions from authorities and experienced instructors. However, for instructors who are forced to teach online for the first time, suggestions from these prior studies may not be applicable to them, and also not up-to-date in the 2020s with post-pandemic teaching as they might have different emphases and strategies. | My colleague Tiffany conducted two surveys for instructors in two universities before and after they taught their first online classes. In April 2020, most universities in the US switched from in-person instruction to online instruction. She recruited instructors from two universities that made the switch who had no online teaching experience before the switch. The survey was timed such that the instructors had no or very little online teaching experience when they filled in the first survey, and had taught half a semester online by the second survey. In the first survey, pre-survey, She collected 124 completed responses. After the Spring 2020 semester has ended, which was half a semester after the pre-survey, she sent the second follow-up survey, post-survey, to those who completed the pre-survey and agreed to be recruited for a post-survey, and collected 34 completed responses. In this thesis, she only focused on the pre-survey data to answer our research questions.
§.§ Demographics
Participants are instructors from 2 universities (UIUC, UChicago) across diverse fields of study (Engineering, Business, Language, Natural Science, etc.). There are 124 participants in total. Among them, 59 are women (47.6%), 62 are men (50%) and 3 did not disclose their gender (2.4%). 76 are from UIUC (61.3%), 46 are from the University of Chicago (37.1%) and 2 refused to answer (1.6%). For the ages of participants, most of them are in the range of 26-75 (98.4%, 122 participants out of 124). Specifically, there are 1 participant aged below 25, 15 participants aged 26-35, 26 participants aged 36-45, 33 participants aged 46-55, 33 participants aged 56-65, 15 participants aged 66-75, and 1 participant aged over 75.
In addition, from these 124 participants, two-thirds of them define "Professor" as their job title (101, 66%), while the rest are Associate Professors(28, 22.6%), Assistant Professors(19, 15.3%), Lecturer(12, 9.7%), Instructor(3, 2.4%), Graduate Student(1, 0.8%), and Others(6, 4.8%). For their years of teaching experience in higher education, participants are concentrated in the 10-30 year interval. In detail, 11 participants(8.9%) have experiences in less than 5 years, 17 participants(13.7%) have experiences in 5 to 9 years, 32 participants(25.8%) have experiences in 10 to 19 years, 31 participants(25%) have experiences in 20 to 29 years, and 33 participants(26.6%) have experiences in over 30 years.
She also asked participants to describe the level of online teaching training they had received prior to their first online teaching experience. The results showed that a large share of participants (38.7%) had no training, and the number of participants in each training level generally decreased as the training level increased. From the collected responses, 48 participants (38.7%) described their training as "No training at all", 32 participants (25.8%) described it as "Episodic informal training (e.g., reading a blog post)", 23 participants (18.5%) described it as "3 or fewer hours of formal introductory training over time", 12 participants (9.7%) described it as "More than 3 hours of formal introductory training over time", 2 participants (1.6%) described it as "3 or fewer hours of formal advanced level training over time", and 7 participants (5.6%) described it as "More than 3 hours of formal advanced level training over time".
Based on the department they are affiliated with, we grouped them into 6 fields of study. The following table shows the distribution by fields of study:
§.§ Pre-survey Design
Our pre-survey included questions on the following themes:
* Eligibility
For the first two questions, the survey checked the participants' eligibility for this study. In the first question of the survey, the survey informed participants about the purpose of the study, which is to collect first-time online instructors' attitudes towards online teaching, and explain that their participation will involve a 12-18 minute online survey. In the second question, the survey asked if the university's teaching modality switch in April 2020 was their first time teaching a course fully online. Only if they certify the first question and select "yes" in the second question were they directed to the later parts.
* In-person Course Description
First, the survey asked about the details of the course they were teaching in the following dimensions: learning objective, affiliated department, class size, targeted students, and grade options.
* In-person Course Component
Then, the survey asked about the components involved in the in-person version of the course. The survey first asked, in the in-person version of the course, how much time was spent on instructor-student interaction and peer interaction in a typical lecture session, a typical lab or studio session, and a typical fieldwork or field practicum session respectively. Then, the survey asked what assessment methods, such as take-home exams and class participation, counted towards the final grade. It also asked what parts of the in-person course were conducted online and what technology artifacts or platforms were used in the in-person version of the course.
* Initial Attitudes
Next, the survey asked participants to look back and answer questions in this part of the survey as if they were just beginning to plan for the online course. The questions elicited their perceived expected usefulness of their online course, perceived expected ease of use for their online course, intention to teach their course online in the future, and their emotional attitudes towards online teaching.
* Demographic
In this section, the survey asked for their personal information, including gender, age, teaching years, job title, level of training in online teaching, affiliated department, and affiliated university.
* Interest in a Follow-up Study
At the end of the survey, the survey asked if they were interested in continuing to participate in the research study for a post-survey and to provide their emails if they wanted.
§.§ Variables of Interest
Based on our research questions and the survey questions, we first defined variables of interest and their measurements. Then, we cleaned the collected responses and transformed them into the desired variable formats. We have three kinds of variables, Dependent Variables(DV), Independent Variables(IV), and Controlled Variables(CV). Our research questions ask about the effects of IVs on DVs, when controlling the confounds presented by the CVs.
§.§.§ Dependent Variables
Among the several attitude measurements in the survey, in this thesis, we focus on the Expected Level of Online Class Success, which is measured by the section of "the initial expectation of success" in the survey. This section included 16 questions. Among these questions, we asked about their expectations of the following aspects in the online version of the class compared to the in-person version of the same class: students' performance, flexibility, interaction, fairness, and quality.
The survey asked the above questions using a 7-point Likert scale, where 1 means the most successful and 7 means the least successful, compared with the in-person course. We calculated the average value in the question section of "the initial expectation of success" as the value of the Expected Level of Class Success. After transformation, we had 4.99 points as the average value, which shows participants had a slightly lower expectation of course success when it's taught online compared to in-person.
§.§.§ Independent Variables
For independent variables, we defined variables of interest in the following categories:
* Subject area(I1)
The affiliated departments of the course. We deductively coded the answers into 6 subject areas: Arts and Humanities, Business, Health and Medicine, Public and Social Services, STEM, and Social Sciences.
* Absolute class size(I2a)
The number of students in their classes.
* Relative class size(I2b)
This refers to how big their class was relative to all the courses offered in their department. They have three options: small class, average-size class, and large class.
* Instructor-student interaction time(I3a)
The survey asked how much time they used for interaction time between themselves and students throughout the class when the course was in-person. We calculated the average value of the corresponding answers and standardized them from 0 to 1, and then multiply this answer with 7 so that the outcome form is aligned with other 7-point Likert scaled variables. After these processes, we have the final value that ranges from 1 to 7, where 1 means the least time and 7 means the most time.
* Peer interaction time(I3b)
Same with I3a, the survey asked how much time they used for interaction time among students throughout the class when the course was in-person. Since all the questions in this section are on a 7-point Likert scale, we calculated the average value of the corresponding answers that range from 1 to 7, where 1 means the least time and 7 means the most time.
* Physical space(I3c)
The survey question asked about the necessary physical space (e.g., chemistry lab, dance studio) or equipment (e.g., lab equipment, mirrors) the course relied on when it was in-person. And we deductively coded their answers to physical space demands into the following categories: need special spaces, and no need.
* Technology(I3d)
The survey asked them what technology artifacts (e.g., projector, iClicker) or technology platforms (e.g., Compass, Slack) were used when the course was in-person. And we deductively coded their answers of used technology into the following categories: both software and hardware, hardware only, software only, and no tech.
* Online components in the in-person course(I3e)
The survey asked them what parts of the in-person course were conducted online. And we inductively coded their answers into the following categories: Only regular assignments/emails/forums were online versus more components were online.
* Self-efficacy towards teaching online(I4)
The survey asked them several questions about their self-efficacy in online teaching. Since all the questions in this section are on a 7-point Likert scale, we calculated the average value of the corresponding answers that range from 1 to 7, where 1 means the most efficacious and 7 means the least efficacious.
* Initial proficiency towards technology(I5)
Similar to I4, the survey asked them several questions about their initial proficiency towards technology. And then we calculated the average value that ranges from 1 to 7, where 1 means the most proficient and 7 means the least proficient.
* Open attitudes towards technology(I6)
Similar to I5, the survey asked them several questions about their open attitudes toward technology. And then we calculated the average value that ranges from 1 to 7, where 1 means the most openness and 7 means the least openness.
§.§.§ Controlled Variables
By applying the transformation rules, we transformed the data into all Controlled Variables (CVs) shown in Table 2.
§.§ Directed Acyclic Graphs
We built DAGs (Directed Acyclic Graphs) to define causal relationships among these variables. Based on the features of DAGs, there are 3 possible relationships between variable A and variable B: no relation, A → B and B → A. The arrow denotes the direction of the causal relationship. For example, we believe there is a causal relationship I1: Subject area → I4: Self-efficacy towards teaching online. If we change the subject area, self-efficacy towards teaching online may also change, but not the reverse. We built the DAG for all variables and linked the variables that could have a causal relationship to other variables with directed arrows. The graph is shown in Fig. 3.
§.§ Bayesian Models
We used Bayesian analysis to study the direct effects of each IV on our target DV, Expected Level of Class Success, under the circumstance that all other confounding variables (based on the DAG) in the model are controlled. We choose Bayesian analysis as the main tool for our quantitative study due to the following reasons<cit.>:
First, since we only have less than 200 data points, Bayesian methods can help to compensate for the limitations of small sample size data<cit.> and improve the analysis accuracy with prior information or beliefs about the data.
Second, Bayesian methods tend to be more flexible and robust than traditional frequentist methods in the context of small sample sizes, as they do not rely on strict assumptions about the distribution of the data.
Third, Bayesian models offer the advantage of allowing for explicit specification and incorporation of all aspects of the model, without requiring the need to check modeling assumptions that are not already included in the model description, thereby foregrounding all relevant assumptions and parameters within the model.
Lastly, In contrast to null hypothesis significance testing (NHST), Bayesian analysis prioritizes quantifying the strength of an effect rather than simply determining its presence, which aligns more closely with the exploratory nature of many studies. As a result, Bayesian analysis can provide more informative and nuanced insights into the data under investigation.
To better explain what a direct effect of an IV on a DV is, let's look at an example of the following causal relationships: I1 → I3 → D1; I1 → D1, we included I3 in the model of studying I1 → D1 so that we can block the effect of I3 → D1, and hence we can study the direct effect of I1 → D1. Based on the DAGs we built, to estimate the direct effect of each IV to the DV, we carefully choose which IVs and CVs to include in each model such that all the confounds were taken care of and no new confounds were created. We created a total of 8 different models.
After plotting the histogram of our target outcome variable Expected Level of Class Success, we found it would fit the Normal distribution. So we modeled the data as a Normal distribution and used linear regression models to estimate the Normal distribution means and standard deviation for different values in the exposure variables.
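The exact model specification is not spelled out in the text; the sketch below (using PyMC and ArviZ, with illustrative priors, an assumed 89% interval probability, and hypothetical variable names) shows one way such a model could be set up: a linear regression of the Expected Level of Class Success on one exposure IV plus the confounds selected from the DAG, from which the absolute-difference and effect-size HPDIs reported above can be read off.

# Illustrative sketch, not the thesis code: direct effect of one IV on the
# Expected Level of Class Success, adjusting for the confounds chosen from the DAG.
# Priors, the 89% interval probability, and all variable names are assumptions.
import numpy as np
import pymc as pm
import arviz as az

def fit_direct_effect(y, x_iv, confounds):
    # y: outcome (7-point scale), x_iv: exposure IV, confounds: (n, k) matrix
    with pm.Model():
        alpha = pm.Normal("alpha", mu=float(np.mean(y)), sigma=1.0)
        beta = pm.Normal("beta", mu=0.0, sigma=0.5)          # direct effect of the IV
        gamma = pm.Normal("gamma", mu=0.0, sigma=0.5, shape=confounds.shape[1])
        sigma = pm.Exponential("sigma", 1.0)
        mu = alpha + beta * x_iv + pm.math.dot(confounds, gamma)
        pm.Normal("y_obs", mu=mu, sigma=sigma, observed=y)
        idata = pm.sample(2000, tune=1000, target_accept=0.9, random_seed=1)
    diff = idata.posterior["beta"]                  # absolute difference per unit change
    effect_size = diff / idata.posterior["sigma"]   # Cohen's-d-like relative effect size
    return az.hdi(diff, hdi_prob=0.89), az.hdi(effect_size, hdi_prob=0.89)

Contrasts for categorical IVs would be formed analogously by differencing the posterior means of two categories rather than reading off a single slope.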
To calculate how the DV changes per unit change in an IV, we took different approaches for a categorical IV and a Likert-scale continuous IV. We define a unit change of a category IV as changing from one category to another, and a unit change in a Likert-scale IV as changing one Likert level. So for categorical ones, we contrast each two of all possible categories; for continuous ones, we contrast them with a unit point difference in a 7-point Likert scale. By contrasting the posterior distributions of the means for 2 different categorical conditions or 1 point difference in the 7-point Likert Scale, we can see how the exposure variable affects outcome variables. And we also estimated the posterior distribution of Cohen's D for the linear regression model as a measurement of relative effect size. | null | §.§ Building confidence to expect success
In Section 4.2, the findings reveal a significant positive causal relationship from higher Self-efficacy toward Online Teaching (I4) to increased expectations for success. This outcome supports our initial hypothesis. The direct effects from initial proficiency towards technology (I5) and open attitudes towards technology (I6) to the outcome measurement did not demonstrate statistical significance, which contradicted our hypotheses and previous studies' results<cit.>. Our best explanation for this contradiction is that I5 and I6 affect success expectations through I4. In support of this explanation, we observed notable correlations between I5 and I4, as well as between I6 and I4, with Pearson correlation coefficients of 0.53 and 0.47<cit.>, respectively. These results suggest a moderate positive association between I4 and both I5 and I6. Therefore, we posit that enhancing instructors' open attitudes towards technology, improving their proficiency in technology, and fostering their self-efficacy are crucial considerations for promoting success. Below, we propose recommendations that focus on strategies to cultivate instructors' open attitudes, enhance their technological proficiency, and bolster their confidence levels.
First, we encourage instructors to collaborate with colleagues with experience teaching online. Experienced colleagues can provide guidance, share best practices, and offer support during their transition. Collaborating with experienced online instructors provides valuable insights and helps build confidence through shared knowledge and experiences. Second, instructors can enhance their online teaching experience by familiarizing themselves with the online platform they will be using. It is highly recommended that instructors allocate time to explore the platform's features and functionality. By doing so, they can become comfortable with the platform's interface, tools, and options. This familiarity allows them to navigate through the platform effortlessly and access the necessary teaching resources with ease.
On the other hand, universities should offer online teaching workshops or training sessions specifically designed to help instructors transition to online teaching, especially for those who have lower self-efficacy in online teaching. These sessions can provide valuable guidance on effective online teaching strategies, technology usage, and engagement techniques. Participating in these workshops can boost instructors' confidence and provide them with the necessary skills to teach effectively online.
§.§ Small size class in online teaching
Prior to the data collection, we believed online teaching brings more flexibility and compatibility in containing larger classes while inhibiting in-person interaction in small group classes, which would make large classes more successful than smaller ones. In Section 4.3, we found our results matched the hypothesis: smaller size classes have lower success expectations, compared with average size and larger size. At the same time, for small-size classes, both instructor-student interaction time and peer interaction time are higher than in average-size and large classes. Even after the transition, small classes still kept the trends of emphasizing in-class interaction. In other words, although small-size classes contained relatively more intensive interaction, online teaching limited their expected success. Hence, we propose recommendations for universities and instructors to practice in small-size interactive classes.
First, instructors can use more visual cues and nonverbal communication during class. Online platforms with video capabilities allow for visual cues and nonverbal communication, helping instructors gauge student understanding, reactions, and engagement. Similarly, students can observe and learn from their peers' nonverbal cues, enriching their learning experience even in an online setting. Second, instructors can manage time for more one-on-one support. Online platforms offer tools like private messaging and quick responses, allowing instructors to give individual attention to students. In smaller online classes, instructors can closely track student involvement, offer personalized assistance, and make sure students grasp the material and make progress.
For departments and universities, first, it is beneficial to establish online learning support teams that include instructional designers, technology specialists, and online learning experts who can assist instructors in designing interactive online activities, troubleshooting technical issues, and providing guidance on best practices. Second, universities can enhance the quality of online teaching by conducting regular evaluations and gathering feedback at both the campus-wide and department-wide levels. By implementing mechanisms for collecting feedback from instructors and students, universities can gain valuable insights into their online teaching and learning experiences. This feedback can then be utilized to identify areas for improvement, address challenges, and refine online teaching strategies. Additionally, conducting evaluations of online courses can ensure ongoing quality assurance and effectiveness in the delivery of online education.
§.§ Instructional design
In Section 4.4 of our study, we examined the impact of various instructional design variables on the expected success level of online teaching. Specifically, we focused on variables related to physical space demand, technology utilization, and the inclusion of online components (I3c, I3d, and I3e, respectively). According to our initial hypothesis, classes that required specific physical spaces, both hardware and software use, and more online components would encounter difficulties in their initial online teaching experience.
However, our analysis revealed that these variables did not have a significant direct effect on the Expected level of class success. While they did show a tendency towards lower success expectations, their effects did not reach statistical significance. But this does not mean these variables do not have a significant total effect on expected success. It is worth noting that we believe these variables causally affect another variable, I4, which significantly affects Expected level of class success. As I4 is already incorporated into the model, it is possible that its inclusion "weakened" the effects of the aforementioned variables, thereby diminishing their direct impact on success expectations. | In this thesis, we conducted quantitative studies to investigate the expectations of first-time online teaching instructors before their transition. Our research examined various factors such as fields of study, class size, instructional design, self-efficacy of online teaching, tech-savviness, and technology acceptance. Through our findings, we discovered that instructors' expectations of success in their first online class were significantly influenced by their self-efficacy toward online teaching. Additionally, we found that smaller class sizes were associated with lower expectations of success, while factors such as instructional design elements and prior technology platform usage did not significantly impact the final expectation. Based on our research, we have provided recommendations for instructors and universities to build self-efficacy and ensure quality and effectiveness in online teaching. Additionally, we have provided suggestions specifically tailored for small-size interactive classes in the online teaching context. |
http://arxiv.org/abs/2409.17268v1 | 20240925183338 | LiV2O4: Hund-Assisted Orbital-Selective Mottness | ["Martin Grundner", "Fabian B. Kugler", "Olivier Parcollet", "Ulrich Schollwöck", "Antoine Georges", "Alexander Hampel"] | cond-mat.str-el | ["cond-mat.str-el", "cond-mat.mtrl-sci"] |
[email protected]
§ ABSTRACT
We show that the remarkably small Fermi-liquid coherence scale and large effective mass observed in LiV_2O_4 are due to the proximity of a Hund-assisted orbital-selective Mott state.
Our work is based on an ab initio dynamical mean-field approach, combining several quantum impurity solvers to capture the physics from high to very low temperature.
We find that the Hund coupling plays a crucial role in rearranging
the orbital populations and in generating the heavy mass and low coherence scale.
The latter is found to be approximately 1–2 Kelvin, even though
the most correlated orbital is found to be significantly doped (∼ 10%) away from half-filling.
A flat quasiparticle band appears near the Fermi level as a result of the strong
electronic correlations. Finally, we discuss our results in comparison to experiments.
LiV_2O_4: Hund-Assisted Orbital-Selective Mottness
A. Hampel (ORCID: 0000-0003-1041-8614)
September 28, 2024
==================================================
The `heavy fermion' (HF) phenomenon – quasiparticles acquiring very large
effective masses – is one of the most spectacular manifestations of strong
electronic correlations. It is usually observed in materials having conduction
electrons hybridizing with very localized degrees of freedom, typically
f-orbitals in rare-earth or actinide compounds. In this context, the
discovery of a remarkably large specific-heat enhancement in LiV_2O_4
(LVO), a transition-metal oxide involving no f-electrons, came as quite a
surprise <cit.>. Indeed, only few other
transition-metal compounds displaying very large mass enhancements are known to
this day <cit.>.
The electronic structure of this material obtained with the density functional theory
(DFT) displays two sets of bands close to the Fermi level as shown in Fig. <ref> <cit.>:
one set of character, with a narrow bandwidth (∼ 0.7 eV), and one set of character with a broader bandwidth (∼ 1.9 eV).
A connection to HF materials was therefore suggested in early works <cit.>,
with the a_1g (resp. e_g^π) band playing the role
of localized states (resp. conduction electrons).
It was later realized, though, that this picture is not tenable. From a
theoretical viewpoint, the narrow band is too broad to play the role of
localized states, and, importantly, the coupling between the narrow and wide
band is dominantly an intra-atomic Hund coupling which favors spin alignment, not an antiferromagnetic Kondo
exchange <cit.>.
Furthermore, some experimental features are markedly different from conventional HFs.
The onset of Kondo coherence in HFs is typically signaled by a coherence peak in the temperature (T) dependence of the resistivity,
followed by a lower scale T_FL below which ∼ T^2 Fermi-liquid (FL) behavior holds.
While a small FL scale ∼ 2 K is indeed observed in LVO,
the resistivity, however, monotonously increases upon heating, eventually reaching a `bad metal' regime at high T <cit.>.
From various experimental probes, the onset temperature of electronic coherence is estimated as
T_coh ∼ 10–30 K <cit.>.
Aware of the difficulties of the Kondo scenario, Arita et al.
proposed an alternative explanation <cit.>. In their picture, all
the action takes place in the narrow band, which is viewed as a doped
Mott insulator, while the broader bands are merely spectators. In this description, a tiny hole doping (∼ 2%) of the band yields the observed large effective mass.
In support of this picture, the authors of Ref. <cit.> performed
dynamical mean-field theory (DMFT) <cit.> calculations on a
simplified two-orbital model down to room temperature
(as well as projective Monte Carlo calculations at T= 0).
However, because of the very low energy scales involved, a full calculation of a
realistic model for LVO in the DFT+DMFT framework covering the whole T range from above T_coh to below T_FL has not been achieved yet <cit.>.
In this Letter, we leverage recent advances in
DMFT impurity solvers to achieve this goal and elucidate the physics
responsible for the HF behaviour in LVO.
To probe the full crossover from a higher-T incoherent metal to the
low-T heavy-mass FL and to access the low FL scale,
we use a combination of three DMFT solvers:
Quantum Monte Carlo (QMC) for 10 K < T < 300 K,
tensor networks (TN) for T > 1 K
and the numerical renormalization group (NRG) for T < 1 K.
We show that LVO is close to a Hund-assisted orbital-selective Mott
state <cit.>.
The inter-orbital Hund coupling J (and therefore the e_g^π bands) plays
a central role. First, it induces a redistribution of the
(a_1g, e_g^π) orbital populations per vanadium due to interaction from
(0.4, 2× 0.55) in DFT to (0.9, 2× 0.3) in DMFT. Second, it is
responsible for the heavy mass and low T_FL via the spin-blocking mechanism
characteristic of Hund systems in which electrons can only hop between atomic
configurations with maximum spin <cit.>.
LVO is shown to be close to an orbital-selective Mott state in which the a_1g
orbital is prevented from fully localizing at T=0 due to inter-orbital hopping
<cit.>. Remarkably, we obtain a low T_FL already at ∼ 10% doping of the a_1g band, in
contrast to the ∼ 2% doping of Ref. <cit.>.
Electronic structure and effective model—
LVO crystallizes in the fcc spinel structure (space group Fd3m).
Crystal structure data from neutron scattering are available down to 12 K <cit.>.
The primitive unit cell contains 14 atoms, with 4 V atoms, each embedded in an octahedral crystal field of the surrounding oxygen atoms.
The octahedra themselves are corner-sharing and the local point group is trigonal D_3d, lowering the symmetry of the system and splitting the t_2g states into a single a_1g orbital and a doubly degenerate e_g^π orbital higher in energy.
We perform DFT calculations using the Quantum ESPRESSO
software package fixing the structural parameters to the 12 K experimental values <cit.>.
Maximally localized Wannier functions (MLWFs)
for the a_1g and e_g^π states are constructed with
Wannier90 <cit.>,
accurately reproducing the DFT low-energy electronic structure.
In Fig. <ref>(a), the bandstructure of the Wannier Hamiltonian is shown together with its projection on the a_1g and e_g^π states.
The projected density of states (DOS) in Fig. <ref>(b) reveals that the a_1g DOS is sharply peaked at 0.1 eV above the Fermi level, whereas the e_g^π DOS is relatively flat.
The constrained random phase approximation <cit.>
as implemented in the RESPACK code <cit.> is used to determine the effective Coulomb interaction at low energy.
We focus on the static, ω= 0 limit
and fit the resulting four-index tensor to the symmetrized Kanamori form (including spin-flip and pair-hopping terms) with three independent parameters. The optimal fit yields U = 3.94 eV, U' = 2.83 eV, and J = 0.56 eV, which violates the full rotational invariance (U'=U - 2J) by only ∼ 0.01 eV.
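For reference, one common way of writing the symmetrized Kanamori interaction referred to here (conventions differ in how the sums are restricted; a†_mσ creates an electron in orbital m with spin σ and n_mσ is the corresponding density) is
H_int = U ∑_m n_m↑ n_m↓ + U' ∑_m≠m' n_m↑ n_m'↓ + (U'-J) ∑_m<m',σ n_mσ n_m'σ
- J ∑_m≠m' a†_m↑ a_m↓ a†_m'↓ a_m'↑ + J ∑_m≠m' a†_m↑ a†_m↓ a_m'↓ a_m'↑ ,
where the last two terms are the spin-flip and pair-hopping contributions and full rotational invariance corresponds to U' = U - 2J.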
We use DMFT to solve the effective many-body model, with a combination of impurity solvers to cover all T regimes (see <cit.> for details).
First, we use the continuous-time hybridization-expansion QMC solver
implemented in the TRIQS software library <cit.> and its interface to electronic structure codes TRIQS/DFTTools and TRIQS/solid_dmft <cit.>.
QMC calculations were performed down to 11.6 K (1/1000 eV).
Second, the T=0 TN-based solver from <cit.>
was extended to T > 0 using thermal-state purification <cit.> and applied down to 2.9 K (1/4000 eV).
Third, to access the sub-1 K regime, we use
NRG, similarly as in Refs. <cit.>.
To make non-degenerate three-orbital NRG calculations tractable, we increase the local symmetry by neglecting
the pair-hopping part of the interaction when using this solver <cit.>.
Crossover to a heavy FL—
The emergence of the low-T heavy FL is illustrated in
Fig. <ref>(a). It shows the imaginary part of the self-energy
for each orbital as a function of Matsubara frequency for several T.
The self-energy has a strong T dependence at low frequency,
with a sudden decrease below the onset temperature ∼ 12 K.
This is a hallmark of strong correlations and
corresponds to a sharp crossover from a high-T
incoherent regime with a large scattering rate to a low-T coherent regime.
This is further supported by
Fig. <ref>(b), which shows the
extrapolated zero-frequency value of the scattering rate
-ImΣ_a_1g(i0^+) as a function of T.
The local spectral function (DOS) at the Fermi level A_a_1g(0)
correspondingly increases, as shown in Fig. <ref>(c).
By contrast, the e_g^π self-energy is smaller and depends more weakly on temperature and frequency.
The FL fully develops only below the FL coherence temperature T_FL ∼ 1–2 K,
an order of magnitude smaller than T_coh.
The FL regime is characterized by -ImΣ∼ω^2 +π^2 T^2 (see End Matter for T= 0 and Fig. <ref>(b) for ω= 0) for the retarded self-energy and χ^''∼ω for the imaginary part of the retarded local spin susceptibility (see End Matter).
The quasiparticle weights (from NRG at T= 0) are
Z_a_1g ≈ 0.003 and Z_e_g^π ≈ 0.03.
Quasiparticle bandstructure—
Figure <ref>(a) shows the momentum-resolved spectral function
summed over orbitals at T=11.6 K.
The momentum-integrated
spectral functions are displayed in panel (b) for several T.
These results were obtained by
analytic continuation of the QMC self-energies using Padé extrapolation,
see Fig. <ref>(c)–(d),
and are consistent with NRG <cit.>.
We see from Fig. <ref>(b) that the onset of coherence is associated with the growth of a remarkably narrow
quasiparticle peak at the Fermi level, in line with the small Z_a_1g.
This corresponds to the formation of a flat band near the Fermi level (see panel (a)),
together with the strong renormalization of the overall band structure
(compare the black lines from DFT).
The formation of the flat band can be understood from the fact that
ReΣ_a_1g(ω)
(panel (c)) exhibits a very steep rise
near ω= 0.
As a result, the quasiparticle equation (quoting for simplicity a one-band version)
ω-ReΣ(ω)=ε_k-μ has solutions ω_k which
remain very close to zero energy for an extended range of momenta, as seen in panel (a) near the L-point.
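To make the connection between the steep rise of ReΣ_a_1g and the flat band explicit, one can linearize the self-energy around zero frequency (a textbook Fermi-liquid step, not a result specific to this work): writing ReΣ(ω) ≈ ReΣ(0) + (1 - 1/Z) ω with 1/Z = 1 - ∂ReΣ/∂ω|_ω=0, the quasiparticle equation ω - ReΣ(ω) = ε_k - μ yields
ω_k ≈ Z [ ε_k - μ + ReΣ(0) ] ,
so the bare dispersion is compressed by the factor Z; with Z_a_1g of order 10^-3, an extended portion of the band is pushed to within a few meV of the Fermi level, which is precisely the flat feature seen near the L-point.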
Discussion—
To elucidate the mechanism responsible for the strong correlations observed above,
we first consider the effect of varying the Hund coupling J.
Figure <ref>(a) shows the self-energy on the Matsubara axis at T = 11.6 K for various J.
At small J, the self-energies are small;
for J ≳ 0.3 eV,
strong correlations develop.
The self-energy becomes large and acquires the characteristic frequency dependence emphasized above
(Fig. <ref>(a)).
This frequency dependence is reminiscent of Hund metals in the `spin freezing'
regime <cit.> (see, e.g., Fig. 8 of Ref. <cit.>).
Correspondingly, the extrapolated zero-frequency scattering rate at finite T quickly increases as a function of J
(inset of panel Fig. <ref>(a)), indicating a suppressed coherence scale.
The Hund coupling is known to induce strong correlations through the `spin blocking' mechanism, which
forces electrons to keep a high-spin configuration while hopping between different sites.
The dominance of high-spin configurations is indeed supported by the valence- and spin-resolved
histograms, obtained with QMC, see Fig. <ref>(b),
or NRG <cit.>.
In Fig. <ref>(c), we show the a_1g and e_g^π orbital occupancies as a function of J.
The Hund coupling induces a significant redistribution of orbital populations <cit.>,
with the occupancy of the a_1g orbital evolving from the DFT value ∼ 0.4 to ∼ 0.9 as J is increased
beyond J ∼ 0.3 eV.
Hence, the a_1g orbital becomes ∼ 10% hole-doped away from half-filling. This points to the relevance of
Mott physics in this material, as proposed in Ref. <cit.>.
To analyze the possible proximity of a Mott-insulating state, we perform a numerical experiment in which we add a crystal field to the Wannier Hamiltonian, while keeping the total d-electron count equal to
1.5 by adjusting the chemical potential.
Figure <ref>(d) shows T_FL determined by NRG as a function of the a_1g orbital occupancy (lower horizontal scale) or the applied crystal field (upper scale).
We see that T_FL is driven to remarkably low values as half-filling of the a_1g orbital is approached, with
values as low as 10^-6 K at 1% doping.
This can be interpreted as the system reaching an orbital-selective Mott (OSM)
phase <cit.> in which the
e_g^π electrons remain itinerant while the a_1g ones basically localize, full localization being prevented
at T = 0 by inter-orbital hopping <cit.>.
The relevance of the OSM phenomenon and of Hund physics as a possible cause for
d-electron HF behavior was recently emphasized by Crispino et al. <cit.> and also previously mentioned qualitatively for LVO in the NMR study of Shimizu et al. <cit.>.
While our results support the relevance of Mottness for LVO <cit.>,
they provide novel insights into the mechanism responsible for the HF behavior in LVO,
namely the decisive role of Hund coupling and the wide range of permissible fillings.
In Fig. <ref>(d), we compare T_FL of LVO, as a function of a_1g orbital occupancy,
to that of a single-band model with the same DOS as the one of the a_1g orbital (also solved in DMFT).
We see that, in order to reach a value of T_FL in the range 1–2 K, as observed in experiments,
a tiny doping of ∼ 1% is required in the single-band picture. By contrast, the small T_FL is robustly generated from
the realistic multi-orbital description, as a result of orbital and Hund physics,
in a wide range of a_1g doping.
More broadly, our results emphasize that the multi-orbital character
of the system is crucial, with inter-orbital correlations playing a key role, and hence that the
e_g^π bands are not merely spectators.
Comparison to experiments—
Finally, we discuss how our results compare to experimental observations.
The resistivity, Hall effect, specific heat and susceptibility measurements on single crystals <cit.>, as well as optical spectroscopy <cit.>, all point to electronic coherence setting in below T_coh ∼ 10–30 K. Above T_coh, LVO displays a large scattering rate and resistivity.
This is consistent with our results in Fig. <ref>.
Furthermore, a T^2 dependence of the resistivity is only observed below ∼ 2 K, in excellent agreement with
our estimate of T_FL ∼ 1–2 K.
The gradual emergence below T_coh of an extremely narrow quasiparticle peak reported in photoemission spectroscopy <cit.> also aligns with our Figs. <ref>(c) and <ref>(b). Angle-resolved photoemission experiments would be desirable to test the predictions of Fig. <ref>(a).
The low-T specific heat coefficient C=γ T + … can be evaluated within DMFT
from the zero-frequency spectral functions and quasiparticle weights of each orbital m as γ=2π^2/3 k_B^2 ∑_m A_m(0)/Z_m.
The bare DFT value from our calculated DOS
is γ_DFT = 17.1 mJ mol^-1 K^-2, in good agreement with previous theoretical estimates <cit.>.
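As an illustration of how this formula translates into numbers, the short sketch below converts per-orbital A_m(0) and Z_m into a Sommerfeld coefficient. It is not the actual evaluation performed for this work: the per-orbital values are placeholders (chosen only so that the Z = 1 case lands near the bare value quoted above), and it assumes A_m(0) is given per orbital, per spin and per V atom, with two V atoms per formula unit.

# Illustrative sketch (not the values of this work): Sommerfeld coefficient from
# gamma = (2 pi^2 / 3) k_B^2 sum_m A_m(0)/Z_m, assuming A_m(0) is per orbital,
# per spin and per V atom, with 2 V atoms per LiV2O4 formula unit.
import numpy as np

KB_EV = 8.617333262e-5   # k_B in eV/K
KB_J  = 1.380649e-23     # k_B in J/K
N_A   = 6.02214076e23    # Avogadro constant
V_PER_FU = 2

def gamma_mJ_mol_K2(A0, Z):
    # A0: A_m(0) in 1/eV (per orbital, per spin, per V); Z: quasiparticle weights
    s = np.sum(np.asarray(A0, float) / np.asarray(Z, float))
    gamma = (2 * np.pi**2 / 3) * KB_EV * KB_J * N_A * V_PER_FU * s  # J / (mol K^2)
    return 1e3 * gamma

# placeholder orbital-resolved values: one a_1g and two e_g^pi orbitals per V
A0 = [0.8, 0.5, 0.5]                                 # 1/eV, gives ~17 mJ/(mol K^2) at Z = 1
print(gamma_mJ_mol_K2(A0, [1.0, 1.0, 1.0]))          # ~ bare (DFT-like) value
print(gamma_mJ_mol_K2(A0, [0.003, 0.03, 0.03]))      # strongly enhanced, dominated by 1/Z_a1g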
Reported experimental values in the low-T HF regime are in the range γ/γ_DFT∼ 25 <cit.>
to ∼ 30 <cit.>.
A comparable value is reached in our DMFT calculations at T ∼ 8 K, but, at low T, we overestimate γ/γ_DFT
by a large factor (∼ 10).
We note, however, that no clear saturation of C/T at low T is seen experimentally, and that further increase of γ at low T
was reported on some samples <cit.>, so that the precise behavior of the
specific heat of LVO at low T is yet to be fully clarified.
We conjecture that the overestimation of γ at low T comes from the intrinsic limitations
of single-site DMFT. In this approach, γ is inversely proportional to Z, which we find to be tiny for the a_1g orbital.
In a more accurate treatment beyond the single-site approximation,
the inter-site magnetic exchange should intervene and reduce C/T by
reducing the entropy associated with fluctuating local moments above T_FL.
Indeed, slave-particle <cit.> or cluster-DMFT calculations <cit.>
yield, for a single band model, γ/γ_DFT∼ 1/(Z+J_ex/ϵ_F) with
J_ex the antiferromagnetic super-exchange and ϵ_F a typical (bare) electronic scale.
In many oxides, the exchange term limits the value of γ as compared to the DMFT 1/Z enhancement.
Remarkably, this effect appears to be much less pronounced in LVO, allowing γ to reach the
large observed value.
This is most likely due to frustration <cit.>
which results in a large amount of entropy stored at low T and hence a
reduced value of the exchange term J_ex/ϵ_F.
NMR <cit.> and neutron scattering <cit.> experiments yield
estimates of the effective exchange J_ex between V-atoms in the range ∼ 20–30 K, comparable to T_coh and significantly smaller than typical values in unfrustrated oxides.
A proper description of the spin correlations probed by these experiments
likely requires a treatment beyond single-site DMFT <cit.>.
Outlook—
The physical picture emerging from our work is that LVO is close to a Hund-assisted
orbital selective Mott state. Recent experiments have indeed revealed that
an insulating state can be induced by Li-intercalation <cit.> or, together with charge ordering, when
subjecting this compound to uniaxial strain <cit.>.
While some qualitative aspects of this physical picture
were anticipated before <cit.>,
our work demonstrates how key progress on DMFT solvers now allows us to cover the full range of energy scales in
this particularly challenging case and reliably establish the physical mechanism at hand.
Our work also points at two necessary further developments: (i) including spatial fluctuations and computing
spin and charge responses beyond single-site DMFT and (ii) exploring
instabilities toward charge and spin ordering
dependent on external perturbations such as pressure, uniaxial strain,
and chemical substitutions.
Acknowledgements—
We acknowledge useful discussions and communications with Ryotaro Arita, Sophie Beck, Riccardo Comin, Andrea Damascelli, Luca de' Medici, Dongjin Oh, Dennis Huang, Xiangyu Luo, Giorgio Sangiovanni, Hidenori Takagi, Manish Verma, and Jan von Delft.
M.G. and U.S. acknowledge support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy-426 EXC-2111-390814868.
M.G. and U.S. are grateful to the Flatiron Institute and A.G. to the University of Munich for their hospitality.
The Flatiron Institute is a division of the Simons Foundation.
§ END MATTER
Appendix: Real-frequency dynamics—
We include, for completeness, several numerical results obtained directly in real frequencies from DFT+DMFT calculations using the NRG impurity solver.
Figure <ref> shows the local spectral function and the real part of the self-energy of the a_1g orbital on linear scales.
The main panels give an overview over a large frequency range from -4 eV to 3 eV.
The results from three choices of temperatures below 3 K completely overlap.
Close to zero frequency, one notices a strikingly sharp low-energy feature.
The insets, magnifying this low-energy feature, reveal how the quasiparticle peak (top panel) builds up with decreasing temperature and the slope of ReΣ (bottom panel) increases, corresponding to a decreasing quasiparticle weight.
Figure <ref> shows the imaginary parts of the a_1g self-energy and of the local spin susceptibility on logarithmic scales, obtained at T = 0.
Colors indicate three choices of the crystal field;
Δ≈ 0.18 is the appropriate value for LVO,
and larger Δ is used to approach an OSM state.
The top panel demonstrates the large scattering rate in the incoherent regime at finite energies
and the crossover to FL behavior -ImΣ∼ω^2 at extremely low frequencies.
The bottom panel supports this picture:
The incoherent regime is characterized by an extremely large spin susceptibility, while,
in the FL regime, χ^''∼ω decreases as energy is decreased.
The peak position of χ^'', ω_max, can be used to estimate T_FL (e.g., T_FL=ω_max/2).
The result for LVO (Δ≈ 0.18) is roughly 0.1 meV or 1 K.
| null | null | null | null | null | null |
http://arxiv.org/abs/2409.17402v1 | 20240925222629 | Enhancing Recommendation with Denoising Auxiliary Task | ["Pengsheng Liu", "Linan Zheng", "Jiale Chen", "Guangfa Zhang", "Yang Xu", "Jinyun Fang"] | cs.IR | ["cs.IR", "cs.AI", "cs.LG"] |
Liu PS, Zheng LN, Chen JL et al. Enhancing Recommendation with Denoising Auxiliary Task. JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY 33(1): –last-page Jun. 2024. DOI: 10.1007/s11390-024-4069-5
1College of Big Data and information engineering, Guizhou University, Guiyang 550025, China
2Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
3University of the Chinese Academy of Sciences, Beijing 100049, China
E-mail: [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected];
Received December 25, 2023; accepted June 25, 2024.
This is a post-peer-review, pre-copyedit version of an article published in Journal of Computer Science and Technology at <https://jcst.ict.ac.cn/en/article/doi/10.1007/s11390-024-4069-5>
The work was supported by the Program for Student Innovation through Research and Training under Grant No. 2023SRT071.
^Equal Contributions
^*Corresponding Author
Institute of Computing Technology, Chinese Academy of Sciences 2024
Abstract The historical interaction sequences of users play a crucial role in training recommender systems that can accurately predict user preferences. However, due to the arbitrariness of user behavior, the presence of noise in these sequences poses a challenge to predicting their next actions in recommender systems. To address this issue, our motivation is based on the observation that training noisy sequences and clean sequences (sequences without noise) with equal weights can impact the performance of the model. We propose a novel self-supervised Auxiliary Task Joint Training (ATJT) method aimed at more accurately reweighting noisy sequences in recommender systems. Specifically, we strategically select subsets from users' original sequences and perform random replacements to generate artificially replaced noisy sequences. Subsequently, we perform joint training on these artificially replaced noisy sequences and the original sequences. Through effective reweighting, we incorporate the training results of the noise recognition model into the recommender model. We evaluate our method on three datasets using a consistent base model. Experimental results demonstrate the effectiveness of introducing a self-supervised auxiliary task to enhance the base model's performance.
Keywords Auxiliary Task Learning, Recommender System, Sequence Denoising
§ INTRODUCTION
Recommender systems play a crucial role in today's internet and e-commerce domains, offering users improved information retrieval and shopping experiences, while also yielding substantial economic benefits for businesses <cit.>. Click-through rate (CTR) prediction holds a significant role within personalized recommender systems <cit.>. By analyzing users' historical interaction sequences, these systems recommend products aligned with user interest and preferences, facilitating the discovery of potentially engaging content <cit.>. This method enhances user experience, fosters sales and propagates content <cit.>.
In the context of sequence-based recommendation, the issue of noise present in sequences significantly impacts the establishment of accurate and reliable recommender models, forming a complex and pivotal challenge within the field. Sequence noise can arise from various sources, including user curiosity, data collection inaccuracies and environmental shifts, consequently leading to misjudgments of user interest and inaccurate recommender model outcomes <cit.>. Models trained on clean sequences significantly outperform those trained on original, noise-containing sequences. This underscores the imperative of exploring denoising strategies in recommender systems <cit.>.
To address the challenges mentioned above, denoising of sequences has garnered increasing attention from researchers. Recent studies demonstrate that using denoising methods in recommender systems can lead to more efficient model training and better performance at a reasonable computational cost <cit.>. The existing denoising process involves two steps: recognizing noise and handling noisy sequences.
In practice, noise recognition typically treats sequences with high loss values as noisy sequences. Based on the handling of noisy sequences, existing methods can be categorized into two types: truncated denoising and reweighted denoising. For the truncated denoising method <cit.>, the objective is to train a network capable of recognizing noise and discarding noisy sequences, allowing the model to only learn from clean sequences. Regarding the reweighted denoising method <cit.>, once noisy sequences are recognized, this method tends to assign smaller weights to these sequences throughout the entire model training process, thereby reducing the contribution of these sequences to the recommender model.
Although these denoising methods contribute to improving the recommender model's performance, user behavior encompasses diverse interest and motivations. Some interactions may be temporary, random or influenced by other factors, which increases the difficulty of distinguishing between noisy and clean sequences. Moreover, due to complex data distributions and inherent learning difficulties, high loss values do not necessarily indicate noisy sequences. Additionally, the presence of thresholds in the truncated denoising method heavily relies on the sampling distribution during the decision-making process, inevitably discarding many clean sequences and potentially exhibiting biased selections <cit.>. The reweighted denoising method requires specific configurations for a given model or recommendation task, which can be time-consuming and challenging to transfer to other settings <cit.>.
To address the aforementioned issues, from an intuitive perspective, we posit that using a noise recognition model to identify noise sequences and then assigning smaller weights to these sequences to mitigate their influence can enhance the performance of the recommender model. Unlike traditional noise recognition methods, we propose a direct method by constructing a noise recognition model as an auxiliary task to specifically identify noisy sequences. Moreover, to mitigate the impact of reduced training data on the recommender model, we use a novel adaptive reweighting method: training the noise recognition model and the recommender model jointly. This method allows for assigning the most suitable weights for different sequences, optimizing the performance of the recommender model.
Initially, we construct a noise recognition model to differentiate between clean and noisy sequences in the original dataset. Given the difficulty of identifying noisy sequences within the original data <cit.>, we artificially create noisy sequences by replacing historical click items of the original sequences with random data. Due to the inherent limitations of human intervention, the artificially replaced noisy sequences may not fully replicate the authentic noisy sequences present in the original sequences. However, since certain authentic noisy sequences also result from users' sporadic, unintentional clicks, there are some similarities between them. Based on the assumption of the existence of certain similarities, we believe that the artificially replaced noisy sequences can represent a portion of the original noisy sequences, thus we regard the artificially replaced noisy sequences as noise data. Given the scarcity of true noise data within the original sequences, we regard the original sequences as clean data. At this point, we can conduct labeled training for the noise recognition model.
Furthermore, we cannot simply discard the noisy sequences from the original sequences, as these noisy sequences may contain factors that are beneficial for the training of the recommender model, and different noisy sequences have varying impacts on the training of the recommender model. Consequently, we use a novel adaptive reweighting method. Taking into account that a fixed weighting strategy does not adapt to model variations and that the contributions of noisy data to model training are not uniform, we opt to design the sequence weights as learnable parameters associated with denoising method and beneficial for the performance of the recommender model. Specifically, we train the noise recognition model using original sequences and randomly replaced noisy sequences. The noise recognition model then weights non-overlapping original sequences not used in its training. These weighted sequences are subsequently used to train the recommender model. This joint training is accomplished through auxiliary task, ensuring that the noise recognition model accurately identifies noisy sequences while optimizing the results of sequence reweighting.
After training, the noise recognition model becomes adept at accurately distinguishing between these two types of sequences. In other words, the noise recognition model tends to classify the original sequences it was trained on as clean sequences, which results in the inability to recognize the noisy sequences in the original sequences. Taking this issue into consideration, we use the non-overlapping original sequences that were not involved in the training of the noise recognition model as inputs for the recommender model, which allows us to determine which of the input sequences used during the training of the recommender model contain noise.
The main contributions of this work are:
* We introduce a novel self-supervised Auxiliary Task Joint Training (ATJT) method, where the weights obtained from the joint training of the noise recognition model and the recommender model are reweighted onto the sequences used for training the recommender model. This method enhances the performance of the recommender model.
* The ATJT method is versatile and can be applied to various underlying recommender models.
* We evaluate the ATJT method on three datasets using a consistent base model. Experimental results show that our method improves recommender model performance.
The paper is structured as follows: section 2 provides a comprehensive overview of related work, focusing on CTR models and denoising methods. section 3 introduces the preliminary work, describes the training processes for both the noise recognition and recommender models, and explains the ATJT method. section 4 presents the experimental setup, results and model analysis. section 5 concludes the paper with a summary of our work and discusses future research directions.
§ RELATED WORK
In this section, we introduce the CTR Models and provide a comprehensive overview of the methods related to sequence denoising in CTR Models.
§.§ CTR Models
In recent years, deep learning based models have gained significant traction in CTR prediction <cit.>. These models exhibit strong representation learning capabilities, enabling them to capture more intricate and challenging patterns and features. Existing deep learning based recommender models can be broadly categorized into two types: sequence-based <cit.> and graph-based <cit.>. We propose a sequence-based denoising method in this paper. Consequently, this subsection focuses on sequence-based recommender models. Wide & Deep <cit.> and DCN <cit.> leverage the memory and generalization capabilities of feature interactions by combining traditional generalized linear models with deep neural networks. DIN <cit.> uses self-attention mechanisms to enhance the representation of user interest. SASRec <cit.> and S3Rec <cit.> utilize multi-head self-attention mechanism to model relationships within sequences. PS-SA<cit.> employs a learnable progressive sampling strategy to identify the most valuable items. FEARec <cit.> enhances recommendation by converting user historical behavior sequences into frequency domain representations and combining them with a self-attention mechanism.
CTR models leverage self-supervised learning <cit.> methods to improve data utilization and learn feature representations. For instance, DuoRec <cit.> and MPT <cit.> enhance item embedding distributions through contrastive learning. ICL <cit.> and simple CL method <cit.> address data sparsity and popularity bias by learning user intent representations. Pre-training GNN <cit.>, multi-channel hypergraph convolutional network <cit.>, DHCN <cit.> and self-supervised tri-training <cit.> integrate self-supervised learning with other relevant techniques to enhance the performance of recommender systems.
§.§ Denoising Methods
Identifying noisy sequences is an essential step in sequence denoising. DROP <cit.> and three instance selection methods <cit.> discuss how to reduce the number of sequences in the training set without affecting classification accuracy. AutoDenoise <cit.> deletes sequences that have a counteractive effect on the model through rewards. Hierarchical Reinforcement Learning for Course Recommendation in MOOCs <cit.> removes noisy courses by jointly training a hierarchical reinforcement learning-based modifier and a basic recommender model. DeCA <cit.> determines noisy sequences by analyzing the discrepancies in user preferences predicted by two recommender models. MMInfoRec <cit.> and ContrastVAE <cit.> address issues such as sparsity and uncertainty in recommender systems by leveraging contrastive learning techniques. DT4SR <cit.> effectively resolves the problem of neglecting user dynamic preferences and item relationships in traditional methods by introducing uncertainty into sequential modeling. SDK framework <cit.> deals with the challenges of Knowledge Graphs (KGs) in knowledge-aware recommendation by modeling hyper-relational facts and using self-supervised learning mechanisms. SGL <cit.> improves the recommendation performance of long-tail items and the robustness against interaction noises by using an auxiliary self-supervised learning task. We propose a denoising auxiliary task that neither requires considering the impact on the model nor adds excessive additional training steps. We define a model capable of recognizing noise, thereby enhancing the model's performance.
After recognizing the noisy sequences, we need to handle these sequences to improve the performance of the recommender model. Existing methods for handling noisy sequences can be classified into two categories: truncated denoising <cit.> and reweighted denoising <cit.>. WBPR <cit.> and T-CE <cit.> define thresholds for samples, truncating sequences with loss values higher than the threshold at each iteration. IR <cit.> modifies labels to train downstream modules for recommendation tasks. In R-CE <cit.>, smaller weights are assigned to high-loss sequences to prevent the model from fitting them too quickly. However, truncated denoising risks filtering out many clean sequences, while reweighted denoising suffers from limited transferability. We propose an ATJT method similar to reweighted denoising, but it addresses limitations by adaptively adjusting the weighting degree.
§ METHODOLOGY
In this section, we will introduce the preliminary work and discuss the training processes for both the noise recognition model and the recommender model. We will also provide a detailed explanation of how to implement the ATJT method.
§.§ Preliminary
In this paper, we use batch b composed of training sequences as the input for both the noise recognition model and the recommender model. Each batch has a size M, and the sequences have a length N.
All batches are divided into two groups, ℬ^R and ℬ^D. The batch in the first group, denoted as b_i^R = { s_i,1 ,⋯,s_i,m,⋯ , s_i,M}∈ℬ^R, is first passed through the noise recognition model to obtain the weights of its historical interaction sequences. We then use the reweighted sequences to train the recommender model. We use s_i to represent the sequences of the i-th batch that are used for training the recommender model. ℬ^R consists of a total of I batches. The batch in the second group, denoted as b_j^D = { s_j,1,⋯,s_j,m,⋯ , s_j,M}∈ℬ^D, is used to train the noise recognition model capable of accurately recognizing noisy sequences. We use s_j to represent the sequences of the j-th batch that are used for training the noise recognition model. ℬ^D consists of a total of J batches. In summary, ℬ^R∪ℬ^D=ℬ and ℬ^R∩ℬ^D =ϕ.
We further divide the batch b_j^D into two batches, b_j^D_(+) and b_j^D_(-). b_j^D_(+) represents clean batch within b_j^D consisting of original sequences. b_j^D_(-) = { s_j,1 ^',⋯,s_j,m^',⋯ , s_j,M^'} represents noisy batch consisting of randomly replaced sequences from b_j^D, where s_j,m^'= { v_1,⋯,v_n^',⋯ ,v_N} represents the m-th noisy sequence that has undergone random replacement in the j-th batch of the noise recognition model. Within the sequence s_j,m^', v_n^' represents the n-th interaction item that has been randomly replaced. At this point, the second batch transforms into b_j^D = {s_j,1,⋯,s_j,m^',⋯ , s_j,M}∈ℬ^D. In summary, b_j^D_(+)∪ b_j^D_(-)=b_j^D and b_j^D_(+)∩ b_j^D_(-) =ϕ.
We use f(· ;Θ _R) to represent the recommender model and g(·;Θ _D) to represent the noise recognition model. Given the users' historical interaction sequences (u,s_i)∈ b_i^R, the recommender model can predict the probabilities f(u,s_i;Θ _R) of clicks. Similarly, given (u,s_j)∈ b_j^D, the noise recognition model can predict the probabilities g(u,s_j;Θ _D) of noise contamination.
In summary, we enhance the recommender model's performance by obtaining accurate weights w_i for the sequences s_i from the joint training of f(· ;Θ _R) and g(·;Θ _D).
§.§ Recommender Model
In the training process of the recommender model, as shown in fig_1, given a batch b_i^R∈ℬ^R, the sequences s_i in the batch initially pass through the noise recognition model to obtain weights w_i. Subsequently, the reweighted sequences are used to train the parameters of the recommender model. It is worth noting that the recommender model can be chosen based on specific requirements, such as DIN or DCN. Its training process aligns with these base models.
The CTR prediction of the recommender model can be viewed as a supervised binary classification task. Therefore, we optimize the recommender model using a binary cross-entropy loss function. Additionally, considering the impact of noisy sequences on the training of the recommender model, it is essential to recognize and assign smaller weights to mitigate the influence of noisy sequences. Consequently, we define the loss function for the recommender model as follows:
ℒ_i^R = -1/|b_i^R|∑_s_i∈ b_i^R^|b_i^R|(w_i(y_ilog f(u,s_i;Θ _R)
+ (1-y_i)log(1-f(u,s_i;Θ _R)))),
where y_i and f(u,s_i;Θ _R) represent the labels for clicks and the predicted probabilities of clicks for the sequences s_i in b_i^R, respectively. w_i represents the weights of the sequences s_i. Typically, noisy sequences have smaller weights w_i compared with clean sequences in model training (as shown in our experiments in section 4.2.4). This approach reduces the impact of noisy sequences on model performance <cit.>. We will elaborate on how to determine the sequence weights w_i that improve the performance of the recommender model in section 3.3.2 and section 3.4.2.
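A minimal PyTorch sketch of the reweighted loss above is given below; the function name weighted_bce_loss, its argument names and the mean reduction over the batch are illustrative choices rather than the authors' released code:

import torch.nn.functional as F

def weighted_bce_loss(click_probs, labels, weights):
    """Reweighted binary cross-entropy: each sequence's BCE term is scaled by
    the weight w_i that the noise recognition model assigns to that sequence."""
    per_seq = F.binary_cross_entropy(click_probs, labels, reduction="none")
    return (weights * per_seq).mean()  # average over the batch b_i^R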
§.§ Noise Recognition Model
To build a noise recognition model capable of accurately distinguishing noisy sequences from clean sequences and weighting the sequences for the recommender model, we opt for a self-supervised training method. In this subsection, we will focus on two essential components: data replacement and weight generation.
§.§.§ Data Replacement
As shown in fig_2(a), we use the batch b_j^D= b_j^D_(+)∪ b_j^D_(-) as the input for the noise recognition model, where b_j^D_(+) represents clean batch consisting of original sequences, labeled as 1. And b_j^D_(-) represents noisy batch composed of randomly replaced noisy sequences, labeled as 0. While the selection of b_j^D_(-) from b_j^D is not fixed, it should not be too scant. Specifically, we assume that there are very few noisy sequences in b_j^D_(+). If we select too few sequences in b_j^D_(-), it may lead to a situation where the extremely few noisy sequences in b_j^D_(+) outnumber the sequences in b_j^D_(-), meaning that the number of sequences in b_j^D_(+) labeled as 1 while actually being 0 is greater than the number of sequences in b_j^D_(-) labeled as 0. This situation could lead the noise recognition model to incorrectly learn noisy sequences as positive (labeled 1). Hence, it is essential to ensure an adequate number of sequences in b_j^D_(-) to avoid an unstable situation that could lead the noise recognition model to learn in the wrong direction. Up to this point, we have discussed the training method for the recommender model and how input sequences for the noise recognition model are generated.
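The following sketch illustrates one way to build b_j^D from a batch of original sequences; the helper name make_noisy_batch and the defaults (half of the batch replaced, one item replaced per chosen sequence) are assumptions consistent with the analysis in Section 4.2.2, not the authors' exact implementation:

import torch

def make_noisy_batch(sequences, num_items, replace_ratio=0.5, n_replace=1):
    """Randomly replace n_replace historical click items in a fraction of the
    batch; replaced sequences form b_j^D(-) (label 0), the rest remain as
    the clean batch b_j^D(+) (label 1)."""
    noisy = sequences.clone()                        # (M, N) matrix of item IDs
    m, n = noisy.size(0), noisy.size(1)
    chosen = torch.randperm(m)[: int(m * replace_ratio)]
    for idx in chosen:
        pos = torch.randperm(n)[:n_replace]          # positions to corrupt
        noisy[idx, pos] = torch.randint(0, num_items, (n_replace,))
    labels = torch.ones(m)                           # 1 = original (clean)
    labels[chosen] = 0.0                             # 0 = artificially replaced (noisy)
    return noisy, labels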
§.§.§ Weight Generation
In this subsection, we will describe the method for generating weights w_i. As depicted in fig_1, the noise recognition model is a sequence-to-value model. The model takes b_j^D∈ℬ^D as input. Each s_j∈ b_j^D consists of N items and a target item to be predicted. When s_j passes through the neural network's embedding layer, it is transformed into the sequence of embeddings [𝐞_1,⋯,𝐞_n,⋯,𝐞_N,𝐞_T], in which 𝐞_n represents the embedding of the n-th item in the sequence s_j. Then, we pass it through the attention network to obtain the user hidden representation of the sequence s_j:
𝐡 = ∑_n = 1^N a_n 𝐞_n,
where
a_n = MLP(𝐞_n||𝐞_T)/∑^N_n'= 1MLP(𝐞_n'||𝐞_T).
|| represents the concatenation of embeddings. Subsequently, we concatenate 𝐡 with the embedding 𝐞_T of the target item, and then pass the result through an MLP to produce the weights w_j. Given that our noise recognition method can be viewed as a self-supervised binary classification task, we use the binary cross-entropy loss function for optimization:
ℒ_j^D = -1/|b_j^D|∑_s_j∈ b_j^D^|b_j^D|(y_jlog g(u,s_j;Θ _D)
+ (1-y_j) log(1-g(u,s_j;Θ _D))),
where y_j and g(u,s_j;Θ _D)=w_j represent the labels for noise and the predicted probabilities of noise for the sequences s_j in b_j^D, respectively.
The noise recognition model similarly uses training set b_i^R∈ℬ^R, which is used in training the recommender model, as input. This set of sequences is non-overlapping with the training sequences used for the noise recognition model, which will be explained in detail in section 3.4.1. At this point, we can use the results g(u,s_i;Θ _D) output by the noise recognition model as the weights for the training sequences of the recommender model, namely the weights w_i in eqn_1.
Furthermore, the noise recognition model is used only during the training phase to help the recommender model learn better parameters. It is not used during the evaluation phase. Therefore, the ATJT method does not increase the number of parameters in the recommender model.
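A sketch of such a noise recognition model is shown below. The class name, the softmax normalisation of the attention scores (used here in place of the direct ratio in the equation above, as a common stabilisation choice) and the layer sizes are illustrative assumptions that only loosely follow the configuration reported in Section 4.1.3:

import torch
import torch.nn as nn

class NoiseRecognitionModel(nn.Module):
    """Sequence-to-value model: target attention over item embeddings,
    SUM pooling, then an MLP with a sigmoid output used as the weight w."""
    def __init__(self, num_items, emb_dim=32):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, emb_dim)
        # attention part: a score for each [e_n || e_T] pair
        self.att_mlp = nn.Sequential(
            nn.Linear(2 * emb_dim, 32), nn.PReLU(), nn.Dropout(0.5),
            nn.Linear(32, 1),
        )
        # output part: weight from the pooled representation [h || e_T]
        self.out_mlp = nn.Sequential(
            nn.Linear(2 * emb_dim, 256), nn.PReLU(), nn.Dropout(0.5),
            nn.Linear(256, 64), nn.PReLU(), nn.Dropout(0.5),
            nn.Linear(64, 1),
        )

    def forward(self, seq_ids, target_ids):
        e = self.item_emb(seq_ids)                       # (M, N, d)
        e_t = self.item_emb(target_ids)                  # (M, d)
        pairs = torch.cat([e, e_t.unsqueeze(1).expand_as(e)], dim=-1)
        a = torch.softmax(self.att_mlp(pairs).squeeze(-1), dim=-1)  # attention a_n
        h = (a.unsqueeze(-1) * e).sum(dim=1)             # SUM pooling, as in eqn (2)
        w = torch.sigmoid(self.out_mlp(torch.cat([h, e_t], dim=-1)))
        return w.squeeze(-1)                             # per-sequence weight in (0, 1)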
§.§ ATJT Method
§.§.§ Data Partition.
To fully use the data, after fitting the parameters of both the recommender model and the noise recognition model, we can reverse the training data for ℬ^R and ℬ^D. This means that ℬ^D optimizes the recommender model while ℬ^R optimizes the noise recognition model. It is worth noting that the recommender model continues to use the original model in the subsequent training, while the noise recognition model is trained using a duplicate model. The purpose of this is to ensure that all data can be used to train the recommender model, while the noise recognition model does not fit all the data. Throughout the training process, the reweighted recommender model and the noise recognition model are trained together. This ensures that the noise recognition model can accurately recognize noisy sequences while optimizing the results of sequence reweighting.
Two important points to note are: first, we only use the original sequences to train the recommender model, and the noise recognition model is trained with original sequences and randomly replaced noisy sequences. Second, if there is a need to make more extensive use of the data, the training set sequences can be divided into N groups instead of two groups. As shown in fig_2(b), we demonstrate a training method where the training set is divided into four groups. In extreme cases, only one sequence receives the best reweighting output by the noise recognition model and trains the recommender model, while the rest of the sequences are input into the noise recognition model to achieve the best recognition performance in training. However, this method increases the number of duplicate models for training the noise recognition model. Therefore, the minimum grouping is two groups, and the maximum grouping is the size of the training set sequences |ℬ^R∪ℬ^D|. The specific grouping can be chosen based on available resources and performance considerations.
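A minimal sketch of this partition step is given below; the helper name partition_batches and the round-robin assignment are assumptions, and any disjoint split of the training batches would serve the same purpose:

import random

def partition_batches(all_batches, n_groups=2, seed=0):
    """Split the training batches into n_groups disjoint groups; in each round,
    one group trains the recommender model while the remaining groups train the
    noise recognition model, and the roles are then rotated."""
    batches = list(all_batches)
    random.Random(seed).shuffle(batches)
    return [batches[i::n_groups] for i in range(n_groups)]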
§.§.§ Loss Function
In this subsection, we focus on how to jointly train the recommender model with the noise recognition model by computing the loss value. When we partition the training set sequences into N groups, due to |ℬ^R|:|ℬ^D|=1:N-1, the batches used for training the recommender model should be in a ratio of 1:N-1 compared with those used for training the noise recognition model, as illustrated in fig_6. Therefore, the loss function for the noise recognition model should be:
ℒ_i^D=1/N-1∑_j=i(N-1)^(i+1)(N-1)-1ℒ_j^D,
where i represents the index of the batch used by the recommender model. N-1 represents the number of batches used by the noise recognition model corresponding to one batch of data used for training the recommender model, and ℒ_j^D represents the binary cross-entropy loss function when training the noise recognition model with the j-th batch of data.
During the training process, we combine the loss of the recommender model with the loss of the noise recognition model using a scaling factor α to obtain the total loss for model training:
ℒ_i^sum=ℒ_i^R+αℒ_i^D,
where α is a tunable parameter that allows us to control the learning rates of the recommender model and the noise recognition model, thereby achieving the goal of joint training. Joint training enables the noise recognition model to learn the weights w_i for sequences s_i in b_i ^R (as defined in eqn_1), which are more suitable for training the recommender model while accurately recognizing noise. This enables the recommender model to achieve better performance with the weighted sequences.
§.§.§ Overall Optimization Algorithm of Model Training
The joint training process is illustrated in Algorithm <ref>. The joint training consists of two parts: the calculation of the ℒ_i^R for the recommender model (lines 6-7) and the calculation of the ℒ_i^D for the noise recognition model (lines 8-14). We achieve joint training by summing the loss values from these two parts (line 15). Specifically, we start by initializing the recommender model f(· ;Θ _R) (line 2). Next, we iterate over all groups in the training set as described in section 3.4.1 (line 3). We then initialize the noise recognition model (line 4) and retrieve the i-th batch from the recommender model training set (line 5). The sequences from the i-th batch are passed through the noise recognition model g(·;Θ _D) to determine their weights w_i (line 6). These sequences are then input into the recommender model, and the weighted loss is calculated. After averaging the loss for all sequences in the batch, we obtain ℒ_i^R (line 7). Subsequently, we iterate over the batches from the (i(N-1))-th to the ((i+1)(N-1)-1)-th in the noise recognition model training set (line 9). From the batch of the noise recognition model, we select half of the sequences. For these sequences, we perform random replacements of items, considering them as noisy sequences (line 10 and 11). The original sequences and the noisy sequences from set b_j^D are input into the noise recognition model, and the loss is calculated. After averaging the loss for all sequences in the batch, we obtain ℒ_j^D (line 12). Next, we accumulate all ℒ_j^D values during the iteration onto ℒ_i^D to obtain the final noise recognition model loss (line 13). Finally, we add ℒ_i^R and ℒ_i^D to obtain ℒ_i^sum (line 15). At this point, we can jointly optimize the recommender model and the noise recognition model based on ℒ_i^sum (line 16 and 17). At this point, we have completed the training of the noise recognition model and explained how to implement the ATJT method.
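The sketch below pulls these pieces together into a joint training loop in the spirit of Algorithm 1, reusing the weighted_bce_loss, make_noisy_batch and NoiseRecognitionModel sketches above; groups can be produced, for example, by the partition_batches sketch. The optimizer choice, learning rate, α and the assumption of a rec_model(seq, tgt) interface that returns click probabilities are illustrative rather than the authors' exact settings:

import torch
import torch.nn.functional as F

def train_atjt(rec_model, num_items, groups, alpha=0.1, lr=0.01):
    """Joint training: for each group, a fresh noise recognition model g is
    trained on the other groups while it reweights the current group's
    sequences for the recommender model f; both are optimized on L^sum."""
    for i_group, rec_batches in enumerate(groups):
        noise_model = NoiseRecognitionModel(num_items)       # duplicate g per group
        optim = torch.optim.Adagrad(
            list(rec_model.parameters()) + list(noise_model.parameters()), lr=lr)
        den_batches = [b for j, grp in enumerate(groups) if j != i_group for b in grp]
        ratio = max(1, len(den_batches) // max(1, len(rec_batches)))  # about N-1
        for i, (seq, tgt, clicks) in enumerate(rec_batches):
            # recommender loss on sequences the noise model was NOT trained on
            w = noise_model(seq, tgt)
            loss_r = weighted_bce_loss(rec_model(seq, tgt), clicks.float(), w)
            # noise recognition loss on the disjoint batches, averaged (eqn 5)
            loss_d = 0.0
            for seq_d, tgt_d, _ in den_batches[i * ratio:(i + 1) * ratio]:
                noisy_seq, noise_y = make_noisy_batch(seq_d, num_items)
                loss_d = loss_d + F.binary_cross_entropy(
                    noise_model(noisy_seq, tgt_d), noise_y)
            loss_d = loss_d / ratio
            # joint objective L^sum = L^R + alpha * L^D (eqn 6)
            loss = loss_r + alpha * loss_d
            optim.zero_grad()
            loss.backward()
            optim.step()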
§ EXPERIMENTS
We conduct extensive experiments to address the following three questions:
* RQ1: How does the performance of the ATJT method compare with the base model?
* RQ2: What is the impact of different types of noisy sequences generation on performance?
* RQ3: How does different sequence weighting training methods affect performance?
§.§ Experimental Setting
§.§.§ Datasets and Baselines
We evaluate our method using the MovieLens20M (https://grouplens.org/datasets/movielens/20m, Jun. 2024), Amazon (Electro) (http://jmcauley.ucsd.edu/data/amazon/, Jun. 2024) and Yelp (https://www.kaggle.com/datasets/yelp-dataset/yelp-dataset/data, Jun. 2024) datasets. We select these three datasets for two reasons: 1) They represent diverse scenarios, namely an online movie platform and an e-commerce platform, with varying levels of product diversity. 2) They differ in size and characteristics. The statistical data for MovieLens20M, Amazon (Electro) and Yelp are shown in label_1.
The MovieLens20M and Amazon (Electro) datasets all consist of features such as user ID, historical interaction item IDs and their corresponding categories. In Yelp dataset, each item has features including its business_id, city, postal_code, star rating and categories. Each user has features including the user_id, useful, funny, cool and average star rating.
We employ several advanced recommender models as base models, including Wide & Deep <cit.>, DCN <cit.>, DIN <cit.>, SASRec <cit.>, S3Rec <cit.> and FEARec <cit.>. We use the results of these six base models on three datasets as the baseline and compare them with the ATJT method.
In summary, we conduct a total of 6 (the number of recommender models) * 2 (the number of contrastive models) experiments to assess the performance improvement of the ATJT method on three specified datasets for the recommender model.
§.§.§ Evaluation Protocol
To accurately assess the performance of the recommender model, we first divide users' historical interaction sequences into training and testing sets in a 4:1 ratio. In this setup, we use the training set to train both the noise recognition model and the recommender model, while the testing set is used to evaluate the performance of the recommender model. Notably, we need to ensure that users' historical interaction sequences in the training and testing sets are non-overlapping. Additionally, to avoid the issue described in section 3.4.1, where the noise recognition model fits the training data, we also need to ensure that the historical interaction sequences used for training the recommender model and the noise recognition model are non-overlapping.
We evaluate the testing set using the standard AUC (Area Under the ROC Curve), HR@5 and NDCG@5. These three metrics are widely used in click prediction tasks <cit.>. Higher values for all three metrics indicate superior model performance.
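For concreteness, a simple sketch of HR@k and NDCG@k for the common single-held-out-item setting is shown below; the function name and the evaluation setup (one ground-truth item per user) are illustrative assumptions and do not prescribe the exact protocol used in the experiments:

import numpy as np

def hr_ndcg_at_k(ranked_lists, ground_truth, k=5):
    """HR@k: fraction of users whose held-out item appears in the top k.
    NDCG@k: discounted gain 1/log2(rank+2) when the held-out item is hit."""
    hits, ndcgs = [], []
    for ranked, gt in zip(ranked_lists, ground_truth):
        topk = list(ranked[:k])
        if gt in topk:
            hits.append(1.0)
            ndcgs.append(1.0 / np.log2(topk.index(gt) + 2))
        else:
            hits.append(0.0)
            ndcgs.append(0.0)
    return float(np.mean(hits)), float(np.mean(ndcgs))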
§.§.§ Implementation Details
The construction of the ATJT method is based on the PyTorch framework. We encapsulate the noise recognition model into a class, allowing it to be integrated as a plugin with most recommender models. The implementation of the noise recognition model follows a unified structure when integrated with different underlying recommender models. The implementation of the noise recognition model relies on an attention mechanism. Specifically, we implement it as an attention model with embedding and output layers. The attention part consists of two layers of MLPs. For MLP (1), we set the linear layers as (64, 32), and for MLP (2), we set the linear layers as (32, 1). Each MLP consists of a linear layer, a PReLU activation function and a dropout operation (rate=0.5). The output part after SUM pooling consists of three layers of MLPs. For MLP (1), we set the linear layers as (40, 256), for MLP (2), we set the linear layers as (256, 64), and for MLP (3), we set the linear layers as (64, 1). The structure of MLPs is identical to the attention part. The output layer is implemented with a sigmoid function, with a dimension of 1, in order to obtain different weights for training the recommender model with noisy and clean sequences. The noise recognition model is uniformly optimized using the Adagrad optimizer, and a learning rate search is conducted from {0.1, 0.01, 0.001, 0.0001}.
When training recommender models, each method follows the following steps: 1) When training DCN and Wide & Deep models, we treat historical interactions as item features. The DNN architectures for DCN and Wide & Deep are set as (128, 128) and (256, 128), respectively. 2) For the DIN model, we use user ID, historical interaction item IDs and their corresponding categories as input features, following <cit.>. 3) For SASRec and S3Rec models, we use historical interaction item IDs as input features, following <cit.> and <cit.>. 4) When training the FEARec model, our input features are the same as those used in the DIN model. Additionally, we use the default hyperparameter configurations provided by the original author of the model on GitHub (https://github.com/sudaada/FEARec, Jun. 2024).
§.§ Experimental Results
§.§.§ Overall Performance (RQ1)
We propose an ATJT method based on six fundamental recommender models and compare their performance with the base recommender models on three different datasets. Experimental results show that the ATJT method outperforms base models in terms of AUC, HR@5 and NDCG@5, as shown in label_2.
We find that the ATJT method yields better improvements in DCN and Wide & Deep models compared with DIN, SASRec, S3Rec and FEARec models. This phenomenon can be attributed to the attention mechanism possessed by DIN, SASRec, S3Rec and FEARec, which adaptively learns users' interest representations from the historical interaction sequences, thus mitigating the impact of behaviors unrelated to users' interest representations <cit.>. Furthermore, the enhancement of the ATJT method is more pronounced in the Wide & Deep model than in the DCN model. The reason may be that, when processing the input features of the DCN model, we regard historical interactions as item features; the cross network therefore captures feature interactions and mitigates the impact of irrelevant features on model performance during training <cit.>. The Wide & Deep model lacks attention mechanisms like DIN, the multi-head attention mechanism like SASRec, S3Rec and FEARec, or the cross network like the DCN model, which can filter out the influence of irrelevant or negative behaviors on model optimization. The ATJT method compensates for the Wide & Deep model's inability to filter out irrelevant or negative behaviors within user actions, resulting in a more noticeable performance improvement.
The ATJT method demonstrates superior performance on the Yelp dataset compared with the MovieLens20M and Amazon (Electro) datasets. This observation may be attributed to the relatively larger size of the Yelp dataset, which provides more data for training more complex models after denoising. The MovieLens20M dataset is also substantial in size, whereas the Amazon (Electro) dataset is relatively smaller, potentially impacting the model's performance after denoising. However, the ATJT method exhibits significant improvements across various base models and data sizes. Furthermore, experiments conducted with six different recommender models indicate the adaptability and efficacy of the ATJT method.
§.§.§ Noise Generation Analysis (RQ2)
In this subsection, we analyze the effect of selecting how many sequences in the noise recognition model training data to replace, and the effect of replacing a different number of historical click items in the artificially replaced noisy sequences. fig_3 shows the impact of selecting 0.1, 0.3, 0.5, 0.7 and 0.9 of the data in the noise recognition model training as noisy sequences on the performance of the recommender model. label_3 shows the impact of replacing 1, 2, 3, 5 and 10 historical click items in the artificially replaced noisy sequences on the performance of the recommender model.
Based on fig_3, it is evident that training the noise recognition model with varying number of sequences, corresponding to different quantities of b_j^D_(-) as described in section 3.3.1, has a different impact on the fitting speed and effectiveness of the noise recognition model. When selecting 0.3 or 0.7 of the data, there is a noticeable decrease in fitting speed and performance. This is due to the use of too few noisy sequences during the training of the noise recognition model, which leads to its inability to accurately recognize noisy sequences. Conversely, if artificially replaced noisy sequences are overly abundant, it can also hinder the noise recognition model's ability to accurately distinguish clean sequences. Furthermore, when opting for a smaller number of sequences, such as using 0.1 of the data as noisy sequences, a situation similar to what was described in section 3.3.1 may occur, namely the noise recognition model struggles to differentiate between noisy and clean sequences. Selecting fewer data points also results in poorer fitting performance.
From label_3, we observe that replacing fewer historical click items in the sequences during noisy recognition model training leads to more accurate output weights for the sequences in the recommender model, resulting in improved performance. However, as the number of replaced items increases, the performance of the recommender model starts to deteriorate. This observation suggests that the original noisy sequences within the historical interaction sequences are mostly sparse. Sequences replacing fewer items exhibit greater similarity to the original noisy sequences, while sequences replacing 3 or more items show significant divergence from the original noisy sequences.
§.§.§ Sequence Weighting Ablation Experiments (RQ3)
To investigate the impact of different noisy sequence weighting methods on the performance of the recommender model, we compare three weighting training methods: Without Auxiliary Task (WAT), Direct Noise Recognition (DNR) and ATJT. The sequence weights for three methods are obtained through a consistent model structure as described in section 4.1.3. In the DNR method, the noise recognition model is first separately trained to accurately distinguish artificially replaced noisy sequences from the original clean sequences as the training objective. Then, the training sequences of the recommender model are passed through this noise recognition model to obtain sequence weights, followed by training the recommender model. In the WAT method, the training sequences of the recommender model are directly passed through the targetless model structure to obtain the sequence weights, followed by training the recommender model.
As shown in label_4, the ATJT method outperforms the other two methods, showing significant improvements in AUC, HR@5 and NDCG@5 metrics. The reason is that, in comparison to the WAT weighted training method, the ATJT method takes noise recognition as the auxiliary goal, which helps the training sequences of the recommender model find suitable training weights. Specifically, it assigns larger weights to clean sequences and smaller weights to noisy sequences, thus mitigating the impact of noisy sequences on the performance of the recommender model. In contrast to the DNR method, the ATJT method not only identifies noisy sequences but also assigns appropriate weights to them. In other words, when training the recommender model with noisy sequences, the goal is not to minimize the weights assigned to them. Rather, the objective is to find the weights that optimize the performance of the recommender model. It is worth noting that using the ATJT method and the DNR method results in better performance compared with the base model and the model trained using the WAT method. This underscores the meaningfulness of weighting sequences through a noise recognition auxiliary task. Additionally, the reason for the inferior performance of the recommender model trained using the WAT method compared with the base model is that the WAT method fails to capture the degree of noise in the sequences. It exhibits confusion in the early stages of training, potentially leading to incorrect weights. This also indicates that augmenting the complexity of the model does not lead to a significant improvement in performance.
§.§.§ Impact of Noise on Sequence Weights (w_i)
To demonstrate that noisy sequences have smaller weights compared with clean sequences, we use the DCN model as the base model and conduct analyses on three datasets. As shown in label_5, the weights of the noisy sequences are, on average, approximately 19% smaller than the weights of the clean sequences. This aligns with our expectation that assigning smaller weights to noisy sequences can enhance the base model's performance.
§ CONCLUSION AND FUTURE WORK
In this study, we proposed a novel self-supervised ATJT method. This method leverages the training outcomes of a noise recognition model to reweight sequences for training the recommender model. Additionally, we conducted joint training of the recommender model and noise recognition model to acquire more appropriate weights, further enhancing the performance of the recommender model. We then evaluated our method on three datasets and six base models, demonstrating its effectiveness. Finally, we validated the impact of different noisy sequences and training methods on recommender model performance through Noise Generation Analysis and Sequence Weighting Ablation experiments.
Regarding future prospects, the well-trained noise recognition model, which can discriminate artificially replaced noisy sequences, could be used as the discriminator of an adversarial network to learn a generator whose outputs it cannot recognize as artificially replaced. At this point, the generator can create noisy sequences beyond those replaced by humans, which may be more similar to the noisy sequences in the original sequences. Therefore, using these generated sequences as noisy sequences might yield better results.
1 Zhou R J, Khemmarat S, Gao L X. The impact of youtube recommendation system on video views. In Proc. the 10th ACM SIGCOMM conference on Internet measurement, Nov. 2010, pp.404-410. DOI: https://doi.org/10.1145/1879141.187919310.1145/1879141.1879193
2 Lee D and Hosanagar K. Impact of recommender systems on sales volume and diversity. 2014.
3 Lin T H, Gao C, Li Y. Cross: Cross-platform recommendation for social e-commerce. In Proc. the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, Jul. 2019, pp.515-524. DOI: https://dl.acm.org/doi/10.1145/3331184.333119110.1145/3331184.3331191
4 Hidasi B, Karatzoglou A, Baltrunas L, Tikk D. Session-based recommendations with recurrent neural networks. arXiv:1511.06939, 2015. https://arxiv.org/abs/1511.06939https://arxiv.org/abs/1511.06939, Mar. 2016.
5 Ni Y B, Ou D, Liu S C, Li X, Ou W W, Zeng A X, Si L. Perceive your users in depth: Learning universal user representations from multiple e-commerce tasks. In Proc. the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Jul. 2018, pp.596-605. DOI: https://dl.acm.org/doi/abs/10.1145/3219819.321982810.1145/3219819.3219828
6 Zhou G R, Mou N, Fan Y, Pi Q, Bian W J, Zhou C, Zhu X Q, Gai K. Deep interest evolution network for click-through rate prediction. In Proc. the AAAI Conference on Artificial Intelligence, Jul. 2019, pp.5941-5948. DOI: https://doi.org/10.1609/aaai.v33i01.3301594110.1609/aaai.v33i01.33015941
7 Zhou G R, Zhu X Q, Song C R, Fan Y, Zhu H, Ma X, Yan Y H, Jin J Q, Li H, Gai K. Deep interest network for click-through rate prediction. In Proc. the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Jul. 2018, pp.1059-1068. DOI: https://dl.acm.org/doi/abs/10.1145/3219819.321982310.1145/3219819.3219823
8 Chen H Y, Lin Y S, Pan M H, Wang L, Yeh C-C M, Li X T, Zheng Y, Wang F, Yang H. Denoising self-attentive sequential recommendation. In Proc. the 16th ACM Conference on Recommender Systems, Sep. 2022, pp.92-101. DOI: https://doi.org/10.1145/3523227.354678810.1145/3523227.3546788
9 Davidson J, Liebald B, Liu J N, Nandy P, Van Vleet T, Gargi U, Gupta S, He Y, Lambert M, Livingston B, et al. The YouTube video recommendation system. In Proc. the fourth ACM conference on Recommender systems, Sep. 2010, pp.293-296. Sep. DOI: https://doi.org/10.1145/1864708.186477010.1145/1864708.1864770
10 Wang J, Zhang Y. Opportunity model for e-commerce recommendation: right product; right time. In Proc. the 36th International ACM SIGIR Conference on Research and Development in Information Retrieval, Jul. 2013, pp.303-312. DOI: https://doi.org/10.1145/2484028.248406710.1145/2484028.2484067
11 Cheng H, Koc L, Harmsen J, Shaked T, Chandra T, Aradhye H, Anderson G, Corrado G, Chai W, Ispir M, et al. Wide & deep learning for recommender systems. In Proc. the 1st Workshop on Deep Learning for Recommender Systems, Sep. 2016, pp.7-10. DOI: https://doi.org/10.1145/2988450.298845410.1145/2988450.2988454
12 Guo H F, Tang R M, Ye Y M, Li Z G, He X Q. DeepFM: A factorization-machine based neural network for CTR prediction. arXiv:1703.04247, 2017. https://arxiv.org/abs/1703.04247https://arxiv.org/abs/1703.04247, Mar. 2017.
13 Lin W L, Zhao X Y, Wang Y J, Xu T, Wu X. Adafs: Adaptive feature selection in deep recommender system. In Proc. the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Aug. 2022, pp.3309-3317. DOI: https://doi.org/10.1145/3534678.353920410.1145/3534678.3539204
14 Wang R X, Fu B, Fu G, Wang M L. Deep & cross network for ad click predictions. In Proc. the ADKDD'17, Aug. 2017, pp.1-7. DOI: https://doi.org/10.1145/3124749.312475410.1145/3124749.3124754
15 Fang H, Zhang D N, Shu Y H, Guo G B. Deep learning for sequential recommendation: Algorithms, influential factors, and evaluations. ACM Trans. Information System, Nov. 2020, 39(1): 1-42. DOI: https://doi.org/10.1145/342672310.1145/3426723
16 Wang T L, Xia L H, Huang C. Denoised self-augmented learning for social recommendation. arXiv:2305.12685, 2023. https://arxiv.org/abs/2305.12685https://arxiv.org/abs/2305.12685, Nov. 2023.
17 Fan Z W, Xu K, Dong Z, Peng H, Zhang J W, Yu P S. Graph collaborative signals denoising and augmentation for recommendation. In Proc. the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, Jul 2023, pp.2037-2041. DOI: https://doi.org/10.1145/3539618.359199410.1145/3539618.3591994
18 Wang W J, Feng F L, He X N, Nie L Q, Chua T-S. Denoising implicit feedback for recommendation. In Proc. the 14th ACM International Conference on Web Search and Data Mining, Mar. 2021, pp.373-381. DOI: https://doi.org/10.1145/3437963.344180010.1145/3437963.3441800
19 Gantner Z, Drumond L, Freudenthaler C, Schmidt-Thieme L. Personalized ranking for non-uniformly sampled items. In Proc. KDD Cup 2011, Aug. 2012, pp.231-247.
20 Hu K X, Li L, Xie Q, Liu J Q, Tao X H. What is next when sequential prediction meets implicitly hard interaction? In Proc. the 30th ACM International Conference on Information & Knowledge Management, Oct. 2021, pp.710-719. DOI: https://doi.org/10.1145/3459637.348249210.1145/3459637.3482492
21 Wang Z T, Xu Q Q, Yang Z Y, Cao X C, Huang Q M. Implicit feedbacks are not always favorable: Iterative relabeled one-class collaborative filtering against noisy interactions. In Proc. the 29th ACM International Conference on Multimedia, Oct. 2021, pp.3070-3078. DOI: https://doi.org/10.1145/3474085.347544610.1145/3474085.3475446
22 Qin Y Q, Wang P F, Li C L. The world is binary: Contrastive learning for denoising next basket recommendation. In Proc. the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, jul. 2021, pp.859-868. DOI: https://doi.org/10.1145/3404835.346283610.1145/3404835.3462836
23 Yu W H, Qin Z. Sampler design for implicit feedback data by noisy-label robust learning. In Proc. the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, Jul. 2020, pp.861-870. DOI: https://doi.org/10.1145/3397271.340115510.1145/3397271.3401155
24 Wang Y, Xin X, Meng Z Q, Jose J M, Feng F L, He X N. Learning robust recommenders through cross-model agreement. In Proc. the ACM Web Conference 2022, Apr. 2022, pp.2015-2025. DOI: https://doi.org/10.1145/3485447.351220210.1145/3485447.3512202
25 Lin W L, Zhao X Y, Wang Y J, Zhu Y S, Wang W Y. Autodenoise: Automatic data instance denoising for recommendations. In Proc. the ACM Web Conference 2023, Apr. 2023, pp.1003-1011. DOI: https://doi.org/10.1145/3543507.358333910.1145/3543507.3583339
26 He R, Kang W C, McAuley J. Translation-based recommendation. In Proc. the eleventh ACM Conference on Recommender Systems, Aug. 2017, pp.161-169. DOI: https://doi.org/10.1145/3109859.310988210.1145/3109859.3109882
27 He R, McAuley J. Fusing similarity models with Markov chains for sparse sequential recommendation. In 2016 IEEE 16th International Conference on Data Mining (ICDM), Dec. 2016, pp.191-200. DOI: https://doi.org/10.1109/ICDM.2016.003010.1109/ICDM.2016.0030
28 Medsker L R, Jain LC. Recurrent neural networks. Design and Applications, 2001, 5(64-67): 2.
29 Graves A. Long short-term memory. In Supervised Sequence Labelling with Recurrent Neural Networks, Jan. 2012, pp.37-45. DOI: https://doi.org/10.1007/978-3-642-24797-2_410.1007/978-3-642-24797-2_4
30 Han K, Xiao A, Wu E H, Guo J Y, Xu C J, Wang Y H. Transformer in transformer. Advances in Neural Information Processing Systems, 2021, 34: 15908-15919.
31 Fan X Y, Liu Z, Lian J X, Zhao W X, Xie X, Wen J-R. Lighter and better: low-rank decomposed self-attention networks for next-item recommendation. In Proc. the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, Jul. 2021, pp.1733-1737. DOI: https://doi.org/10.1145/3404835.346297810.1145/3404835.3462978
32 Tan Q Y, Zhang J W, Yao J C, Liu N H, Zhou J R, Yang H X, Hu X. Sparse-interest network for sequential recommendation. In Proc. the 14th ACM International Conference on Web Search and Data Mining, Mar. 2021, pp.598-606. DOI: https://doi.org/10.1145/3437963.344181110.1145/3437963.3441811
33 Xie X, Sun F, Liu Z Y, Wu S W, Gao J Y, Zhang J D, Ding B, Cui B L. Contrastive learning for sequential recommendation. In 2022 IEEE 38th International Conference on Data Engineering (ICDE), May 2022, pp.1259-1273. DOI: https://doi.org/10.1109/ICDE53745.2022.0009910.1109/ICDE53745.2022.00099
34 Huang L W, Ma Y T, Liu Y B, Du B D, Wang S L, Li D Y. Position-enhanced and time-aware graph convolutional network for sequential recommendations. ACM Trans. Information Systems, Jan. 2023, 41(1): 1-32. DOI: https://doi.org/10.1145/351170010.1145/3511700
35 He X N, Deng K, Wang X, Li Y, Zhang Y D, Wang M. LightGCN: Simplifying and powering graph convolution network for recommendation. In Proc. the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, Jul. 2020, pp.639-648. DOI: https://doi.org/10.1145/3397271.340106310.1145/3397271.3401063
36 Mao K L, Zhu J M, Xiao X, Lu B, Wang Z W, He X Q. UltraGCN: Ultra simplification of graph convolutional networks for recommendation. In Proc. the 30th ACM International Conference on Information & Knowledge Management, Oct. 2021, pp.1253-1262. DOI: https://doi.org/10.1145/3459637.348229110.1145/3459637.3482291
37 Fan Z W, Liu Z W, Zhang J W, Xiong Y, Zheng L, Yu P S. Continuous-time sequential recommendation with temporal graph collaborative transformer. In Proc. the 30th ACM International Conference on Information & Knowledge Management, Oct. 2021, pp.433-442. DOI: https://doi.org/10.1145/3459637.348224210.1145/3459637.3482242
38 Kang W C, McAuley J. Self-attentive sequential recommendation. In 2018 IEEE International Conference on Data Mining (ICDM), Nov. 2018, pp.197-206. DOI: https://doi.org/10.1109/ICDM.2018.0003510.1109/ICDM.2018.00035
39 Zhou K, Wang H, Zhao W X, Zhu Y T, Wang S R, Zhang F Z, Wang Z Y, Wen J R. S3-rec: Self-supervised learning for sequential recommendation with mutual information maximization. In Proc. the 29th ACM International Conference on Information & Knowledge Management, Oct. 2020, pp.1893-1902. DOI: https://doi.org/10.1145/3340531.341195410.1145/3340531.3411954
40 Hu J C, Chan Z M, Zhang Y, Han S G, Lou S Y, Liu B L, Zhu H, Jiang Y N, Xu J, Zheng B. Ps-sa: An efficient self-attention via progressive sampling for user behavior sequence modeling. In Proc. the 32nd ACM International Conference on Information and Knowledge Management, Oct. 2023, pp.4639-4645. DOI: https://doi.org/10.1145/3583780.361549510.1145/3583780.3615495
41 Du X Y, Yuan H H, Zhao P P, Qu J F, Zhuang F Z, Liu G F, Liu Y C, Sheng V S. Frequency enhanced hybrid attention network for sequential recommendation. In Proc. the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, Jul. 2023, pp.78-88. DOI: https://doi.org/10.1145/3539618.359168910.1145/3539618.3591689
42 Hjelm R D, Fedorov A, Lavoie-Marchildon S, Grewal K, Bachman P, Trischler A, Bengio Y. Learning deep representations by mutual information estimation and maximization. arXiv:1808.06670, 2018. https://arxiv.org/abs/1808.06670https://arxiv.org/abs/1808.06670, Feb. 2019.
43 Qiu R H, Huang Z, Yin H Z, Wang Z J. Contrastive learning for representation degeneration problem in sequential recommendation. In Proc. the fifteenth ACM International Conference on Web Search and Data Mining, Feb. 2022, pp.813-823. DOI: https://doi.org/10.1145/3488560.349843310.1145/3488560.3498433
44 Hao B W, Yin H Z, Zhang J, Li C P, Chen H. A multi-strategy-based pre-training method for cold-start recommendation. ACM Trans. Information Systems, Jan. 2023, 41(2): 1-24. DOI: https://doi.org/10.1145/354410710.1145/3544107
45 Chen Y J, Liu Z W, Li J, McAuley J, Xiong C. Intent contrastive learning for sequential recommendation. In Proc. the ACM Web Conference 2022, Apr. 2022, pp.2172-2182. DOI: https://doi.org/10.1145/3485447.351209010.1145/3485447.3512090
46 Yu J L, Yin H Z, Xia X, Chen T, Cui L Z, Nguyen Q V H. Are graph augmentations necessary? Simple graph contrastive learning for recommendation. In Proc. the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, Jul. 2022, pp.1294-1303. DOI: https://doi.org/10.1145/3477495.353193710.1145/3477495.3531937
47 Hao B W, Zhang J, Yin H Z, Li C P, Chen H. Pre-training graph neural networks for cold-start users and items representation. In Proc. the 14th ACM International Conference on Web Search and Data Mining, Mar. 2021, pp.265-273. DOI: https://doi.org/10.1145/3437963.344173810.1145/3437963.3441738
48 Yu J L, Yin H Z, Li J D, Wang Q Y, Nguyen QVH, Zhang X L. Self-supervised multi-channel hypergraph convolutional network for social recommendation. In Proc. the Web Conference 2021, Apr. 2021, pp.413-424. DOI: https://doi.org/10.1145/3442381.344984410.1145/3442381.3449844
49 Xia X, Yin H Z, Yu J L, Wang Q Y, Cui L Z, Zhang X L. Self-supervised hypergraph convolutional networks for session-based recommendation. In Proc. the AAAI Conference on Artificial Intelligence, May 2021, pp.4503-4511. DOI: https://doi.org/10.1609/aaai.v35i5.1657810.1609/aaai.v35i5.16578
50 Yu J L, Yin H Z, Gao M, Xia X, Zhang X L, Nguyen QVH. Socially-aware self-supervised tri-training for recommendation. In Proc. the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, Aug. 2021, pp.2084-2092. DOI: https://doi.org/10.1145/3447548.346734010.1145/3447548.3467340
51 Wilson D R, Martinez T R. Reduction techniques for instance-based learning algorithms. Machine learning, Mar. 2000, 38: 257-286. DOI: https://doi.org/10.1023/A:100762691372110.1023/A:1007626913721
52 Leyva E, Gonzalez A, Perez R. Three new instance selection methods based on local sets: A comparative study with several approaches from a bi-objective perspective. Pattern Recognition, Apr. 2015, 48(4): 1523-1537. DOI: https://doi.org/10.1016/j.patcog.2014.10.00110.1016/j.patcog.2014.10.001
53 Zhang J, Hao B, Chen B, Li C P, Chen H, Sun J M. Hierarchical reinforcement learning for course recommendation in MOOCs. In Proc. the AAAI Conference on Artificial Intelligence, Jul. 2019, pp.435-442. DOI: https://doi.org/10.1609/aaai.v33i01.330143510.1609/aaai.v33i01.3301435
54 Qiu R H, Huang Z, Yin H Z. Memory augmented multi-instance contrastive predictive coding for sequential recommendation. In 2021 IEEE International Conference on Data Mining (ICDM), Dec. 2021, pp.519-528. DOI: https://doi.org/10.1109/ICDM51629.2021.0006310.1109/ICDM51629.2021.00063
55 Wang Y, Zhang H R, Liu Z W, Yang L W, Yu P S. ContrastVAE: Contrastive variational autoencoder for sequential recommendation. In Proc. the 31st ACM International Conference on Information & Knowledge Management, Oct. 2022, pp.2056-2066. DOI: https://doi.org/10.1145/3511808.355726810.1145/3511808.3557268
56 Fan Z W, Liu Z W, Wang S, Zheng L, Yu P S. Modeling sequences as distributions with uncertainty for sequential recommendation. In Proc. the 30th ACM International Conference on Information & Knowledge Management, Oct. 2021, pp.3019-3023. DOI: https://doi.org/10.1145/3459637.348214510.1145/3459637.3482145
57 Liu Y, Xuan H R, Li B H, Wang M, Chen T, Yin H Z. Self-supervised dynamic hypergraph recommendation based on hyper-relational knowledge graph. In Proc. the 32nd ACM International Conference on Information and Knowledge Management, Oct. 2023, pp.1617-1626. DOI: https://doi.org/10.1145/3583780.361505410.1145/3583780.3615054
58 Wu J C, Wang X, Feng F L, He X N, Chen L, Lian J X, Xie X. Self-supervised graph learning for recommendation. In Proc. the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, Jul. 2021, pp.726-735. DOI: https://doi.org/10.1145/3404835.346286210.1145/3404835.3462862
| Recommender systems play a crucial role in today's internet and e-commerce domains, offering users improved information retrieval and shopping experiences, while also yielding substantial economic benefits for businesses <cit.>. Click-through rate (CTR) prediction holds a significant role within personalized recommender systems <cit.>. By analyzing users' historical interaction sequences, these systems recommend products aligned with user interest and preferences, facilitating the discovery of potentially engaging content <cit.>. This method enhances user experience, fosters sales and propagates content <cit.>.
In the context of sequence-based recommendation, the issue of noise present in sequences significantly impacts the establishment of accurate and reliable recommender models, forming a complex and pivotal challenge within the field. Sequence noise can arise from various sources, including user curiosity, data collection inaccuracies and environmental shifts, consequently leading to misjudgments of user interest and inaccurate recommender model outcomes <cit.>. Models trained on clean sequences significantly outperform those trained on original, noise-containing sequences. This underscores the imperative of exploring denoising strategies in recommender systems <cit.>.
To address the challenges mentioned above, denoising of sequences has garnered increasing attention from researchers. Recent studies demonstrate that using denoising methods in recommender systems can lead to more efficient model training and better performance at a reasonable computational cost <cit.>. The existing denoising process involves two steps: recognizing noise and handling noisy sequences.
In practice, noise recognition typically treats sequences with high loss values as noisy. Based on how noisy sequences are handled, existing methods can be categorized into two types: truncated denoising and reweighted denoising. The truncated denoising method <cit.> trains a network capable of recognizing noise and discards noisy sequences, allowing the model to learn only from clean sequences. The reweighted denoising method <cit.>, once noisy sequences are recognized, assigns smaller weights to these sequences throughout the entire model training process, thereby reducing their contribution to the recommender model.
Although these denoising methods help improve the recommender model's performance, user behavior encompasses diverse interests and motivations. Some interactions may be temporary, random or influenced by other factors, which makes it harder to distinguish between noisy and clean sequences. Moreover, due to complex data distributions and inherent learning difficulties, high loss values do not necessarily indicate noisy sequences. Additionally, the thresholds used in the truncated denoising method rely heavily on the sampling distribution during decision-making, inevitably discarding many clean sequences and potentially introducing biased selections <cit.>. The reweighted denoising method requires specific configurations for a given model or recommendation task, which can be time-consuming and challenging to transfer to other settings <cit.>.
To address the aforementioned issues, we intuitively posit that using a noise recognition model to identify noisy sequences and then assigning smaller weights to these sequences to mitigate their influence can enhance the performance of the recommender model. Unlike traditional noise recognition methods, we propose a direct method that constructs a noise recognition model as an auxiliary task to specifically identify noisy sequences. Moreover, to mitigate the impact of reduced training data on the recommender model, we use a novel adaptive reweighting method: training the noise recognition model and the recommender model jointly. This method assigns the most suitable weight to each sequence, optimizing the performance of the recommender model.
Initially, we construct a noise recognition model to differentiate between clean and noisy sequences in the original dataset. Given the difficulty of identifying noisy sequences within the original data <cit.>, we artificially create noisy sequences by replacing historical click items of the original sequences with random data. Due to the inherent limitations of human intervention, the artificially replaced noisy sequences may not fully replicate the authentic noisy sequences present in the original sequences. However, since certain authentic noisy sequences also result from users' sporadic, unintentional clicks, there are some similarities between them. Based on this assumed similarity, we believe that the artificially replaced noisy sequences can represent a portion of the original noisy sequences, and thus we regard them as noise data. Given the scarcity of true noise data within the original sequences, we regard the original sequences as clean data. At this point, we can conduct labeled training for the noise recognition model.
Furthermore, we cannot simply discard the noisy sequences from the original sequences, as these noisy sequences may contain factors that are beneficial for the training of the recommender model, and different noisy sequences have varying impacts on its training. Consequently, we use a novel adaptive reweighting method. Taking into account that a fixed weighting strategy does not adapt to model variations and that the contributions of noisy data to model training are not uniform, we design the sequence weights as learnable parameters that are associated with the denoising method and beneficial for the performance of the recommender model. Specifically, we train the noise recognition model using original sequences and randomly replaced noisy sequences. The noise recognition model then weights non-overlapping original sequences not used in its training. These weighted sequences are subsequently used to train the recommender model. This joint training is accomplished through an auxiliary task, ensuring that the noise recognition model accurately identifies noisy sequences while optimizing the results of sequence reweighting.
After training, the noise recognition model becomes adept at accurately distinguishing between these two types of sequences. In other words, it tends to classify the original sequences it was trained on as clean, which makes it unable to recognize the noisy sequences among them. Taking this issue into consideration, we use the non-overlapping original sequences that were not involved in training the noise recognition model as inputs for the recommender model, which allows us to determine which of the recommender model's training sequences contain noise.
The main contributions of this work are:
* We introduce a novel self-supervised Auxiliary Task Joint Training (ATJT) method, where the weights obtained from the joint training of the noise recognition model and the recommender model are reweighted onto the sequences used for training the recommender model. This method enhances the performance of the recommender model.
* The ATJT method is versatile and can be applied to various underlying recommender models.
* We evaluate the ATJT method on three datasets using a consistent base model. Experimental results show that our method improves recommender model performance.
The paper is structured as follows: section 2 provides a comprehensive overview of related work, focusing on CTR models and denoising methods. section 3 introduces the preliminary work, describes the training processes for both the noise recognition and recommender models, and explains the ATJT method. section 4 presents the experimental setup, results and model analysis. section 5 concludes the paper with a summary of our work and discusses future research directions. | In this section, we introduce the CTR Models and provide a comprehensive overview of the methods related to sequence denoising in CTR Models.
§.§ CTR Models
In recent years, deep learning based models have gained significant traction in CTR prediction <cit.>. These models exhibit strong representation learning capabilities, enabling them to capture more intricate and challenging patterns and features. Existing deep learning based recommender models can be broadly categorized into two types: sequence-based <cit.> and graph-based <cit.>. We propose a sequence-based denoising method in this paper. Consequently, this subsection focuses on sequence-based recommender models. Wide & Deep <cit.> and DCN <cit.> leverage the memory and generalization capabilities of feature interactions by combining traditional generalized linear models with deep neural networks. DIN <cit.> uses self-attention mechanisms to enhance the representation of user interest. SASRec <cit.> and S3Rec <cit.> utilize multi-head self-attention mechanism to model relationships within sequences. PS-SA<cit.> employs a learnable progressive sampling strategy to identify the most valuable items. FEARec <cit.> enhances recommendation by converting user historical behavior sequences into frequency domain representations and combining them with a self-attention mechanism.
CTR models leverage self-supervised learning <cit.> methods to improve data utilization and learn feature representations. For instance, DuoRec <cit.> and MPT <cit.> enhance item embedding distributions through contrastive learning. ICL <cit.> and simple CL method <cit.> address data sparsity and popularity bias by learning user intent representations. Pre-training GNN <cit.>, multi-channel hypergraph convolutional network <cit.>, DHCN <cit.> and self-supervised tri-training <cit.> integrate self-supervised learning with other relevant techniques to enhance the performance of recommender systems.
§.§ Denoising Methods
Identifying noisy sequences is an essential step in sequence denoising. DROP <cit.> and three instance selection methods <cit.> discuss how to reduce the number of sequences in the training set without affecting classification accuracy. AutoDenoise <cit.> deletes sequences that have a counteractive effect on the model through rewards. Hierarchical Reinforcement Learning for Course Recommendation in MOOCs <cit.> removes noisy courses by jointly training a hierarchical reinforcement learning-based modifier and a basic recommender model. DeCA <cit.> determines noisy sequences by analyzing the discrepancies in user preferences predicted by two recommender models. MMInfoRec <cit.> and ContrastVAE <cit.> address issues such as sparsity and uncertainty in recommender systems by leveraging contrastive learning techniques. DT4SR <cit.> effectively resolves the problem of neglecting user dynamic preferences and item relationships in traditional methods by introducing uncertainty into sequential modeling. The SDK framework <cit.> deals with the challenges of Knowledge Graphs (KGs) in knowledge-aware recommendation by modeling hyper-relational facts and using self-supervised learning mechanisms. SGL <cit.> improves the recommendation performance of long-tail items and the robustness against interaction noise by using an auxiliary self-supervised learning task. In contrast, we propose a denoising auxiliary task that neither requires considering its impact on the model nor adds excessive additional training steps: we define a model capable of recognizing noise, thereby enhancing the recommender model's performance.
After recognizing the noisy sequences, we need to handle these sequences to improve the performance of the recommender model. Existing methods for handling noisy sequences can be classified into two categories: truncated denoising <cit.> and reweighted denoising <cit.>. WBPR <cit.> and T-CE <cit.> define thresholds for samples, truncating sequences with loss values higher than the threshold at each iteration. IR <cit.> modifies labels to train downstream modules for recommendation tasks. In R-CE <cit.>, smaller weights are assigned to high-loss sequences to prevent the model from fitting them too quickly. However, truncated denoising risks filtering out many clean sequences, while reweighted denoising suffers from limited transferability. We propose an ATJT method similar to reweighted denoising, but it addresses limitations by adaptively adjusting the weighting degree. | In this section, we will introduce the preliminary work and discuss the training processes for both the noise recognition model and the recommender model. We will also provide a detailed explanation of how to implement the ATJT method.
§.§ Preliminary
In this paper, we use batch b composed of training sequences as the input for both the noise recognition model and the recommender model. Each batch has a size M, and the sequences have a length N.
All batches are divided into two groups, ℬ^R and ℬ^D. The batch in the first group, denoted as b_i^R = { s_i,1 ,⋯,s_i,m,⋯ , s_i,M}∈ℬ^R, undergoes obtaining the weights of historical interaction sequences through the noise recognition model. We then use the reweighted sequences to train the recommender model. We use s_i to represent the sequences of the i-th batch that are used for training the recommender model. And ℬ^R consists of a total of I batches. The batch in the second group, denoted as b_j^D = { s_j,1,⋯,s_j,m,⋯ , s_j,M}∈ℬ^D, is used to train the noise recognition model capable of accurately recognizing noisy sequences. We use s_j to represent the sequences of the j-th batch that are used for training the noise recognition model. And ℬ^D consists of a total of J batches. In summary, ℬ^R∪ℬ^D=ℬ and ℬ^R∩ℬ^D =ϕ.
We further divide the batch b_j^D into two batches, b_j^D_(+) and b_j^D_(-). b_j^D_(+) represents clean batch within b_j^D consisting of original sequences. b_j^D_(-) = { s_j,1 ^',⋯,s_j,m^',⋯ , s_j,M^'} represents noisy batch consisting of randomly replaced sequences from b_j^D, where s_j,m^'= { v_1,⋯,v_n^',⋯ ,v_N} represents the m-th noisy sequence that has undergone random replacement in the j-th batch of the noise recognition model. Within the sequence s_j,m^', v_n^' represents the n-th interaction item that has been randomly replaced. At this point, the second batch transforms into b_j^D = {s_j,1,⋯,s_j,m^',⋯ , s_j,M}∈ℬ^D. In summary, b_j^D_(+)∪ b_j^D_(-)=b_j^D and b_j^D_(+)∩ b_j^D_(-) =ϕ.
We use f(· ;Θ _R) to represent the recommender model and g(·;Θ _D) to represent the noise recognition model. Given the users' historical interaction sequences (u,s_i)∈ b_i^R, the recommender model can predict the probabilities f(u,s_i;Θ _R) of clicks. Similarly, given (u,s_j)∈ b_j^D, the noise recognition model can predict the probabilities g(u,s_j;Θ _D) of noise contamination.
In summary, we enhance the recommender model's performance by obtaining accurate weights w_i for the sequences s_i from the joint training of f(· ;Θ _R) and g(·;Θ _D).
§.§ Recommender Model
In the training process of the recommender model, as shown in fig_1, given a batch b_i^R∈ℬ^R, the sequences s_i in the batch initially pass through the noise recognition model to obtain weights w_i. Subsequently, the reweighted sequences are used to train the parameters of the recommender model. It is worth noting that the recommender model can be chosen based on specific requirements, such as DIN or DCN. Its training process aligns with these base models.
The CTR prediction of the recommender model can be viewed as a supervised binary classification task. Therefore, we optimize the recommender model using a binary cross-entropy loss function. Additionally, considering the impact of noisy sequences on the training of the recommender model, it is essential to recognize and assign smaller weights to mitigate the influence of noisy sequences. Consequently, we define the loss function for the recommender model as follows:
ℒ_i^R = -1/|b_i^R|∑_s_i∈ b_i^R( w_i ( y_ilog f(u,s_i;Θ _R) + (1-y_i)log(1-f(u,s_i;Θ _R)) ) ),
where y_i and f(u,s_i;Θ _R) represent the labels for clicks and the predicted probabilities of clicks for the sequences s_i in b_i^R, respectively. w_i represents the weights of the sequences s_i. Typically, noisy sequences have smaller weights w_i compared with clean sequences in model training (as shown in our experiments in section 4.2.4). This approach reduces the impact of noisy sequences on model performance <cit.>. We will elaborate on how to determine the sequence weights w_i that improve the performance of the recommender model in section 3.3.2 and section 3.4.2.
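For illustration, a minimal PyTorch sketch of this weighted objective is given below; the function name and tensor shapes are assumptions made for readability rather than the original implementation.

import torch
import torch.nn.functional as F

def weighted_bce_loss(pred, clicks, weights):
    # pred: predicted click probabilities f(u, s_i; Θ_R), shape (M,)
    # clicks: click labels y_i in {0, 1} as floats, shape (M,)
    # weights: per-sequence weights w_i from the noise recognition model, shape (M,)
    per_seq = F.binary_cross_entropy(pred, clicks, reduction='none')
    return (weights * per_seq).mean()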
§.§ Noise Recognition Model
To build a noise recognition model capable of accurately distinguishing noisy sequences from clean sequences and weighting the sequences for the recommender model, we opt for a self-supervised training method. In this subsection, we will focus on two essential components: data replacement and weight generation.
§.§.§ Data Replacement
As shown in fig_2(a), we use the batch b_j^D= b_j^D_(+)∪ b_j^D_(-) as the input for the noise recognition model, where b_j^D_(+) represents clean batch consisting of original sequences, labeled as 1. And b_j^D_(-) represents noisy batch composed of randomly replaced noisy sequences, labeled as 0. While the selection of b_j^D_(-) from b_j^D is not fixed, it should not be too scant. Specifically, we assume that there are very few noisy sequences in b_j^D_(+). If we select too few sequences in b_j^D_(-), it may lead to a situation where the extremely few noisy sequences in b_j^D_(+) outnumber the sequences in b_j^D_(-), meaning that the number of sequences in b_j^D_(+) labeled as 1 while actually being 0 is greater than the number of sequences in b_j^D_(-) labeled as 0. This situation could lead the noise recognition model to incorrectly learn noisy sequences as positive (labeled 1). Hence, it is essential to ensure an adequate number of sequences in b_j^D_(-) to avoid an unstable situation that could lead the noise recognition model to learn in the wrong direction. Up to this point, we have discussed the training method for the recommender model and how input sequences for the noise recognition model are generated.
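As a rough illustration of this replacement step, the sketch below corrupts item-ID sequences with random items to form b_j^D_(-); the replacement ratio and function name are assumptions, since only "random replacement" of historical click items is specified here.

import torch

def make_noisy_sequences(seqs, num_items, replace_ratio=0.5):
    # seqs: original item-ID sequences of shape (M, N), serving as clean data (label 1)
    # Returns corrupted copies that serve as artificial noisy data (label 0).
    noisy = seqs.clone()
    replace = torch.rand(seqs.shape) < replace_ratio          # positions to overwrite
    random_items = torch.randint(0, num_items, seqs.shape)    # random replacement items
    noisy[replace] = random_items[replace]
    return noisy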
§.§.§ Weight Generation
In this subsection, we will describe the method for generating weights w_i. As depicted in fig_1, the noise recognition model is a sequence-to-value model. The model takes b_j^D∈ℬ^D as input. For s_j∈ b_j^D, where s_j consists of items with length N and a target item to be predicted. s_j first passes through the neural network's embedding layer, it is transformed into the sequences of embeddings [𝐞_1,⋯,𝐞_n,⋯,𝐞_N,𝐞_T], in which 𝐞_n represents the embedding of the n-th item in the sequences s_j after passes through the neural network's embedding layer. Then, we pass it through the attention network to obtain user hidden representation of the sequences s_j:
𝐡 = ∑_n = 1^N a_n 𝐞_n,
where
a_n = MLP(𝐞_n||𝐞_T)/∑^N_n'= 1MLP(𝐞_n'||𝐞_T).
|| represents the concatenation of embeddings. Subsequently, we concat 𝐡 with the embedding 𝐞_T of the target item, and then pass the results through a MLP to produce the weights w_j. Given that our noise recognition method can be viewed as a self-supervised binary classification task, we use the binary cross-entropy loss function for optimization:
ℒ_j^D = -1/|b_j^D|∑_s_j∈ b_j^D( y_jlog g(u,s_j;Θ _D) + (1-y_j)log(1-g(u,s_j;Θ _D)) ),
where y_j and g(u,s_j;Θ _D)=w_j represent the labels for noise and the predicted probabilities of noise for the sequences s_j in b_j^D, respectively.
The noise recognition model similarly uses training set b_i^R∈ℬ^R, which is used in training the recommender model, as input. This set of sequences is non-overlapping with the training sequences used for the noise recognition model, which will be explained in detail in section 3.4.1. At this point, we can use the results g(u,s_i;Θ _D) output by the noise recognition model as the weights for the training sequences of the recommender model, namely the weights w_i in eqn_1.
Furthermore, the noise recognition model is used only during the training phase to help the recommender model learn better parameters. It is not used during the evaluation phase. Therefore, the ATJT method does not increase the number of parameters in the recommender model.
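A minimal sketch of this sequence-to-value noise recognition model is given below, assuming item-ID inputs and an embedding dimension d; the softmax-normalised attention stands in for the sum normalisation of the attention equation above, and all layer sizes are illustrative.

import torch
import torch.nn as nn

class NoiseRecognizer(nn.Module):
    # Target attention over the item embeddings produces the hidden representation h,
    # which is concatenated with the target embedding and mapped to a weight w in (0, 1).
    def __init__(self, num_items, dim=64):
        super().__init__()
        self.emb = nn.Embedding(num_items, dim)
        self.att = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))
        self.head = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1), nn.Sigmoid())

    def forward(self, seq, target):
        e = self.emb(seq)                                     # (M, N, d) item embeddings
        e_t = self.emb(target).unsqueeze(1)                   # (M, 1, d) target embedding
        pair = torch.cat([e, e_t.expand_as(e)], dim=-1)       # (M, N, 2d)
        a = torch.softmax(self.att(pair).squeeze(-1), dim=1)  # attention scores a_n over N items
        h = (a.unsqueeze(-1) * e).sum(dim=1)                  # (M, d) hidden representation h
        w = self.head(torch.cat([h, e_t.squeeze(1)], dim=-1))
        return w.squeeze(-1)                                  # (M,) noise weights w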
§.§ ATJT Method
§.§.§ Data Partition.
To fully use the data, after fitting the parameters of both the recommender model and the noise recognition model, we can reverse the training data for ℬ^R and ℬ^D. This means that B^D optimizes the recommender model while ℬ^R optimizes the noise recognition model. It is worth noting that the recommender model continues to use the original model in the subsequent training, while the noise recognition model is trained using a duplicate model. The purpose of this is to ensure that all data can be used to train the recommender model, while the noise recognition model does not fit all the data. Throughout the training process, the reweighted recommender model and the noise recognition model are trained together. This ensures that the noise recognition model can accurately recognize noisy sequences while optimizing the results of sequence reweighting.
Two important points to note are: first, we only use the original sequences to train the recommender model, and the noise recognition model is trained with original sequences and randomly replaced noisy sequences. Second, if there is a need to make more extensive use of the data, the training set sequences can be divided into N groups instead of two groups. As shown fig_2(b), we demonstrate a training method where the training set is divided into four groups. In extreme cases, only one sequence receives the best reweighting output by the noise recognition model and trains the recommender model, while the rest of the sequences are input into the noise recognition model to achieve the best recognition performance in training. However, this method increases the number of duplicate models for training the noise recognition model. Therefore, the minimum grouping is two groups, and the maximum grouping is the size of the training set sequences |ℬ^R∪ℬ^D|. The specific grouping can be chosen based on available resources and performance considerations.
§.§.§ Loss Function
In this subsection, we focus on how to jointly train the recommender model with the noise recognition model by computing the loss value. When we partition the training set sequences into N groups, due to |ℬ^R|:|ℬ^D|=1:N-1, the batches used for training the recommender model should be in a ratio of 1:N-1 compared with those used for training the noise recognition model, as illustrated in fig_6. Therefore, the loss function for the noise recognition model should be:
ℒ_i^D=1/(N-1)∑_j=i(N-1)^(i+1)(N-1)-1ℒ_j^D,
where i represents the index of the batch used by the recommender model. N-1 represents the number of batches used by the noise recognition model corresponding to one batch of data used for training the recommender model, and ℒ_j^D represents the binary cross-entropy loss function when training the noise recognition model with the j-th batch of data.
During the training process, we combine the loss of the recommender model with the loss of the noise recognition model using a scaling factor α to obtain the total loss for model training:
ℒ_i^sum=ℒ_i^R+αℒ_i^D,
where α is a tunable parameter that allows us to balance the learning of the recommender model and the noise recognition model, thereby achieving the goal of joint training. Joint training enables the noise recognition model to learn weights w_i for the sequences s_i in b_i^R (as defined in eqn_1) that are more suitable for training the recommender model while still accurately recognizing noise. This enables the recommender model to achieve better performance with the weighted sequences.
§.§.§ Overall Optimization Algorithm of Model Training
The joint training process is illustrated in Algorithm <ref>. The joint training consists of two parts: the calculation of the ℒ_i^R for the recommender model (lines 6-7) and the calculation of the ℒ_i^D for the noise recognition model (lines 8-14). We achieve joint training by summing the loss values from these two parts (line 15). Specifically, we start by initializing the recommender model f(· ;Θ _R) (line 2). Next, we iterate over all groups in the training set as described in section 3.4.1 (line 3). We then initialize the noise recognition model (line 4) and retrieve the i-th batch from the recommender model training set (line 5). The sequences from the i-th batch are passed through the noise recognition model g(·;Θ _D) to determine their weights w_i (line 6). These sequences are then input into the recommender model, and the weighted loss is calculated. After averaging the loss for all sequences in the batch, we obtain ℒ_i^R (line 7). Subsequently, we iterate over the batches from the (i(N-1))-th to the ((i+1)(N-1)-1)-th in the noise recognition model training set (line 9). From the batch of the noise recognition model, we select half of the sequences. For these sequences, we perform random replacements of items, considering them as noisy sequences (line 10 and 11). The original sequences and the noisy sequences from set b_j^D are input into the noise recognition model, and the loss is calculated. After averaging the loss for all sequences in the batch, we obtain ℒ_j^D (line 12). Next, we accumulate all ℒ_j^D values during the iteration onto ℒ_i^D to obtain the final noise recognition model loss (line 13). Finally, we add ℒ_i^R and ℒ_i^D to obtain ℒ_i^sum (line 15). At this point, we can jointly optimize the recommender model and the noise recognition model based on ℒ_i^sum (line 16 and 17). At this point, we have completed the training of the noise recognition model and explained how to implement the ATJT method. | null | null | null |
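The sketch below condenses one outer iteration of this procedure into PyTorch-style code, reusing the weighted_bce_loss, make_noisy_sequences and NoiseRecognizer helpers sketched earlier; the interfaces of rec_model and the batch tuples are hypothetical.

import torch
import torch.nn.functional as F

def atjt_step(rec_model, noise_model, batch_R, batches_D, num_items, alpha, optimizer):
    # batch_R: (seqs, targets, clicks) from B^R; batches_D: N-1 batches (seqs, targets) from B^D.
    seqs, targets, clicks = batch_R
    w = noise_model(seqs, targets)                      # weights w_i, kept in the graph for joint training
    loss_R = weighted_bce_loss(rec_model(seqs, targets), clicks, w)

    loss_D = 0.0
    for seqs_d, targets_d in batches_D:
        half = seqs_d.size(0) // 2
        noisy = make_noisy_sequences(seqs_d[half:], num_items)   # corrupted half, label 0
        inputs = torch.cat([seqs_d[:half], noisy], dim=0)        # clean half keeps label 1
        labels = torch.cat([torch.ones(half), torch.zeros(seqs_d.size(0) - half)])
        loss_D = loss_D + F.binary_cross_entropy(noise_model(inputs, targets_d), labels)
    loss_D = loss_D / max(len(batches_D), 1)

    loss = loss_R + alpha * loss_D                      # total loss L_i^sum
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()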
http://arxiv.org/abs/2409.17453v1 | 20240926011201 | AgMTR: Agent Mining Transformer for Few-shot Segmentation in Remote Sensing | [
"Hanbo Bi",
"Yingchao Feng",
"Yongqiang Mao",
"Jianning Pei",
"Wenhui Diao",
"Hongqi Wang",
"Xian Sun"
] | cs.CV | [
"cs.CV"
] |
Hanbo Bi ([email protected])
Yingchao Feng ([email protected])
Yongqiang Mao ([email protected])
Jianning Pei ([email protected])
Wenhui Diao ([email protected])
Hongqi Wang ([email protected])
Xian Sun ([email protected])
1. Aerospace Information Research Institute, Chinese Academy of Sciences
2. School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences
3. University of Chinese Academy of Sciences
4. Key Laboratory of Network Information System Technology, Institute of Electronics, Chinese Academy of Sciences
5. Department of Electronic Engineering, Tsinghua University
⋆ Corresponding authors.
AgMTR: Agent Mining Transformer for Few-shot Segmentation in Remote Sensing
Hanbo Bi^1,2,3,4 Yingchao Feng^1,4⋆ Yongqiang Mao^1,2,3,4,5
Jianning Pei^1,2,3,4 Wenhui Diao ^1,4 Hongqi Wang ^1,4 Xian Sun ^1,2,3,4
Received: date / Accepted: date
============================================================================================================================================
§ ABSTRACT
Few-shot Segmentation (FSS) aims to segment the interested objects in the query image with just a handful of labeled samples (i.e., support images). Previous schemes would leverage the similarity between support-query pixel pairs to construct the pixel-level semantic correlation. However, in remote sensing scenarios with extreme intra-class variations and cluttered backgrounds, such pixel-level correlations may produce tremendous mismatches, resulting in semantic ambiguity between the query foreground (FG) and background (BG) pixels. To tackle this problem, we propose a novel Agent Mining Transformer (AgMTR), which adaptively mines a set of local-aware agents to construct agent-level semantic correlation. Compared with pixel-level semantics, the given agents are equipped with local-contextual information and possess a broader receptive field. At this point, different query pixels can selectively aggregate the fine-grained local semantics of different agents, thereby enhancing the semantic clarity between query FG and BG pixels. Concretely, the Agent Learning Encoder (ALE) is first proposed to erect the optimal transport plan that arranges different agents to aggregate support semantics under different local regions. Then, for further optimizing the agents, the Agent Aggregation Decoder (AAD) and the Semantic Alignment Decoder (SAD) are constructed to break through the limited support set for mining valuable class-specific semantics from unlabeled data sources and the query image itself, respectively. Extensive experiments on the remote sensing benchmark iSAID indicate that the proposed method achieves state-of-the-art performance. Surprisingly, our method remains quite competitive when extended to more common natural scenarios, i.e., PASCAL-5^i and COCO-20^i.
§ INTRODUCTION
Semantic segmentation <cit.>, one of the fundamental tasks in the intelligent interpretation of remote sensing, aims at assigning a pre-defined target category to each pixel in remote sensing images <cit.>. With the recent development of deep learning techniques, especially the emergence of the fully convolutional network (FCN) <cit.>, the semantic segmentation task of remote sensing has made remarkable breakthroughs. However, collecting massive pixel-level annotations for complex remote sensing scenes is time-consuming and labor-intensive <cit.>. Even though researchers propose weakly-supervised <cit.> and semi-supervised methods <cit.> to alleviate the data hunger, they fail to effectively generalize to unseen domains. Few-shot Learning (FSL) provides a promising research solution for tackling the above issue, which can exploit several labeled samples to quickly generalize to unseen (novel) classes <cit.>.
Few-shot Segmentation (FSS) is a natural extension of FSL for handling dense prediction tasks <cit.>. The purpose of FSS is to segment the corresponding object in the interested image (called query image) utilizing several labeled samples (called support images) containing the interested class. The current mainstream FSS schemes, i.e., prototype-learning and affinity-learning, are equipped with pixel-level semantic correlation in most cases. The former constructs the class-agnostic prior mask by maximizing pair-wise pixel similarity to compensate for the missing information caused by feature compression <cit.>, while the latter directly leverages the similarity of support-query pixel pairs to aggregate support semantics into the query feature <cit.>. For instance, CyCTR <cit.> introduced a cycle-consistent transformer to selectively aggregate support-pixel semantics and SCCAN <cit.> designed a self-calibrated cross-attention to alleviate the pixel-mismatch issue.
Despite the desirable results in natural scenarios, when extended to remote sensing scenarios with extreme intra-class variations and cluttered backgrounds, directly leveraging the correlations between support-query pixel pairs to aggregate support semantics may lead to tremendous mismatches. Unfortunately, the low-data regime of FSS would exacerbate these negative effects. As illustrated in Fig.<ref>(a), in view of the appearance difference between objects in the support-query pair and the complex background interference, the background (BG) pixel `Roof' P_3 in the query even matches the foreground (FG) pixel `Plane' P_1 in the support more than the FG-Pixel `Plane' P_2 in the query. At this point, both query FG and BG pixels will aggregate support FG semantics, resulting in semantic ambiguity between the FG and BG pixels of the query and yielding segmentation failures.
To correct the semantic ambiguity between the query FG and BG pixels caused by pixel-level mismatch, we suggest adaptively mining a set of local-aware agents (i.e., condensed feature representations) for constructing agent-level semantic correlation. Note that different agents will highlight different local regions of the interested class, e.g., the `Fuselage', `Wings', etc., of the `Plane'. In this case, different pixels in the query image will derive the corresponding agent-level semantic correlations, thus executing specific semantic aggregations. As illustrated in Fig.<ref>(b), given query FG-pixel P_2 selectively aggregates the agent semantics responsible for the `Fuselage', while the BG-pixel P_3 aggregates the background agent semantics. This is due to the fact that, compared to pixel-level semantics, the agents are equipped with local contextual information and possess a broader receptive field, thus ensuring the safe execution of semantic aggregation for query pixels and enhancing semantic clarity between query FG and BG pixels.
Nevertheless, excavating representative agents without explicit supervision is not trivial. This paper develops an Agent Mining Transformer network called AgMTR to address this issue. For minimizing the negative impact of large intra-class variations in remote sensing scenarios, besides routinely exploiting the support images, AgMTR focuses on exploring valuable semantics from unlabeled images and the query image itself to optimize agents. The specific process of mining agents is depicted in Fig.<ref>.
Firstly, the Agent Learning Encoder (ALE) is proposed to guide agents to efficiently mine salient semantics (i.e., class-specific semantics) from the support pixels that can help segment the target classes in the query image in a masked cross-attention manner <cit.>. To enhance the semantic diversity of the agents, ALE dynamically imposes equal division constraints on the foreground region, thus assigning different yet complementary local masks to different agents. At this point, different agents can aggregate the pixel semantics from the corresponding local regions, achieving the local-awareness of the target category, such as the `Fuselage', and `Wings' of the `Plane'.
Merely aggregating support semantics is not enough to bridge the semantic gap from the image pair. The Agent Aggregation Decoder (AAD) is proposed to introduce unlabeled images containing the interested class as references, with the expectation that agents can adaptively explore the corresponding object semantics from unlabeled data sources, thus breaking beyond the limited support set for better guiding the query segmentation. Notably, the unlabeled image features will be adaptively clustered into a set of local prototypes, where the proposed agents will selectively aggregate these prototype semantics to further enhance the local-awareness of the target category.
Furthermore, inspired by the strong intra-object similarity, i.e., pixels within the one object are more similar than pixels between different objects <cit.>, learning from the query image itself is equally crucial to guide its own segmentation. The Semantic Alignment Decoder (SAD) is suggested to promote semantic consistency between the agents and the query object. Specifically, the pseudo-local masks of the query image will be constructed by computing the similarity between the different local agents and the query feature. Under the constraint of the local masks, different agents will further aggregate the corresponding query local semantics, thus aligning with the query semantics.
Based on the above three components, AgMTR constructs precise agent-level semantic correlations, effectively distinguishing FG and BG pixels in the semantic aggregation of the query feature.
The primary contributions can be summarized as follows:
* A novel Agent Mining Transformer (AgMTR) network for FSS in remote sensing is proposed to adaptively mine a set of local-aware agents for different objects to construct agent-level semantic correlation, thus correcting the semantic ambiguity between the query FG and BG pixels caused by pixel-level mismatches.
* We construct an Agent Learning Encoder, Agent Aggregation Decoder, and Semantic Alignment Decoder to adaptively mine and optimize agent semantics from support, unlabeled, and query images, in that order.
* Extensive experiments on the remote sensing FSS benchmark iSAID validate the superiority of the proposed AgMTR, achieving state-of-the-art performance. Surprisingly, our method likewise outperforms previous SOTA on two popular CV benchmarks, PASCAL-5^i and COCO-20^i.
§ RELATED WORK
§.§ Semantic Segmentation
Semantic segmentation, with the goal of predicting pixel-level labels within an image, has been widely emphasized by researchers as a fundamental task in image interpretation. Long et al. <cit.> first proposed a full convolutional network (FCN) for the semantic segmentation task, providing a solid foundation for the following research. To further mine the association of contextual information, Chen et al. <cit.> proposed Dilated Convolution to expand the convolutional receptive field. Meanwhile, Zhao et al. <cit.> and Chen et al. <cit.> proposed Pyramid Pooling Module (PPM) and Atrous Spatial Pyramid Pooling (ASPP), respectively, to combine the feature representations between different scales and regions for multi-scale semantic aggregation.
On this basis, some researchers have focused on designing semantic segmentation methods specific to remote sensing scenes. Feng et al. <cit.> proposed a neighboring pixel affinity loss (NPALoss) to focus on the optimization for small-size objects and hard object boundaries. Peng et al. <cit.> proposed a Cross Fusion Network (CF-Net) for fast and effective extraction of small-scale objects. In addition, Niu et al. <cit.> decoupled the semantic segmentation task and proposed a disentangled learning paradigm to model foreground and boundary objects separately. Despite promising success with large-scale data, these methods fail to perform well when generalizing to unseen domains.
§.§ Few-shot Learning
Recently community researchers have proposed Few-shot Learning (FSL) <cit.> for fast generalization to unseen classes, which aims to recognize novel (unseen) classes from a handful of labeled samples <cit.>. In general, FSL methods would employ the meta-learning paradigm <cit.>, where the model understands representative knowledge from the training dataset (seen classes) and generalizes it to novel tasks (unseen classes). On this basis, these methods can be divided into the following two branches: (i) Optimization-based method <cit.>, which aims to search for a set of proper parameters for the model that can be easily generalized to various novel tasks. For example, Finn et al. <cit.> learned the initialization parameters of the model while Ravi et al. <cit.> learned an optimization strategy instead of a fixed optimizer to facilitate faster convergence. (ii) Metric-based method <cit.>, which aims to find a specific embedding space that can efficiently distinguish different classes for classification. For example, Zhang et al. <cit.> utilized Earth Mover's Distance (EMD) to model the distance between images as the optimal transport plan. Bateni et al. <cit.> proposed to construct Mahalanobis-distance classifiers to improve the accuracy of few-shot classification.
In addition, given the large intra-class variations and small inter-class variations of remote sensing scenarios, some FSL methods specific to remote sensing scenarios have been successively proposed <cit.>. In this paper, we instead apply FSL to the more advanced task of scene understanding rather than scene classification in remote sensing scenes, i.e., semantic segmentation.
§.§ Few-shot Segmentation
Few-shot Segmentation (FSS) <cit.>, a natural extension of Few-shot Learning to tackle intensive prediction tasks, is proposed to guide the model to activate corresponding objects in the query image utilizing several labeled samples (i.e., support images) containing the interested class. Current FSS methods can be broadly divided into two parts: (i) Prototype-learning, which aims to compress the support feature into a prototype through masked average pooling (MAP), and perform feature comparison with the query feature to segment the interested object <cit.>. However, these methods would inevitably lose spatial information and lack context-awareness. To address this issue, besides attempting to generate multiple prototypes <cit.>, they typically constructed the class-agnostic prior mask by maximizing pair-wise pixel similarity to further mine pixel-level support semantics <cit.>. (ii) Affinity-learning <cit.>, which aims to directly construct pixel-level feature matching for segmentation. For instance, Zhang et al. <cit.> introduced a cycle-consistent transformer to aggregate support-pixels semantics selectively. Xu et al. <cit.> designed a self-calibrated cross-attention to alleviate the pixel-mismatch issue. However, such pixel-level similarity would ignore contextual information and tend to be affected by background clutter or noisy pixels causing semantic ambiguity. We propose feature embeddings called agents, which are condensed feature representations similar to prototypes, and at the same time mine the valuable pixel semantics for dynamic optimization, which can combine the strengths of prototype learning and affinity learning.
In addition, several methods, such as BAM <cit.>, D^2Zero <cit.>, and PADing <cit.>, focus on addressing the bias toward seen classes in Few-shot (Zero-shot) scenes for better generalize to unseen classes. In the field of remote sensing, the Few-shot Segmentation task has also received an enthusiastic response <cit.>. Yao et al. <cit.> proposed a scale-aware prototype matching scheme to handle the large variance of objects' appearances and scales. Bi et al. <cit.> designed DMNet to mine the class-specific semantics from query images. To maximize the breakthrough of the few-shot limitations for better adapting to remote sensing scenarios, this paper proposes to mine and optimize class-specific semantics from support, unlabeled, and query images.
§ PROBLEM DEFINITION
The purpose of Few-shot Segmentation (FSS) is to segment unseen class objects without additional training by utilizing only a handful of labeled samples. Following the mainstream schemes <cit.>, the meta-learning paradigm is employed to train the model. Concretely, given a training dataset D_train with seen classes C_seen and a testing dataset D_test with unseen classes C_unseen (C_seen∩ C_unseen= ∅), the model would understand representative knowledge from multiple tasks (i.e., episodes) sampled from D_train, and then generalizes directly to D_test. Specifically, both datasets D_train and D_test consist of multiple episodes, each containing a support set S ={ ( X^i_s, M^i_s ) }_i=1^K and a query set Q ={ ( X_q, M_q ) }, where X_*^i and M_*^i denote the image and the corresponding mask containing the interested class, respectively, K denotes the number of support image provided in S. During each training episode, the FSS models are optimized by predicting the query image X_q with the guidance of the support set S. After training, the model evaluates the performance in each testing episode on D_test without any optimization. Note that the query mask M_q is not available during the testing phase.
Variant: Considering that extreme intra-class variations in remote sensing scenarios would further exacerbate the gap between support-query image pairs in FSS, it is suboptimal and insufficient for previous work to utilize only support images (i.e., labeled samples) to guide the query image. Thus, this paper also introduces unlabeled sample set S_u ={ ( X^i_u ) }_i=1^N_u with the same interested class as the labeled samples to assist in segmentation. These samples are derived by random sampling from the interested class during the training and testing phase, with the corresponding masks removed. At this point, the model will segment the image X_q with the guidance of S and S_u.
§ PROPOSED METHOD
§.§ Method Overview
To correct the semantic ambiguity of query FG and BG pixels caused by the semantic mismatch in previous pixel-level semantic correlation, this paper develops an Agent Mining Transformer network named AgMTR, which suggests mining a set of local-aware agents (i.e., condensed feature representations similar to prototypes) and relies on them to construct agent-level semantic correlation. Across tasks, the proposed agents are neither static nor do they mix the local semantics of multiple classes. On the contrary, the semantic awareness of the agents is dynamic: AgMTR adaptively mines the corresponding class-specific semantics according to the target classes of different episodes/tasks, thus generating local-aware agents for the current class. Compared to prototypes, agents offer more precise and stable feature representations, a richer and more diverse semantic source, and enhanced local contextual awareness.
The overall pipeline is depicted in Fig.<ref>. In particular, given the support-query image pair X_s and X_q, we extract features with a shared-feature encoder (e.g., ViT <cit.> and DeiT <cit.>) to derive the support feature F_s∈ℝ^H × W × C and query feature F_q∈ℝ^H × W × C. For more matched agent-level semantic correlation, AgMTR constructs three components, Agent Learning Encoder (ALE), Agent Aggregation Decoder (AAD), and Semantic Alignment Decoder (SAD), to dynamically mine and optimize agent semantics from support, unlabeled, and query images, in that order:
* Agent Learning Encoder: ALE dynamically decomposes the support mask into multiple local masks and arranges different initial agents to aggregate the support semantics under different local masks for deriving local-awareness of the target category.
* Agent Aggregation Decoder: To break through the limited support set for better guidance, AAD introduces unlabeled images containing the interested class, where the agents will selectively perceive and aggregate the semantics beneficial to them from unlabeled images.
* Semantic Alignment Decoder: To further bridge the semantic gap, SAD constructs the pseudo-local masks of the query image, which drives the agents to aggregate the local semantics of the query itself to promote semantic consistency with the query object.
§.§ Agent Learning Encoder
As mentioned in Sec.<ref>, we suggest mining a set of context-rich agents to perform agent-level semantic correlation replacing the previous pixel-level. At this point, the focus turns to capturing representative agents. Agent Learning Encoder aims to guide agents to efficiently mine salient semantics (i.e., class-specific semantics) from the support pixels that can help segment the target classes in the query image, which is the most basic and important step, with the detailed pipeline illustrated in Algorithm <ref>.
Agent Initialization: Firstly, as depicted in Fig.<ref>, the initial agents P^ini= [ p^ini_1, p^ini_2,...,p^ini_N_a ] ∈ℝ^N_a × C are derived by adding between randomly initialized feature embeddings {t_i }_1^N_a∈ℝ^N_a × C and the support prototype obtained by performing masked average pooling (MAP) operation:
p_i^ini=MAP(F_s⊙ M_s)+t_i,i=1,2,...,N_a
where MAP (· ) denotes the compression of the support feature F_s under the foreground mask M_s into a feature vector and ⊙ denotes the element-wise product. Notably, the randomly initialized feature embeddings {t_i }_1^N_a obey a normal distribution.
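A minimal sketch of this initialization is given below; the channel-last feature layout and a binary mask are assumptions made for readability.

import torch

def init_agents(support_feat, support_mask, num_agents):
    # support_feat: (H, W, C) support feature F_s; support_mask: (H, W) binary foreground mask M_s.
    # Masked average pooling (MAP) compresses the support foreground into one prototype,
    # which is then offset by Na randomly initialised embeddings t_i ~ N(0, 1).
    m = support_mask.unsqueeze(-1).float()
    prototype = (support_feat * m).sum(dim=(0, 1)) / (m.sum() + 1e-8)   # (C,) support prototype
    t = torch.randn(num_agents, support_feat.size(-1))                  # random embeddings t_i
    return prototype.unsqueeze(0) + t                                   # (Na, C) initial agents P^ini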
Agent Learning: To empower agents with context-awareness, ALE employs masked cross-attention <cit.> to aggregate the interested semantics in the support feature into the corresponding agents. At this point, the attention weight A ∈ℝ^N_a × HW between the agents and the feature can be formulated as:
A=softmax ( (P^iniW_q ) (F_s^'W_k )^T/√(d)+M̃_s )
where W_*∈ℝ^C × C denote projection weights, F_s^'∈ℝ^HW × C denotes the flattened support feature. Notably, we utilize M̃_s to constrain the region of interest for the attention weight:
M̃_s ( i )= 0, if M_s^' ( i ) =1; -∞, otherwise
where M_s^' denotes the flattened support mask. After normalization in Eq.(<ref>), the attention weight A focuses only on the foreground region of the support mask, thus ensuring that the agents aggregate only the interested foreground semantics.
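The masking above can be sketched as follows; projection matrices and the subsequent MLP are omitted for brevity, and the code assumes the support mask contains at least one foreground pixel.

import torch

def masked_cross_attention(agents, support_feat, support_mask):
    # agents: (Na, C); support_feat: (HW, C) flattened F_s; support_mask: (HW,) flattened binary M_s.
    d = agents.size(-1)
    attn = agents @ support_feat.t() / d ** 0.5                     # (Na, HW) raw attention scores
    attn = attn.masked_fill(support_mask.unsqueeze(0) == 0, float('-inf'))
    attn = attn.softmax(dim=-1)                                     # attends only to foreground pixels
    return attn @ support_feat                                      # aggregated foreground semantics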
Without supervising the learning process, such agents tend to capture similar foreground semantics resulting in redundancy. To enhance the semantic diversity of agents, we propose decomposing the attention weight into different but complementary local attention weights, i.e., decomposing the support mask into different local masks. Under the constraint of those local masks, different agents can aggregate the semantics from different local regions, yielding different local perceptions of the target category.
Concretely, we model this allocation as the Optimal Transport (OT) problem. By searching for the transport plan with the minimum transport cost, the pixel allocation of different agents is accomplished, which can be solved efficiently by the Sinkhorn <cit.> algorithm. We define the cost matrix as (1-A^f), where A^f ∈ℝ^N_a × N_f denotes the similarity matrix between the agents and the foreground feature of the support image instead of the whole feature. Notably, the larger A^f(i,j) is, the greater the similarity between the i^th agent and the j^th support pixel, the lower the transport cost, and the greater the likelihood that the pixel will be assigned to that agent. At this point, the transport plan is denoted as T∈ℝ^N_a × N_f, with the optimization function formulated as:
min_T∈Υ∑_i,jT ( i,j ) ( 1-A^f ( i,j ) ) -1/λ H ( T )
where H ( T )=-∑_i,jT(i,j)log T(i,j) denotes entropy regularization and λ is responsible for adjusting the degree of entropy impact. Besides, the transport plan T needs to obey the distribution of Υ:
Υ ={ T ∈ℝ^N_a × N_f| T𝐈_m=1/N_a𝐈_n ,T^T𝐈_n=1/N_f𝐈_m}
where 𝐈_𝐦∈ℝ^N_f and 𝐈_𝐧∈ℝ^N_a denote vectors with values of 1 in all dimensions. Eq.(<ref>) aims to force each agent to allocate the same number of foreground pixels, avoiding a situation where all pixels are allocated to one agent.
After deriving the optimal transport plan T, we zero-pad it to recover the background region, producing the local attention weights Ã∈ℝ^N_a × HW. At this point, different agents P will aggregate the corresponding pixel semantics under the constraint of the local weights:
P=ℱ_proj ( Ã (F_s^'W_v ) )
where ℱ_proj denotes the two-layer MLP and W_v denotes the projection weight.
Note that the agents are not the multiple local regions divided by the ALE, but rather the product of condensing the pixel semantics of each local region into a single feature vector. Finally, the background prototype, derived by performing MAP over the background region of the support feature, will be incorporated as the background agent in P= [ p_1,p_2,...,p_N_a, p_N_a+1 ] ∈ℝ^(N_a+1 )× C.
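A self-contained Sinkhorn sketch for the transport plan T is given below; the number of iterations and the entropy scale λ are illustrative choices rather than values reported here.

import torch
import torch.nn.functional as F

def sinkhorn_transport(agents, fg_feats, n_iters=50, lam=20.0):
    # agents: (Na, C) initial agents; fg_feats: (Nf, C) support foreground pixel features.
    # Cost is 1 - A^f (cosine similarity); marginals force each agent to receive an equal share of pixels.
    cost = 1.0 - F.normalize(agents, dim=-1) @ F.normalize(fg_feats, dim=-1).t()   # (Na, Nf)
    K = torch.exp(-lam * cost)                                                     # Gibbs kernel
    Na, Nf = cost.shape
    r = torch.full((Na,), 1.0 / Na)        # row marginal: equal share per agent
    c = torch.full((Nf,), 1.0 / Nf)        # column marginal: each pixel fully assigned
    v = torch.ones(Nf)
    for _ in range(n_iters):
        u = r / (K @ v + 1e-8)
        v = c / (K.t() @ u + 1e-8)
    return u.unsqueeze(1) * K * v.unsqueeze(0)                                     # transport plan T, (Na, Nf)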
§.§ Agent Aggregation Decoder
Capturing the class-specific semantics from the support images merely is not sufficient to bridge the semantic gap between the image pairs, especially in complex remote sensing scenarios. To break through the limited support set for mitigating the intra-class variations, the Agent Aggregation Decoder introduces unlabeled images containing the interested class as a reference, expecting agents to adaptively explore the richer interested semantics from the unlabeled data source, with the pipeline illustrated in Algorithm <ref>.
Unlabeled prototype generation: Specifically, as illustrated in Fig.<ref>, given N_u unlabeled images X_u, we extract features via a shared encoder to generate unlabeled features F_u ∈ℝ^N_u× H × W × C. Considering that there is a significant amount of background noise in the unlabeled feature that cannot be directly exploited, we employ a super-pixel scheme SLIC <cit.> to cluster the unlabeled image into N_s pixel sets Φ^i ={Φ _1^i,Φ _2^i,...,Φ _N_s^i }, i=1,2,..., N_u based on color, texture, and distance. The unlabeled features can be compressed into a series of local prototypes P_u∈ℝ^(N_u× N_s) × C through MAP operation:
p_u^i;j=1/ | Φ _j^i | ∑_m∈Φ _j^if_m, f_m∈ F_u^i
where p_u^i;j∈ℝ^C denotes the j^th local prototype generated by the i^th unlabeled image and Φ _j^i denotes the j^th pixel set generated by the i^th unlabeled image.
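A rough sketch of this prototype pooling with scikit-image's SLIC is shown below; it assumes the feature map has already been resized to the image resolution, which may differ from the actual implementation.

import torch
from skimage.segmentation import slic

def unlabeled_local_prototypes(image, feat, n_segments=100):
    # image: (H, W, 3) uint8 array of an unlabeled image; feat: (H, W, C) feature tensor aligned with it.
    labels = torch.as_tensor(slic(image, n_segments=n_segments, start_label=0))   # superpixel ids Φ
    protos = [feat[labels == k].mean(dim=0) for k in labels.unique()]             # MAP over each pixel set Φ_j
    return torch.stack(protos)                                                    # (≈ N_s, C) local prototypes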
To further enhance the contextual semantics of the unlabeled prototype set, we model it as the graph, with the prototypes considered as nodes in the graph, employing a graph attention network (GAT) <cit.> to enhance the semantic associations between nodes, which can be formulated as:
P_u =GAT ( P_u,adj )
s.t. adj(m,n)= 1, if ⟨ p_u^m,p_u^n⟩>δ; 0, otherwise
where GAT denotes the graph attention network, adj∈ℝ^(N_u× N_s) × (N_u× N_s) denotes the adjacency matrix formed between the nodes and ⟨·,·⟩ denotes the cosine similarity function, p_u^*∈ℝ^C denotes the *^th prototype in the unlabeled prototype set, and δ is a threshold to determine whether the nodes are neighboring or not, which is set to 0.5 in the experiments.
Agent Adaptive Aggregation: Given the augmented prototypes of unlabeled features P_u ∈ℝ^(N_u× N_s) × C, the agents P ∈ℝ^(N_a+1) × C will selectively perceive and aggregate object semantics from these prototypes that are beneficial to them in a cross-attention manner. As depicted in Fig.<ref>, by performing similarity computation with unlabeled prototypes P_u, different agents P will receive different similarity weights, resulting in different weighted semantic aggregations:
p̃^i=p^i+β∑_j=1^(N_u× N_s)⟨ p^i,p̃_u^j ⟩/∑_j⟨ p^i,p̃_u^j ⟩p̃_u^j
where β denotes the learnable weight to balance the semantic aggregation, which is initialized to 0.4. At this point, the local agents P∈ℝ^(N_a+1) × C will be further optimized.
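The weighted aggregation can be sketched as follows, with the learnable β treated as a plain scalar for simplicity and cosine similarity assumed for the similarity function.

import torch
import torch.nn.functional as F

def aggregate_from_unlabeled(agents, unlabeled_protos, beta=0.4):
    # agents: (Na+1, C); unlabeled_protos: (Nu*Ns, C) GAT-enhanced local prototypes P_u.
    sim = F.normalize(agents, dim=-1) @ F.normalize(unlabeled_protos, dim=-1).t()   # (Na+1, Nu*Ns)
    weights = sim / (sim.sum(dim=-1, keepdim=True) + 1e-8)                          # normalised per agent
    return agents + beta * weights @ unlabeled_protos                               # further optimised agents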
§.§ Semantic Alignment Decoder
Based on the observation of strong intra-object similarity, i.e., pixels within the one object are more similar than pixels between different objects <cit.>, we argue that it is also crucial to mine the valuable information from the query image itself, which has been omitted in previous work. Thus, the Semantic Alignment Decoder is proposed to drive the agents to mine the class-specific semantics from the query image for promoting semantic consistency with the query object, with the pipeline illustrated in Algorithm <ref>.
Semantic Alignment Block: Concretely, as depicted in Fig.<ref>, SAD consists of multiple Semantic Alignment Blocks (SABs) in series, each of which continuously aggregates the query semantics for agents via the cross-attention. We construct the pseudo-local masks of the query image to force different agents to selectively aggregate their desired local semantics without introducing irrelevant noise. More specifically, we perform similarity computation of query feature F_q with different agents p^* to produce multiple segmentation results y^*_l, based on which we can gain the agent index corresponding to the maximum value of each position. Finally, the binary local masks M_l ∈ℝ^(N_a+1) × H × W corresponding to different agents can be derived by one-hot coding. The above process can be formulated as:
M_l = OneHot ( arg max_i (⋃_i=1^N_a+1 y^i_l ) )
s.t. y^i_l=⟨ F_q,p^i ⟩
where OneHot(·) denotes the process of encoding the local mask into binary masks, ⋃ denotes the concatenation of multiple segmentation results, and argmax denotes the process of deriving the index corresponding to the maximum value at each pixel position.
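A compact sketch of this mask construction is given below; cosine similarity is assumed, consistent with the other similarity computations in the paper.

import torch
import torch.nn.functional as F

def pseudo_local_masks(query_feat, agents):
    # query_feat: (HW, C) flattened query feature F_q; agents: (Na+1, C) current agents.
    sim = F.normalize(query_feat, dim=-1) @ F.normalize(agents, dim=-1).t()   # segmentation results y_l, (HW, Na+1)
    idx = sim.argmax(dim=-1)                                                  # best-matching agent per pixel
    return F.one_hot(idx, num_classes=agents.size(0)).t().float()             # binary local masks M_l, (Na+1, HW)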
Under the constraint of these local masks M_l = [ m_l^1,m_l^2,...,m_l^(N_a+1) ], different agents could selectively aggregate the local semantics within their respective corresponding masks in the query image without creating semantic confusion among agents. The above process can be formulated as:
p_a^i= softmax ( ( p^iW_q ) ( F_qW_k )^T /√(d)+ m_l^i ) ( F_qW_v )
where P_a= [p_a^1,p_a^2,...,p_a^ (N_a+1 ) ] denote the aligned agents, p^i denotes the i^th agent, F_q denotes the query feature, m_l^i denotes the attention weight derived by mapping the i^th local mask through Eq.(<ref>), and W_* denote the projection weights. Notably, the two-layer MLP in the semantic aggregation is omitted to simplify the formulation.
Multiple block iterations: Since the Semantic Alignment Block can drive agents to aggregate the corresponding local semantics in the query image to align with the query object, we can iterate this process multiple times to gradually improve the quality of the pseudo-local masks, thus further promoting semantic consistency between the agents and the query object. Suppose we have N-layer Semantic Alignment Blocks, then for each layer n:
P_a^n,M_l^n-1=SAB ( P_a^n-1,F_q ) , n=1,2,3,...,N
where SAB denotes the Semantic Alignment Block. Notably, after generating the last-layer aligned agents P_a, we construct cross-attention to drive the query feature F_q to aggregate the agents' semantics, thus enabling semantic alignment.
Agent Segmentation Loss: To supervise that each Semantic Alignment Block works, we construct an Agent Segmentation Loss as a constraint. Specifically, we ensemble the local segmentation results {y_l^i }_i=1^N_a+1 received from each SAB to derive the foreground-background binary prediction result, i.e., [ y_l^ ( N_a+1 ); max_i (⋃_i=1^N_a y^i_l ) ], where y_l^ ( N_a+1 ) denotes the segmentation result from the background agent while the other y_l^i denote the segmentation results from the foreground agents. Here, we utilize binary cross-entropy (BCE) to constrain the prediction:
ℒ_ASL = BCE ( [ y_l^ ( N_a+1 ); imax (⋃_i=1^N_a y^i_l ) ],M_q )
Thus, with N-layer Semantic Alignment Blocks, N losses are produced, where we take the mean as the final loss ℒ_ASL.
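A possible implementation of this per-layer loss, assuming the per-agent similarity maps y_l^i from the sketch above are treated as logits and the background/foreground pair is normalized by a softmax before the BCE (a choice on our part, not necessarily that of the released code), is:

```python
import torch
import torch.nn.functional as F

def agent_segmentation_loss(sim, query_mask):
    """BCE constraint on one SAB's local segmentation results (L_ASL of a single layer).

    sim:        (H*W, N_a+1) similarities y_l^i, last column = background agent
    query_mask: (H*W,) binary ground-truth mask M_q of the query image
    """
    fg_logit = sim[:, :-1].max(dim=-1).values           # best-matching foreground agent per pixel
    bg_logit = sim[:, -1]                                # background agent
    prob_fg = torch.softmax(torch.stack([bg_logit, fg_logit], dim=-1), dim=-1)[:, 1]
    return F.binary_cross_entropy(prob_fg, query_mask.float())

# With N stacked blocks, the final L_ASL is the mean of the N per-layer losses.
```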
§.§ Matching
Given the final aligned agents P_a ∈ℝ^(N_a+1) × C and the aligned query feature F_q ∈ℝ^H × W × C, the segmentation result 𝒫∈ℝ^H× W× 2 can be achieved by performing similarity computation between F_q and the agents P_a (including foreground agents {p_a^i }_i=1^N_a and the background agent p_a^N_a+1), which can be formulated as:
𝒫= [ ⟨F_q,P_a^N_a+1⟩; max_i (⋃_i=1^N_a⟨F_q, P_a^i⟩ ) ]
where ⟨·,·⟩ denotes the similarity computation and ⋃ denotes the concatenation of multiple segmentation results. Notably, the maximum operation max(·) allows finding the foreground local agent that best matches the query pixel, thus effectively utilizing the local semantics to distinguish between the FG and BG pixels of the query image.
In this case, the binary cross entropy (BCE) loss ℒ_main is employed to supervise the optimization of the segmentation prediction 𝒫 towards M_q. Thus, the total loss ℒ of the model consists of ℒ_main and ℒ_ASL:
ℒ= ℒ_main+γℒ_ASL
where γ aims to balance the contributions of ℒ_main and ℒ_ASL.
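The final matching and the total objective can then be sketched as follows, reusing the conventions of the snippets above (the two-channel softmax normalization is again our assumption rather than a detail taken from the paper):

```python
import torch
import torch.nn.functional as F

def match_and_total_loss(query_feat, agents, query_mask, asl_losses, gamma=0.8):
    """Final prediction P and total loss L = L_main + gamma * L_ASL.

    query_feat: (H*W, C) aligned query feature; agents: (N_a+1, C) aligned agents P_a
    asl_losses: list of per-layer agent segmentation losses from the N SABs
    """
    sim = query_feat @ agents.t()                        # (H*W, N_a+1)
    fg = sim[:, :-1].max(dim=-1).values                  # foreground agent best matching each pixel
    bg = sim[:, -1]                                      # background agent
    prob_fg = torch.softmax(torch.stack([bg, fg], dim=-1), dim=-1)[:, 1]
    l_main = F.binary_cross_entropy(prob_fg, query_mask.float())
    l_asl = torch.stack(asl_losses).mean()               # mean over the N SAB layers
    return prob_fg, l_main + gamma * l_asl
```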
§ EXPERIMENTS
Firstly, Sec.<ref> describes the experimental setup, including dataset introduction, implementation details, and evaluation metrics. Then, Sec.<ref> and Sec.<ref> conduct extensive comparative experiments and ablation studies to prove the superiority of our method, respectively. Finally, Sec.<ref> provides several failure cases.
§.§ Experimental Setup
§.§.§ Datasets
Three FSS benchmarks are utilized for evaluation, including a remote sensing benchmark iSAID <cit.> and two CV benchmarks PASCAL-5^i <cit.> and COCO-20^i <cit.>.
iSAID <cit.>, a massive benchmark for intensive prediction tasks in remote sensing scenarios, contains 655,451 objects from 15 classes in 2,806 high-resolution images. Following the setup of DMNet <cit.>, this dataset can be utilized for evaluating the FSS task. Concretely, the 15 classes are split into three folds, each containing 10 training classes and 5 testing classes. Remarkably, the training and testing classes are disjoint in each fold to construct unseen-class settings. For each fold, we randomly sample 1000 image pairs for validation.
PASCAL-5^i, which is constructed from PASCAL VOC 2012 <cit.> and additional SBD <cit.>, contains 20 classes in total. For cross-validation, these classes will be split into four folds, each containing 15 training classes and 5 testing classes as done in OSLSM <cit.>. For each fold, we randomly sample 5000 image pairs for validation.
COCO-20^i, which is constructed from MS COCO <cit.>, contains 80 classes in total. For cross-validation, these classes will be split into four folds, each containing 60 training classes and 20 testing classes as done in <cit.>. For each fold, we randomly sample 5000 image pairs for validation.
§.§.§ Implementation Details
The model[The codes are available at https://github.com/HanboBizl/AgMTR/.] is implemented on Tesla A40 GPUs using the PyTorch framework <cit.>. The SGD optimizer is utilized to optimize the model during the training phase, with the weight decay set to 0.00005. The batch size is set to 8 and the initial learning rate is set to 0.002, where the poly strategy <cit.> is utilized to adjust the learning rate. For iSAID, we resize the images to 256×256 and train for 40 epochs. For PASCAL-5^i, we resize the images to 480×480 and train for 30 epochs, and for COCO-20^i, we resize the images to 480×480 and train for 40 epochs. The same data augmentations are utilized to enable fair comparisons with previous work.
For the proposed model, two transformers, i.e., ViT <cit.> and DeiT <cit.>, are utilized as the feature encoder, and both of them are pre-trained on ImageNet <cit.>. In the experiment, the number of agents N_a, unlabeled images N_u, pixel sets obtained through SLIC N_s, and SAB layers N are set to 5, 5, 100, and 7, respectively. The weight γ to balance two losses is set to 0.8.
§.§.§ Evaluation Metrics
Following previous FSS schemes <cit.>, we adopt the class mean intersection-over-union (mIoU) and foreground-background IoU (FB-IoU) as the evaluation metrics. The former calculates the averaged segmentation accuracy over different classes, while the latter calculates the accuracy between foreground and background. The mIoU metric is calculated as: mIoU=1/N_unseen∑_i=1^N_unseenIoU_i, where N_unseen indicates the number of unseen classes in each fold and IoU_i indicates the IoU score of class i. The FB-IoU metric is calculated as: FB-IoU = ( IoU_F+ IoU_B )/2, where IoU_F and IoU_B indicate the foreground and background IoU scores, respectively.
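For reference, the two metrics can be computed as in the sketch below; the per-class accumulation of intersections and unions follows common FSS practice and may differ in detail from the evaluation code actually used.

```python
import numpy as np

def miou(episodes_by_class):
    """mIoU: `episodes_by_class` maps a class id to a list of (pred, gt) boolean masks."""
    per_class = []
    for pairs in episodes_by_class.values():
        inter = sum(np.logical_and(p, g).sum() for p, g in pairs)
        union = sum(np.logical_or(p, g).sum() for p, g in pairs)
        per_class.append(inter / max(union, 1))
    return float(np.mean(per_class))

def fb_iou(preds, gts):
    """FB-IoU: average of the foreground IoU and the background IoU over all episodes."""
    def iou(a, b):
        union = np.logical_or(a, b).sum()
        return np.logical_and(a, b).sum() / max(union, 1)
    fg = np.mean([iou(p, g) for p, g in zip(preds, gts)])
    bg = np.mean([iou(~p, ~g) for p, g in zip(preds, gts)])   # masks must be boolean arrays
    return 0.5 * (fg + bg)
```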
§.§ Comparison with State-of-the-arts
§.§.§ Quantitative Results
iSAID: Table <ref> quantitatively depicts the comparison between the proposed and other methods on iSAID benchmark. Satisfyingly, our AgMTR achieves state-of-the-art performance in all cases. For example, under the 1-shot setting, AgMTR with DeiT-B/16 backbone achieves 51.58% mIoU, outperforming the best CNN-based method BAM <cit.> by 1.15% and the Transformer-based method FPTrans <cit.> by 4.17%. Surprisingly, AgMTR under the 1-shot setting even beats several methods under the 5-shot setting, e.g., R2Net <cit.>, DMNet <cit.>, and SCCAN <cit.>, which fully demonstrates that our method can alleviate the pressure of dense annotation and easily generalize to unseen classes. Under the 5-shot setting, AgMTR achieves 3.62% gains over the best BAM. Similar gains are realized in FB-IoU. The above results strongly prove that our agent-level semantic correlation can effectively handle large intra-class variations and complex backgrounds in remote sensing scenarios, realizing excellent segmentation performance for unseen classes.
PASCAL-5^i and COCO-20^i: We also evaluate the proposed method in more common natural scenarios to further examine its generalization and applicability. Tables <ref>-<ref> present the 1- and 5-shot results on PASCAL-5^i and COCO-20^i. Encouragingly, our AgMTR performs equally competitively. Specifically, for PASCAL-5^i, AgMTR with the DeiT-B/16 backbone outperforms the top CNN method HDMNet <cit.> by 0.92% and 4.54% under the 1- and 5-shot settings, respectively. Besides, in comparison with the transformer-based methods FPTrans <cit.> and MuHS <cit.>, we realize a lead of 1.44% and 1.19% mIoU under the 1-shot setting. For COCO-20^i, with larger object differences and more complex backgrounds, similar gains can be realized as well. For instance, AgMTR exceeds the top CNN-based method MIANet <cit.> and the Transformer-based method MuHS <cit.> by 0.81% and 1.66%, respectively, under the 1-shot setting. The above results indicate that our method is general enough to be applied to various domains.
§.§.§ Qualitative Results
To further exhibit the excellence of the proposed AgMTR, we visualize several segmentation results on three FSS datasets. Fig.<ref> illustrates the qualitative results on iSAID. One can find that our method has clear advantages: (i) Benefiting from mining class-specific semantics from support images, unlabeled images, and the query image, our agents have more powerful class-awareness to perform precise agent-level semantic correlation, thus effectively focusing on the interested class while suppressing the activation of other classes. For instance, for the dense small targets `small vehicles' in 4^th column and the `harbor' with irrelevant classes interference in 7^th column, R2Net incorrectly activates the `Roof' and `Boats', respectively, while our AgMTR segments nicely. (ii) Benefiting from the local semantic exploration of the target category, our AgMTR can reasonably find the local agent that best matches the query pixel, and thus better classify the pixel, yielding more accurate segmentation results, e.g., the `Storage tanks' (2^nd column) and the `Planes' (5^th column). When extended to common natural scenarios, the proposed AgMTR copes equally well, efficiently distinguishing between FG and BG pixels of the query image, as depicted in Fig.<ref>. The above advantages sufficiently prove that the proposed AgMTR can efficiently handle the FSS tasks in various scenarios and demonstrate robust generalizability.
§.§ Ablation Study and Analysis
A set of ablation studies are executed to discuss and understand our proposed method thoroughly. Unless otherwise noted, the experiments in this section are conducted on iSAID utilizing the DeiT-B/16 backbone under the 1-shot setting.
§.§.§ Component Analysis
Component Analysis in Segmentation Performance:
Table <ref> discusses the performance impact of the component variants in AgMTR. In general, we could observe the following phenomena: (i) With the integration of the three components, i.e., the Agent Learning Encoder (ALE), Agent Aggregation Decoder (AAD), and Semantic Alignment Decoder (SAD), AgMTR achieves a 4.76% mIoU improvement compared to the baseline, strongly validating their effectiveness. (ii) The proposed ALE contributes the most to the final model. This is likely because mining class-specific semantics from the support images is the most essential and fundamental step. Only when sufficiently valid semantics are mined can agents stably mine the semantics of interest from unlabeled and query images without introducing irrelevant noise. This also validates the effectiveness of the proposed agent paradigm to some extent. (iii) In case (c) and case (d), the introduction of AAD and SAD yields 1.07% and 1.33% gains, respectively, which quantitatively validates that the aggregation of class-specific semantics from unlabeled and query images for agents can facilitate breaking through the limited support set and better guide query segmentation. (iv) Besides, we present the segmentation results for different variants as shown in Fig.<ref>. By introducing components step by step, the segmentation results are further improved, qualitatively verifying the effectiveness and necessity of the proposed components.
Component Analysis in Complexity:
To investigate the time complexity of the different components in AgMTR, we conduct an ablation study on the iSAID dataset, with the results depicted in Table <ref>. The following phenomena can be derived:
(i) Among the three components, AAD has the largest computational complexity. This is because AAD aims to mine class-specific information from additional unlabeled images (5 images are utilized by default), which equates to processing 7 (1 query, 1 support, and 5 unlabeled) images simultaneously, incurring additional computational costs.
(ii) Despite the obvious computational cost of AAD, AgMTR achieves the best mIoU performance with the second-lowest GFLOPs compared to others, far better than SCCAN, i.e., 51.58% vs 48.31% mIoU and 119.93 vs 143.37 GFLOPs.
(iii) AgMTR achieves a balance between accuracy and complexity when AAD utilizes fewer unlabeled images. We attempt to mine class-specific semantics with only one unlabeled image, at which point the model achieves 50.64% mIoU at 70.37 GFLOPs, balancing segmentation accuracy and computational complexity.
Component Analysis in Intra-class Variance:
To objectively evaluate the effectiveness of the proposed three components, we employ the cosine similarity to measure the closeness D_aq between the query prototype and the agents in every episode, where the closeness D_sq between the query prototype and the support prototype serves as a comparison. The average value over the classes in each fold and the average over all classes are listed in Table <ref>. Notably, multiple agents are averaged into one for the computation with the query prototype. Obviously, after gradually aggregating the semantics from the support images, the unlabeled images, and the query image by driving the agents through ALE, AAD, and SAD, D_aq gradually increases, and is larger than D_sq in all folds. This phenomenon reveals that with the proposed three components, the agents significantly reduce the distance to the query object and minimize the negative impact of large intra-class variations in remote sensing, providing better guidance for query image segmentation. We also visualize the activation maps of the agent derived from different components, as illustrated in Fig.<ref>. Compared to the support prototype, agents derived from different components can focus more on specific objects in the query image, enabling more precise activations. The above results fully validate the effectiveness of the three proposed components from the perspective of intra-class variance.
§.§.§ Ablation study of the Agent Learning Encoder
As mentioned in Sec.<ref>, Agent Learning Encoder (ALE) aims to mine class-specific semantics in the support images to derive local-aware agents for performing agent-level semantic correlation. The gain of 2.12% mIoU in Table <ref> has quantitatively verified that the local agents generated by utilizing ALE are successful in effectively distinguishing between foreground and background pixels, generating more complete objects and more accurate boundaries (see Fig.<ref>). This section discusses specific details in the ALE, including the number of agents generated and the learning scheme.
Firstly, we investigate the impact of the number of agents (i.e., N_a) on segmentation performance, as depicted in Fig.<ref>. It can be found that as the number of agents (i.e., N_a) increases from 1 to 5, the segmentation accuracy will gradually increase, especially with 51.58% mIoU achieved for N_a=5. It is reasonable because the support mask will be divided into more delicate local regions, thus driving the agent to aggregate finer-grained local semantics. However, when continuing to increase N_a, the segmentation accuracy begins to decrease. At this point, the support mask becomes forced to be divided into more unreasonable local masks, such as dividing the left and right `Wings' of the `Plane', which leads to different agents aggregating similar semantics, resulting in redundancy and even interference.
We then explore the performance impact of different initialization and learning schemes of agents in ALE, as illustrated in Table <ref>. In these schemes, `Random' and `Random+Sup' denote randomly initialized tokens, and the addition of the support prototype to randomly initialized tokens as initial agents, respectively. `OT' denotes dividing the support mask into local masks with the optimal transport algorithm. From the table, we can observe the following phenomena: (i) Utilizing only randomly initialized tokens as the initial agents is suboptimal (case (a)) since the agents lack class-specific semantics to guide the capture of similar object semantics. Instead, when relying on the class-specific semantics in the support prototype (case (c)), the agents can continue to assimilate similar semantics in subsequent processes, thus further enhancing class-awareness. (ii) After introducing the optimal transport (OT) to adaptively generate local masks, ALE can drive different agents to aggregate the semantics of interest under different local masks. The comparison between case (b) and case (c) proves the effectiveness of the above scheme. Fig.<ref> also visualizes the activation maps of different agents on the query image after generating local masks employing optimal transport (OT). It can be found that, equipped with OT, ALE can drive different agents to focus on different local regions of the specific object. When OT is not employed, not only do the generated agents lack local-awareness and produce coarser activations, but different agents also aggregate similar semantics and produce redundancy. The above phenomenon effectively proves the effectiveness and necessity of employing OT to explore local semantics in ALE.
§.§.§ Ablation study of the Agent Aggregation Decoder
As mentioned in Sec.<ref>, Agent Aggregation Decoder (AAD) aims to mine semantic information in unlabeled images that is beneficial to the current task, for breaking the limitations of the support set and enabling stronger generalization. The quantitative results in Table <ref> demonstrate that AAD can effectively mine class-specific semantics in unlabeled data sources, flexibly dealing with intra-class variance. A natural question is how many unlabeled images (i.e., N_u) are most appropriate. Fig.<ref> explores the impact of the number of unlabeled images (i.e., N_u) on segmentation performance. As the number of unlabeled images increases, AAD can mine richer class-specific semantics from the larger pool of unlabeled images, thus further optimizing the agents to break through the limited support set and better guide query segmentation. The above observation is also qualitatively demonstrated by the segmentation results with different numbers of unlabeled images in Fig.<ref>. Considering the efficiency of the model (see Table <ref>), N_u is set to 5 in the experiments. We then investigate the impact of the Graph Attention Network (GAT) on performance in AAD, as illustrated in Table <ref>. It can be found that improvements of 0.69% mIoU and 0.87% FB-IoU are realized after introducing GAT, which demonstrates that GAT can enhance the contextual semantics of the unlabeled prototypes so that they can be efficiently captured by the local agents.
§.§.§ Ablation study of the Semantic Alignment Decoder
As mentioned in Sec.<ref>, Semantic Alignment Decoder (SAD) aims to mine the class-specific semantics of the query image itself to promote semantic alignment with the query object, and this paradigm of mining one's own semantics to guide one's own segmentation can cope well with large intra-class variations. We explore the distance with the query object in feature space, with the results shown in Table <ref>, which quantitatively demonstrates the effectiveness of SAD in bringing the distance with the query object closer. The visualization results in Fig.<ref> also qualitatively validate that the proposed SAD can further mine the query image class-specific semantics to achieve more precise boundaries and fewer false activations.
Further, we explore the impact of the number of Semantic Alignment Blocks (i.e., N) on segmentation performance, as depicted in Fig.<ref>. As the number N increases, SAD can further improve the quality of the pseudo-local masks of the query image, which enables the agents to better aggregate the local semantics from the query image, achieving better segmentation, especially receiving 51.74% mIoU and 64.80% FB-IoU when N=9. Considering the efficiency of the model, N=7 is a proper choice. In addition, Table <ref> gives the impact of the contribution γ of the Agent Segmentation Loss ℒ_ASL in SAD. It can be observed that the segmentation accuracy shows an increasing and then decreasing trend, and reaches the best performance when γ=0.8.
§.§.§ Ablation study of K-shot
When expanded to the K-shot (K>1) setting, the support set can provide more labeled samples for guiding segmentation. Fig.<ref> demonstrates the impact on performance in iSAID with different numbers of support images provided. It can be noticed that the mIoU of each Fold achieves a large improvement as K keeps increasing. When continuing to increase to 10 images, the boost slows down, which is reasonable because of the decreasing marginal effect of the labeled samples. Fig.<ref> also compares the segmentation results of the proposed AgMTR under the 1- and 5-shot settings. The 5-shot results are significantly superior to the 1-shot results with more complete objects and fewer false activations. The insight behind this phenomenon is that AgMTR can mine more class-representative and stable agents from more support images, which can further facilitate agents to capture similar semantics from unlabeled and query images for more precise agent-level semantic correlations.
§.§.§ Performance Analysis for Each Class
Fig.<ref> compares the segmentation performance of our AgMTR with the baseline in different classes on iSAID. Our method achieves significant advantages in 14 out of 15 classes, especially for the `Storage tank', `Baseball field', `Tennis court', `Helicopter', and `Soccer field' classes, achieving more than 7% gains. We observe that these classes tend to have large intra-class differences, requiring the FSS model to be more class-aware. However, our method performs worse than the baseline for the `Roundabout' class, i.e., 83.14% vs 85.27%. This may be because the appearance of this class tends to be regular and no other classes interfere in the image, so the baseline method can easily handle it as well.
§.§.§ Ablation study of the Backbone
Table <ref> gives the impact of different backbones on segmentation performance, where the inference speed is measured under the 5-shot setting. We can observe the following phenomena: (i) Employing the backbone of DeiT-B/16 realizes the optimal accuracy in all settings, followed by the backbone of ViT-B/16. (ii) Employing the backbone of DeiT-S/16 (89MB) yields a segmentation accuracy on par with the ResNet-101 (179MB) based method DMNet, i.e., 49.11% vs 49.21%, which clearly demonstrates the superiority of the proposed method. (iii) ViT-S/16 and DeiT-T/16 have slightly worse segmentation performance, but they have faster inference speeds than others.
§.§.§ Visualization of Local-aware Agents
To further explore and understand the role of the local-aware agents, Fig.<ref> exhibits the visualization of activation maps of different agents on the query image, which are derived by performing similarity computation between the agents and the query features. We could observe the following phenomena: (i) Different agents are able to focus on different yet complementary local regions, which strongly demonstrates the effectiveness of the local semantic mining as conceived. (ii) The constructed agents can reasonably focus on key semantic clues in the current episode, e.g., in the first column, different agents will focus on the `Fuselage', `Wings', and `Tail' of the multiple `Planes'. And this rationalization is more obvious in simpler natural scenarios (PASCAL-5^i). This also proves the generalizability of the proposed method from another perspective.
§.§ Extension in Weak-label Few-shot Segmentation
To further alleviate the burden of dense labeling in the real world, this section discusses the effectiveness of FSS with two cheaper mask schemes, bounding box and scribble. Following the setup of <cit.>, the above cheaper schemes are randomly generated from the original dense labeled masks of support images, as shown in Fig.<ref>. Table <ref> demonstrates the performance of different mask schemes on three benchmarks under the 1-shot setting. One can find that our method with the scribble mask can achieve 48.88%, 67.09%, and 44.43% mIoU accuracy on three benchmarks, respectively, which performs comparably with several competing methods with the dense mask, revealing AgMTR's tolerance to label quality. In addition, we notice that employing the bounding box mask provides weaker results than the scribble mask, which may be due to the fact that the bounding box mask contains a large amount of background noise that interferes with the mining of valuable semantics from the support images by AgMTR. Based on the above observations, we believe that the scribble mask can effectively alleviate the labeling burden while maintaining excellent performance.
§.§ Extension in Cross-domain Few-shot Segmentation
To explore the generalization of the proposed AgMTR to unseen domains in real-world situations, we perform the Cross-domain Few-shot Segmentation, where the samples utilized for training and testing are from different domains. Following the setup of ABCDFSS <cit.> and PATNet <cit.>, we evaluate the performance on three datasets with different domain shifts, including FSS-1000 <cit.>, Chest X-ray <cit.>, and DeepGlobe <cit.>, which cover 1,000 classes of everyday objects, X-ray images and 6 classes of satellite images, respectively. The proposed method is trained on PASCAL VOC and evaluated on these three datasets, with the mIoU results and visualization results presented in Table <ref> and Fig.<ref>. It can be found that our method achieves satisfactory performance, e.g., AgMTR exceeds ABCDFSS by 8.94% on FSS-1000, and by 6.7% on the Chest X-ray under the 1-shot setting. The above results demonstrate that even though our AgMTR is not specifically constructed for cross-domain scenarios, the designed local agents can dynamically mine class-specific semantics from labeled, unlabeled, and query images in the target domain to perform accurate segmentation and exhibit excellent generalisability.
§.§ Failure case analysis
By mining and optimizing local-aware agents from support, unlabeled, and query images, AgMTR is able to construct precise agent-level semantic correlation to efficiently perceive the interested object. However, it ignores the association between foreground objects and the background, or other categories, producing several failure segmentation cases. Taking case (a) in Fig.<ref> as an example, the model fails to understand the `Bridge is always over water' connection and incorrectly considers a `Road' over land as a `Bridge'. In addition, when the object appears in shadow, the object shows extreme similarity with the background, or the object overlaps with other classes, the segmentation performance is also weak (see Fig.<ref>.(b)-(d)). A potential solution to radically improve the FSS performance may be to utilize diffusion models to generate more labeled images based on the support image to provide more robust guidance, such as DifFSS <cit.>.
§ CONCLUSION
To correct the semantic ambiguity between the query FG and BG pixels due to pixel-level mismatch, this paper proposes a novel Agent Mining Transformer (AgMTR) for remote sensing FSS, which can adaptively mine a set of representative agents to construct agent-level semantic correlation. Compared to pixel-level semantics, the agents are equipped with local contextual information and possess a broader receptive field, thus ensuring that the query pixels safely execute the semantic aggregation and enhancing the semantic clarity between the query FG and BG pixels. Experiment results on remote sensing scenarios iSAID and more common natural scenarios, i.e., PASCAL-5^i and COCO-20^i validate the effectiveness of the proposed AgMTR and set the state-of-the-art performance, which strongly demonstrates its generality and extensibility. In the future, we consider introducing the semantic information of linguistic modality in FSS tasks, such as Visual-Language models, and try to explore more challenging Zero-shot segmentation tasks.
§ DATA AVAILABILITY STATEMENT
The datasets generated in this study are available from the iSAID[<https://captain-whu.github.io/iSAID/index>] website, the PASCAL[<http://host.robots.ox.ac.uk/pascal/VOC/>] website, and the MS COCO[<https://cocodataset.org/#home>] website.
§ ACKNOWLEDGMENTS
This work was supported by the National Natural Science Foundation of China (NSFC) under Grant 62301538.
| Semantic segmentation <cit.>, one of the fundamental tasks in the intelligent interpretation of remote sensing, aims at assigning a pre-defined target category to each pixel in remote sensing images <cit.>. With the recent development of deep learning techniques, especially the emergence of the fully convolutional network (FCN) <cit.>, the semantic segmentation task of remote sensing has made remarkable breakthroughs. However, collecting massive pixel-level annotations for complex remote sensing scenes is time-consuming and labor-intensive <cit.>. Even though researchers propose weakly-supervised <cit.> and semi-supervised methods <cit.> to alleviate the data hunger, they fail to effectively generalize to unseen domains. Few-shot Learning (FSL) provides a promising research solution for tackling the above issue, which can exploit several labeled samples to quickly generalize to unseen (novel) classes <cit.>.
Few-shot Segmentation (FSS) is a natural extension of FSL for handling dense prediction tasks <cit.>. The purpose of FSS is to segment the corresponding object in the interested image (called query image) utilizing several labeled samples (called support images) containing the interested class. The current mainstream FSS schemes, i.e., prototype-learning and affinity-learning, are equipped with pixel-level semantic correlation in most cases. The former constructs the class-agnostic prior mask by maximizing pair-wise pixel similarity to compensate for the missing information caused by feature compression <cit.>, while the latter directly leverages the similarity of support-query pixel pairs to aggregate support semantics into the query feature <cit.>. For instance, CyCTR <cit.> introduced a cycle-consistent transformer to selectively aggregate support-pixel semantics and SCCAN <cit.> designed a self-calibrated cross-attention to alleviate the pixel-mismatch issue.
Despite the desirable results in natural scenarios, when extended to remote sensing scenarios with extreme intra-class variations and cluttered backgrounds, directly leveraging the correlations between support-query pixel pairs to aggregate support semantics may lead to tremendous mismatches. Unfortunately, the low-data regime of FSS would exacerbate these negative effects. As illustrated in Fig.<ref>(a), in view of the appearance difference between objects in the support-query pair and the complex background interference, the background (BG) pixel `Roof' P_3 in the query even matches the foreground (FG) pixel `Plane' P_1 in the support more than the FG-Pixel `Plane' P_2 in the query. At this point, both query FG and BG pixels will aggregate support FG semantics, resulting in semantic ambiguity between the FG and BG pixels of the query and yielding segmentation failures.
To correct the semantic ambiguity between the query FG and BG pixels caused by pixel-level mismatch, we suggest adaptively mining a set of local-aware agents (i.e., condensed feature representations) for constructing agent-level semantic correlation. Note that different agents will highlight different local regions of the interested class, e.g., the `Fuselage', `Wings', etc., of the `Plane'. In this case, different pixels in the query image will derive the corresponding agent-level semantic correlations, thus executing specific semantic aggregations. As illustrated in Fig.<ref>(b), given query FG-pixel P_2 selectively aggregates the agent semantics responsible for the `Fuselage', while the BG-pixel P_3 aggregates the background agent semantics. This is due to the fact that, compared to pixel-level semantics, the agents are equipped with local contextual information and possess a broader receptive field, thus ensuring the safe execution of semantic aggregation for query pixels and enhancing semantic clarity between query FG and BG pixels.
Nevertheless, excavating representative agents without explicit supervision is not trivial. This paper develops an Agent Mining Transformer network called AgMTR to address this issue. For minimizing the negative impact of large intra-class variations in remote sensing scenarios, besides routinely exploiting the support images, AgMTR focuses on exploring valuable semantics from unlabeled images and the query image itself to optimize agents. The specific process of mining agents is depicted in Fig.<ref>.
Firstly, the Agent Learning Encoder (ALE) is proposed to guide agents to efficiently mine salient semantics (i.e., class-specific semantics) from the support pixels that can help segment the target classes in the query image in a masked cross-attention manner <cit.>. To enhance the semantic diversity of the agents, ALE dynamically imposes equal division constraints on the foreground region, thus assigning different yet complementary local masks to different agents. At this point, different agents can aggregate the pixel semantics from the corresponding local regions, achieving the local-awareness of the target category, such as the `Fuselage', and `Wings' of the `Plane'.
Merely aggregating support semantics is not enough to bridge the semantic gap from the image pair. The Agent Aggregation Decoder (AAD) is proposed to introduce unlabeled images containing the interested class as references, with the expectation that agents can adaptively explore the corresponding object semantics from unlabeled data sources, thus breaking beyond the limited support set for better guiding the query segmentation. Notably, the unlabeled image features will be adaptively clustered into a set of local prototypes, where the proposed agents will selectively aggregate these prototype semantics to further enhance the local-awareness of the target category.
Furthermore, inspired by the strong intra-object similarity, i.e., pixels within the one object are more similar than pixels between different objects <cit.>, learning from the query image itself is equally crucial to guide its own segmentation. The Semantic Alignment Decoder (SAD) is suggested to promote semantic consistency between the agents and the query object. Specifically, the pseudo-local masks of the query image will be constructed by computing the similarity between the different local agents and the query feature. Under the constraint of the local masks, different agents will further aggregate the corresponding query local semantics, thus aligning with the query semantics.
Based on the above three components, AgMTR constructs precise agent-level semantic correlations, effectively distinguishing FG and BG pixels in the semantic aggregation of the query feature.
The primary contributions can be summarized as follows:
* A novel Agent Mining Transformer (AgMTR) network for FSS in remote sensing is proposed to adaptively mine a set of local-aware agents for different objects to construct agent-level semantic correlation, thus correcting the semantic ambiguity between the query FG and BG pixels caused by pixel-level mismatches.
* We construct an Agent Learning Encoder, Agent Aggregation Decoder, and Semantic Alignment Decoder to adaptively mine and optimize agent semantics from support, unlabeled, and query images, in that order.
* Extensive experiments on remote sensing FSS benchmark iSAID, validate the superiority of the proposed AgMTR and achieve state-of-the-art performance. Surprisingly, our method likewise outperforms previous SOTA on two popular CV benchmarks, PASCAL-5^i, and COCO-20^i. | §.§ Semantic Segmentation
Semantic segmentation, with the goal of predicting pixel-level labels within an image, has been widely emphasized by researchers as a fundamental task in image interpretation. Long et al. <cit.> first proposed a full convolutional network (FCN) for the semantic segmentation task, providing a solid foundation for the following research. To further mine the association of contextual information, Chen et al. <cit.> proposed Dilated Convolution to expand the convolutional receptive field. Meanwhile, Zhao et al. <cit.> and Chen et al. <cit.> proposed Pyramid Pooling Module (PPM) and Atrous Spatial Pyramid Pooling (ASPP), respectively, to combine the feature representations between different scales and regions for multi-scale semantic aggregation.
On this basis, some researchers have focused on designing semantic segmentation methods specific to remote sensing scenes. Feng et al. <cit.> proposed a neighboring pixel affinity loss (NPALoss) to focus on the optimization for small-size objects and hard object boundaries. Peng et al. <cit.> proposed a Cross Fusion Network (CF-Net) for fast and effective extraction of small-scale objects. In addition, Niu et al. <cit.> decoupled the semantic segmentation task and proposed a disentangled learning paradigm to model foreground and boundary objects separately. Despite promising success with large-scale data, these methods fail to perform well when generalizing to unseen domains.
§.§ Few-shot Learning
Recently community researchers have proposed Few-shot Learning (FSL) <cit.> for fast generalization to unseen classes, which aims to recognize novel (unseen) classes from a handful of labeled samples <cit.>. In general, FSL methods would employ the meta-learning paradigm <cit.>, where the model understands representative knowledge from the training dataset (seen classes) and generalizes it to novel tasks (unseen classes). On this basis, these methods can be divided into the following two branches: (i) Optimization-based method <cit.>, which aims to search for a set of proper parameters for the model that can be easily generalized to various novel tasks. For example, Finn et al. <cit.> learned the initialization parameters of the model while Ravi et al. <cit.> learned an optimization strategy instead of a fixed optimizer to facilitate faster convergence. (ii) Metric-based method <cit.>, which aims to find a specific embedding space that can efficiently distinguish different classes for classification. For example, Zhang et al. <cit.> utilized Earth Mover's Distance (EMD) to model the distance between images as the optimal transport plan. Bateni et al. <cit.> proposed to construct Mahalanobis-distance classifiers to improve the accuracy of few-shot classification.
In addition, given the large intra-class variations and small inter-class variations of remote sensing scenarios, some FSL methods specific to remote sensing scenarios have been successively proposed <cit.>. In this paper, we instead apply FSL to the more advanced task of scene understanding rather than scene classification in remote sensing scenes, i.e., semantic segmentation.
§.§ Few-shot Segmentation
Few-shot Segmentation (FSS) <cit.>, a natural extension of Few-shot Learning to tackle intensive prediction tasks, is proposed to guide the model to activate corresponding objects in the query image utilizing several labeled samples (i.e., support images) containing the interested class. Current FSS methods can be broadly divided into two parts: (i) Prototype-learning, which aims to compress the support feature into a prototype through masked average pooling (MAP), and perform feature comparison with the query feature to segment the interested object <cit.>. However, these methods would inevitably lose spatial information and lack context-awareness. To address this issue, besides attempting to generate multiple prototypes <cit.>, they typically constructed the class-agnostic prior mask by maximizing pair-wise pixel similarity to further mine pixel-level support semantics <cit.>. (ii) Affinity-learning <cit.>, which aims to directly construct pixel-level feature matching for segmentation. For instance, Zhang et al. <cit.> introduced a cycle-consistent transformer to aggregate support-pixels semantics selectively. Xu et al. <cit.> designed a self-calibrated cross-attention to alleviate the pixel-mismatch issue. However, such pixel-level similarity would ignore contextual information and tend to be affected by background clutter or noisy pixels causing semantic ambiguity. We propose feature embeddings called agents, which are condensed feature representations similar to prototypes, and at the same time mine the valuable pixel semantics for dynamic optimization, which can combine the strengths of prototype learning and affinity learning.
In addition, several methods, such as BAM <cit.>, D^2Zero <cit.>, and PADing <cit.>, focus on addressing the bias toward seen classes in Few-shot (Zero-shot) scenes for better generalize to unseen classes. In the field of remote sensing, the Few-shot Segmentation task has also received an enthusiastic response <cit.>. Yao et al. <cit.> proposed a scale-aware prototype matching scheme to handle the large variance of objects' appearances and scales. Bi et al. <cit.> designed DMNet to mine the class-specific semantics from query images. To maximize the breakthrough of the few-shot limitations for better adapting to remote sensing scenarios, this paper proposes to mine and optimize class-specific semantics from support, unlabeled, and query images. | null | null | null | To correct the semantic ambiguity between the query FG and BG pixels due to pixel-level mismatch, this paper proposes a novel Agent Mining Transformer (AgMTR) for remote sensing FSS, which can adaptively mine a set of representative agents to construct agent-level semantic correlation. Compared to pixel-level semantics, the agents are equipped with local contextual information and possess a broader receptive field, thus ensuring that the query pixels safely execute the semantic aggregation and enhancing the semantic clarity between the query FG and BG pixels. Experiment results on remote sensing scenarios iSAID and more common natural scenarios, i.e., PASCAL-5^i and COCO-20^i validate the effectiveness of the proposed AgMTR and set the state-of-the-art performance, which strongly demonstrates its generality and extensibility. In the future, we consider introducing the semantic information of linguistic modality in FSS tasks, such as Visual-Language models, and try to explore more challenging Zero-shot segmentation tasks. |
http://arxiv.org/abs/2409.18029v1 | 20240926163806 | Constraining the origin of the nanohertz gravitational-wave background by pulsar timing array observations of both the background and individual supermassive binary black holes | [
"Yunfeng Chen",
"Qingjuan Yu",
"Youjun Lu"
] | astro-ph.HE | [
"astro-ph.HE",
"astro-ph.GA",
"gr-qc"
] |
Origin of the nanohertz gravitational-wave background
Chen, Yu, & Lu
0000-0001-5393-9853]Yunfeng Chen
School of Astronomy and Space Science, University of Chinese
Academy of Sciences, Beijing 100049, China
National Astronomical Observatories, Chinese Academy of Sciences,
Beijing, 100101, China
0000-0002-1745-8064]Qingjuan Yu
Kavli Institute for Astronomy and Astrophysics, and School of
Physics, Peking University, Beijing, 100871, China
[email protected]
0000-0002-1310-4664]Youjun Lu
National Astronomical Observatories, Chinese Academy of Sciences,
Beijing, 100101, China
School of Astronomy and Space Science, University of Chinese
Academy of Sciences, Beijing 100049, China
[email protected]
Qingjuan Yu
§ ABSTRACT
The gravitational waves (GWs) from supermassive binary black holes (BBHs) have
long been sought by pulsar timing array (PTA) experiments, in the forms of both a
stochastic GW background (GWB) and individual sources. The evidence for a GWB
was reported recently by several PTAs, with its origin yet to be determined. Here we use
a BBH population synthesis model to investigate the detection probability of
individual BBHs by the Chinese PTA (CPTA) and the constraint on the GWB origin
that may be obtained by PTA observations of both the GWB and individual BBHs. If the
detected GWB signal is entirely due to BBHs, a significantly positive redshift
evolution (∝(1+z)^2.07) of the mass scaling relation between
supermassive black holes and their host galaxies is required. In this case, we
find that the detection probability of individual BBHs is ∼85% or
64% when using a 3.4-year period of CPTA observation data, with an expectation of ∼1.9 or
1.0 BBHs detectable with a signal-to-noise ratio ≥3 or 5, and it is
expected to increase to >95% if extending the observation period to 5 years
or longer. Even if the contribution from BBHs to the GWB power signal is as
small as ∼10%, a positive detection of individual BBHs can still be
expected within an observation period of ∼10 years. A non-detection of
individual BBHs within several years from now jointly with the detected GWB
signal can put a strong constraint on the upper limit of the BBH contribution to
the GWB signal and help identify/falsify a cosmological origin.
§ INTRODUCTION
The gravitational wave (GW) radiation from the cosmic population of supermassive
binary black holes (BBHs) forms a stochastic GW background (GWB), which has
long been expected to be detected by pulsar timing array (PTA) experiments. The
evidence of the presence of a GWB at the nanohertz band has been recently
reported by several PTAs, including the North American Nanohertz Observatory for
Gravitational Waves <cit.>,
the collaboration of the European PTA and the Indian PTA (EPTA+InPTA; <cit.>),
the Parkes PTA <cit.>, and the Chinese PTA
<cit.>, with a confidence level of ∼ 2-4.6σ.
However, the origin of this signal has not been determined yet.
The detected signal can be consistent with that predicted from
the cosmic BBH origin, considering the uncertainties in the model estimation
(e.g., <cit.>). It could also (partly) originate from cosmic
strings, phase transitions, domain walls, primordial black holes, or other
processes in the early universe, which has been intensively discussed recently in the
literature (e.g., <cit.>).
The GWB of the astrophysical origin may be different from the GWB of the
cosmological origin at least in the following two aspects.
First, the former one results in a characteristic strain spectrum following the
canonical power law with a slope of -2/3 in the nanohertz band,[
Note that at low frequencies (e.g., f≲ 10^-9 Hz), the
GWB may deviate from the canonical -2/3 power law, considering
that the environmental coupling effect <cit.> and the orbital
eccentricities of cosmic BBHs <cit.> both lead to the bending of the
GWB spectrum <cit.>.
] while the latter one may
result in a GWB much different from the -2/3 power-law, depending on the
detailed models <cit.>.
With the accumulation of the PTA observation time, the GWB spectrum may be
accurately measured in the future and thus used to distinguish these two
scenarios. Second, the former fluctuates significantly at high
frequencies (≳ 10^-8 Hz) due to the small-number statistics of
individual BBHs <cit.>, while the latter does not.
It is expected that the signals of some individual BBHs can be loud enough to
stand out from the GWB composed of numerous weaker sources (e.g.,
<cit.>). If the GWB signal detected by the current
PTA experiments indeed originates from BBHs, there would be some loud
individual BBHs hiding in the data. Detection of any such individual BBHs by PTA
experiments would provide a consistency check for the BBH origin and/or
constraint on the contribution fraction of BBHs to the GWB, though currently no
evidence for individual BBHs was found in the data sets of NANOGrav
<cit.> and EPTA <cit.>.
In this paper, we forecast the detection of individual BBHs by PTA experiments,
especially CPTA, with consideration of the constraint from the GWB signal
reported recently on the cosmic BBH model, and investigate the possibility to
constrain the origin of the GWB jointly by the PTA detection of individual BBH
sources and the GWB. The paper is organized as follows. In
Section <ref>, we first briefly introduce the population synthesis
model for cosmic BBHs and constrain the model by using the GWB spectrum detected
recently by PTA experiments, and then describe the method to generate mock
samples for cosmic BBHs. With these samples, we investigate the detection
prospects of individual BBHs for CPTA. Our main results are presented in
Section <ref>, and the main conclusions are summarized in
Section <ref>.
§ BBH POPULATION MODEL
We adopt the BBH population model constructed in <cit.> (hereafter
CYL20) to estimate the GWB spectrum and generate mock cosmic BBH
populations. We briefly describe the model as follows (see also Sections 2-3
of CYL20 for details). The model is constructed on a set of
astrophysical ingredients, including the galaxy stellar mass function (GSMF),
the galaxy merger rate, the MBH–host galaxy scaling relation, and the orbital
evolution of BBHs within their merged host galaxies. These ingredients can be
obtained from either observations or numerical simulations. Through the
incorporation of these ingredients in the model, the statistical distributions
of cosmic BBHs, their coalescence rates and gravitational wave radiation
strength can be obtained.
The cosmic BBH distribution function Φ(M,q,a,z), which is
defined so that Φ(M,q,a,z)dM dq da is the comoving number
density of BBHs at redshift z with total mass in the range M→
M+dM, mass ratio in the range q→ q+dq, and
semimajor axis in the range a→ a+da, can be
connected with the above model ingredients through the following
equation (cf. Eq. 17 in CYL20):
Φ(M,q,a,z)
= 1/N∑_i=1^N∫∫ dM_gal dq_gal n(M_gal,z_i)
ℛ_gal(q_gal,z_i|M_gal)
× p(M,q|M_gal,q_gal,z_i)
H(t-τ_a,i)
×| da_i(τ|M_gal,q_gal,M,q)/dτ|^-1_τ
=τ_a,i(a|M_gal,q_gal,M,q),
where t is the cosmic time at redshift z, n(M_gal,z) is the GSMF defined so that n(M_gal,z) dM_gal
represents the comoving number density of galaxies at redshift z with stellar
mass within the range M_gal→ M_gal+dM_gal,
ℛ_gal(q_gal,z|M_gal) is the merger rate per galaxy (MRPG) defined so that
ℛ_gal(q_gal,z|M_gal) dt dq_gal represents the averaged number of galaxy
mergers with mass ratio in the range q_gal→ q_gal+dq_gal within
cosmic time t → t+dt for a descendant galaxy with mass M_gal, the
BBH systems (i = 1, 2, ..., N) with total mass M and mass ratio
q are generated by the Monte-Carlo method according to the properties of
the merged galaxies with total mass M_gal and mass ratio q_gal, a_i(τ)
represents the semimajor-axis evolution of the BBH system i as a function of
the time τ elapsed since the galaxy merger, τ_a,i is the time taken
for the BBH semimajor axis to decay to the value a, H(t-τ_a,i) is a
step function defined by H(t-τ_a,i)=1 if t>τ_a,i and
H(t-τ_a,i)=0 if t≤τ_a,i, z_i is the redshift corresponding to
cosmic time t-τ_a,i, and p(M,q|M_gal,q_gal,z) denotes the
probability distribution of the total masses and mass ratios (M,q) of
BBHs within a galaxy merger remnant characterized by (M_gal,q_gal) at
redshift z, which can be derived from the MBH–host galaxy relations.
In Equation (<ref>), we ignore the multiple galaxy major mergers that
could occur between a host galaxy merger and the coalescence of its BBH, i.e.,
setting the term P_ intact=1 in Equation (17) of CYL20, which is
plausible as the GWB at the PTA band is mainly contributed by galaxy or BBH
mergers at redshifts lower than 2 (see Fig. 21 in CYL20).
The BBH coalescence rate R(M,q,z), which is defined so that
R(M,q,z) dt dM dq represents the comoving number density of
BBH coalescences occurring during the cosmic time t → t+dt, with
total BH mass within the range M→ M+dM and mass ratio
within the range q→ q+dq, can be obtained through
the following equation (cf. Eq. 22 in CYL20):
R (M,q,z(t))
= 1/N∑_i=1^N∬ dM_gal dq_gal
× n(M_gal,z_i) ℛ_gal(q_gal,z_i|M_gal)
× p(M,q|M_gal,q_gal,z_i)H(t-τ_a=0,i).
The characteristic strain amplitude of the stochastic GWB in the PTA band,
h, produced by a cosmic population of BBHs at the GW frequency f (in the
observer's rest frame) can be estimated by
h^2(f)≃ 4/πG/c^2f^-2
∭ dz dM dq|dt/dz|
× R(M,q,z) 1/1+z|dE/dln f_r|,
where c is the speed of light, G is the gravitational constant,
f_r= (1+z)f is the frequency of the GW signal in the source's rest frame,
E is the orbital energy of the BBH,
and |dE/dln f_r| is the GW energy per unit logarithmic rest-frame
frequency radiated by an inspiraling BBH with parameters (M, q, f_r)
(see Eqs. 30–34 in CYL20 and also the derivation of <cit.>). Note
that compared with Equation (33) in CYL20,
we assume in Equation (<ref>) that the BBH evolution is in the
gravitational-radiation-dominated stage,
as we focus on the PTA band, where f is greater than the turnover frequency of
the expected GWB spectrum shown in Fig. 19 of CYL20 and the coupling of the BBH orbital
evolution with the surrounding environment is negligible.
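To make the computation behind Equation (<ref>) concrete, the sketch below evaluates h(f) by a brute-force Riemann sum, given a user-supplied coalescence-rate function. It assumes circular, GW-driven binaries so that |dE/dln f_r| = (1/3) G^2/3 ℳ_c^5/3 (π f_r)^2/3, uses the Planck18 cosmology from astropy for |dt/dz| = 1/[(1+z)H(z)], and is meant as an illustration rather than the pipeline used in this work (the grids, units, and function names are our choices).

```python
import numpy as np
from astropy.cosmology import Planck18
from astropy import constants as const, units as u

G, c = const.G.si.value, const.c.si.value   # SI units

def chirp_mass(M, q):
    """Chirp mass (kg) for total mass M (kg) and mass ratio q <= 1."""
    m1, m2 = M / (1.0 + q), M * q / (1.0 + q)
    return (m1 * m2) ** 0.6 / M ** 0.2

def gwb_strain(f, rate_fn, z_grid, M_grid, q_grid):
    """Characteristic strain h(f) by a Riemann sum over uniform (z, M, q) grids.

    rate_fn(M, q, z): comoving BBH coalescence rate density d^3 n / (dt dM dq),
                      in SI units (m^-3 s^-1 kg^-1 per unit mass ratio).
    """
    dz, dM, dq = np.diff(z_grid).mean(), np.diff(M_grid).mean(), np.diff(q_grid).mean()
    h2 = 0.0
    for z in z_grid:
        dtdz = 1.0 / ((1.0 + z) * Planck18.H(z).to(u.s ** -1).value)   # |dt/dz|
        fr = (1.0 + z) * f                                             # rest-frame GW frequency
        for M in M_grid:
            for q in q_grid:
                Mc = chirp_mass(M, q)
                # |dE/dln f_r| for a circular, GW-driven binary
                dE_dlnf = (G * Mc) ** (5.0 / 3.0) * (np.pi * fr) ** (2.0 / 3.0) / (3.0 * G)
                h2 += rate_fn(M, q, z) * dtdz / (1.0 + z) * dE_dlnf * dz * dM * dq
    return np.sqrt(4.0 * G / (np.pi * c ** 2) / f ** 2 * h2)
```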
The GWB strain estimated from Equation (<ref>) may suffer from
uncertainties in the involved model ingredients.
CYL20 analyzes the effect on the estimation due to the uncertainty in
each of the model ingredients (see Section 6 therein), and finds that the
uncertainty of the estimated GWB amplitude is dominated by the variation of the
MBH–host galaxy scaling relation (∼ 1 dex), compared with that due
to the MRPG (∼ 0.3 dex), and that ignoring the time delay
τ_a=0,i in Equation (<ref>) can increase the GWB
strain estimate by ∼ 0.15 dex. The gas effect in the BBH evolution is
neglected in the estimate of the GWB strain at the PTA band.
The BBH population model may be constrained by the detected GWB signal if it is
fully contributed by cosmic BBHs as demonstrated in <cit.>, where
the common uncorrelated red noise (CURN) signal obtained in <cit.>
is adopted.
<cit.> investigate the constraint on the MBH–host galaxy scaling
relation,
formulated in a general form of
log_10(M_ BH/M_⊙) =
α̃log_10(M_ bulge/10^11M_⊙)+ β̃
+ γ̃log_10(1+z)+𝒩(0,ϵ̃),
as its variation dominates the uncertainty of the GWB strain amplitude
estimation.
In Equation (<ref>), M_ bulge represents the mass of the spheroidal
components of the host galaxies (i.e., elliptical galaxies themselves or bulges
in spiral galaxies; throughout this work we use “bulge” to represent both
cases), α̃ and β̃ are the slope and the normalization of the relation,
the term γ̃log_10(1+z) describes the
redshift evolution of the scaling relation, and the term 𝒩(0,ϵ̃)
represents a random value following a normal distribution with zero mean and
standard deviation ϵ̃ (the “intrinsic scatter”).
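As an illustration of how Equation (<ref>) is used to assign BH masses in the population synthesis, the following sketch draws log_10 M_BH for a given bulge mass and redshift; the default parameter values are placeholders rather than fits from this work.

```python
import numpy as np

def sample_log_mbh(log_mbulge, z, alpha=1.0, beta=8.69, gamma=2.0, eps=0.3, rng=None):
    """Draw log10(M_BH / M_sun) from the redshift-dependent scaling relation.

    log_mbulge: log10 of the bulge stellar mass in solar masses
    alpha, beta, gamma, eps: slope, normalization, redshift-evolution exponent,
                             and intrinsic scatter of the relation
    """
    rng = np.random.default_rng() if rng is None else rng
    mean = alpha * (log_mbulge - 11.0) + beta + gamma * np.log10(1.0 + z)
    return mean + rng.normal(0.0, eps)
```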
The other model ingredients are fixed by adopting the GSMF from <cit.> and the MRPG
from <cit.>, and converting the host galaxy mass to the bulge
mass through the prescription in <cit.>. The dynamical evolution of
individual BBHs is mainly based on <cit.>.
To produce a GWB with the modelled strain amplitude being the same as the CURN
signal, it requires either (i) a positive redshift evolution of the
MBH–host galaxy scaling relation (the best fit of γ̃=1.99 in Eq. <ref>, see
also <cit.>), or (ii)
a normalization (the best fit of β̃=9.55 if fixing γ̃=0 in Eq. <ref>) much larger than the
empirically determined values (e.g., β̃=8.46 in <cit.> and
β̃=8.69 in <cit.>, which are among the largest ones for the
scaling relation determined in the literature using local MBHs with
dynamical mass measurements).
Variations of the other model ingredients can alleviate the requirements to
some extent. For example, if ignoring the time delays between BBH coalescences
and their host galaxy mergers, either the constraint of the best-fit
γ̃=1.26 or the best-fit β̃=9.13 is obtained (see Table 3 of
<cit.>); if the MRPG is increased by a factor of 3, the
constraint becomes γ̃=0.93 or β̃=9.06. As seen from these examples,
a positive redshift evolution of the MBH–host galaxy scaling relation is still
needed, or the required best-fit normalizations β̃ are still
higher than the largest empirically determined ones.
Here we revisit the constraint on the MBH-host galaxy scaling relation by using
the latest NANOGrav 15-year free-spectrum data set, which was derived by
considering simultaneously the HD-correlated component together with the
monopole-correlated, the dipole-correlated and the CURN components <cit.>.
The newly reported HD signal has an amplitude somewhat larger than the CURN
signal found in the NANOGrav 12.5-year data set, i.e., the characteristic strain
amplitude at f=1 yr^-1 for the HD signal is A_ yr=2.4×
10^-15, while the amplitude for the CURN signal is A_ yr=1.92×
10^-15 <cit.>.
As mentioned in option (ii) above, adopting a redshift-independent scaling
relation in Equation (<ref>) yields a normalization of the scaling
relation significantly larger than the empirically determined values given by
the local MBHs with dynamical mass measurements (even lying around the boundary
of the 3σ deviation from the maximum empirically determined value of
<cit.>), and the newly obtained higher A_ yr from
the NANOGrav 15-year data implies an even higher, more significantly deviated
normalization β̃.
Thus in this study we consider option (i), i.e., the redshift
evolution of the scaling relation, by fixing the normalization β̃=8.69 as measured in
<cit.>.
In the model fitting, we use the leftmost 5 frequency bins of the
NANOGrav 15-year free-spectrum data set <cit.>, in which the
data provide strong constraints on HD-correlated posteriors <cit.>,
assuming that the GWB posteriors detected in the different frequency bins are
independent of each other. Note that the usage of the leftmost 5 frequency
bins is different from <cit.>, in which only the strain amplitude
at a single frequency f=1 yr^-1 is used for the model calibration.
We find that the parameters in Equation (<ref>) that best match the
GWB spectrum data are α̃=1.00± 0.12, γ̃=2.07± 0.47 and
ϵ̃=0.30± 0.17, suggesting again that the recent PTA observations
require a positive redshift evolution of the MBH–host galaxy scaling relation
(i.e., γ̃=2.07) given the adopted model settings.
This result is also roughly consistent with the recent James Webb Space Telescope (JWST) observations of MBHs at
z∼ 4-7 active galaxies, which suggest that the mass ratios of the MBHs to their host
galaxies at high redshift is substantially larger than those at nearby universe <cit.> (see also other
works, e.g., ).
Figure <ref> shows the NANOGrav 15-year free-spectrum data
(violin symbols) <cit.> and the GWB spectrum (cyan line)
expected from the BBH population model calibrated by the best fit to the
observational data. We denote this BBH model, with the best-fit values
(1.00, 2.07, 0.30) for the slope, redshift-evolution parameter and intrinsic scatter, as the
reference BBH population model in this paper. The reference model gives a
magnitude of A_ yr=2.0× 10^-15 at frequency f=1 yr^-1,
which is consistent with the value given in <cit.> by
fitting the GWB spectrum with the -2/3 power law.
Note that, similarly to what was mentioned above, the requirement of a positive
redshift evolution of the MBH–host galaxy relation is not affected
much by the uncertainty in the adopted MRPG.
The one-sided uncertainty in the MRPG could be up to a factor of 2-3
(see the lower panel of Fig. 4 in CYL20).
To account for this, we have tested the effect of making the MRPG in the BBH model
2 or 3 times larger than that used in the reference model and
matching the expected GWB spectrum to the observed one; we find best-fit values of
(1.00± 0.12, 1.45± 0.47, 0.31± 0.17) or
(1.00± 0.12, 1.06± 0.49, 0.32± 0.17), respectively, for the slope,
redshift-evolution parameter and intrinsic scatter. These calculations suggest that
our result on the requirement of a positively evolving MBH–host galaxy relation
is robust.
§ DETECTABILITY OF INDIVIDUAL BBHS
With the reference BBH population model, calibrated by the latest NANOGrav
observations, we randomly generate realizations of the cosmic BBHs to calculate
the synthetic strain spectra. We assume that all BBHs are on circular orbits in
the PTA bands <cit.>. For a circular BBH system with masses
M_ BH,1 and M_ BH,2 (and total mass
M_ BBH=M_ BH,1+M_ BH,2) at redshift z, the BBH emits
GWs at a source-rest-frame frequency twice the orbital frequency, i.e., f_r =
2f_orb, which is then redshifted to the observed frequency f = f_r/(1+z). The sky- and
polarization-averaged strain amplitude of the GW from the BBH is
h_0 = √(32/5) (1/d_L) (Gℳ/c^2)^5/3 (π f/c)^2/3 ,
where d_L is the luminosity distance of the BBH system and ℳ =
(1+z) M_BH,1^3/5 M_BH,2^3/5/M_BBH^1/5 is the
redshifted chirp mass. For the GWB produced by BBHs, the synthetic
characteristic strain amplitude at each frequency bin f_k=k/T
(k=1,2,...) is
h(f_k) = √(∑_i h_0,i^2(f_i) min(f_i^2/ḟ_i, f_i T)),
where the sum runs over the BBHs emitting in that frequency bin and
ḟ = 96π^8/3 G^5/3ℳ^5/3 f^11/3/(5c^5).
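For concreteness, the strain and frequency-drift formulas above can be evaluated directly; the Python sketch below uses illustrative masses, redshift and luminosity distance that are not taken from the paper.

import numpy as np

G, c = 6.674e-11, 2.998e8                      # SI units
MSUN, GPC, YR = 1.989e30, 3.086e25, 3.156e7

def redshifted_chirp_mass(m1, m2, z):
    """Redshifted chirp mass in kg for component masses m1, m2 in Msun at redshift z."""
    mc = (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2   # rest-frame chirp mass [Msun]
    return (1.0 + z) * mc * MSUN

def h0(mz, f, dL):
    """Sky- and polarization-averaged strain amplitude; f in Hz (observed), dL in m."""
    return np.sqrt(32.0 / 5.0) / dL * (G * mz / c**2) ** (5.0 / 3.0) * (np.pi * f / c) ** (2.0 / 3.0)

def fdot(mz, f):
    """Drift of the observed GW frequency [Hz^2]."""
    return 96.0 / 5.0 * np.pi ** (8.0 / 3.0) * (G * mz / c**3) ** (5.0 / 3.0) * f ** (11.0 / 3.0)

# Illustrative (not from the paper): a 1e9 + 1e9 Msun binary at z = 0.5, d_L ~ 2.9 Gpc,
# emitting at an observed frequency of 30 nHz, observed for T ~ 16 yr.
Mz = redshifted_chirp_mass(1e9, 1e9, 0.5)
f, T, dL = 30e-9, 16.0 * YR, 2.9 * GPC
n_cyc = f**2 / fdot(Mz, f)                     # the f^2/fdot factor entering min(., f*T)
print(f"h0 ~ {h0(Mz, f, dL):.2e}, f^2/fdot ~ {n_cyc:.3g}, f*T ~ {f*T:.3g}")
print(f"contribution to the characteristic-strain sum in this bin ~ {h0(Mz, f, dL)**2 * min(n_cyc, f*T):.2e}")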
Figure <ref> shows the synthetic GWB strain spectra
obtained for 10 realizations of the BBH population randomly generated from the
reference BBH model (lower panel, each realization represented by a black
curve). For comparison, the canonical -2/3 power-law spectrum for the same BBH
model is also shown by the cyan line. As expected, the synthetic spectra are all
well consistent with the canonical power-law spectrum in the leftmost several
NANOGrav frequency bins (marked by the vertical grey lines). At higher
frequencies, however, the synthetic spectra gradually deviate from the canonical
power-law spectrum, with steeper slopes and large fluctuations, due to the
discrete distribution of BBH sources with strong GW signal in different
frequency bins (see <cit.>, CYL20). In some bins, the
spectrum obtained from one realization may be dominated by a single (or a few)
loud individual BBH system(s), which may be detected as individual BBH sources.
For each realization, we record the top contributor to h within each
NANOGrav frequency bin, and determine if its contribution dominates over the
combined one of the remaining BBH sources within the same frequency bin. We
define the probability that a single BBH dominates the GW radiation in a given
frequency bin f_k as P_ dom(f_k), and the cumulative probability as
P_ dom,cum(f_k) that there is at least one BBH source in any of the
frequency bins lower than f_k dominating the GW radiation in that bin. We show
the top contributor in each frequency bin in the lower panel of
Figure <ref>, marked by filled red or open magenta
circles depending on whether they are dominant or not. In the upper panel, we plot
P_ dom(f) (thin histogram) and P_ dom,cum(f) (thick dashed line),
respectively. Both the probability functions are evaluated based on 1000
realizations of the reference BBH population model. As seen from the figure,
P_ dom increases nearly monotonically with increasing f_k, i.e., the
probability for finding dominant BBH sources is larger at higher frequency bin.
P_ dom,cum reaches 63% and 95% at the 15th and 27th frequency bins,
corresponding to frequencies of 30.0 and 53.4 nHz, respectively.
Since the occurrences of dominant sources follow the Poisson distribution, the
occurrence probabilities of 63% and 95% among the realizations correspond
to the expected mean occurrence number of 1 and 3 in one realization,
respectively. Therefore, we expect that the probability of finding the signature
induced by dominant BBH sources in the leftmost 15 frequency bins of the
NANOGrav 15-year measurements is larger than 60%, if the detected GWB signal
is fully contributed by the cosmic BBHs. We can expect ∼ 1 dominant source
at f≲ 30 nHz, and ∼ 3 at f≲ 54 nHz.
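The conversion quoted above between a cumulative occurrence probability and an expected mean number of dominant sources per realization is just the Poisson relation p = 1 - exp(-λ); a quick check:

import math

for p in (0.63, 0.95):
    lam = -math.log(1.0 - p)   # expected mean number of dominant sources per realization
    print(f"P_dom,cum = {p:.0%} -> mean occurrence number ~ {lam:.1f}")
# ~1 for 63% and ~3 for 95%, as quoted for the 15th and 27th frequency bins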
To investigate the detection of individual BBHs, we consider two sets of PTA
configurations,
one is NANOGrav with a sensitivity curve represented by the 95% upper limit
on the strain of individual BBHs derived from the NANOGrav 15-year data set
<cit.>, and the other is for CPTA with the total number of monitoring
pulsars N=50, the timing precision σ_t=100 ns, the monitoring cadence
Δ t=0.04 yr (i.e., about 1 time per 2 weeks).
For CPTA, we first consider the case with an observation
period of T_ obs=3.4 yr (same as the current CPTA data set). With this
CPTA configuration, we estimate the signal-to-noise ratio (S/N) of the
reported HD-correlated GWB according to Equation (23.69) in
<cit.>, and find that S/N=4.5.
The above CPTA settings (on N, σ_t, Δ t, and T_obs) are roughly
consistent with the current CPTA observations (see Fig. 1 in <cit.>);
we take them as a surrogate for the current real data set (see more discussion
on possible effects of adding noise to this simple surrogate at the end of this
section), and estimate its sensitivity curve for individual BBH detection.
For the details of evaluating the
sensitivity curve of a given PTA on individual sources, we refer to
Section 3.3.2 of <cit.> (see also
).
According to the sensitivity curves, we estimate
the S/N for each mock BBH in each realization. We define those individual
BBHs with S/N larger than a threshold of S/N=3 or 5 as detectable ones
that can be resolved from the GWB. Then we calculate the detection probability,
i.e., the fraction of realizations that contain at least one detectable
individual BBH.
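A minimal sketch of this bookkeeping is given below; the S/N values are random placeholders rather than the paper's mock catalogues, and serve only to show how the detection probability and the expected number of detections are extracted from the realizations.

import numpy as np

rng = np.random.default_rng(0)
snr = rng.lognormal(mean=-1.0, sigma=1.5, size=(1000, 200))  # (realizations, mock BBHs); placeholder values

for threshold in (3.0, 5.0):
    detectable = snr >= threshold
    p_det = detectable.any(axis=1).mean()    # fraction of realizations with at least one detectable BBH
    n_exp = detectable.sum(axis=1).mean()    # expected number of detectable BBHs per realization
    print(f"S/N >= {threshold:.0f}: detection probability = {p_det:.1%}, expected detections = {n_exp:.2f}")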
Figure <ref> shows the strain amplitude h_0
(see Eq. <ref>) of individual BBHs from 10 realizations, together with
both the NANOGrav 15-year sensitivity curve <cit.> and the CPTA
sensitivity curve <cit.>.
The mock BBHs are generated from the reference BBH population model, in which
the detected GWB signal is assumed to be fully from cosmic BBHs. For each
realization, we record the top 30 contributors to the synthetic GWB
characteristic strain amplitude h (cf. Eq. <ref>) in each NANOGrav
frequency bin (filled circles). Among them, those dominating their frequency
bins are marked by red filled circles (i.e., the same sources as those in
Fig. <ref>). As seen from the figure, none of the top
contributors is above the 15-year NANOGrav sensitivity curve, suggesting that
current NANOGrav can hardly detect any individual BBH sources, while some
of the loudest BBHs are already above the sensitivity curve of CPTA and may be
detectable.
We find that the detection probability of individual BBHs by NANOGrav with
the 15-year sensitivity curve is 1.5%, which is negligible. This result is
consistent with the negative result of searching individual BBHs in the NANOGrav
15-year data set <cit.>.
The detection probability of individual BBHs by CPTA 3.4 yr observations can
be as large as 84.7% (or 64.0%) if setting the detection
threshold for detectable BBHs at S/N=3 (or 5), with ∼ 1.85 (∼
1.01) detections of individual BBHs being expected in each realization.
This suggests that there might be resolvable individual BBHs in the current 3.4-year
CPTA data.
The upper panel of Figure <ref> shows the frequency
distribution of those detectable individual BBH sources expected by CPTA with
the 3.4 yr observations. We obtain the expected number of detectable sources
within each NANOGrav frequency bin ⟨ N⟩(f) (thin histogram) and
the corresponding cumulative expected number of detectable sources ⟨
N⟩_ cum(f) (thick dashed curve) according to 1000 realizations
generated from the reference BBH population model. As seen from this figure,
those detectable sources are most likely to occur in the first 1–2 frequency bins of
CPTA (i.e., 1–2 f_0, where f_0=1/(3.4 yr)). Within the leftmost 14
NANOGrav frequency bins, we find 929 dominant BBHs in 1000 realizations from
the reference BBH population model. Among them, 469 are expected to be
detectable by CPTA with S/N ≥ 3 within an observation period of 3.4 yr.
Thus, the probability of detecting dominant individual BBHs in these frequency
bins by current CPTA is already ∼ 50%, for which a careful search of
individual BBHs in the current CPTA data set is strongly motivated. It is worth
noting that most of those detectable BBHs tend to have large total masses
(e.g., 9≲log_10(M_BBH/M_⊙)≲ 10), large mass
ratios (e.g., -1≲log_10 q_BBH≲ 0) and low
redshifts (e.g., 0≲ z≲ 2). Most of their signals occur within
the frequency range of ∼ 1–3× 10^-8 Hz.
For comparison, we also calculate the detection probabilities by setting
different values of the observation period T or the threshold
S/N, and list them in Table <ref>. As seen from the table, if
extending the observation time to T=5 yr, the detection probability is as
high as 99.5% even for S/N=5, with ∼ 5.6 detections being expected.
If extending the observation time to T=10 yr, a positive detection can be
guaranteed, i.e., with a detection probability of 99.9% and expected number
of detections ∼ 31.6 even when we set S/N=5.
In the reference BBH population model, we adopt the redshift dependent MBH-host
galaxy scaling relation constrained by the GWB signal by assuming the signal is
fully from the BBH population. It is possible that only a fraction of the signal
is from cosmic BBHs. For example, if the scaling relation is independent of
redshift and the same as the local one given by <cit.>, i.e., with
(slope, normalization, redshift evolution, intrinsic scatter) = (1.17, 8.69, 0, 0.29), the BBH population
model (denoted as the empirical model) would lead to a stochastic background
being only ∼ 28% of the detected GWB signal. In this case, the expected
detection probability and the number of BBH detections should be correspondingly
different from the above estimates for the reference BBH population model. For
comparison, we also generate mock BBHs from the empirical model and estimate the
detection probability and number of BBH detections, as shown in
Figure <ref> and the right two columns in Table <ref>.
As seen from the figure and table, CPTA with an observation period of 3.4 yr
can hardly detect any individual BBHs in this model, and the detection
probability is only about 8.5% assuming S/N=3. If extending the
observation period to T=5 or 10 yr, then the expected number of BBH
detections is about 0.539 or 10.4 with S/N=3, and correspondingly the
detection probability is about 41.5% or 100%. These detection numbers are
substantially less than those expected from the reference BBH population model,
which suggests that the detection of individual BBHs by PTA experiments can be
used jointly with the GWB signal to put constraints on the BBH population model
and the fraction of the GWB signal contributed by cosmic BBHs.
Detection probabilities and corresponding expected numbers of
detectable individual BBHs (in brackets) from the BBH population model by CPTA with
different settings of T and the S/N threshold. The BBH population model adopts
either the MBH–host galaxy scaling relation calibrated by the GWB signal
reported by NANOGrav (reference model) or the empirical one given by
<cit.> (empirical model).

T (yr)   Reference model, S/N=3   Reference model, S/N=5   Empirical model, S/N=3   Empirical model, S/N=5
3.4      84.7% (1.85)             64.0% (1.01)             8.5% (0.088)             5.0% (0.052)
5.0      100% (10.1)              99.5% (5.59)             41.5% (0.539)            26.8% (0.311)
10       100% (34.2)              100% (28.3)              100% (10.4)              99.9% (6.34)
We then investigate in detail how a non-detection of individual BBHs by CPTA in
the near future, if that turns out to be the case, can be converted into a constraint on the
contribution of cosmic BBHs to the GWB. Assuming A_yr and
A_yr,BBH represent the characteristic strain amplitudes of the
detected GWB and of the GWB induced by the cosmic BBHs at the reference
frequency f=1 yr^-1, respectively, we define the ratio η_BBH ≡ A_yr,BBH/A_yr to indicate the significance of the
contribution from the cosmic BBHs to the total GWB signal. With this definition,
the BBH-induced and non-BBH-induced components contribute fractions of
η_BBH^2 and 1-η_BBH^2 to the total power of the detected
GWB, respectively. By assuming any given η_BBH, we can calibrate the
BBH population synthesis model by adjusting the mass scaling relation
(Eq. <ref>) as described in Section <ref> and then
generate mock BBHs according to the calibrated model. With the mock sample, we
can calculate the detection probability of individual BBHs by CPTA with any
given observational period T. Then we can estimate the constraint on
η_BBH if none of the cosmic BBHs was detected by CPTA with an
observational period of T as described in the previous paragraph.
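In these terms the bookkeeping is simple: the BBH-induced amplitude is η_BBH A_yr and the BBH share of the GWB power is η_BBH^2. A small sketch (the amplitude below is the HD-signal value quoted earlier, used only for illustration):

A_YR_TOTAL = 2.4e-15   # characteristic strain amplitude of the detected GWB at f = 1/yr (quoted above)

def bbh_share(eta_bbh, a_yr=A_YR_TOTAL):
    """Return (BBH-induced strain amplitude at f = 1/yr, BBH fraction of the total GWB power)."""
    return eta_bbh * a_yr, eta_bbh ** 2

for eta in (0.5, 0.3):
    a_bbh, power_frac = bbh_share(eta)
    print(f"eta_BBH = {eta}: A_yr,BBH = {a_bbh:.2e}, power fraction = {power_frac:.0%}")
# eta_BBH = 0.5 -> 25% of the GWB power; eta_BBH = 0.3 -> ~10%, the cases discussed below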
Figure <ref> shows the inferred 95% upper limit on
η_BBH as a function of the CPTA observation time T (or the GWB
detection S/N), assuming that no individual BBHs could be detected by
CPTA within T. That is, at a given T, the detection probability of
individual BBHs by CPTA should be greater than 95% if η_BBH is
above the value indicated by the curves in the figure. As seen from the figure,
the contribution of the cosmic BBHs to the detected GWB can be effectively
constrained at T≳ 4–5 yr, or equivalently, when the GWB detection
has S/N ≳ 5–7, jointly by the GWB signal and the
detection/non-detection of individual BBHs. If a non-detection is made by CPTA
with T∼ 6–7.5 yr (with GWB detection S/N ∼ 8–10),
η_BBH should be below 0.5, and the contribution fraction of cosmic
BBHs to the total power of the detected GWB signal should be ≲ 25%. If
CPTA does not detect individual BBHs with T∼ 7.5–9.5 yr (or with
GWB detection S/N ∼ 10–12), η_BBH should be below 0.3,
which thus suggests that the GWB strain amplitude produced by cosmic BBHs should be
below that predicted by the empirical model.
Adopting a similar BBH population
synthesis model, in which the MBH–host galaxy scaling relation is the same as
the local one given by <cit.> without redshift evolution, as an example,
the GWB induced by the cosmic BBHs may be a factor ∼ 0.3 of the
detected GWB strain signal. In this case, our calculations show that a 3.4-year accumulation of the CPTA
data leads to a detectability of individual BBHs ≲ 8.5%, and that a
7.4-year (or 8.1-year) accumulation of the data is needed for CPTA to have a
detection probability of 95% (or 99%).
Note that the above detection probabilities and numbers for CPTA are obtained
based on an idealized estimate for the sensitivity curve (the blue curve
in Figs. <ref> and <ref>) without
considering some complicating factors (e.g., unmodeled red noise or more
complicated noise properties) that may affect the sensitivity.
These factors may induce special features to the sensitivity curve at some
specific frequencies, and therefore affect the detection of individual GW
sources at these frequencies. Given the fact that such effects are hard to
quantify for a mock PTA configuration, we simply assume a conservative case, in
which the actual sensitivity for individual source detection is about a factor
of 3 times worse than the idealized estimation presented above.
In that assumed case, for a detection threshold of S/N=3, we find that the
detection probability resulting from the reference model is 11.9% for
T=3.4 yr.
A detection probability of 95% (or 99%) is expected for the reference
model if extending the observation period to T=6.7 (or 7.3) yr,
while such a detection probability from the empirical model requires an
observation time period of 12.6 (or 13.8) yr.
Even in the above assumed conservative case, it is still plausible to conclude
that the breakthrough of individual BBH detection should be realized within a
few to about ten years, and strong constraints on the BBH and/or
cosmological origin of the GWB can be obtained in the near future.
§ CONCLUSIONS
In this work, we investigate the implications of the recently reported evidence
for a stochastic GWB by several PTAs <cit.>
on the detection of individual BBHs. We first adopt the reported GWB signal to
constrain the BBH population synthesis model, focusing specifically on the
redshift evolution of the MBH–host galaxy scaling relation, by assuming that
the GWB signal is entirely contributed by cosmic BBHs. With the constrained
model, we then generate random realizations of the cosmic BBH populations to
study the detections of individual BBHs as well as the dominance of individual
BBHs over the stochastic GWB in different GW frequency bins. The detected GWB
implies a significant positive redshift evolution of the MBH–host galaxy
scaling relation, which is consistent with <cit.>. We find a
considerable probability that signatures caused by some individual BBHs dominating
their frequency bins have already emerged in the current NANOGrav 15-year
free-spectrum data set. Their occurrence probabilities are 63%
and 95% within the leftmost 15 and 27 NANOGrav frequency bins,
corresponding to f≤ 30 nHz and f≤ 54 nHz, respectively. However, given
the current NANOGrav's capability <cit.>, the probability of detecting
these sources individually is rather low (i.e., ≲ 2%). We further find
that those loudest BBHs, if any, in the current CPTA data set (3.4 yr) may be
detectable with a detection probability of ∼ 85% for a detection S/N
threshold of 3, and with 1.85 such detectable BBHs being expected. If
extending the observation time of CPTA to 5 yr, a positive detection of
individual BBHs is almost guaranteed, i.e., a successful detection in each
realization, with a mean detection number of ∼ 10 expected in each.
If the cosmic BBHs only contribute a fraction of the detected signal, the
evolution of the MBH-host galaxy scaling relation may not be required and the
detection of individual BBHs is less likely. However, if the contribution from
cosmic BBHs to the total power of the detected GWB signal is ≳ 10% (or
η_BBH ≳ 0.3), a positive detection of individual BBHs by CPTA
can still be expected with an observation period of ∼ 10 yr.
Jointly with the detected GWB signal, even if no individual BBHs are detected by
CPTA in the coming several years, the non-detection can be converted to the
constraint on the upper limit of the cosmic BBH contribution to the detected
GWB. For example, the non-detection of individual BBHs by CPTA with T∼
6–7.5 or 7.5–9.5 yr suggests that the ratio of the BBH-induced GWB
strain amplitude to the detected GWB strain amplitude (η_BBH) is
below 0.5 or 0.3, and the contribution fraction of the BBHs to the total
power of the detected GWB signal is below 25% or 10%. We conclude that the
detection (or non-detection) of individual BBHs by CPTA in the coming several
years can play an important role in not only interpreting the astrophysical
origin of the recently detected GWB signal but also putting strong constraint on
the contribution from the cosmic BBHs to the GWB signal.
§ ACKNOWLEDGEMENTS
This work is partly supported by the National SKA Program of China (grant no.
2020SKA0120101), National Key Program for Science and Technology Research and
Development (grant nos. 2022YFC2205201, 2020YFC2201400), and the National
Natural Science Foundation of China (grant nos. 12173001, 12273050, 11721303,
11991052).
0
natexlab#1#1
[Afzal et al.(2023)]NG23alter
Afzal, A., Agazie, G., Anumarlapudi, A., et al. 2023,
https://ui.adsabs.harvard.edu/abs/2023ApJ...951L..11A
, 951, L11. doi:10.3847/2041-8213/acdc91
[Agazie et al.(2023a)]NG23hd
Agazie, G., Anumarlapudi, A., Archibald, A. M., et al. 2023,
https://ui.adsabs.harvard.edu/abs/2023ApJ...951L...8A
, 951, L8. doi:10.3847/2041-8213/acdac6
[Agazie et al.(2023b)]NG23indv
Agazie, G., Anumarlapudi, A., Archibald, A. M., et al. 2023,
https://ui.adsabs.harvard.edu/abs/2023ApJ...951L..50A
, 951, L50. doi:10.3847/2041-8213/ace18a
[Agazie et al.(2023c)]NG23constraint
Agazie, G., Anumarlapudi, A., Archibald, A. M., et al. 2023,
https://ui.adsabs.harvard.edu/abs/2023ApJ...952L..37A
, 952, L37. doi:10.3847/2041-8213/ace18b
[Antoniadis et al.(2022)]IPTA22cps
Antoniadis, J., Arzoumanian, Z., Babak, S., et al. 2022,
https://ui.adsabs.harvard.edu/abs/2022MNRAS.tmp...73A
. doi:10.1093/mnras/stab3418
[Antoniadis et al.(2023a)]EPTA23hd
Antoniadis, J., Arumugam, P., Arumugam, S., et al. 2023,
https://ui.adsabs.harvard.edu/abs/2023arXiv230616214A
arXiv:2306.16214. doi:10.48550/arXiv.2306.16214
[Antoniadis et al.(2023b)]EPTA23indv
Antoniadis, J., Arumugam, P., Arumugam, S., et al. 2023,
https://ui.adsabs.harvard.edu/abs/2023arXiv230616226A
arXiv:2306.16226. doi:10.48550/arXiv.2306.16226
[Antoniadis et al.(2023c)]EPTA23constraint
Antoniadis, J., Arumugam, P., Arumugam, S., et al. 2023,
https://ui.adsabs.harvard.edu/abs/2023arXiv230616227A
arXiv:2306.16227. doi:10.48550/arXiv.2306.16227
[Arzoumanian et al.(2020)]NG20cps
Arzoumanian, Z., Baker, P. T., Blumer, H., et al. 2020,
https://ui.adsabs.harvard.edu/abs/2020ApJ...905L..34A
, 905, L34. doi:10.3847/2041-8213/abd401
[Bécsy et al.(2023)]Becsy23
Bécsy, B., Cornish, N. J., Meyers, P. M., et al. 2023,
https://ui.adsabs.harvard.edu/abs/2023arXiv230904443B
arXiv:2309.04443. doi:10.48550/arXiv.2309.04443
[Begelman et al.(1980)]BBR80
Begelman, M. C., Blandford, R. D., & Rees, M. J. 1980,
https://ui.adsabs.harvard.edu/abs/1980Natur.287..307B
, 287, 307. doi:10.1038/287307a0
[Behroozi et al.(2019)]Behroozi19
Behroozi, P., Wechsler, R. H., Hearin, A. P., et al. 2019,
https://ui.adsabs.harvard.edu/abs/2019MNRAS.488.3143B
, 488, 3143. doi:10.1093/mnras/stz1182
[Bian et al.(2023)]BianLG23
Bian, L., Ge, S., Shu, J., et al. 2023,
https://ui.adsabs.harvard.edu/abs/2023arXiv230702376B
arXiv:2307.02376. doi:10.48550/arXiv.2307.02376
[Chen et al.(2017)]ChenSY17
Chen, S., Sesana, A., & Del Pozzo, W. 2017,
https://ui.adsabs.harvard.edu/abs/2017MNRAS.470.1738C
, 470, 1738. doi:10.1093/mnras/stx1093
[Chen et al.(2021)]EPTA21cps
Chen, S., Caballero, R. N., Guo, Y. J., et al. 2021,
https://ui.adsabs.harvard.edu/abs/2021MNRAS.508.4970C
, 508, 4970. doi:10.1093/mnras/stab2833
[Chen et al.(2020)]CYL20bbh
Chen, Y., Yu, Q., & Lu, Y. 2020,
https://ui.adsabs.harvard.edu/abs/2020ApJ...897...86C
, 897, 86. doi:10.3847/1538-4357/ab9594 (CYL20)
[Chen et al.(2023)]CYL23cgws
Chen, Y., Yu, Q., & Lu, Y. 2023,
https://ui.adsabs.harvard.edu/abs/2023ApJ...955..132C
, 955, 132. doi:10.3847/1538-4357/ace59f
[Curyło & Bulik(2023)]Curylo23
Curyło, M. & Bulik, T. 2023,
https://ui.adsabs.harvard.edu/abs/2023arXiv230807720C
arXiv:2308.07720. doi:10.48550/arXiv.2308.07720
[Ellis et al.(2023)]Ellis23
Ellis, J., Fairbairn, M., Franciolini, G., et al. 2023,
https://ui.adsabs.harvard.edu/abs/2023arXiv230808546E
arXiv:2308.08546. doi:10.48550/arXiv.2308.08546
[Enoki & Nagashima(2007)]Enoki07
Enoki, M. & Nagashima, M. 2007,
https://ui.adsabs.harvard.edu/abs/2007PThPh.117..241E
Progress of Theoretical Physics, 117, 241. doi:10.1143/PTP.117.241
[Gardiner et al.(2023)]Gardiner23
Gardiner, E. C., Kelley, L. Z., Lemke, A.-M., et al. 2023,
https://ui.adsabs.harvard.edu/abs/2023arXiv230907227G
arXiv:2309.07227. doi:10.48550/arXiv.2309.07227
[Goncharov et al.(2021)]PPTA21cps
Goncharov, B., Shannon, R. M., Reardon, D. J., et al. 2021,
https://ui.adsabs.harvard.edu/abs/2021ApJ...917L..19G
, 917, L19. doi:10.3847/2041-8213/ac17f4
[Guo et al.(2022)]GLY22nf
Guo, X., Lu, Y., & Yu, Q. 2022,
https://ui.adsabs.harvard.edu/abs/2022ApJ...939...55G
, 939, 55. doi:10.3847/1538-4357/ac9131
[Hellings & Downs(1983)]HD83
Hellings, R. W. & Downs, G. S. 1983,
https://ui.adsabs.harvard.edu/abs/1983ApJ...265L..39H
, 265, L39. doi:10.1086/183954
[Kelley et al.(2018)]Kelley18
Kelley, L. Z., Blecha, L., Hernquist, L., et al. 2018,
https://ui.adsabs.harvard.edu/abs/2018MNRAS.477..964K
, 477, 964. doi:10.1093/mnras/sty689
[Kocsis & Sesana(2011)]Kocsis11
Kocsis, B. & Sesana, A. 2011,
https://ui.adsabs.harvard.edu/abs/2011MNRAS.411.1467K
, 411, 1467. doi:10.1111/j.1365-2966.2010.17782.x
[Kormendy & Ho(2013)]KH13
Kormendy, J. & Ho, L. C. 2013,
https://ui.adsabs.harvard.edu/abs/2013ARA
, 51, 511. doi:10.1146/annurev-astro-082708-101811
[Maggiore(2018)]Maggiore18
Maggiore, M. 2018,
Gravitational Waves: Vol. 2, Astrophysics and cosmology (Oxford University Press)
[McConnell & Ma(2013)]MM13
McConnell, N. J. & Ma, C.-P. 2013,
https://ui.adsabs.harvard.edu/abs/2013ApJ...764..184M
, 764, 184. doi:10.1088/0004-637X/764/2/184
[McLure et al.(2006)]McLure06
McLure, R. J., Jarvis, M. J., Targett, T. A., et al. 2006,
https://ui.adsabs.harvard.edu/abs/2006MNRAS.368.1395M
, 368, 1395. doi:10.1111/j.1365-2966.2006.10228.x
[Merloni et al.(2010)]Merloni10
Merloni, A., Bongiorno, A., Bolzonella, M., et al. 2010,
https://ui.adsabs.harvard.edu/abs/2010ApJ...708..137M
, 708, 137. doi:10.1088/0004-637X/708/1/137
[Mingarelli et al.(2017)]Mingarelli17
Mingarelli, C. M. F., Lazio, T. J. W., Sesana, A., et al. 2017,
https://ui.adsabs.harvard.edu/abs/2017NatAs...1..886M
Nature Astronomy, 1, 886. doi:10.1038/s41550-017-0299-6
[Moore et al.(2015)]Moore15pta
Moore, C. J., Taylor, S. R., & Gair, J. R. 2015,
https://ui.adsabs.harvard.edu/abs/2015CQGra..32e5004M
Classical and Quantum Gravity, 32, 055004. doi:10.1088/0264-9381/32/5/055004
[Muhamed Kozhikkal et al.(2023)]Muhamed23
Muhamed Kozhikkal, M., Chen, S., Theureau, G., et al. 2023,
https://ui.adsabs.harvard.edu/abs/2023arXiv230518293M
arXiv:2305.18293. doi:10.48550/arXiv.2305.18293
[Pacucci et al.(2023)]Pacucci23
Pacucci, F., Nguyen, B., Carniani, S., et al. 2023,
https://ui.adsabs.harvard.edu/abs/2023ApJ...957L...3P/exportcitation
, 957, L3. doi:10.3847/2041-8213/ad0158
[Phinney(2001)]Phinney01
Phinney, E. S. 2001,
https://ui.adsabs.harvard.edu/abs/2001astro.ph..8028P
astro-ph/0108028. doi:10.48550/arXiv.astro-ph/0108028
[Rajagopal & Romani(1995)]Rajagopal95
Rajagopal, M. & Romani, R. W. 1995,
https://ui.adsabs.harvard.edu/abs/1995ApJ...446..543R
, 446, 543. doi:10.1086/175813
[Rasskazov & Merritt(2017)]Rasskazov17
Rasskazov, A. & Merritt, D. 2017,
https://ui.adsabs.harvard.edu/abs/2017PhRvD..95h4032R
, 95, 084032. doi:10.1103/PhysRevD.95.084032
[Ravi et al.(2015)]Ravi15
Ravi, V., Wyithe, J. S. B., Shannon, R. M., et al. 2015,
https://ui.adsabs.harvard.edu/abs/2015MNRAS.447.2772R
, 447, 2772. doi:10.1093/mnras/stu2659
[Reardon et al.(2023)]PPTA23hd
Reardon, D. J., Zic, A., Shannon, R. M., et al. 2023,
https://ui.adsabs.harvard.edu/abs/2023ApJ...951L...6R
, 951, L6. doi:10.3847/2041-8213/acdd02
[Rodriguez-Gomez et al.(2015)]RodriguezGomez15
Rodriguez-Gomez, V., Genel, S., Vogelsberger, M., et al. 2015,
https://ui.adsabs.harvard.edu/abs/2015MNRAS.449...49R
, 449, 49. doi:10.1093/mnras/stv264
[Roebber et al.(2016)]Roebber16
Roebber, E., Holder, G., Holz, D. E., et al. 2016,
https://ui.adsabs.harvard.edu/abs/2016ApJ...819..163R
, 819, 163. doi:10.3847/0004-637X/819/2/163
[Rosado et al.(2015)]Rosado15
Rosado, P. A., Sesana, A., & Gair, J. 2015,
https://ui.adsabs.harvard.edu/abs/2015MNRAS.451.2417R
, 451, 2417. doi:10.1093/mnras/stv1098
[Sesana et al.(2008)]Sesana08
Sesana, A., Vecchio, A., & Colacino, C. N. 2008,
https://ui.adsabs.harvard.edu/abs/2008MNRAS.390..192S
, 390, 192. doi:10.1111/j.1365-2966.2008.13682.x
[Sesana et al.(2009)]Sesana09
Sesana, A., Vecchio, A., & Volonteri, M. 2009,
https://ui.adsabs.harvard.edu/abs/2009MNRAS.394.2255S
, 394, 2255. doi:10.1111/j.1365-2966.2009.14499.x
[Valtolina et al.(2023)]Valtolina23
Valtolina, S., Shaifullah, G., Samajdar, A., et al. 2023,
https://ui.adsabs.harvard.edu/abs/2023arXiv230913117V
arXiv:2309.13117. doi:10.48550/arXiv.2309.13117
[Xu et al.(2023)]CPTA23hd
Xu, H., Chen, S., Guo, Y., et al. 2023,
https://ui.adsabs.harvard.edu/abs/2023RAA....23g5024X
Research in Astronomy and Astrophysics, 23, 075024.
doi: 10.1088/1674-4527/acdfa5
[Yu(2002)]Yu02
Yu, Q. 2002,
https://ui.adsabs.harvard.edu/abs/2020ApJ...897...86C
, 331, 935. doi:10.1046/j.1365-8711.2002.05242.x
[Zhang et al.(2012)]ZLY12
Zhang, X., Lu, Y., & Yu, Q. 2012,
https://ui.adsabs.harvard.edu/abs/2012ApJ...761....5Z
, 761, 5. doi:10.1088/0004-637X/761/1/5
| The gravitational wave (GW) radiation from the cosmic population of supermassive
binary black holes (BBHs) forms a stochastic GW background (GWB), which is
long expected to be detected by the pulsar timing array (PTA) experiments. The
evidence of the presence of a GWB at the nanohertz band has been recently
reported by several PTAs, including the North American Nanohertz Observatory for
Gravitational Waves <cit.>,
the collaboration of the European PTA and the Indian PTA (EPTA+InPTA;
),
the Parkes PTA <cit.>, and the Chinese PTA
<cit.>, with a confidence level of ∼ 2-4.6σ.
However, the origin of this signal has not yet been determined.
The detected signal can be consistent with that predicted for
a cosmic BBH origin, considering the uncertainties in the model estimation
(e.g., <cit.>). It could also (partly) originate from cosmic
strings, phase transitions, domain walls, primordial black holes, or other
processes in the early universe, which have been intensively discussed recently in the
literature (e.g., <cit.>).
The GWB of the astrophysical origin may be different from the GWB of the
cosmological origin at least in the following two aspects.
First, the former one results in a characteristic strain spectrum following the
canonical power law with a slope of -2/3 in the nanohertz band,[
Note that at low frequencies (e.g., f≲ 10^-9 Hz), the
GWB may deviate from the canonical -2/3 power law, considering
that the environmental coupling effect <cit.> and the orbital
eccentricities of cosmic BBHs <cit.> both lead to the bending of the
GWB spectrum <cit.>.
] while the latter one may
result in a GWB much different from the -2/3 power-law, depending on the
detailed models <cit.>.
With the accumulation of the PTA observation time, the GWB spectrum may be
accurately measured in the future and thus used to distinguish these two
scenarios. Second, the former fluctuates significantly at high
frequencies (≳ 10^-8 Hz) due to the small-number statistics of
individual BBHs <cit.>, while the latter does not.
It is expected that the signals of some individual BBHs can be loud enough to
stand out from the GWB composed of numerous weaker sources (e.g.,
<cit.>). If the GWB signal detected by the current
PTA experiments indeed originates from BBHs, there would be some loud
individual BBHs hiding in the data. Detection of any such individual BBHs by PTA
experiments would provide a consistency check for the BBH origin and/or
constraint on the contribution fraction of BBHs to the GWB, though currently no
evidence for individual BBHs was found in the data sets of NANOGrav
<cit.> and EPTA <cit.>.
In this paper, we forecast the detection of individual BBHs by PTA experiments,
especially CPTA, with consideration of the constraint from the GWB signal
reported recently on the cosmic BBH model, and investigate the possibility to
constrain the origin of the GWB jointly by the PTA detection of individual BBH
sources and the GWB. The paper is organized as follows. In
Section <ref>, we first briefly introduce the population synthesis
model for cosmic BBHs and constrain the model by using the GWB spectrum detected
recently by PTA experiments, and then describe the method to generate mock
samples for cosmic BBHs. With these samples, we investigate the detection
prospects of individual BBHs for CPTA. Our main results are presented in
Section <ref>, and the main conclusions are summarized in
Section <ref>. | null | null | null | null | null |
http://arxiv.org/abs/2409.17853v1 | 20240926135832 | Small Leidenfrost droplet dynamics | [
"Benjamin Sobac",
"Alexey Rednikov",
"Pierre Colinet"
] | physics.flu-dyn | [
"physics.flu-dyn",
"cond-mat.soft"
] |
Small Leidenfrost droplet dynamics
==================================
§ ABSTRACT
An isolated Leidenfrost droplet levitating over its own vapor above a superheated flat substrate is considered theoretically, the superheating for water being up to several hundred degrees above the boiling temperature. The focus is on the limit of small, practically spherical droplets of several tens of micrometers or less. This may occur when the liquid is sprayed over a hot substrate, or just be a late life stage of an initially large Leidenfrost droplet. A rigorous numerically-assisted analysis is carried out within verifiable assumptions such as quasi-stationarities and small Reynolds/Péclet numbers. It is considered that the droplet is surrounded by its pure vapor. Simple fitting formulas of our numerical data for the forces and evaporation rates are preliminarily obtained, all respecting the asymptotic behaviors (also investigated) in the limits of small and large levitation heights. They are subsequently used within a system of ODEs to study the droplet dynamics and take-off (drastic height increase as the droplet vaporizes). A previously known quasi-stationary inverse-square root law for the droplet height as a function of its radius (at the root of the take-off) is recovered, although we point out different prefactors in the two limits. Deviations of a dynamic nature therefrom are uncovered as the droplet radius further decreases due to evaporation, improving the agreement with experiment. Furthermore, we reveal that, if initially large enough, the droplets vanish at a universal finite height (just dependent on the superheat and fluid properties). Scalings in various distinguished cases are obtained in the way.
§ INTRODUCTION
When a volatile liquid droplet is placed on a hot solid surface, superheated well above the boiling temperature, it neither touches the substrate nor boils, but rather floats on a thin film of its own vapor. This fascinating phenomenon, known as the Leidenfrost effect, does not cease to attract attention since its first descriptions about 300 years ago <cit.>. This is due to not only the myriad of intriguing and unexpected behaviors a droplet can exhibit in this state,
but also its relevance across a wide range of industrial and technological processes, spanning from the traditional heat transfer applications to the emerging field of multiphase milli-/micro-fluidics. See, for example, the review articles <cit.>, dedicated book chapters in <cit.>, and the many references therein.
The vapor film, a key feature of the Leidenfrost state, ensures the droplet levitation while acting as a thermal insulator, resulting in relatively low evaporation rates and hence long lifetimes of the droplet. In this state, the droplet's weight is balanced by the pressure within such a vapor cushion squeezed by the slowly and steadily evaporating drop. With no contact with the substrate, the observable shapes of the droplets are governed by a balance between capillarity and gravity similarly to a perfectly non-wetting (superhydrophobic) situation. Denoting the capillary length by ℓ_c, droplets with radii R smaller than ℓ_c remain quasi-spherical while puddles larger than ℓ_c are flattened by gravity, whose height is limited by ≈2ℓ_c <cit.>. The profile of the underlying vapor film is non-trivial. For a large droplet with R≳ℓ_c, the vapor film exhibits a pocket-like structure composed by an internal vapor `pocket' surrounded by a thin neck. As the drop gets smaller, the vapor film slimes down, the drop getting closer to the substrate. When the droplet radius is small enough as compared to ℓ_c, the vapor pocket disappears completely, and the droplet becomes quasi-spherical with a small circular area slightly flattened at the bottom. Accurate interferometric measurements of the vapor film thickness profile <cit.> turn out to be in a good agreement with a refined theoretical modeling <cit.> coupling lubricated vapor flow, capillarity and hydrostatic pressure effects, itself recently confirmed numerically by <cit.>. Note that the main scaling laws featuring the shapes of a Leidenfrost droplet and its evaporation dynamics can be found in <cit.>.
In practice, this absence of contact between the Leidenfrost droplet and the substrate leads to very rich dynamics. For large puddle-like drops, the vapor pocket grows until it eventually pops up as a central `chimney' due to a Rayleigh-Taylor mechanism <cit.>. Instability of large droplets can also occur (either spontaneously or forced) in the form of `star-faceted' shapes when azimuthal surface oscillations develop along the periphery of the droplets <cit.>. Self-induced spontaneous oscillations can also occurs in the vertical plane yielding to the recently reported bobing, bouncing or trampolining dynamics when Leidenfrost droplets reach moderate and small size with R⩽ℓ_c <cit.>. Other spectacular behaviors related to their high mobility have been observed. These include Leidenfrost wheels, when a droplet initially at rest spontaneously rolls and moves over a flat surface like a wheel due to symmetry breaking in the internal flow of the liquid <cit.>, and self-propelling of Leidenfrost droplets when interacting with substrate breaking the axi-symmetry, either due to surface topography such as ratchets or herringbones <cit.>, or temperature gradients <cit.>. Thus, droplets move in a direction dictated by the patterns due to symmetry breaking of the vapor layer. Strategies have emerged to control the motion and manipulate these droplets. In addition to geometric and thermal heterogeneities, chemical patterns of the surface can also be exploited to tailor the vapor film, enabling the stretching, sloshing, spinning, propelling, or trapping of a Leidenfrost droplet <cit.>.
As compared to large and moderate-size, the dynamics of small, near-spherical Leidenfrost drops (R≪ℓ_c) has not received so much attention. In their seminal work, <cit.> first explored the final fate of Leidenfrost drops as they became very small, just moments before disappearing. By spraying tiny droplets of water or ethanol in the size range of about 1-30 μm onto a superheated substrate, they discovered that when small enough (i.e, with R below a characteristic radius corresponding to the breakup of the lubrication approximation), Leidenfrost droplets took off from the heated substrate with an elevation h∝ R^-1/2, as predicted by <cit.> and also by <cit.>. Remarkably, in this regime, droplets become too light to stand over the upward force generated by the pressure due to evaporation, and they reach higher and higher elevations while vaporizing. This behavior drastically contrasts with what is observed for larger Leidenfrost droplets. More recently, <cit.> observed that a second final fate, other than lift-off, is possible for Leidenfrost droplets. Namely, if the liquid droplet is not pure or contaminant-free enough, small Leidenfrost droplets are unable to take off, but instead disappear by exploding with an audible crack.
Here, we propose to theoretically revisit the dynamics of small spherical Leidenfrost droplets with the aim of comprehensively and thoroughly analyzing the mechanisms involved in their final fate. Thanks to a model including a realistic description of the coupling between hydrodynamics, heat transfer and evaporation, this work seems to be the first to provide exact estimates of the drop elevation as a function of the physical parameters and without any fitting parameter. After numerically computing the entirety of fluxes, evaporation rates and forces, a master curve for droplet elevation as a function of its size is derived by simply balancing the drop weight with the upward evaporation-induced hydrodynamic force. While the scaling law agrees with <cit.>, there appear subtleties concerning the prefactor. Moreover, the analysis reveals that such a classical quasi-steady description is not fully sufficient to describe the take-off phenomenon. Even at these small scales, further dynamical effects must be taken into account to achieve a good agreement with the original experimental data of <cit.>.
§ STATEMENT OF THE PROBLEM, PREMISES AND OUTLOOK
Consider a small evaporating spherical droplet of radius R in a Leidenfrost state levitating at a height h above a superheated substrate at a `wall' temperature T_w, as sketched in Fig. <ref>. The substrate is flat and horizontal. We shall be interested in the take-off of Leidenfrost droplets <cit.>, which occurs in the realm of small droplets (R of the order of tens of μm) with a negligible deviation from the spherical shape. The (immediate) surroundings of the droplet are assumed to be saturated with vapour (totally displacing the air) and heated through to the substrate temperature. Thus, T_w is here also an effective overall ambient temperature (i.e. T_amb=T_w). The droplet is assumed isothermal at saturation (boiling) temperature T_sat. The superheat is given by Δ T=T_w-T_sat>0. The typical values considered here are T_sat=100^∘C (water at atmospheric pressure), T_w=400^∘C, and Δ T=300^∘C. Such superheat occurs e.g. in the experiments by <cit.>.
Mathematically, the goal of the present consideration is obtaining an interrelation between h, R (in particular, as functions of time t due to the droplet evaporation) and the parameters of the problem, such as Δ T, g (gravity acceleration, 9.81 m/s^2) and the liquid and vapour properties. The latter include ρ_l (liquid density, defined at T_sat) as well as the following vapour properties: ρ_v (density), μ_v (dynamic viscosity), ν_v=μ_v/ρ_v (kinematic viscosity), λ_v (thermal conductivity), c_p,v (heat capacity at constant pressure), α_v=λ_v/(ρ_v c_p,v) (thermal diffusivity), Ł (latent heat of evaporation). These may vary considerably in the temperature range between T_sat and T_w.
However, for the sake of simplicity, we shall here assume them constant and defined at the mid-temperature 1/2(T_w+T_sat), except for Ł defined at T_sat, similarly to the approach used elsewhere <cit.>. The relevant property values are provided in Appendix <ref>.
Other key assumptions include negligible advective/convective effects (small Péclet and Reynolds numbers), so that the temperature field in the vapour is governed by heat conduction, while the evaporative flow from the droplet can be considered by means of the Stokes approximation.
Quasi-steadiness of the temperature and velocity fields, in spite of R and h changing in time due to evaporation, is another key assumption of the analysis.
In other words, these fields and the evaporation fluxes and forces they determine are merely functions of the instantaneous values of R and h and do not depend on the history. It is under this premise that a preliminary calculation of these quantities is carried out in <ref>. The validity of this and other assumptions is verified a posteriori in their due course.
The quasi-steadiness assumption is also applied at first when it comes to the force balance on the droplet in <ref>, permitting to predict the levitation height h and make a first comparison with experiment. Yet, certain limitations are thereby disclosed, inspiring consideration of a more general droplet dynamics in <ref>–<ref>.
However, even in such a situation, the quasi-steadiness of quantities like those calculated in <ref> is still assumed to hold.
An important geometric parameter of the configuration (figure <ref>), meriting a special notation, is the ratio of the droplet's height and radius:
δ=h/R .
In the present study, we shall be interested in a full range of this relative-height' parameter, from very small to very large. The large values are expected for small droplets (small R) upon a take-off <cit.>. In contrast, small δ are attained for larger droplets. In this way, we arrive at a transition from spherical Leidenfrost droplets to Leidenfrost droplets for which deformation (at first at the bottom slightly flattened by gravity) becomes essential. Such a transition is touched upon in <ref>.
§ BASIC CALCULATIONS: FIELDS, FLUXES AND FORCES
Dimensionless variables are introduced using the scales given in table <ref> (definitions to be given in their due course). For simplicity and expecting no confusion, no notation distinction is made between the original, dimensional variables and their dimensionless versions in the present section (the distinction being clear from the context). We just note that
a dimensionless temperature is introduced as
T̂=(T-T_sat)/Δ T
where recall that Δ T=T_w-T_sat. Hereafter, in the same spirit, the hats are omitted for the sake of brevity.
§.§ Temperature field
As stipulated in <ref>,
the heat transport in the gas phase is conductive and quasi-steady. Thus, the thermal problem is decoupled from the evaporative velocity field, and the dimensionless temperature field T is governed by the Laplace equation
∇^2T=0.
It is subject to the boundary conditions:
T = 1 on the hot substrate,
T → 1 far away from the drop,
T = 0 on the droplet surface.
Although an exact solution in bipolar coordinates can be found e.g. using the methods by <cit.>, it is rather cumbersome so that we eventually opt for a numerical solution using COMSOL Multiphysics.
The results of the simulations are shown in Fig. <ref>(a) for three different values of the separating distance δ: a large droplet very close to the substrate with δ= 0.1; a droplet at a distance from the substrate comparable to its radius with δ= 1, and a small droplet beginning to be far away from the substrate with δ=5. One immediately observes that at small δ, the temperature difference is squeezed into a thin film between the droplet and the substrate. At large δ, the temperature field approaches a spherically symmetric one, as expected. Other results displayed in figure <ref> will be discussed later.
§.§ Evaporation rate
At the droplet surface, the evaporation mass flux j [kg/m^2 s] is at the expense of the heat coming from the superheated surroundings through the vapor phase: j=(λ_v/ℒ) n·∇T, where n is the (external) unit normal vector. In dimensionless terms (cf. table <ref>), this reads
j = n·∇T .
Using the temperature field computed in <ref>, the profiles of the evaporation flux j are calculated from (<ref>) and shown in Fig. <ref>(b). One can appreciate that due to the presence of the hot substrate, j is maximum at the base and decreases towards the apex, where the minimum is attained. The closer the droplet is to the substrate (the smaller δ is), the more the profile of j is non-uniform and the values of j are large.
At small δ, one obviously obtains max(j) ∝ 1/δ (heat conduction across a thin vapour layer). When the relative droplet height δ increases, the non-uniformity of j weakens and the average of j decreases. Eventually, j tends to a uniform value of 1 for δ≫ 1 (as for a droplet in an unbounded medium).
The (global) evaporation rate J can be directly deduced by integrating the evaporation flux all over the droplet surface:
J=∬ j 𝚍S .
Figure <ref> reports the computed values of J as a function of δ. As expected from the knowledge of the j behaviour, J diverges as δ→ 0 and decreases to saturate at 4π as δ→ +∞. Such asymptotic behaviours are investigated in detail in Appendices <ref> and <ref> and also represented in figure <ref>. A good simple fit of the numerical data for J, respecting the leading-order asymptotic behaviours, is given by
J(δ)=4π[ 1+1/2ln(1+1/δ) ] ,
where a maximum deviation from the data does not exceed 2.7%.
However, a more precise fit is also provided for reference in Appendix <ref>.
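For convenience, the simple fit above is trivial to evaluate numerically; a minimal sketch:

import numpy as np

def J_fit(delta):
    """Dimensionless evaporation rate fit above; J -> 4*pi as delta -> infinity."""
    return 4.0 * np.pi * (1.0 + 0.5 * np.log1p(1.0 / delta))

for delta in (0.1, 1.0, 5.0, 100.0):
    print(f"delta = {delta:6.1f}: J/(4*pi) = {J_fit(delta) / (4.0 * np.pi):.3f}")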
§.§ Velocity field
In accordance with the approach followed in the present paper ( <ref>), the evaporative flow field, generated by droplet evaporation, is considered in the Stokes and quasi-steady approximation. Thus, we proceed from the continuity and Stokes equations
∇·u = 0,
∇^2 u - ∇p = 0.
The following boundary conditions are used:
u = 0 on the hot substrate,
u → 0 far away from the drop,
t·u = 0 and n·u = j on the droplet surface.
Here u and p are the dimensionless velocity and pressure fields in the vapour (cf. the scales in table <ref>), and t is the unit tangential vector.
In the first boundary condition (<ref>), a possible internal flow in the droplet is neglected relative to the velocity scale in the vapour, hence no slip.
The last boundary condition (<ref>) contains the driving factor of the flow field, where the normal velocity at the droplet surface is determined by the evaporation flux (<ref>), which is in turn determined by the temperature field obtained from the formulation (<ref>)–(<ref>) in <ref>. Similarly to <ref>, this hydrodynamic part of the problem is also solved numerically using COMSOL Multiphysics.
The computation results for the vapour velocity fields are added into figure <ref>a, mirroring the temperature fields and also displaying the streamlines. For large relative heights δ, the streamlines remain straight near the droplet's surface, indicating that the flow is almost spherically symmetric there and only slightly disturbed by the substrate. Farther from the droplet, the streamlines are significantly bent due to the substrate presence. Higher velocity field values are attained for smaller δ. This is not only due to a profound maximum in the evaporation flux due to the substrate proximity at small δ (as in figure <ref>b1), but also additionally due to a confinement effect in a thin vapour layer between the droplet and the substrate, when the longitudinal velocity becomes even higher than the evaporation-flux-driven normal one at the droplet surface.
§.§ Levitation force
The bending and asymmetry of the evaporative flow due to the substrate gives rise to a hydrodynamic force acting on the evaporating droplet in the sense of its repulsion from the substrate. We refer to it as an evaporative force F_ev. In our configuration (figure <ref>), this amounts to a force acting on the droplet vertically upwards (along the z axis), which is responsible for droplet levitation against gravity. The force balance on the droplet and its levitation height are considered in <ref> later on. Here we simply calculate F_ev in dimensionless terms (cf. the scales in table <ref>) as a function of the relative height δ. Namely, we evaluate
F_ev = (∬_S (-p I +(∇u+(∇u)^⊺))·n 𝚍 S ) · e_z
using the velocity and pressure fields computed in <ref>, where I is the identity tensor, n is the external unit normal and e_z is a unit vector along z.
The result is reported in Fig. <ref> in terms of F_evδ^2. The overall tendency is F_ev∝δ^-2, as it is already known in the literature <cit.>. However, it is less known that the prefactor is different in the limits δ→0 and δ→∞. For instance, <cit.> attempted to fit the experimental data using a single prefactor. We obtain a prefactor 3π as δ→ 0, which can be deduced from the lubrication approximation <cit.>, although <cit.> obtained 3π/8 here (erroneously, in our opinion).
In contrast, the prefactor is 6π as δ→∞, which is confirmed by an asymptotic analysis described in Appendix <ref>, where a number of contributions in terms of the droplet–substrate interaction are followed through.
The following simple expression fits nicely our numerical result while respecting the prefactor values in both limits:
F_ev(δ)= 3 π/δ^2 1+2δ/1+δ .
It covers the numerical data with a relative error of 1.4%, whereas an even more precise fit is provided in Appendix <ref>.
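The force fit above can be checked in the same way, recovering the two limiting prefactors:

import numpy as np

def F_ev_fit(delta):
    """Dimensionless evaporative levitation force fit above."""
    return 3.0 * np.pi / delta**2 * (1.0 + 2.0 * delta) / (1.0 + delta)

for delta in (0.01, 1.0, 100.0):
    print(f"delta = {delta:6.2f}: F_ev * delta^2 / pi = {F_ev_fit(delta) * delta**2 / np.pi:.2f}")
# -> ~3 in the lubrication limit (delta -> 0) and ~6 far from the substrate (delta -> infinity)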
§.§ Validity of assumptions
* Negligible advection. We start with the estimation of an evaporative Péclet number Pe=[u] R/α_v, where the evaporative velocity scale [u] from table <ref> is used. We obtain
Pe=c_p,vΔ T/Ł ,
which incidentally turns out to be a version of the Jakob number often used in the literature. For the typical Δ T value (cf. <ref>) and parameter values (cf. the first row of table <ref> in Appendix <ref>), we obtain Pe ≈ 0.19≪1, which justifies the approximation used in <ref>.
* Stokes approximation. Likewise, the Reynolds number is Re=ρ_v R [u] / μ_v= Pe Pr^-1, where Pr=μ_v/(ρ_vα_v) is the Prandtl number. As Pr=0.71 (cf. ibid), we obtain Re ≈ 0.26≪1, confirming the approximation used in <ref>.
* Negligible natural convection. This is related to small values of the Grashof number at the droplet scale
Gr=ρ_v g R^3/(μ_vν_v)
(written in this form given that the variations of ρ_v are here of the order of ρ_v itself). One typically obtains Gr<0.01 for our small droplets (R≲ 50 μm).
* Gas phase quasi-steadiness. The results of the present section imply the quasi-steadiness of the temperature and velocity fields in the entire region between the droplet and the substrate, which may be especially questionable for large levitation heights h. The appropriate thermal and viscous time scales can be chosen as τ_th=max(R,h)^2/α_v and τ_vis=ρ_v max(R,h)^2/μ_v. The quasi-steadiness takes place when τ_th≪τ and τ_vis≪τ, where τ is the typical time scale of the process. As Pr=O(1) here, we just limit our attention to the first one of these conditions for the sake of brevity, and hence
max(R,h)^2/α_v≪τ .
An immediately obvious time scale of the process is here the evaporation time scale of the droplet τ_ev=ρ_l R^3/[J] (cf. table <ref> for [J]), i.e.
τ_ev=ρ_l Ł R^2/(λ_v Δ T)=(ρ_l/ρ_v)(1/Pe) R^2/α_v .
Using it as τ=τ_ev in (<ref>), we arrive at
max(δ^2,1)≪(ρ_l/ρ_v)(1/Pe) .
Given that ρ_l≫ρ_v and Pe ≪ 1 here, the condition (<ref>) leaves a considerable margin for possible large values of the relative height δ=h/R. We shall come back to it later on, after having considered concrete solutions for h.
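As an order-of-magnitude illustration of this last condition, with rough property estimates for water/steam at Δ T=300^∘C (the Appendix values are not reproduced in this excerpt):

rho_l = 958.0   # liquid water density at T_sat [kg/m^3]
rho_v = 0.42    # steam density at the mid-temperature ~250 C, 1 atm [kg/m^3] (rough estimate)
Pe = 0.19       # evaporative Peclet (Jakob) number quoted above

margin = (rho_l / rho_v) / Pe
print(f"(rho_l/rho_v)/Pe ~ {margin:.0f}, so quasi-steadiness requires delta << {margin**0.5:.0f}")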
§ LEVITATION AND TAKE-OFF: QUASI-STEADY APPROACH
§.§ Quasi-steady approach as such
A vertical balance of the (evaporative) levitation force (<ref>), where recall its scale in table <ref> and the definition (<ref>), against the droplet weight directly yields the equation for the levitation height of the droplet h as a function of its radius R:
3π (μ_vλ_v Δ T/ρ_v ℒ) (R^2/h^2) (R+2 h)/(R+h)=(4π/3)ρ_l g R^3 .
There exists a single natural length scale ℓ_*, the non-dimensionalization with which renders equation (<ref>) parameter-free:
(1/ĥ^2) (R̂+2 ĥ)/(R̂+ĥ)=(4/9)R̂ ,
where
[R̂]=[ĥ]=(μ_vλ_v Δ T/ρ_v ρ_l g ℒ)^1/3≡ℓ_* , R̂=R/[R̂] , ĥ=h/[ĥ] .
The solution of (<ref>) is shown in figure <ref>(a) and adheres to the following asymptotic behaviour:
ĥ = (3/2) R̂^-1/2 i.e. δ=(3/2) R̂^-3/2 as R̂→∞ ,
ĥ = (3/√(2)) R̂^-1/2 i.e. δ=(3/√(2)) R̂^-3/2 as R̂→0 .
The length scale ℓ_* indicates the characteristic size R∼ℓ_* at which the droplet takes off at a height of the order of itself, with h ∼ R (i.e. δ∼ 1). At smaller sizes (R≪ℓ_*), the droplet soars even higher (h≫ℓ_*≫ R, δ≫ 1), whereas at larger sizes (R≫ℓ_*), the droplet levitates lower (h≪ℓ_*≪ R, δ≪ 1). Typically, ℓ_* is in the range of a few tens of micrometers. For our reference case of a water droplet on a superheated substrate with Δ T=300^∘C, we obtain ℓ_*=28.46 μm, which is much smaller than the capillary length ℓ_c=√(γ/(ρ_l g)) (γ being the liquid–air surface tension, ℓ_c∼ 2.5 mm for water). This `take-off scale' ℓ_* has earlier been pointed out by <cit.> and <cit.> (their notation R_l) as the scale at which a drastic take-off takes place <cit.>. They also interpret it as the droplet size starting from and below which the lubrication approximation in the vapour film between the droplet and the substrate becomes invalid (since h≪ R ceases indeed to hold). Note that h=R for R=3/2 ℓ_*.
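A quick numerical evaluation illustrates the order of magnitude of ℓ_*; the vapour properties below are rough estimates for steam at the mid-temperature rather than the Appendix values, yet they reproduce the quoted 28.46 μm within a few per cent:

mu_v, lam_v, rho_v = 1.8e-5, 4.0e-2, 0.42   # vapour viscosity [Pa s], conductivity [W/m K], density [kg/m^3] (estimates)
rho_l, L_vap = 958.0, 2.26e6                # liquid density [kg/m^3], latent heat [J/kg]
g, dT = 9.81, 300.0                         # gravity [m/s^2], superheat [K]

l_star = (mu_v * lam_v * dT / (rho_v * rho_l * g * L_vap)) ** (1.0 / 3.0)
tau_star = rho_l * L_vap * l_star**2 / (lam_v * dT)   # the time scale tau_* defined further below
print(f"l_* ~ {l_star * 1e6:.1f} um, tau_* ~ {tau_star:.3f} s")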
Similarly to what was commented for F_ev in <ref>, the overall tendency h∼ R^-1/2 (or h/R∼ R^-3/2) is well-known since <cit.> and <cit.>, who first pointed out this exponent. However, we here calculate the prefactor and point out that it is actually not fully constant, as highlighted in the inset of figure <ref>(a) and further put into evidence by the two different limiting values in (<ref>) and (<ref>).
A first comparison with experiment is undertaken in figure <ref>(b), where we consider just the upper layer of experimental points, while the points lower than that are deemed to belong to some transients (see also a remark in <ref> later).
It is important to recall that these experiments dealt with small droplets, typically ranging in radius from 1 μm to 30 μm, which is of the order of or smaller than ℓ_* here. For this size range (R̂≲ 1), specific to take-off observations, the full solution of (<ref>) already practically coincides with the asymptotic limit (<ref>), cf. figure <ref>.
While this seems to agree well with experiment for 15 μm≲ R≲ 30 μm, an overprediction is nonetheless observed as the droplet size decreases below R≲ 15 μm.
Astonishingly, we observe that it is rather the asymptotic behavior (<ref>) that starts to get closer to the experimental points.
However, this must be by chance or for a wrong reason, since equation (<ref>), with the prefactor it contains, is appropriate in the limit of larger droplets (and not the smaller ones we are discussing right now). We shall come back later to what the right explanation is.
The droplet radius R decreases over time by evaporation (rather than just being a given constant parameter), and hence h, related to R by (<ref>), is also a function of time. The steady force balance (<ref>) or (<ref>) is then assumed to be valid in a quasi-steady sense, and R(t) and h(t) follow the solid curve of Fig. <ref> as time goes on.
The mass of the droplet 4π/3ρ_l R^3 decreases in time at an evaporation rate given by (<ref>) with the scale from table <ref>. This balance gives rise to the following equation for the droplet radius evolution:
ρ_l R (dR/dt) = -(λ_v Δ T/ℒ) [ 1 + (1/2) ln(1+R/h) ] .
Using the time scale
[t̂]=ρ_l ℒℓ_*^2/λ_v Δ T=ρ_l/ρ_v1/ℓ_*^2/α_v≡τ_*
alongside the scales (<ref>), equation (<ref>) is rendered free of any parameters similarly to (<ref>). Namely, we arrive at
R̂ (dR̂/dt̂) = -1 - (1/2) ln(1+R̂/ĥ) .
Now the dimensionless evolution problem for R̂(t̂) and ĥ(t̂) is defined by a system of two coupled equations (<ref>) and (<ref>), for which the initial conditions R̂=R̂_0 and ĥ=ĥ_0 at t̂=0 are posed with R̂_0 and ĥ_0 not being independent but rather related by (<ref>). Figure <ref> illustrates the (numerically obtained) solution for various initial droplet radii R̂_0={1/3,1/2,1,2,3}. Evidently, the curves demonstrate that R̂ decreases over time due to evaporation until extinction, with larger droplets exhibiting longer lifespans. Concurrently, ĥ increases over time as the droplet size decreases, larger droplets being closer to the substrate at the initial time in accordance with (<ref>). It is important to note that, within the present quasi-steady description, a Leidenfrost droplet takes off reaching an infinite height ĥ→∞ at the end of its life (as R̂→0), in accordance with (<ref>).
In the inset, (R̂/R̂_0)^2 is plotted as a function of t̂/t̂_ev^∞ in order to highlight the evaporative behavior of a spherical Leidenfrost droplet as compared to the well-known limit case of a spherical droplet suspended in an unbounded gas medium. Here t̂_ev^∞=(R̂_0)^2/2 is the dimensionless evaporation time of such a suspended droplet (which can be derived from equation (<ref>) in the limit δ=ĥ/R̂→∞).
Owing to the interaction with the superheated substrate, appearing through the logarithmic term in eq. (<ref>), the well-known R^2-law is recovered only for large values of ĥ_0. Thus, R̂^2 generally does not linearly decrease in time, while these droplets evaporate faster than their suspended counterparts due to the proximity of the superheated substrate.
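This quasi-steady evolution can be reproduced with a few lines of SciPy (again only a sketch, not part of the original study; h_hat denotes the positive root of the balance above, and the extinction cut-off is arbitrary):

import numpy as np
from scipy.integrate import solve_ivp

def h_hat(R):
    c = 4.0 * R / 9.0
    roots = np.roots([c, c * R, -2.0, -R])
    r = roots[np.abs(roots.imag) < 1e-9].real
    return r[r > 0][0]

def dRdt(t, y):
    R = y[0]
    return [(-1.0 - 0.5 * np.log(1.0 + R / h_hat(R))) / R]

def extinct(t, y):
    return y[0] - 1e-3            # stop shortly before complete evaporation
extinct.terminal = True

for R0 in (1/3, 1/2, 1.0, 2.0, 3.0):
    sol = solve_ivp(dRdt, (0.0, 20.0), [R0], events=extinct, rtol=1e-8, atol=1e-10)
    print(f"R0 = {R0:.2f}:  lifetime t_hat ~ {sol.t[-1]:.3f},  final h_hat ~ {h_hat(sol.y[0, -1]):.1f}")

Larger initial radii give longer lifetimes, and the reduced height grows without bound as R̂ shrinks, in line with the quasi-steady picture described above.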
Needless to say, by parametrically plotting ĥ(t̂)/R̂(t̂) or ĥ(t̂) as a function of R̂(t̂) (t being the parameter) we retrieve the same `master' curve as depicted in Fig. <ref>.
Within the present quasi-steady approach, such a master curve is just trivially given by an algebraic equation like (<ref>) or (<ref>). However, this may become less trivial in what follows.
§.§ Validity of assumptions
The quasi-steady result that the droplet soars to an infinite height at the end of its life, cf. equation (<ref>) and figure <ref> (right), looks suspicious from the physical point of view. One can wonder whether the quasi-steadiness criterion (<ref>) for the fields in the gas phase is still fulfilled in view of δ→∞. Furthermore, h→∞ also implies infinite velocity and acceleration (𝚍h/𝚍t→∞ and 𝚍^2 h/𝚍t^2→∞), and hence one can wonder whether the steady force balance such as (<ref>) is still adequate when neglecting the drag (proportional to 𝚍h/𝚍t) and inertia (proportional to 𝚍^2 h/𝚍t^2) forces on the droplet. Below, we make an estimation of those effects when the only source of unsteadiness is the evaporation of the droplet, i.e. when the time scale is τ=τ_ev as given by (<ref>).
Disregarding numerical prefactors, the inertia, (Stokes) drag and levitation forces can be estimated as
inertia ∼ ρ_l R^3 h/τ_ev^2 , drag ∼ μ_v R h/τ_ev , levitation ∼ (μ_v λ_v Δ T/(ρ_v ℒ)) R^2/h^2 ,
where the estimation of the levitation force is just based on the left-hand side of (<ref>). Using the expression (<ref>) for τ_ev, one can immediately see that
inertia/drag∼^-1
As we have ∼ 1, ≪ 1 here (cf. <ref>), inertia can be disregarded against the drag in the present context with τ=τ_ev (which does not exclude that inertia can be essential in other contexts, cf. <ref> later on). Then, it just remains to compare the drag and levitation forces. Using (<ref>) on account of (<ref>), one can obtain
drag/levitation ∼ (ρ_v/ρ_l) (h/R)^3 .
Thus, the drag can be neglected in favour of a quasi-steady force balance like (<ref>) provided that
δ^3≪ρ_l/ρ_v .
Given that the liquid density is much greater than the vapour density (ρ_l/ρ_v≫ 1), the condition (<ref>) leaves quite a considerable margin for the present quasi-steady approach to be valid. It is only for sufficiently small droplets levitating too high (such that δ∼ (ρ_l/ρ_v)^{1/3}) that it breaks down and the drag force should be incorporated (but still not the inertia force, according to the earlier estimations), as intuitively expected. Moreover, one can clearly see that the condition (<ref>) is more restrictive in the realm of large δ than (<ref>), which is further reinforced by the fact that ≪ 1. This means that even when the drag force becomes important, the temperature and velocity fields between the droplet and the substrate can still be regarded as quasi-steady, and hence the expressions like (<ref>) and (<ref>) are still valid. An analysis aiming at smaller R (and larger h) and incorporating the drag force is realized in <ref> and <ref> below.
In the opposite limit of larger R (and smaller h), the validity of the quasi-steady approach as used here is therefore not put into question. However, it is rather the full-sphericity assumption that becomes more restrictive, when (even still within R≪ℓ_c and a practical sphericity of most of the droplet) a small part of the droplet bottom gets flattened by gravity <cit.> with an essential effect from the Leidenfrost viewpoint.
In this regard, the result (<ref>) should be understood in an intermediate asymptotic sense, as valid for R≫ℓ_* but R still much smaller than the bottom-flattening scale. This will be considered in more detail in <ref>.
§ BASIC CALCULATIONS (CONTINUED): DRAG FORCE
As stipulated in <ref>, the drag force, F_drag, is required for further analysis. The present section, dedicated to F_drag, is organized as a continuation of <ref> and mirrors the same style as far as notations, non-dimensionalization and scales are concerned. The scales relevant here are summarized in table <ref> (which is the counterpart of table <ref> there), where U is the droplet (translation) velocity in the vertical direction (its only component here).
As the primary need for such a consideration arose in the context of large δ (cf. <ref>), a mere use of the (dimensionless) Stokes drag F_drag=6π in an unbounded medium could be quite sufficient here (as well as in <ref> that follows), where the rigid-sphere prefactor 6π is used on account of the liquid dynamic viscosity being much larger than the vapour one. Nonetheless, for the sake of generality, we shall here proceed implying δ=O(1), all the more so that it will be particularly relevant later on in the context of <ref>. Thus, the goal is to compute F_drag(δ).
To this purpose, we once again solve the dimensionless Stokes equations (<ref>) –(<ref>) with the boundary conditions (<ref>)–(<ref>) (although the dimensional scales are now different and given by table <ref>). However, the `evaporation' boundary conditions (<ref>) are now replaced with
·=1
reflecting droplet translation along z. Finally, the same expression as on the right-hand side of (<ref>) is used to compute F_drag.
The computation results are illustrated in Fig <ref> together with the following approximate expression <cit.>:
F_drag(δ)= 6 π(1 + 1/δ)
While strictly valid in the limits δ→0 and δ→∞, the result (<ref>) can be seen to deviate from the numerical results by up to 7% for intermediate values of δ∼1. A more precise fit of the numerical data is proposed in Appendix <ref>. Nonetheless, we shall stick in the present study to the simpler and more elegant expression (<ref>), which will be sufficient for our purposes.
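A two-line check of how far the simple expression sits from the refined fit proposed in Appendix <ref> (a sketch only; the grouping of the refined formula follows the reconstruction given there):

import numpy as np

F_simple = lambda d: 6*np.pi*(1 + 1/d)
F_fit = lambda d: 6*np.pi*(1 + 1/d + 1.161*(1 + 26.01*d)/(1 + 62.447*d + 187.12*d**2 + 2.514*d**3))

for d in (0.1, 0.3, 1.0, 3.0, 10.0):
    print(f"delta = {d:5.1f}   F_simple/F_fit = {F_simple(d)/F_fit(d):.3f}")
# the ratio dips several per cent below 1 around delta ~ 1, consistent with the deviation quoted above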
§ LEVITATION AND TAKE-OFF: (INDISPENSABLE) DYNAMIC APPROACH FOR SMALLER DROPLETS
We now turn to the case of smaller droplets (levitating higher), for which the quasi-steady approach followed through in <ref> breaks down. As pointed out in <ref>, the drag force, depending on the droplet translation velocity as it soars higher while vaporizing, becomes essential (although not the inertia force). Supplementing the quasi-steady force balance (<ref>) with the drag force, we obtain
-6πμ_v R (1+R/h) (dh/dt) + 3π (μ_v λ_v Δ T/(ρ_v ℒ)) (R^2/h^2) (R+2h)/(R+h) = (4π/3) ρ_l g R^3 .
Here the dimensionless expression (<ref>) multiplied by the scale from table <ref> has been used with U = dh/dt. (Strictly speaking, the velocity of the center of mass is rather given by dh/dt + dR/dt, but we shall be disregarding such a nuance here.)
The natural distinguished scales are such that all terms in (<ref>) and in (<ref>) are of the same order of magnitude:
[R̃]=(ρ_v/ρ_l)^{2/9} ℓ_* , [h̃]=(ρ_v/ρ_l)^{-1/9} ℓ_* , [t̃]=(ρ_v/ρ_l)^{4/9} τ_* ,
R̃=R/[R̃] , h̃=h/[h̃] , t̃=t/[t̃] , ϵ≡[R̃]/[h̃]=(ρ_v/ρ_l)^{1/3} ,
and the dynamical system becomes
- (1+ϵR̃/h̃) (dh̃/dt̃) + (R̃/h̃^2) (h̃+(1/2)ϵR̃)/(h̃+ϵR̃) = (2/9) R̃^2 ,
R̃ (dR̃/dt̃) = -1 - (1/2) ln(1+ϵR̃/h̃) .
The distinction between the present scales (<ref>) and the previously considered ones (<ref>) and (<ref>) is eventually owing to a small parameter given by the vapour-to-liquid density ratio ρ_v/ρ_l≪ 1.
We note that the scales [R̃] and [h̃] are different here, and the typical relative height of the droplet levitation is now δ∼ (ρ_l/ρ_v)^1/3≫1. It is exactly at such values of δ that the criterion (<ref>) of the quasi-steady force balance breaks down, as expected, which confirms the coherence of the present dynamic approach.
At the same time, the criterion (<ref>) of the quasi-steadiness of the temperature and velocity fields between the droplet and the substrate is still satisfied, as already discussed in <ref>.
The phase portrait of the dynamical system (<ref>)–(<ref>) is represented in figure <ref>(a) (generated using the command in Mathematica).
Notably, while a Leidenfrost droplet is theoretically expected to ascend indefinitely within the quasi-steady approach, the presence of the drag force results in a saturation of h̃ towards a finite take-off value. In other words, the Leidenfrost droplets always vanish (R̃→ 0) at a finite height. Physically, the drag force, represented by the first term on the left-hand side of (<ref>), becomes so significant towards the end of the droplet's life that it blocks the soaring tendency dictated by the levitation force (the second term ibid). In principle, the droplet can end up at whatever height, depending on the initial conditions (figure <ref>a). However, there is a distinguished value of the final levitation height valid for most of the droplets, at least for those starting off from a sufficiently large size. In such a case, the phase trajectory is seen to quickly reach the separatrix (figure <ref>a), along which the droplet evolution ensues until the droplet vanishes at the distinguished final height corresponding to the separatrix:
h̃_fin=1.69 (1 + 0.35 ϵ) ,
which was computed numerically assuming a linear dependence on the small parameter ϵ. Using the scales (<ref>) on account of (<ref>), this can be rewritten in dimensional terms:
h_fin=1.69 (ρ_l/ρ_v)^{1/9} (μ_v λ_v Δ T/(ρ_v ρ_l g ℒ))^{1/3} (1 + 0.35 (ρ_v/ρ_l)^{1/3}) .
For instance, under the conditions of the experiments by <cit.> with water droplets (cf. Appendix <ref> for the parameters), we obtain h̃_fin = 1.64 and h_fin=110.36 μm. The separatrix now becomes our new, dynamic `master curve'. It replaces the quasi-steady one, soaring to infinity (h̃→∞) as R̃→ 0 and corresponding to
(1/h̃^2) (h̃+(1/2)ϵR̃)/(h̃+ϵR̃) = (2/9) R̃
in terms of the tilded variables, which is also depicted in figure <ref> (solid black line) for comparison. The dynamic master curve is different for smaller droplets (R̃≲ 3),
whereas for larger droplets the quasi-steady result is recovered (the solid black curve coinciding with the separatrix in figure <ref>a), as expected.
Figure <ref>(b) undertakes a direct comparison with the experiments by <cit.> for water droplets at Δ T=300^∘C. We see that the dynamic model, considered in the present section, shows a noticeable improvement over the previously considered quasi-steady approach. The improvement is most manifest for smaller droplets (R≲ 15 μm).
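The drag-moderated system above is easy to integrate numerically; the sketch below (assuming SciPy, with ϵ set to an illustrative value of order (ρ_v/ρ_l)^{1/3} for water) shows the trajectory settling onto the separatrix and the droplet vanishing at a finite reduced height:

import numpy as np
from scipy.integrate import solve_ivp

eps = 0.075                      # illustrative value of (rho_v/rho_l)**(1/3) for water (assumed)

def rhs(t, y):
    R, h = y
    lev = (R / h**2) * (h + 0.5 * eps * R) / (h + eps * R)     # levitation term
    dh = (lev - (2.0 / 9.0) * R**2) / (1.0 + eps * R / h)      # drag-moderated force balance
    dR = (-1.0 - 0.5 * np.log(1.0 + eps * R / h)) / R          # evaporation
    return [dR, dh]

def extinct(t, y):
    return y[0] - 1e-3
extinct.terminal = True

sol = solve_ivp(rhs, (0.0, 100.0), [10.0, 0.5], method="LSODA",
                events=extinct, rtol=1e-8, atol=1e-12)
print("final reduced height ~", sol.y[1, -1])    # levels off near the universal value quoted above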
§ DYNAMIC APPROACH IN GENERAL
The dynamic approach considered in <ref> came as indispensable and, in this sense, forced in the domain of smaller droplets, where the consideration of <ref> based on the quasi-steady force balance could no longer be valid. At the same time, this made it limited to that domain, where, in particular, only the drag force was essential while inertia could be disregarded. In the present section, we take up a full dynamic approach, which would presumably be valid in the entire range of droplet sizes considered in the present paper. In <ref>, we considered just the (quasi-)equilibrium positions (heights) of the droplet. Here, we inquire what the droplet behaviour will be if initially out of that equilibrium.
Complementing the force balance (<ref>) with inertia, we arrive at
(4π/3) ρ_l R^3 (d^2 h/dt^2) = -6πμ_v R (1+R/h) (dh/dt) + 3π (μ_v λ_v Δ T/(ρ_v ℒ)) (R^2/h^2) (R+2h)/(R+h) - (4π/3) ρ_l g R^3 .
Thus, we end up with a third-order system of ODEs given by equations (<ref>) and (<ref>). It can in principle be (numerically) solved starting from any initial condition R=R_0, h=h_0, dh/dt=h'_0 at t=0 to obtain R(t) and h(t), where R_0>0, h_0>0 and h'_0 are some initial values.
The solution is illustrated in figure <ref> for a water Leidenfrost droplet of an initial radius R_0=30 μm starting from various initial heights h_0 with h'_0=0. As earlier, the parameters of the experiment by <cit.> are used (i.e. Δ T=300^∘C and 1 atm, cf. Appendix <ref>). We note that the chosen value of the initial radius is large enough for the quasi-steady approach to work well (cf. figure <ref>b) and the corresponding quasi-steady height, obtained from equation (<ref>) at R=R_0, is h_QS=50.24 μm. The several initial heights tested are then conveniently expressed in the units of h_QS. The previously obtained quasi-steady (<ref>) and dynamic (<ref>) master curves are also shown in figure <ref> for reference.
For h_0=h_QS, we see that the solution adheres to the dynamic master curve, which coincides with the quasi-steady one for larger R and is drag-force-moderated for smaller R, and where the incorporation of inertia into the force balance has practically no effect, as expected. For an initial height h_0 out of the equilibrium position h_QS, the droplet approaches the dynamic master curve relatively fast and in an oscillatory way (like a damped oscillator), and then the evolution continues along that curve. The droplets initially located too close to the substrate can rebound to considerable heights as propelled by the levitation force. The droplets starting from or propelled to considerable heights rejoin the dynamic master curve at a later time and smaller size. In this case, the rejoining already happens in a monotonic way, quite in accordance with the scenario for smaller droplets described in <ref>, where inertia could be disregarded. Furthermore, the droplets finding themselves at a certain moment excessively high vaporize at some finite height before reaching the dynamic master curve, which also forms part of that scenario. Under the conditions explored in figure <ref>, the latter scenario occurs whenever h_0≲ (1/300)h_QS(R_0)=0.18 μm or h_0≳ 72 h_QS(R_0)=3.83 mm.
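For completeness, the full third-order system can be integrated directly; the sketch below uses rough, assumed property values for water vapour at a large superheat (the exact set is tabulated in Appendix <ref>) and releases a 30 μm droplet below its equilibrium height, which produces the damped-oscillator approach to the master curve described above:

import numpy as np
from scipy.integrate import solve_ivp

# rough illustrative properties (assumed): vapour viscosity, conductivity, density;
# liquid density, latent heat, gravity, superheat
mu_v, lam_v, rho_v = 1.8e-5, 0.035, 0.40
rho_l, Lh, g, dT = 958.0, 2.26e6, 9.81, 300.0
A = mu_v * lam_v * dT / (rho_v * Lh)

def rhs(t, y):
    R, h, v = y
    m = (4.0 * np.pi / 3.0) * rho_l * R**3
    drag = -6.0 * np.pi * mu_v * R * (1.0 + R / h) * v
    lev = 3.0 * np.pi * A * (R / h)**2 * (R + 2.0 * h) / (R + h)
    acc = (drag + lev - m * g) / m                         # vertical acceleration of the droplet
    dR = -(lam_v * dT / Lh) * (1.0 + 0.5 * np.log(1.0 + R / h)) / (rho_l * R)
    return [dR, v, acc]

def extinct(t, y):
    return y[0] - 1e-6
extinct.terminal = True

sol = solve_ivp(rhs, (0.0, 1.0), [30e-6, 25e-6, 0.0], method="LSODA",
                events=extinct, rtol=1e-8, atol=1e-12, max_step=1e-4)
print("lifetime ~ %.2f s,  final height ~ %.0f um" % (sol.t[-1], sol.y[1, -1] * 1e6))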
Small rebounds around the master curve can actually reproduce certain oscillatory trends observed in the experimental points by <cit.>, as illustrated in figure <ref>. However, the nature of the experimental points located too close to the substrate remains unclear, the understanding of which may require staging further experiments and thinking of physical factors not included in the present model. The present modelling indicates that there may be a certain dependence on the manner the Leidenfrost droplets are deposited in experiment as the initial conditions can be such that the droplet fully evaporates before reaching the master curve.
The consistency of the approach as regards the oscillatory relaxation obtained here can be assessed as follows. Focusing just on larger droplets with h∼ R, the oscillation time scale τ_oscil=√(R/g) can be compared with the viscous and thermal time scales τ_vis∼ρ_v R^2/μ_v and τ_th∼ R^2/α_v (the latter two are of the same order on account of ∼ 1 and can thus be used interchangeably in estimations). As τ_visτ_th/τ_oscil^2=, cf. equation (<ref>), while it was estimated ≪ 1 in <ref>, we see that τ_vis , τ_th≪τ_oscil. Thus, the implied quasi-steadiness of the temperature and velocity fields does hold during the oscillation cycle, hence the sought consistency.
Similarly, the ratio of the inertia and drag forces in (<ref>) can be estimated at ∼ρ_l/ρ_v^1/2 taking τ_oscil as the time scale. Even if ≪ 1, this can be superseded by ρ_l/ρ_v≫ 1 for larger droplets so that inertia dominates, hence the observed oscillatory relaxation. For smaller droplets, however, as decreases drastically with R, it is the drag that comes to dominate, hence a monotonic relaxation and the scenario of <ref>.
§ GLOBAL PICTURE LEIDENFROST
The larger the droplet is, the narrower the gap between the spherical droplet and the substrate becomes, as put into evidence by equation (<ref>). As mentioned at the end of <ref>, a key limitation to the present analysis from the side of larger droplets is expected to be caused by a deviation from sphericity within such a narrow gap, even if the droplet as a whole might still remain largely spherical.
As this region is crucial for vapour generation and heat transfer, in spite of its smallness,
any morphological change therein can have a significant impact on the Leidenfrost phenomenon.
On the other hand, it is through such a morphological change at the bottom of the droplet that a transition from the small spherical to larger non-spherical Leidenfrost droplets is bridged, which is touched upon in the present section.
As established by <cit.>, cf. their equation (26), a significant deviation from sphericity at the bottom of the droplet occurs for R starting from R∼ℓ_i, where the length scale ℓ_i is given by
ℓ_i=(ℓ_*^3 ℓ_c^4)^{1/7}
(all rewritten in our present notations). For R≪ℓ_i, the droplet is fully spherical and the analysis of the present paper holds. As ℓ_*≪ℓ_c, equation (<ref>) implies that ℓ_*≪ℓ_i≪ℓ_c. For our reference case of a water droplet on a superheated substrate with Δ T=300^∘C <cit.>, we obtain ℓ_i=367 μm.
Figure <ref> illustrates the Leidenfrost effect at large, over four decades of the droplet size. It combines the present results for the (small) spherical Leidenfrost droplets to the left of the figure (the dynamic master curve) with the results for the usual (large and deformed) droplets reproduced from <cit.> to the right. We note that for the non-spherical droplets the radius R is here defined as the radius of the vertical projection on the substrate (i.e. as the maximum horizontal radius). Two sets of experimental results, corresponding to the two drastically different size domains, are plotted alongside the theoretical curves: the ones by <cit.> and by <cit.>. The overlapping occurs at R∼ℓ_i, quite as expected. Nevertheless, it does not appear to happen smoothly, which is most definitely due to some accuracy loss towards the limit of small near-spherical droplets within the model employed by <cit.>. Notably, it is in this intermediate (overlapping) region that an absolute minimum of the vapour layer thicknesses is attained: from there, h increases both towards the small spherical droplets as a take-off precursor and towards the large deformed droplets.
§ CONCLUSIONS
The dynamics of small spherical Leidenfrost droplets have been investigated theoretically, yielding valuable new insights into their final stages of existence.
After numerically calculating the fluxes, evaporation rate, and forces for a spherical Leidenfrost droplet interacting with a superheated flat substrate (under the verifiable assumptions of quasi-stationarity and low Reynolds and Péclet numbers) and coming up with simple fitting formulas for these data as a function of the reduced height δ=h/R (all respecting the asymptotic behaviors as δ→ 0 and δ→∞, also studied here), a theoretical model has been developed which allows an accurate prediction of the droplet height h as a function of the physical parameters without any fitting parameters.
First, by balancing the droplet's weight against the upward hydrodynamic force induced by evaporation, a `quasi-steady master curve' relating drop height h to drop radius R was derived. This curve follows the h∝ R^{-1/2} scaling law formulated by <cit.>. However, the prefactor is not constant and rather varies from 3/2 when R→∞ to 3/√(2) as R → 0.
Furthermore, our analysis reveals that the aforementioned classical quasi-steady description, while capturing the general trend of the `take-off' phenomenon, is unable to accurately reproduce the experimental data of <cit.>. Dynamic effects, especially those related to frictional forces, are crucial to accurately describe the take-off at small scales. Therefore, a `dynamic master curve', drag moderated, has been also derived and well reproduces the experimental data of <cit.>. As a consequence of the friction effect, the Leidenfrost droplets disappear at a finite height, the value of which turns out to be universal for sufficiently large initial drops. This is in contrast to the prediction of the quasi-stationary model, which suggests an infinite final height. A formula that predicts this universal final height has been established.
Combining the present modeling (valid when R<ℓ_i) with the one of <cit.> for larger deformed droplets (valid when R>ℓ_i), we offer a comprehensive picture of the shape and elevation of Leidenfrost drops across the full range of stable axisymmetric shapes, spanning four decades of drop sizes. These studies also align with the hierarchy of length scales ℓ_*<ℓ_i<ℓ_c pointed out by <cit.>, emphasizing the dominant physical mechanisms and associated scalings for each scale.
In addition, a general dynamic model including both drag and inertia effects has been used to investigate the influence of initial conditions on droplet dynamics. For an initial height h_0 out of the equilibrium position h_QS, a sufficiently large droplet approaches the `dynamic master curve' relatively quickly and in an oscillatory manner (like a damped oscillator), and then continues to evolve along this curve. Such small rebounds can indeed reproduce certain oscillatory trends observed in the experimental points of <cit.>. This scenario holds for initial conditions not too far from equilibrium; otherwise, the droplets that find themselves too high at a given moment evaporate at a finite height before reaching the dynamic master curve, which highlights a certain dependence of the result on the manner in which the Leidenfrost droplets are deposited in experiment.
We hope this research will stimulate further theoretical investigations, particularly in the intermediate region (R≈ℓ_i) where the vapor layer is thinnest, and encourage further experimental studies on Leidenfrost droplets with R≲ℓ_i, an area that remains underexplored.
§.§ Acknowledgments
BS gratefully acknowledges the support from Centre National de la Recherche Scientifique – CNRS, AR from BELSPO and ESA PRODEX Evaporation, and PC from the Fonds de la Recherche Scientifique – FNRS.
§ PROPERTIES
The properties used in this work are provided in table <ref> <cit.>.
To facilitate continuity with some previous studies of the usual (larger non-spherical) Leidenfrost droplets <cit.>, apart from the parameters defined in the main text, we here also follow the (dimensionless) evaporation number
ℰ = μ_v λ_v Δ T/(γ ℒ ρ_v ℓ_c) .
We note the formulas ℓ_*=ℰ^{1/3} ℓ_c and ℓ_i=ℰ^{1/7} ℓ_c resulting from (<ref>), (<ref>) and (<ref>).
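For orientation, these relations can be evaluated with rough, assumed property values (a sketch only; the precise figures follow from the property set of this appendix):

import numpy as np

mu_v, lam_v, rho_v = 1.8e-5, 0.035, 0.40                     # assumed vapour properties
rho_l, Lh, gam, g, dT = 958.0, 2.26e6, 0.059, 9.81, 300.0    # liquid density, latent heat, surface tension, gravity, superheat

ell_c = np.sqrt(gam / (rho_l * g))                           # capillary length
E = mu_v * lam_v * dT / (gam * Lh * rho_v * ell_c)           # evaporation number
print("E ~ %.1e" % E)
print("l_* = E^(1/3) l_c ~ %.0f um" % (E**(1/3) * ell_c * 1e6))
print("l_i = E^(1/7) l_c ~ %.0f um" % (E**(1/7) * ell_c * 1e6))
# the hierarchy l_* << l_i << l_c of the main text is recovered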
§ SOME MORE PRECISE FITS OF NUMERICAL DATA
In addition to the simplified fits outlined in the main text, more precise fits are proposed herein. These fits are presented alongside and compared with the numerically computed data in Fig.<ref>. The asymptotic behaviors derived in Appendix <ref> are also depicted.
§.§ Fit for J(δ)
The fit (<ref>) can further be improved as follows:
J(δ)= 4π[ 1 + (1/2) ln(1+1/δ) - (1-(1/2)ln 2-γ) / (1+50.8 δ^2) ] ,
where γ=0.577216 is the Euler constant. The expression (<ref>) respects the two-term asymptotic expansions both as δ→ 0 and as δ→ +∞ obtained in Appendix <ref> and Appendix <ref>, respectively.
§.§ Fit for F_ev(δ)
A more precise fit than (<ref>) is given by
F_ev(δ)= (3 π/δ^2) (1 + δ/(0.924 + δ)) .
§.§ Fit for F_drag(δ)
A better fit than (<ref>) is provided by the formula
F_drag(δ)=6π( 1 + 1/δ + 1.161 (1+26.01δ)/(1+62.447δ+187.12δ^2+2.514 δ^3) ) .
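The three fits can be checked against the asymptotic behaviours of Appendix <ref> in a few lines (a sketch; the groupings follow the formulas as reconstructed above):

import numpy as np

gam_E = 0.577216                 # Euler constant

def J_fit(d):
    return 4*np.pi*(1 + 0.5*np.log(1 + 1/d) - (1 - 0.5*np.log(2) - gam_E)/(1 + 50.8*d**2))

def F_ev_fit(d):
    return (3*np.pi/d**2)*(1 + d/(0.924 + d))

def F_drag_fit(d):
    return 6*np.pi*(1 + 1/d + 1.161*(1 + 26.01*d)/(1 + 62.447*d + 187.12*d**2 + 2.514*d**3))

for d in (1e-3, 1e3):
    J_as = 2*np.pi*(-np.log(d) + np.log(2) + 2*gam_E) if d < 1 else 4*np.pi*(1 + 1/(2*d))
    Fe_as = 3*np.pi/d**2 if d < 1 else 6*np.pi/d**2
    Fd_as = 6*np.pi/d if d < 1 else 6*np.pi
    print(d, J_fit(d)/J_as, F_ev_fit(d)/Fe_as, F_drag_fit(d)/Fd_as)
# all ratios stay close to 1 in both limits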
§ ASYMPTOTIC BEHAVIOURS
§.§ J as δ→ 0
When the sphere (dimensionless radius unity) is close to the substrate (δ≪ 1), its profile z=h(r) (z=0 at the substrate) in a small vicinity of the downmost point is well approximated by a parabola
h=δ + (1/2) r^2 matching with the outer circular drop shape.
The evaporation flux is given by heat conduction across this thin vapour layer, viz. j=1/h in dimensionless form. Integrating over such a small vicinity up to a reference point r=r_1≪ 1, one obtains
J_1=2π∫_0^{r_1} r j 𝚍r = 2πln(δ+(1/2) r_1^2)-2πlnδ
for the first contribution into the evaporation rate J. One can observe a logarithmic divergence and assume the sought asymptotic behaviour in the form J∼ 2π (-lnδ+const) as δ→ 0. However, to fully determine const, one needs to consider the contribution J_2 from the rest of the sphere, such that eventually J=J_1+J_2. To this purpose, it suffices to solve the problem (<ref>)–(<ref>) for a unit sphere lying on the substrate (with formally δ=0). This can be done numerically with J_2 determined by integrating (<ref>) over the rest of the sphere up to the point r=r_1 on the lower surface. The result diverges in the limit r_1→ 0: as J_2=-4πln r_1+const_1, where const_1 is determined numerically by choosing sufficiently small r_1. In this way, we finally arrive at J∼ 2π (-lnδ+1.85) as δ→ 0 (the terms with r_1 cancelling out in J_1+J_2). Within the computation precision and with γ being the Euler constant, this can be rewritten as J∼ 2π (-lnδ+ln 2+2γ) as δ→ 0, which is taken into account when constructing the fit (<ref>). This (exact) value of the constant can in principle be obtained from the exact solution of the present heat conduction problem in curvilinear coordinates <cit.>, although we here limit ourselves to corresponding numerical solutions. The simplified fit (<ref>) respects exactly the logarithmic divergence, but just approximately the constant.
§.§ J as δ→ +∞
When the sphere is far away from the substrate, the leading-order result is as for the sphere in an unbounded medium:
T=1-1/r ,
where r=√(r^2+(z-δ-1)^2) is the spherical radial coordinate from the centre of the sphere (note the font difference with the cylindrical radial coordinate r). Therefore, j=∂_rT|_r=1=1 and hence J=4π.
The first correction comes from the sphere reflection in the substrate. The image sphere adds the following primary contribution into the temperature field in the original domain (above the substrate, z>0):
T_im=1/r_im ,
where r_im=√(r^2+(z+δ+1)^2) and the subscript `im' is associated with the image.
The primary effect of (<ref>) is to increase the local ambient temperature around the original sphere from T=1 to T=1+1/(2δ). As T=0 at the sphere surface, cf. (<ref>), the values of j and J are then increased in the same proportion. Thus, we arrive at J∼ 4π (1+1/(2δ)) as δ→ +∞, which is respected by both the fit (<ref>) and (<ref>).
§.§ F_ev as δ→ 0
In dimensionless terms (scales provided in table <ref>), the lubrication equation in the thin vapour layer between the substrate and the sphere can be written as (1/(12 r)) ∂_r (r h^3 ∂_r P_v) + 1/h = 0 <cit.>, where P_v is the vapour pressure excess over the ambient one (hence P_v→ 0 far away). Integrating with h=δ + (1/2) r^2 and on account of symmetry ∂_r P_v|_{r=0}=0, one obtains ∂_r P_v = -(12/(r h^3)) ln(h/δ). The leading-order contribution into F_ev is given by F_ev=2π∫_0^{+∞} r P_v 𝚍r = π r^2 P_v|_{r→+∞} - π∫_0^{+∞} r^2 ∂_r P_v 𝚍r = 3π/δ^2, which is the sought behaviour as δ→ 0 and is respected in (<ref>).
§.§ F_ev as δ→ +∞
Assuming δ≫ 1, we shall distinguish three contributions into the sought asymptotic behaviour: F_ev=F_ev1+F_ev2+F_ev3, which are all of the same order.
First, the leading-order (spherically symmetric) flow field
=r/r^3
from our evaporating sphere (r being the position vector from the sphere centre) is supplemented by the one from the image sphere
_im=r_im/r_im^3
(in the original domain z>0). At the location of the original sphere (r=0, z=δ+1), in the limit δ≫ 1, the velocity field (<ref>) gives rise to a (quasi-)uniform streaming velocity v_0=1/(4(δ+1)^2)∼ 1/(4δ^2) directed vertically upwards. This in turn gives rise to the Stokes drag 6πμ_v R v_0 upon the original sphere (in dimensional terms). In our present dimensionless terms (cf. table <ref>), this amounts to F_ev1=6π v_0=3π/(2δ^2).
Second, the superposition of the velocity fields (<ref>) and (<ref>) does satisfy the impermeability condition at the substrate: v_z=0 at z=0 (hereafter v_r and v_z are the r- and z-components of the velocity field). However, the no-slip condition v_r=0 at z=0 is not satisfied. To remedy this, we consider another contribution into the velocity field, the addition of which permits to observe the no-slip condition. We proceed in terms of the stream function ψ:
v_r=1/r∂_z ψ , v_z=-1/r∂_r ψ .
The Stokes equation can be written as <cit.>
E^2 E^2ψ=0 , E^2= ∂_rr-1/r∂_r + ∂_zz ,
which is solved in the domain z>0 with the boundary conditions
ψ=0 , ∂_zψ=-2 r^2/[(δ+1)^2+r^2]^3/2 at z=0 , ψ/r^2→ 0 at infinity .
The slip velocity in the second condition (<ref>) is directed towards the axis and is such as to offset the corresponding contribution from the sum of (<ref>) and (<ref>). One can verify that
ψ=-2 r^2 z/[(δ+1+z)^2+r^2]^3/2
is an exact solution of the problem (<ref>), (<ref>). Eventually, we are just interested in the velocity field value v_0 at the location of the original sphere (r=0, z=δ+1). Using (<ref>) and (<ref>), one obtains v_0=1/(2(δ+1)^2)∼ 1/(2δ^2) directed vertically upwards. As with the first contribution, the Stokes drag considerations lead to the result F_ev2=6π v_0=3π/δ^2.
Third, the temperature field (<ref>) from the image sphere gives rise not only to an effective uniform temperature increase in the original sphere surrounding, already taken into account in Appendix <ref>, but also to a (dimensionless) temperature gradient ∂_z T ∼ -1/(4δ^2). This breaks down the spherical symmetry of the evaporation flux (j no longer constant along the sphere surface) and of the evaporative flow (a correction upon (<ref>)). Hydrodynamically, this can engender an additional force contribution F_ev3. We proceed with the analysis using the spherical coordinates {r,θ} related to the original sphere, such that r=rsinθ and z=δ+1+rcosθ. Then the problem for the mentioned (gradient-related) part of the temperature field around the original sphere can be formulated as
∇^2 T=0 , ∇^2=∂_rr+2/r∂_r + 1/r^2 sinθ∂_θsinθ ∂_θ ,
T=0 at r=1 , T∼ -(1/(4δ^2)) r cosθ as r→ +∞
to be solved in the domain r>1. The infinity in (<ref>) formally corresponds to 1≪r≪δ.
The solution of (<ref>) with (<ref>) is
T=-(1/(4δ^2)) (r - 1/r^2) cosθ .
Therefore,
j=∂_rT|_{r=1}=-(3/(4δ^2)) cosθ ,
which shows that the present contribution corresponds to evaporation reduction at the upper part of the sphere (j<0 for 0≤θ<π/2) and intensification at the lower part of the sphere (j>0 for π/2<θ≤π), closer to the substrate, as expected.
To calculate the flow induced by (<ref>), we work once again in terms of the stream function, now in the spherical coordinates:
v_r=1/r^2sinθ∂_θψ , v_θ=-1/rsinθ∂_rψ
for the r- and θ- components of the velocity field. The problem is formulated in the domain r>1. The Stokes equation <cit.> and the boundary conditions can be written as
E^2 E^2 ψ=0 , E^2=∂_rr + sinθ/r^2∂_θ1/sinθ ∂_θ ,
ψ=-(3/(8δ^2)) sin^2θ , ∂_rψ=0 at r=1 , ψ/r^2→ 0 as r→ +∞ .
The first condition (<ref>) corresponds to v_r=j (in our dimensionless terms) on account of (<ref>) and (<ref>), while the second condition (<ref>) to no slip (v_θ=0). The solution of (<ref>) with (<ref>) is
ψ=-(3/(8δ^2)) (r+1/r) (sin^2θ)/2 .
The force acting on the sphere in the z-direction is determined by (-4π) times the `Stokeslet' prefactor, i.e. the one at the term rsin^2θ/2 <cit.>. Thus, from (<ref>), one obtains F_ev3=3π/(2δ^2).
Summing up the three contributions, one finally obtains F_ev=6π/δ^2 (as δ→ +∞). It is noteworthy that the asymptotic behaviour remains O(δ^-2) in both limits: as δ→ +∞ and as δ→ 0 (cf. Appendix <ref>). Yet, the prefactors are twice different.
§ CASE OF ETHANOL
Since <cit.> and <cit.> have conducted experiments with ethanol droplets, it is worthwhile to compare the present theoretical predictions with those experimental results. Although the predictions match the form and order of magnitude of the experimental results, the agreement is less satisfactory than in the case of water. Specifically, we overestimate the data from <cit.> and underestimate the data from <cit.>, cf. figure <ref>. Such multidirectional discrepancies make it challenging to propose a hypothesis that could explain the differences, which might be attributed to experimental errors or some physical ingredient missing in the model that is important for ethanol, but not for water. Further investigation (including experimental) is needed to address this issue.
| When a volatile liquid droplet is placed on a hot solid surface, superheated well above the boiling temperature, it neither touches the substrate nor boils, but rather floats on a thin film of its own vapor. This fascinating phenomenon, known as the Leidenfrost effect, does not cease to attract attention since its first descriptions about 300 years ago <cit.>. This is due to not only the myriad of intriguing and unexpected behaviors a droplet can exhibit in this state,
but also its relevance across a wide range of industrial and technological processes, spanning from the traditional heat transfer applications to the emerging field of multiphase milli-/micro-fluidics. See, for example, the review articles <cit.>, dedicated book chapters in <cit.>, and the many references therein.
The vapor film, a key feature of the Leidenfrost state, ensures the droplet levitation while acting as a thermal insulator, resulting in relatively low evaporation rates and hence long lifetimes of the droplet. In this state, the droplet's weight is balanced by the pressure within such a vapor cushion squeezed by the slowly and steadily evaporating drop. With no contact with the substrate, the observable shapes of the droplets are governed by a balance between capillarity and gravity similarly to a perfectly non-wetting (superhydrophobic) situation. Denoting the capillary length by ℓ_c, droplets with radii R smaller than ℓ_c remain quasi-spherical while puddles larger than ℓ_c are flattened by gravity, whose height is limited by ≈2ℓ_c <cit.>. The profile of the underlying vapor film is non-trivial. For a large droplet with R≳ℓ_c, the vapor film exhibits a pocket-like structure composed of an internal vapor `pocket' surrounded by a thin neck. As the drop gets smaller, the vapor film slims down, the drop getting closer to the substrate. When the droplet radius is small enough as compared to ℓ_c, the vapor pocket disappears completely, and the droplet becomes quasi-spherical with a small circular area slightly flattened at the bottom. Accurate interferometric measurements of the vapor film thickness profile <cit.> turn out to be in a good agreement with a refined theoretical modeling <cit.> coupling lubricated vapor flow, capillarity and hydrostatic pressure effects, itself recently confirmed numerically by <cit.>. Note that the main scaling laws featuring the shapes of a Leidenfrost droplet and its evaporation dynamics can be found in <cit.>.
In practice, this absence of contact between the Leidenfrost droplet and the substrate leads to very rich dynamics. For large puddle-like drops, the vapor pocket grows until it eventually pops up as a central `chimney' due to a Rayleigh-Taylor mechanism <cit.>. Instability of large droplets can also occur (either spontaneously or forced) in the form of `star-faceted' shapes when azimuthal surface oscillations develop along the periphery of the droplets <cit.>. Self-induced spontaneous oscillations can also occur in the vertical plane, yielding the recently reported bobbing, bouncing or trampolining dynamics when Leidenfrost droplets reach moderate and small size with R⩽ℓ_c <cit.>. Other spectacular behaviors related to their high mobility have been observed. These include Leidenfrost wheels, when a droplet initially at rest spontaneously rolls and moves over a flat surface like a wheel due to symmetry breaking in the internal flow of the liquid <cit.>, and self-propelling of Leidenfrost droplets when interacting with a substrate breaking the axisymmetry, either due to surface topography such as ratchets or herringbones <cit.>, or temperature gradients <cit.>. Thus, droplets move in a direction dictated by the patterns due to symmetry breaking of the vapor layer. Strategies have emerged to control the motion and manipulate these droplets. In addition to geometric and thermal heterogeneities, chemical patterns of the surface can also be exploited to tailor the vapor film, enabling the stretching, sloshing, spinning, propelling, or trapping of a Leidenfrost droplet <cit.>.
As compared to large and moderate-size droplets, the dynamics of small, near-spherical Leidenfrost drops (R≪ℓ_c) has not received so much attention. In their seminal work, <cit.> first explored the final fate of Leidenfrost drops as they became very small, just moments before disappearing. By spraying tiny droplets of water or ethanol in the size range of about 1-30 μm onto a superheated substrate, they discovered that when small enough (i.e., with R below a characteristic radius corresponding to the breakup of the lubrication approximation), Leidenfrost droplets took off from the heated substrate with an elevation h∝ R^{-1/2}, as predicted by <cit.> and also by <cit.>. Remarkably, in this regime, droplets become too light to withstand the upward force generated by the pressure due to evaporation, and they reach higher and higher elevations while vaporizing. This behavior drastically contrasts with what is observed for larger Leidenfrost droplets. More recently, <cit.> observed that a second final fate, other than lift-off, is possible for Leidenfrost droplets. Namely, if the liquid droplet is not pure or contaminant-free enough, small Leidenfrost droplets are unable to take off, but instead disappear by exploding with an audible crack.
Here, we propose to theoretically revisit the dynamics of small spherical Leidenfrost droplets with the aim of comprehensively and thoroughly analyzing the mechanisms involved in their final fate. Thanks to a model including a realistic description of the coupling between hydrodynamics, heat transfer and evaporation, this work seems to be the first to provide exact estimates of the drop elevation as a function of the physical parameters and without any fitting parameter. After numerically computing the entirety of fluxes, evaporation rates and forces, a master curve for droplet elevation as a function of its size is derived by simply balancing the drop weight with the upward evaporation-induced hydrodynamic force. While the scaling law agrees with <cit.>, there appear subtleties concerning the prefactor. Moreover, the analysis reveals that such a classical quasi-steady description is not fully sufficient to describe the take-off phenomenon. Even at these small scales, further dynamical effects must be taken into account to achieve a good agreement with the original experimental data of <cit.>. | null | null | null | null | null |
http://arxiv.org/abs/2409.17083v1 | 20240925164636 | Dynamics of Heisenberg XYZ spin Quantum Battery | [
"Disha Verma",
"Indrajith VS",
"R. Sankaranarayanan"
] | quant-ph | [
"quant-ph"
] |
[email protected]
Department of Physics, National Institute of Technology, Tiruchirappalli 620015, India.
Department of Physics, Mar Ivanios College, Thiruvananathapuram 695015, India.
Department of Physics, National Institute of Technology, Tiruchirappalli 620015, India.
§ ABSTRACT
Spin systems have been extensively studied to understand the mechanisms of quantum batteries, which have shown the ability to charge faster than classical counterparts, even in closed systems. However, the internal dynamics of quantum batteries can significantly affect their performance, making it crucial to understand the influence of various parameters. In this study, we focus on the XYZ Heisenberg spin system, examining key factors such as the anisotropy in the spin interactions and the external magnetic field in order to optimize the work output and ensure effective charging.
Dynamics of Heisenberg XYZ spin Quantum Battery
R. Sankaranarayanan
September 28, 2024
===============================================
§ INTRODUCTION
Energy storage remains a key constraint in our technological landscape. While conventional batteries have seen steady improvements, their fundamental limitations rooted in classical electrochemistry restrict their potential for the ever-growing demands of electric vehicles, renewable energy integration, and portable electronics. Recently, the field of quantum thermodynamics has explored the possibility of leveraging the principles of quantum mechanics to revolutionize energy storage. This has led to a new paradigm, known as quantum batteries (QBs), which holds the promise of surpassing classical batteries in terms of efficiency, capacity, and even charging speed <cit.>.
The concept of QBs emerged in the last decade with theoretical proposals demonstrating the feasibility of storing energy in the quantum states of atoms and molecules <cit.>.
These theoretical frameworks suggested intriguing advantages, such as “superextensive charging”, where larger batteries could charge faster than smaller ones, defying the limitations of classical systems. The past decade has witnessed significant progress in the theoretical understanding of QBs, exploring diverse physical systems and optimizing charging protocols <cit.>. Pioneering research has delved into various aspects of quantum batteries <cit.>, such as the utilization of Dicke states <cit.>, the significance of entanglement in work extraction <cit.>, and nonlocal charging mechanisms for enhanced power storage <cit.>. Furthermore, quantum batteries have made use of interacting spin systems <cit.>, which are charged through local magnetic fields. Importantly, these groundbreaking ideas have been effectively implemented in diverse platforms, including solid-state systems that feature individual two-level systems within single cavities or utilize ensembles of such systems <cit.>. An experimental study on XXZ Heisenberg quantum batteries shows coherence plays a crucial role in charging efficiency, challenging the emphasis on entanglement <cit.>. A paradigmatic model of a quantum battery has been experimentally implemented based on an organic microcavity <cit.>.
The XYZ Heisenberg spin model has emerged as a powerful tool for investigating the theoretical basis of QBs. This model captures the essential interactions between the battery's constituent parts and its environment, enabling researchers to analyze the impact of various factors on battery performance. Building upon previous work exploring the performance of Heisenberg XYZ spin chains <cit.>, this study investigates more deeply the role of the external field and of the parameters involved in a two-spin QB. Here we present an effective parameter for this model that characterizes the maximum energy storage (extractable work) within the battery. This observation underscores the critical role of the external field in manipulating the energy storage mechanism of QBs, highlighting the importance of optimizing the battery's Hamiltonian parameters.
The manuscript is organised as follows. Section <ref> lays the groundwork by defining the system's energy (Hamiltonian). Section <ref> analyzes charging: the initial state and the energy stored through unitary evolution. Section <ref> discusses the results, focusing on the maximum energy storage.
§ THE MODEL OF BATTERY
We consider the Hamiltonian of a two spin-1/2 system as H = H_S + H_I. Here the model for the quantum battery is the Heisenberg XYZ system, whose Hamiltonian is defined as
H_S = J/2[(1+γ)σ_x^1σ_x^2+(1-γ)σ_y^1σ_y^2]+ 1/2 J_zσ_z^1σ_z^2
where σ_k are the Pauli spin matrices, γ = (J_x-J_y)/(J_x+J_y) is the anisotropy in XY plane. Here J and J_z are the strength of interaction in respective spin components. The interaction Hamiltonian is given by H_I = 1/2[B(σ_z^1+σ_z^2)] where B is the strength of magnetic field. The matrix form of the Hamiltonian (H) in standard two qubit computational basis is given as
H=
[ J_z/2+B 0 0 γ J; 0 -J_z/2 J 0; 0 J -J_z/2 0; γ J 0 0 J_z/2-B ].
The above Hamiltonian is diagonalized and the corresponding eigenvalues and eigenvectors are
E_1,2=1/2 (-J_z± 2J) ,|ψ_1,2⟩ = 1/√(2)(±|01⟩ + |10⟩)
E_3,4=J_z/2±η , |ψ_3,4⟩=N_±((B±η)/(γ J)|00⟩+|11⟩)
where η = √(B^2 + (γ J)^2), and the normalization constant N_± = (((B ±η)/(γ J))^2 + 1)^{-1/2}. For B = 0 and γ = 0, J = J_z, the Hamiltonian corresponds to the Heisenberg spin with isotropic interaction, and the eigenfunctions are reduced to maximally entangled Bell states.
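This spectrum is easy to verify numerically; the following minimal sketch (assuming NumPy; the coupling values are arbitrary illustrations) builds H from the Pauli operators and compares its eigenvalues with the expressions above:

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def hamiltonian(J, Jz, gamma, B):
    return (J/2)*((1+gamma)*np.kron(sx, sx) + (1-gamma)*np.kron(sy, sy)) \
        + (Jz/2)*np.kron(sz, sz) + (B/2)*(np.kron(sz, I2) + np.kron(I2, sz))

J, Jz, gamma, B = 1.0, 0.5, 0.3, 2.0            # illustrative couplings (assumed)
eta = np.sqrt(B**2 + (gamma*J)**2)
analytic = np.sort([-Jz/2 + J, -Jz/2 - J, Jz/2 + eta, Jz/2 - eta])
numeric = np.linalg.eigvalsh(hamiltonian(J, Jz, gamma, B))
print(np.allclose(numeric, analytic))           # True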
The thermal state or Gibbs state of this Hamiltonian is given by ρ(T) = e^{-β H}/𝒵, where 𝒵 = Tr (e^{-β H}) is the partition function and β=1/k_B T. It should be noted here that the energy of the system is scaled by setting k_B T =1, with k_B being the Boltzmann constant and T the equilibrium temperature. The thermal state in the computational basis is obtained as <cit.>:
ρ(T) = 1/𝒵[ μ_- 0 0 κ; 0 ν ϵ 0; 0 ϵ ν 0; κ 0 0 μ_+ ]
with matrix elements μ_±=e^-J_z/2(coshη ± B/ηsinhη), κ=-γ J/ηe^-J_z/2sinhη, ν=e^J_z/2(cosh J ), ϵ=-e^J_z/2sinh J and the partition function 𝒵=2(e^-J_z/2coshη+e^J_z/2cosh J).
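As a quick consistency check (a sketch assuming NumPy/SciPy, arbitrary illustrative couplings and the k_B T = 1 convention adopted above, so that the Gibbs exponent is simply −H), the matrix exponential reproduces these closed-form elements:

import numpy as np
from scipy.linalg import expm

J, Jz, gamma, B = 1.0, 0.5, 0.3, 2.0
H = np.array([[Jz/2 + B, 0, 0, gamma*J],
              [0, -Jz/2, J, 0],
              [0, J, -Jz/2, 0],
              [gamma*J, 0, 0, Jz/2 - B]])

rho_unnorm = expm(-H)                 # equals Z * rho(T)
Z = np.trace(rho_unnorm)

eta = np.sqrt(B**2 + (gamma*J)**2)
Zc = 2*(np.exp(-Jz/2)*np.cosh(eta) + np.exp(Jz/2)*np.cosh(J))
mu_minus = np.exp(-Jz/2)*(np.cosh(eta) - (B/eta)*np.sinh(eta))
kappa = -(gamma*J/eta)*np.exp(-Jz/2)*np.sinh(eta)
eps_od = -np.exp(Jz/2)*np.sinh(J)
print(np.isclose(Z, Zc), np.isclose(rho_unnorm[0, 0], mu_minus),
      np.isclose(rho_unnorm[0, 3], kappa), np.isclose(rho_unnorm[1, 2], eps_od))   # all True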
§ ENERGY STORED IN QUANTUM CHARGING
§.§ Initial Battery State
In the initial preparation phase, the quantum battery can be in one of two distinct states: the ground state or the Gibbs state. In the first scenario, the battery assumes the ground state of the normalized Hamiltonian, representing a state corresponding to absolute zero temperature. This configuration sets the foundation for understanding the quantum properties of the battery under no thermal influence.
On the other hand, in the second scenario the battery is prepared in the Gibbs state (also called the canonical equilibrium state) characterized by a finite temperature T. In this state, the battery experiences thermal effects, and its properties are influenced by the thermal distribution of energy levels. The partition function 𝒵 encapsulates the statistical sum of all quantum states weighted by the corresponding Boltzmann factors. This canonical equilibrium state provides valuable insights into the quantum battery's behavior under realistic thermal conditions, offering a more comprehensive understanding of its performance characteristics. The dichotomy between the ground state and the canonical equilibrium state serves as a foundational consideration in the study of quantum batteries, enabling exploration of their quantum features at zero and finite temperature regimes. It has been observed <cit.> that tuning the system parameters can lead to larger power generation from a state initially prepared at finite temperature than from the state at absolute zero temperature.
§.§ Unitary charging and energy stored
In the charging process of the quantum battery within a closed system, the evolution of state is dictated by the unitary operation. Besides reducing the amount of heat generated during the charging process, the implementation of unitary charging techniques has been shown to induce non-classical correlations within many-body quantum batteries <cit.>. This results in significant enhancement of the charging power scaling, surpassing the conventional limits typically encountered in classical battery systems. The amount of energy that can be stored and extracted can be calculated from the time evolution
ρ̇(t) = -i/ħ[H + H_c(t),ρ(t)]
where H_c is a Hermitian time-dependent charging Hamiltonian that is turned on at time t=0 and off at time t, and ħ is the reduced Planck constant. The energy W stored in this way, in reaching some state ρ(t) from an initial state ρ(0), measured with respect to the battery Hamiltonian H, is
W(t) = Tr[H ρ(t)] - Tr[H ρ(0)]
where ρ(t) = U(t;0) ρ(0) U^†(t;0) is the time evolved state obtained from the solution of eq.(<ref>). Here U(t;0) = 𝒯 exp{ -i/ħ∫_0^t ds [H_0 + H_c(s)] } is the time-evolution operator, where 𝒯 is the time-ordering operator. In what follows, we set ħ=1. It is important to note that when focusing solely on unitary evolution, the processes of work injection (charging) and extraction can be seen as essentially identical tasks <cit.>. The maximum amount of work that can be extracted from a quantum battery is given by the ergotropy <cit.>, which is the same as W(t) if the initial state ρ(0) is the lowest possible energy state (passive state). This is because, for an entropy-preserving unitary evolution process, the maximum amount of work that can be extracted is equal to the energy stored in the system <cit.>. In general, the stored energy does not coincide with the ergotropy for a battery Hamiltonian in contact with the environment <cit.>, where the evolution is nonunitary.
In this quantum battery model, the fundamental framework involves the time evolution of the initial state using the time-independent charging Hamiltonian H_c over a total time t. The specific charging Hamiltonian used in this study is H_c = ω S_x, which represents a constant magnetic field applied along the x axis, where ω is the strength of the applied field and S_x = 1/2 (σ_x^1 + σ_x^2) is the total spin operator along the x axis.
The unitary transformation in the case of a time-independent charging Hamiltonian, U = e^{-iH_c t}, governs the evolution of the quantum state during the charging period. This transformation, applied to the initial state ρ(0)=ρ(T), results in the state ρ(t) at a later time t. Since ρ(T) is a unique passive state of the Hamiltonian H, W(t) is the limiting value of the maximum extractable work <cit.>.
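A compact numerical sketch of this charging protocol (assuming NumPy/SciPy, with arbitrary illustrative parameter values and the k_B T = 1 convention) evolves the Gibbs state with U = e^{-iH_c t} and evaluates the stored energy:

import numpy as np
from scipy.linalg import expm

J, Jz, gamma, B, omega = 1.0, 0.5, 0.3, 2.0, 1.0
H = np.array([[Jz/2 + B, 0, 0, gamma*J],
              [0, -Jz/2, J, 0],
              [0, J, -Jz/2, 0],
              [gamma*J, 0, 0, Jz/2 - B]], dtype=complex)

rho0 = expm(-H)
rho0 /= np.trace(rho0)                          # Gibbs initial state

sx = np.array([[0, 1], [1, 0]], dtype=complex)
Hc = omega * 0.5 * (np.kron(sx, np.eye(2)) + np.kron(np.eye(2), sx))   # omega * S_x

def W(t):
    U = expm(-1j * Hc * t)
    return np.trace(H @ (U @ rho0 @ U.conj().T - rho0)).real

ts = np.linspace(0.0, 2*np.pi/omega, 201)
Ws = np.array([W(t) for t in ts])
print("W(0) = %.3f,  W(2 pi/omega) = %.3f,  max W = %.3f" % (Ws[0], Ws[-1], Ws.max()))
# the stored energy is periodic and returns to zero after one period, as stated below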
§ RESULTS AND DISCUSSION
§.§ Charging Dynamics
We begin our discussion by obtaining an analytical expression for the energy stored as
W (t) =(a+b) -b cos2 ω t-a cosω t
where
a = 4 B^2 sinhη/d
b = b_1 coshJ + b_2 coshη/d + b_3 (-e^J_zηsinhJ + J γsinhη)/d
with b_1 = e^J_z (J_z + J(1+γ)) η, b_2 = (-J_z + J(1+γ)) η, b_3 = J(-1+γ) and d= η(e^J_zcoshJ+coshη).
Thus the energy stored in the system (battery) is periodic with frequency ω, as we intuitively expect.
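Given this functional form, the two coefficients can also be read off directly from two samples of a numerically computed W(t), since W(π/ω) = 2a and W(π/(2ω)) = a + 2b; the sketch below (same assumed parameter values as in the previous sketch) checks that the two-harmonic form indeed reproduces the full curve:

import numpy as np
from scipy.linalg import expm

J, Jz, gamma, B, omega = 1.0, 0.5, 0.3, 2.0, 1.0
H = np.array([[Jz/2 + B, 0, 0, gamma*J],
              [0, -Jz/2, J, 0],
              [0, J, -Jz/2, 0],
              [gamma*J, 0, 0, Jz/2 - B]], dtype=complex)
rho0 = expm(-H); rho0 /= np.trace(rho0)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
Hc = omega * 0.5 * (np.kron(sx, np.eye(2)) + np.kron(np.eye(2), sx))

def W(t):
    U = expm(-1j * Hc * t)
    return np.trace(H @ (U @ rho0 @ U.conj().T - rho0)).real

a = W(np.pi/omega) / 2
b = (W(np.pi/(2*omega)) - a) / 2
ts = np.linspace(0, 2*np.pi/omega, 100)
model = (a + b) - b*np.cos(2*omega*ts) - a*np.cos(omega*ts)
print(np.allclose([W(t) for t in ts], model))    # True: W(t) carries exactly two harmonics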
Generally, reducing the strength of the field can lead to a more gradual process of charging and energy storage. This is primarily because the external field has an impact on the energy levels of the quantum system, and a weaker field tends to cause smaller shifts in energy levels during the charging cycles. The slowed-down process of energy storage may present various benefits for quantum batteries. Notably, it can enable a more precise regulation of energy transfer, thus decreasing the chances of energy loss or degradation. In Fig. <ref>, we depict the oscillation of energy stored over time for specific chosen parameter values. The graph exhibits a periodic pattern with a time period of t= 2π/ω, at which ρ(t)=ρ(0).
This result implies that achieving a greater amount of energy stored in a shorter time frame necessitates a stronger external magnetic field. However, a slow and stable charging can be realised with weaker magnetic field. This enhancement contributes to increased energy efficiency and a prolonged lifespan for the battery. Additionally, a slower pace of charging provides the quantum system with more time to sustain coherence, a crucial characteristic for efficient energy storage and transfer in quantum batteries. Coherence, in this context, pertains to the preservation of phase relationships among different elements of the quantum system. A coherent system is better equipped to store and release energy effectively <cit.>.
§.§ Maximum work
From the simple form of W(t) given by eq.(<ref>), the maximum energy stored can be calculated as
W_max1 = 2a , if |a/4b| > 1
W_max2 = 2b + a + a^2/(8b) , if |a/4b| ≤ 1.
The corresponding time for the two different cases are
t_1 = n_1 π/ω, where n_1 is an odd integer, and
t_2 = 2n_2 π/ω± (1/ω) cos^{-1}( -a/4b ), where n_2 is an integer.
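The case distinction can be packaged as a small function of a and b and checked against a brute-force maximisation of the two-harmonic form (a sketch; a and b are simply taken as given positive coefficients):

import numpy as np

def w_max(a, b):
    # maximum of (a+b) - b*cos(2*w*t) - a*cos(w*t) over t, for a, b > 0
    if abs(a / (4*b)) > 1:
        return 2*a                        # maximum attained at w*t equal to odd multiples of pi
    return 2*b + a + a**2/(8*b)           # interior maximum at cos(w*t) = -a/(4*b)

for a, b in [(1.0, 0.1), (1.0, 0.5), (0.2, 1.0)]:
    ts = np.linspace(0, 2*np.pi, 20001)
    brute = ((a + b) - b*np.cos(2*ts) - a*np.cos(ts)).max()
    print(a, b, w_max(a, b), brute)       # the two agree to grid accuracy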
We shall note that Fig. <ref> correspond to the case of |a/4b| > 1 and W_max1 = 3.369, with t_1 = n_1 π/ω. On the other hand, the energy stored for the case |a/4b| ≤ 1 (Fig. <ref>) shows two maxima separated by a time interval Δ t = 2 π - (2/ω) cos^-1(-a/4b) within one time period of the cycle. We also observe that the minimum W_m=2a is same as W_max1. This minimum in work becomes W_max1 as B increases such that |a/4b| > 1. Interestingly, it is observed that as the factor a/4b → 1, the difference W_max2-W_m = 1/2 +(1/2)(a/4b)^2 - a/4b → 0 such that Δ t → 2π (1-1/ ω). In other words, maximum extractable work is fairly stable over a finite time Δ t whose limiting value is 2 π as ω→∞. This feature is shown in Fig. <ref> at B=1, for which a/4b ≈ 0.9.
Fig. <ref>(a) shows the dependence of W_max1 on the anisotropy parameter γ. It should be noted that Fig. <ref>(b) shows the behavior for low values of B, such that as B increases (keeping all other parameters constant) Fig. <ref>(a) becomes relevant since the condition |a/4b| > 1 is fulfilled. It is clear from Fig. <ref>(a) and <ref>(b) that while the dependence of W_max1 on γ is insignificant, W_max2 varies significantly with γ. In other words, the role of anisotropy becomes important as B is decreased. Also it is clear that the parameter B in the system holds a crucial role in determining the maximum stored work in the battery, as depicted in Figure <ref>, and the same can be understood as follows.
Recollecting that
W_max1= 8 B^2 sinhη/d
the B dependence for large B is more evident with the following approximation:
Since η ≈ B for large B,
W_max1≈ 8 B^2 sinh B/(B (e^{J_z}cosh J + cosh B)) .
Further, for large B >0, cosh B ≈sinh B and hence,
W_max1≈ 8 B^2 sinh B/(B sinh B) = 8B.
Therefore, W_max1 is linearly dependent on B for the large values of B. This emphasizes the role of B on the maximum energy stored in two-qubit XYZ spin as quantum battery.
§ CONCLUSION
The manipulation of the external field in the Heisenberg XYZ spin quantum battery plays a crucial role in influencing the energy storage dynamics. Our study reveals that when the battery is coupled with a constant external field which acts as the charger, the energy stored within it exhibits oscillatory behavior. Through detailed analysis, we have identified two distinct oscillations of the energy storage
in terms of an effective parameter of the battery system. Here we have demonstrated that one of the oscillations can be tuned for a maximum stable energy storage or work extraction over a finite time duration. It is also observed that the dependence of maximum work on anisotropy
is significant only when the magnetic field of the battery is low. Finally, it is shown that the maximum extractable work is eight times the strength of the magnetic field of the battery. In conclusion, this study reveals the significance of the internal parameters of the XYZ spin system
from the quantum battery perspective.
| Energy storage remains a key constraint in our technological landscape. While conventional batteries have seen steady improvements, their fundamental limitations rooted in classical electrochemistry restrict their potential for the ever-growing demands of electric vehicles, renewable energy integration, and portable electronics. Recently, the field of quantum thermodynamics has explored the possibility of leveraging the principles of quantum mechanics to revolutionize energy storage. This has led to a new paradigm, known as quantum batteries (QBs), which holds the promise of surpassing classical batteries in terms of efficiency, capacity, and even charging speed <cit.>.
The concept of QBs emerged in the last decade with theoretical proposals demonstrating the feasibility of storing energy in the quantum states of atoms and molecules <cit.>.
These theoretical frameworks suggested intriguing advantages, such as “superextensive charging", where larger batteries could charge faster than smaller ones, defying the limitations of classical systems. The past decade have witnessed significant progress in the theoretical understanding of QBs, exploring diverse physical systems and optimizing charging protocols <cit.>. Pioneering research has explored into various aspects of quantum batteries <cit.>, such as the utilization of Dicke states <cit.>, the significance of entanglement in work extraction <cit.>, and nonlocal charging mechanisms for enhanced power storage <cit.>. Furthermore, quantum batteries have made use of interacting spin systems <cit.>, which are charged through local magnetic fields. Importantly, these groundbreaking ideas have been effectively implemented in diverse platforms, including solid-state systems that feature individual two-level systems within single cavities or utilize ensembles of such systems <cit.>. An experimental study on XXZ Heisenberg quantum batteries shows coherence plays a crucial role in charging efficiency, challenging the emphasis on entanglement <cit.>. A paradigmatic model of a quantum battery has been experimentally implemented based on an organic microcavity <cit.>.
The XYZ Heisenberg spin model has emerged as a powerful tool for investigating the theoretical basis of QBs. This model captures the essential interactions between the battery's constituent parts and its environment, enabling researchers to analyze the impact of various factors on battery performance. Building upon previous work exploring the performance of Heisenberg XYZ spin chains <cit.>, this study investigates deeper into the role of external field and parameters involved in a two-spin system QB. Here we present an effective parameter for this model that characterizes the maximum energy storage (extractable work) within the battery. This observation underscores the critical role of external field in manipulating the energy storage mechanism of QBs, highlighting the importance of optimizing the battery's Hamiltonian parameters.
The manuscript is organised as follows, section <ref> lays the groundwork by defining the system's energy (Hamiltonian). Section <ref> analyzes charging: initial state and energy stored through unitary evolution. Section <ref> discusses the results, focusing on maximum energy storage. | null | null | null | null | The manipulation of external field in Heisenberg XYZ spin quantum battery plays a crucial role in influencing energy storage dynamics. Our study reveals that when the battery is coupled with a constant external field which acts as a charging, the energy stored within it exhibits oscillatory behavior. Through detailed analysis, we have identified two distinct oscillations of energy storage
in terms of an effective parameter of the battery system. Here we have demonstrated that one of the oscillations can be tuned for a maximum stable energy storage or work extraction over a finite time duration. It is also observed that the dependence of maximum work on anisotropy
is significant only when the magnetic field of the battery is low. Finally, it is shown that the maximum extractable work is eight times the strength of the magnetic field of the battery. In conclusion, this study reveals the significance of the internal parameters of the XYZ spin system
from the quantum battery perspective.
|
http://arxiv.org/abs/2409.18038v1 | 20240926164253 | MMDVS-LF: A Multi-Modal Dynamic-Vision-Sensor Line Following Dataset | [
"Felix Resch",
"Mónika Farsang",
"Radu Grosu"
] | cs.RO | [
"cs.RO"
] |
FreeEdit: Mask-free Reference-based
Image Editing with Multi-modal Instruction
Runze He,
Kai Ma,
Linjiang Huang,
Shaofei Huang,
Jialin Gao,
Xiaoming Wei,
Jiao Dai,
Jizhong Han,
Si Liu
Runze He, Shaofei Huang, Jiao Dai, and Jizhong Han are with Chinese Academy of Sciences Institute of Information Engineering. Email: {hrz010109,nowherespyfly}@gmail.com, {daijiao,hanjizhong}@iie.ac.cn.
Linjiang Huang and Si Liu are with the Institute of Artificial Intelligence, Beihang University. Email: [email protected], [email protected].
Kai Ma, Jialin Gao, Xiaoming Wei are with Meituan, Email: {makai20,gaojialin04,weixiaoming}@meituan.com.
==================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
Dynamic Vision Sensors (DVS) offer a unique advantage in control applications due to their high temporal resolution and asynchronous event-based data. Still, their adoption in machine learning algorithms remains limited. To address this gap and promote the development of models that leverage the specific characteristics of DVS data, we introduce the Multi-Modal Dynamic-Vision-Sensor Line Following dataset (MMDVS-LF). This comprehensive dataset is the first to integrate multiple sensor modalities, including DVS recordings, RGB video, odometry, and Inertial Measurement Unit (IMU) data, from a small-scale standardized vehicle. Additionally, the dataset includes eye-tracking and demographic data of drivers performing a Line Following task on a track. With its diverse range of data, MMDVS-LF opens new opportunities for developing deep learning algorithms and conducting data science projects across various domains, supporting innovation in autonomous systems and control applications.
§ INTRODUCTION
Dynamic Vision Sensors (DVS) are an emerging visual-sensing technology, providing high-frequency, asynchronous per-pixel intensity-change events instead of full-image frames at fixed intervals.
The events provide a sparse representation of the observed scene, and modern sensors can achieve a per-pixel update rate of up to 10 kHz.
In this paper, we introduce a multi-modal DVS dataset, for a simple Line Following task, in a simplified environment. The aim is to encourage the development of novel, event-based neural-network theories, for event-based vision.
Currently, there are two approaches to applying machine learning (ML) methods to event-based data:
* Converting events in a certain time range to a frame representation, making it usable to a wide range of existing ML techniques
* Fully utilizing the sparse nature of DVS data using Spiking Neural Networks (SNNs)
While each frame representation requires a different trade-off between losing temporal information and requiring a lot of storage and processing time, SNN simulations perform poorly on classic von Neumann architectures. As a consequence, they usually require specialized hardware <cit.>.
State-of-the-art datasets for autonomous driving with DVS sensors, such as DDD17 <cit.> or its successor DDD20 <cit.>, offer recordings of various driving scenarios.
While this enables networks to generalize well, developing new ML methods on datasets created for tasks as complex as street driving, is challenging.
Even when only using a subset of the datasets, the environment is still very diverse and may contain observations not relevant to the task at hand.
The main challenge with datasets for complex tasks is that it is difficult to determine whether a potential new artificial-neural-network (ANN) architecture fails to optimize due to a lack of hyperparameter tuning or due to a flaw in the novel ML theory.
We strongly believe that a dataset with reduced complexity could help to combat this issue.
This paper introduces MMDVS-LF, a multi-modal DVS dataset for the Line Following task, recorded in a simplified environment.
In our simple Line Following task, the agent is equipped with a visual sensor, usually aimed at the floor, as the primary sensory input.
The agent has to synthesize movement commands, to remain on a line marked on the floor, while continuously moving forward on that line.
In addition to the visual input, the agent can receive additional information, such as inertial measurements or odometry data.
The data representation mentioned above must balance temporal detail and representation size for ML approaches that do not use the DVS event stream directly.
Typical representations <cit.> for ANNs try to capture the input data in a fixed-size format, as most architectures require fixed-sized inputs.
These representations provide formats similar to classical video frames for ANNs, to utilize established architectures by aggregating events in a specific time range.
Some examples are event frames, which store the polarity of the last event per pixel; time surfaces, which store the last timestamps per pixel; and event tensors, which can represent multiple events per pixel, by further discretizing the time and aggregating events in those sub-steps.
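As a rough illustration of how such fixed-size representations can be built from a raw event stream, the sketch below accumulates (timestamp, x, y, polarity) tuples into an event frame and a time surface; the resolution and the event-tuple layout are assumptions made for illustration rather than the dataset's reference implementation.

```python
import numpy as np

# Illustrative sketch: build an event frame and a time surface from a DVS stream
# given as (timestamp, x, y, polarity) tuples with polarity in {-1, +1}.
H, W = 128, 256  # assumed example resolution

def to_event_frame_and_time_surface(events):
    event_frame = np.zeros((H, W), dtype=np.int8)      # polarity of the last event per pixel
    time_surface = np.zeros((H, W), dtype=np.float64)  # timestamp of the last event per pixel
    for t, x, y, p in events:
        event_frame[y, x] = p
        time_surface[y, x] = t
    return event_frame, time_surface

events = [(0.001, 10, 20, +1), (0.002, 10, 20, -1), (0.004, 30, 40, +1)]
frame, surface = to_event_frame_and_time_surface(events)
```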
MMDVS-LF consists of recordings from human drivers performing the Line Following task with F1Tenth <cit.> cars (standardized small-scale cars), in a simplified environment.
We record: (1) the DVS event stream, (2) RGB-D frames, (3) IMU measurements, (4) driving inputs, and (5) eye-tracking data of the human drivers.
We recorded approximately 401 GB of raw data, from which we generated datasets with different resolutions and frequencies.
All generated datasets remain below 15 GB in compressed size, and contain DVS time surface and event frame data, IMU measurements, and driving inputs.
Due to its compact size, MMDVS-LF is easy to use and offers many application possibilities.
This paper also demonstrates training established ANNs for a steering-prediction task, based on time-surface data from the dataset.
From the data collection and pre-processing point of view, we first give details of the recording procedure and processing pipeline for synchronizing and aligning the different modalities. Second, we describe our scaling methodology to scale down the DVS event data.
Based on MMDVS-LF, new ML architectures can be developed that fully utilize the sparse and asynchronous nature of DVS event streams. Moreover, the unique eye-tracking data also allows verifying ANNs by comparing their saliency information with human attention.
In summary, our contributions in this paper are as follows:
* MMDVS-LF, a dataset for a simple task with multiple resolutions, modalities, and frequencies.
* A method for collecting, synchronizing, and aligning multi-modal DVS datasets
* Potential use case for control application, showing how to use it with convolutional neural networks and those in combination with recurrent neural networks to take advantage of the temporal nature of the task.
We provide links to the dataset files and contact information for access to the raw data at https://github.com/CPS-TUWien/mmdvshttps://github.com/CPS-TUWien/mmdvs.
§ RELATED WORK
First, we overview existing DVS datasets and compare those to our MMDVS-LF dataset. Then, we summarize the tasks defined from DVS data with deep learning solutions in existing literature.
§.§ DVS Datasets
Table <ref> summarizes DVS datasets, which contain recordings for automotive applications for various tasks.
The datasets in the first section of the table are designed for computer vision tasks, such as detection or visual reconstruction tasks, and contain no driving commands.
EventVOT <cit.>, FELT <cit.>, and Prophesee's 1MP Automotive dataset <cit.> are designed for detection tasks, offering either raw event data or polarity-separated event frames and classified object boxes.
Prophesee's dataset exclusively contains traffic scenarios with various lighting conditions and traffic volume.
It has been annotated by reconstructing frames from events and creating the bounding boxes using an established detection algorithm.
EventVOT and FELT contain recordings of various situations, including automotive scenarios and have been annotated manually.
EventVOT and the 1MP Automotive dataset both use Prophesee's EVK3 DVS sensor. This sensor records visual events at 1280 by 720 pixels with a maximum event rate per pixel of 10 kHz.
FELT uses DVS346, a combined sensor that records DVS events and RGB pixel data on the same chip.
This results in almost identical optical frames for event-based and RGB data.
MVSEC <cit.> and DSEC <cit.> are two datasets for 3D reconstruction and depth estimation using two DVS sensors and 3D LIDARs for ground truth depth data.
They also include two frame-based cameras and inertial measurement units.
While MVSEC only provides grayscale frames, DSEC provides RGB frames in the dataset.
Both datasets use recorded ground-truth data as labels for ML approaches.
Vivid++ <cit.> is a dataset recorded for Visual SLAM with some recordings from automotive scenarios.
It includes modalities similar to MVSEC and DSEC but with only one DVS and RGB sensor.
Vivid++ obtains ground truth data from sensors in use during recording.
The second section of Table <ref> lists datasets designed for learning control tasks.
While there appears to be a larger number of computer vision datasets, the number of datasets for control tasks is limited.
Moeys et al. <cit.> recorded a dataset for following a target and manually added high-level commands to reach the target.
As high-level commands, they used a direction (left, center, right) in which the target vehicle is located or "Not detected" to indicate that the target had not been detected yet.
The available dataset contains low-resolution DVS event data and the manually annotated action.
DDD17 <cit.> and DDD20 <cit.> comprise multiple recordings of manual driving on public roads in various settings and situations.
DDD17 contains approximately 12 hours of recordings, while DDD20 extends it with an additional 39 hours, resulting in 51 hours.
Both include DVS and RGB data, driving commands, and vehicle telemetry data, such as speed, control position, and the state of various other vehicle components.
§.§ Benchmark Tasks
Many deep learning techniques have been applied to work with DVS data <cit.>. For optical flow estimation and object recognition tasks, methods such as the Synaptic Kernel Inverse Method (SKIM) <cit.>, hierarchical Spiking Neural Networks <cit.>, <cit.>, and LSTM variants <cit.> have been utilized. Building on Convolutional Neural Networks (CNNs) <cit.>, ResNet architectures <cit.> in <cit.> and EV-FlowNet <cit.> were proposed to learn from event-based inputs. Additionally, <cit.> explored depth reconstruction using unsupervised learning with the Evenly-Cascaded Convolutional Network (ECN).
However, previous work on control tasks using machine learning algorithms is as limited as the available control datasets. <cit.> employs CNNs to predict control commands for four classes of robot movements based on DVS data. This approach restricts the robot's controllability to discrete values. A setup more similar to our work is described in <cit.>, where ResNet architectures are used on event frames to predict steering angles. In contrast, we aim to explore a broader range of network architectures by employing not only pure CNN-based solutions but also those incorporating recurrent networks.
§ DATASET
In this section, we describe the recording setup, the dataset annotation, the different formats of the MMDVS-LF dataset we provide, and statistical information.
§.§ Recording
We recorded the dataset on 1:10 scale racecars, based on the F1Tenth autonomous racing cars lecture by the University of Pennsylvania <cit.>.
F1Tenth cars use chassis of commercially available 1:10 model racecars and are equipped with a computing platform, motor electronics, and sensors for environment perception.
The sensors typically include a Hokuyo UST-10LX 270° 2D Lidar <cit.> sensor and inertial measurement units.
We use the Robot Operating System (ROS) <cit.> to run control software for the racecar.
We mounted an Intel realsense i435 in front of a Sony Prophesee IMX636 dynamic vision sensor (DVS) for the recording.
The RGB video of the realsense camera is streamed to a screen in front of a human driver, who can control the car with a steering wheel and pedals.
All other data streams, including driving commands and depth data, are recorded on the car for later processing.
In addition to the driving data, we record eye-tracking (ET) data of the human drivers using a VPS 19 <cit.> ET system.
We use ArUco <cit.> markers displayed on the screen showing the video to transform the ET video to the RGB stream.
As we were streaming the RGB video to the control station, we recorded that stream separately to record the same stream the driver sees, including, for example, camera artifacts.
The remaining data is recorded with tooling from the ROS ecosystem, which includes timestamps for each recorded datum.
We used pulsed visual signals generated by LEDs observed by the DVS sensor and the RGB camera to synchronize the ROS recording and the RGB and ET streams.
For each recording, we gave the human driver a few minutes to get comfortable with the controls and the task before recording them driving in their training direction.
After approximately four minutes, we interrupted the recording, turned the car around, and let the drivers drive in the opposite direction for another four minutes.
In addition to recording their driving, we asked participants to complete a consent form and a demographic questionnaire. This questionnaire collected their age, gender, country of origin and residence, and health details, including any chronic illnesses, visual impairments, or conditions affecting their vision. We also gathered information about their driving experience, including their length and frequency, professional or racing experience, prior experience with driving F1Tenth cars, comfort level with new technology, and whether they experience motion sickness while driving.
Anonymized participants' data, including the mapping of the recordings to a driver, is available in the raw data.
Figure <ref> displays the distribution of selected demographic data.
For the Line Following task, we had seven participants, of whom six were born and obtained their driver's licenses in a country in Western Europe and one in North America.
We had six male participants and one female participant, with an age distribution peaking at 25-30, including participants up to 35-40.
One participant reported having no or less than one year of experience.
Another participant reported 3-5 years of experience; two reported 6-10 years, and three more than ten years.
Three participants reported that they have a visual impairment or use a visual aid for driving.
Only one participant reported having a chronic illness, which impairs their driving skills, and one participant reported being a professional driver.
§.§ Annotation
We manually annotated the raw data to obtain sections of the recordings with desired behavior.
All sections where the line on the floor is visible in the bottom row of pixels and where the driver managed to stay on or return to the line without losing it were considered desired behavior.
This extended acceptance leads to a broader range of recorded situations, which should also allow learning-based algorithms to learn recovering behaviors.
We also labeled possible sections that contain objects that have no direct influence on the Line Following task, but might interfere with computer vision applications.
Examples are insects detected by the DVS only, and humans standing on or close to the line.
The latter occurred in some recordings at the end to mark the end of the recording.
Due to differing lighting conditions in the recording area, the infrared lasers of the Intel realsense's active depth estimation system were visible in some sections of the recordings.
They introduced noise into the dataset recording, so we removed the areas with heavy noise from the generated DVS dataset.
Some sections contained less noise, which we included in a separate dataset for more robust training.
We derive the action annotations from the human drivers' driving commands and include observations from some sensors.
Other sensors, such as LIDAR, were omitted from the dataset as they are irrelevant to the Line Following task.
§.§ Format
From the raw data recorded in Sec. <ref> and the annotations, we generated frame-based datasets with frequencies of 60 Hz, 100 Hz, and 120 Hz and image resolutions of 128x256 and 256x512.
The dataset with 60 Hz includes RGB images, as we use a camera with 60 FPS for recording.
We omitted the RGB images for datasets with higher frequencies to avoid using poor interpolation results.
We treat events' polarity separately for this dataset, generating two channels, one for each polarity.
To scale down the DVS data, we first crop the sensor area to a power of two and use virtual macro pixels.
Each macro pixel stores an internal state, which counts increasing and decreasing events, with events of opposing polarity canceling each other out.
Once that internal state exceeds the number of pixels in the macro pixel, the macro pixel generates an event with the respective polarity.
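The following sketch illustrates this macro-pixel accumulation rule; the downscaling factor, the cropped resolution, and the resetting of the internal state after an emitted event are assumptions made for illustration.

```python
import numpy as np

# Sketch of the macro-pixel downscaling rule described above.
FACTOR = 4                           # assumed downscaling factor per axis
THRESHOLD = FACTOR * FACTOR          # number of physical pixels per macro pixel
H, W = 512, 1024                     # assumed cropped sensor resolution
state = np.zeros((H // FACTOR, W // FACTOR), dtype=np.int32)

def feed_event(x, y, polarity, emit):
    """Accumulate one DVS event; emit a macro-pixel event once |state| exceeds the threshold."""
    mx, my = x // FACTOR, y // FACTOR
    state[my, mx] += polarity                      # opposing polarities cancel out
    if abs(state[my, mx]) > THRESHOLD:
        emit(mx, my, int(np.sign(state[my, mx])))
        state[my, mx] = 0                          # assumed reset after emitting

feed_event(17, 42, +1, lambda x, y, p: print("macro-pixel event", x, y, p))
```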
We generate time surfaces and event frames from the scaled-down event stream, as described in <cit.>.
We also provide different sets of masks, which include filters and an additional mode that removes events of opposite polarity if a more recent event occurs.
This mode performed better during initial tests with classic-control approaches, allowing algorithms to interpret only the most recent data.
We use neighborhood filtering to remove events from a frame if less than two other events occur in the adjacent pixels.
After generation, we store the dataset in compressed archives, storing each frame as a separate file.
Storing each frame in separate files allows splitting and rearranging the datasets arbitrarily.
Table <ref> lists the arrays present in the archive and their values.
We also include index files containing continuous sections of recordings to sample continuous sections from the dataset.
All arrays represent event frames of the dataset.
The array might contain unfiltered arbitrary data, which must be combined with one array.
The action data consists of the steering angle and either speed or acceleration commands. The observation data provides data from the IMU sensor, including acceleration in the (x,y,z) directions, angular velocity around these axes, and the orientation quaternion for (x,y,z,w) components. In addition to this, the observation data also includes odometry information, such as pose estimation (x,y,z), orientation quaternion for (x,y,z,w), and velocity values along the (x,y,z) axes.
§.§ Statistics
We generate twelve datasets with time surfaces and event frames, actions, and observations, based on the different resolutions, frequencies, and the inclusion or exclusion of sections with a small amount of noise.
While the representations differ in resolution and generation frequency, the underlying data is the same, and the resulting datasets have the same action distributions.
The analysis in this section was performed on the 256x512@60Hz dataset, and light noise sections were included.
Other datasets, especially the ones without the light noise sections, might differ slightly.
The generated datasets span 38 minutes, including noisy sections, or 27 minutes without those sections.
Depending on the frequency, this leads to datasets of 96,161 to 272,838 frames.
Table <ref> gives an overview of the size of the generated datasets and the compressed frames size sum.
Fig. <ref> shows the distributions of the actions taken by the human drivers during the desirable driving sections.
The steering angle's distribution is symmetric with the mean at -0.006 rad, as seen in Fig. <ref>. The standard deviation is 0.182 rad, which is expected, as large sections of the track are straight.
As the cars were comparably heavy, no braking was necessary, and only positive acceleration inputs (Fig. <ref>) were recorded. The acceleration inputs have a mean of 0.586 m/s^2 and a standard deviation of 0.138 m/s^2.
Fig. <ref> shows that a large portion of the driving occurred within the range of 0.6-2.0 m/s, with a peak at 0.8-1.0 m/s. This peak, and the fact that most other observations have a higher speed, allows training neural networks to predict only the steering angle, further simplifying the network architectures.
We recorded the MMDVS-LF's data over about 8 hours, including instructions for the drivers, training, setup time, and technically required breaks, such as changing batteries.
The annotation of the dataset took approximately four weeks, including regular updates of the annotation tools, as we discovered issues with our tooling or errors in annotation.
Generating the dataset with our tooling takes approximately 36 hours on an Intel(R) Xeon(R) Gold 6130 machine with 20 CPU cores and 64 GB of RAM.
§ BENCHMARK
The DVS data holds many promising directions for deep learning research. One can investigate which representation of the data fits better with existing machine learning algorithms or develop new ANN architectures that suit the unique characteristics of DVS data.
§.§ Setups from the dataset
As summarized in Sec. <ref>, most machine learning work on DVS data focused on optical flow and object recognition tasks. Here, we aim to highlight various other possibilities our MMDVS-LF dataset provides, such as control tasks (regression), driver identification (classification), and other data science tasks.
Fig. <ref> summarizes possible training setups using the dataset, including regression tasks predicting the steering angle with velocity or acceleration values based on various combinations of inputs. For classification tasks, one can consider the different drivers completing the Line Following task as class labels and use the available input data (excluding the demographic information) to make the prediction. For data science projects, one can explore the correlation between driving characteristics and demographic information, or perform fault detection from the various sensor readings.
§.§ Steering prediction from time surfaces
Here, we present a use case for the MMDVS-LF dataset of 128x256@100Hz, where the goal is to train neural network models to predict the steering angle based on the time surface data from the DVS sensor. As pointed out in Sec. <ref>, most of the velocity values fall into a small range, allowing us to simplify the task by treating the speed as constant. The pipeline is illustrated in Fig. <ref>. We provide the code of a TensorFlow dataloader pipeline and training scripts.
We trained a convolutional-neural-network (CNN) front-end <cit.> with either a fully-connected dense layer or a Recurrent Neural Network (RNN) as the back-end policy. As the RNN, we used either a fully-connected simple RNN <cit.>, a Minimal Gated Unit (MGU) <cit.>, a Gated Recurrent Unit (GRU) <cit.>, a Long Short-Term Memory (LSTM) <cit.>, or a Liquid Time-Constant Network (LTC) <cit.>. In these architectures, the CNN extracts visual information, while the RNN component leverages the sequential nature of the task. For configuring the CNN layers, we adapted the settings from the convolutional head used in <cit.>, which was designed to explore the task of curvature prediction based on RGB images using a combination of CNNs and bio-inspired recurrent models. This adaptation is appropriate because, at a high level, our task is similar from an ML perspective.
We compute the mean squared error (MSE) between the predicted steering angle and the ground truth values over the sequences, scaling the errors by 10^4 for better readability. The data is split into training, validation and test sets with a ratio of 60%/20%/20%. We did hyperparameter tuning for the learning rate in the range of {0.0001, 0.001, 0.01}. Based on the best validation loss, we train all networks using a learning rate of 0.0001. During the training, we use the AdamW optimizer <cit.> with a cosine weight decay of 10^-6. We run the training for 50 epochs and save the final models with the best validation loss.
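A minimal sketch of such a CNN+RNN architecture in TensorFlow/Keras is given below; the filter counts, GRU width, and sequence length are placeholders rather than the exact configuration used for the reported results, and in older TensorFlow versions AdamW may live under tf.keras.optimizers.experimental.

```python
import tensorflow as tf

# Sketch of a CNN+GRU steering-angle regressor in the spirit of the benchmark;
# layer sizes and the sequence length are illustrative placeholders.
SEQ_LEN, H, W, C = 16, 128, 256, 2   # time-surface sequences with two polarity channels

cnn = tf.keras.Sequential([
    tf.keras.layers.Conv2D(24, 5, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(36, 5, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(48, 3, strides=2, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
])

model = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ_LEN, H, W, C)),
    tf.keras.layers.TimeDistributed(cnn),                        # per-frame visual features
    tf.keras.layers.GRU(64, return_sequences=True),              # temporal back-end
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(1)),   # steering angle per step
])

model.compile(
    optimizer=tf.keras.optimizers.AdamW(learning_rate=1e-4, weight_decay=1e-6),
    loss="mse",
)
```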
The results of these experiments are shown in Table <ref>.
We found that all architectures were able to adapt to the task. Our results demonstrate that more sophisticated architectures incorporating recurrent networks generalized better on the MMDVS-LF dataset, leading to smaller loss values.
Here, we presented a simple setup from our MMDVS-LF dataset with a wide range of deep-learning approaches. This setup can be extended by using additional available information, such as stacking RGB channels to DVS as extra input channels to the CNN part, resulting in 5 channels (3 channels from RGB, two channels from DVS) in total, and mapping other sensor information of IMU and odometry to the dense or recurrent part. One can extend the output to making sequential predictions not only on the steering angle but also on the velocity or acceleration commands. In this case, one should adapt the loss function to L = w_sMSE(y_s,ŷ_s) + w_vMSE(y_v,ŷ_v) to properly scale the mean squared errors of the used commands between the ground truth labels y_s,y_v and predictions ŷ_s, ŷ_v by the corresponding weights w_s,w_v, for the steering and velocity, respectively.
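A direct translation of the weighted loss above into code could look as follows; the weights w_s and w_v are placeholders that would need to be tuned to the scaling of the chosen commands.

```python
import tensorflow as tf

# Sketch of the weighted two-term loss L = w_s * MSE(steering) + w_v * MSE(velocity);
# the weights are illustrative placeholders.
w_s, w_v = 1.0, 0.5
mse = tf.keras.losses.MeanSquaredError()

def combined_loss(y_true, y_pred):
    # y_true and y_pred are assumed to have shape (..., 2): index 0 steering, index 1 velocity
    return (w_s * mse(y_true[..., 0], y_pred[..., 0])
            + w_v * mse(y_true[..., 1], y_pred[..., 1]))
```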
§ CONCLUSIONS
We introduced MMDVS-LF, a multimodal, compact, and easy-to-use dataset primarily intended for basic research, focusing on novel deep learning solutions leveraging sparse DVS data for control applications. The paper described the methods for recording experiments and constructing the dataset. We also showed several use cases of our dataset and demonstrated the power of recurrent neural networks predicting steering commands from time surface data.
In the future, we aim to explore DVS-specific control solutions and verify the attention maps of trained neural networks with the recorded eye-tracking data. The relatively inexpensive standardized platform of F1Tenth cars holds the potential to deploy end-to-end machine learning solutions on hardware, making it accessible to universities, research institutions, and the general public to test their solution developed and trained on the MMDVS-LF dataset.
§ ACKNOWLEDGMENT
We thank Mihaela-Larisa Clement for helping with the data collection and the participants in our recordings.
| Dynamic Vision Sensors (DVS) are an emerging visual-sensing technology, providing high-frequency asynchronous per-pixel intensity-change events, instead of full-image frames, at fixed intervals.
The events provide a sparse representation of the observed scene, and modern sensors can achieve a per-pixel update rate of up to 10 kHz.
In this paper, we introduce a multi-modal DVS dataset, for a simple Line Following task, in a simplified environment. The aim is to encourage the development of novel, event-based neural-network theories, for event-based vision.
Currently, there are two approaches to applying machine learning (ML) methods to event-based data:
* Converting events in a certain time range to a frame representation, making it usable to a wide range of existing ML techniques
* Fully utilizing the sparse nature of DVS data using Spiking Neural Networks (SNNs)
While each frame representation requires a different trade-off between losing temporal information, or requiring a lot of storage and processing time, SNN simulations perform poorly on classic von Neumann architectures. As a consequence, they usually use specialized hardware<cit.>.
State-of-the-art datasets for autonomous driving with DVS sensors, such as DDD17 <cit.> or its successor DDD20 <cit.>, offer recordings of various driving scenarios.
While this enables networks to generalize well, developing new ML methods on datasets created for tasks as complex as street driving, is challenging.
Even when only using a subset of the datasets, the environment is still very diverse and may contain observations not relevant to the task at hand.
The main challenge with datasets for complex tasks is that it is difficult to determine whether a potential new artificial-neural-network (ANN) architecture, fails to optimize due to a lack of hyperparameter tuning, or a faulty novel ML theory.
We strongly believe that a dataset with reduced complexity could help to combat this issue.
This paper introduces MMDVS-LF, a multi-modal DVS dataset for the Line Following task, recorded in a simplified environment.
In our simple Line Following task, the agent is equipped with a visual sensor, usually aimed at the floor, as the primary sensory input.
The agent has to synthesize movement commands, to remain on a line marked on the floor, while continuously moving forward on that line.
In addition to the visual input, the agent can receive additional information, such as inertial measurements or odometry data.
The data representation mentioned above, must balance temporal details, and representation size for ML approaches, that do not use the DVS event stream directly.
Typical representations <cit.> for ANNs try to capture the input data in a fixed-size format, as most architectures require fixed-sized inputs.
These representations provide formats similar to classical video frames for ANNs, to utilize established architectures by aggregating events in a specific time range.
Some examples are event frames, which store the polarity of the last event per pixel; time surfaces, which store the last timestamps per pixel; and event tensors, which can represent multiple events per pixel, by further discretizing the time and aggregating events in those sub-steps.
MMDVS-LF consists of recordings from human drivers performing the Line Following task with F1Tenth <cit.> cars (standardized small-scale cars), in a simplified environment.
We record: (1) the DVS event stream, (2) RGB-D frames, (3) IMU measurements, (4) driving inputs, and (5) eye-tracking data of the human drivers.
We recorded approximately 401 GB of raw data, from which we generated datasets with different resolutions and frequencies.
All generated datasets remain below 15 GB in compressed size, and contain DVS time surface and event frame data, IMU measurements, and driving inputs.
Due to its compact size, MMDVS-LF is easy to use and offers many application possibilities.
This paper also demonstrates training established ANNs for a steering-prediction task, based on time-surface data from the dataset.
From the data collection and pre-processing point of view, we first give details of the recording procedure and processing pipeline for synchronizing and aligning the different modalities. Second, we describe our scaling methodology to scale down the DVS event data.
Based on MMDVS-LF, new ML architectures can be developed that fully utilize the sparse and asynchronous nature of DVS event streams. Moreover, the unique eye-tracking data also allows verifying ANNs by comparing their saliency information with human attention.
In summary, our contributions in this paper are as follows:
* MMDVS-LF, a dataset for a simple task with multiple resolutions, modalities, and frequencies.
* A method for collecting, synchronizing, and aligning multi-modal DVS datasets
* Potential use case for control application, showing how to use it with convolutional neural networks and those in combination with recurrent neural networks to take advantage of the temporal nature of the task.
We provide links to the dataset files and contact information for access to the raw data at | First, we overview existing DVS datasets and compare those to our MMDVS-LF dataset. Then, we summarize the tasks defined from DVS data with deep learning solutions in existing literature.
§.§ DVS Datasets
Table <ref> summarizes DVS datasets, which contain recordings for automotive applications for various tasks.
The datasets in the first section of the table are designed for computer vision tasks, such as detection or visual reconstruction tasks, and contain no driving commands.
EventVOT <cit.>, FELT <cit.>, and Prophesee's 1MP Automotive dataset <cit.> are designed for detection tasks, offering either raw event data or polarity-separated event frames and classified object boxes.
Prophesee's dataset exclusively contains traffic scenarios with various lighting conditions and traffic volume.
It has been annotated by reconstructing frames from events and creating the bounding boxes using an established detection algorithm.
EventVOT and FELT contain recordings of various situations, including automotive scenarios and have been annotated manually.
EventVOT and the 1MP Automotive dataset both use Prophesee's EVK3 DVS sensor. This sensor records visual events at 1280 by 720 pixels with a maximum event rate per pixel of 10 kHz.
FELT uses DVS346, a combined sensor that records DVS events and RGB pixel data on the same chip.
This results in almost identical optical frames for event-based and RGB data.
MVSEC <cit.> and DSEC <cit.> are two datasets for 3D reconstruction and depth estimation using two DVS sensors and 3D LIDARs for ground truth depth data.
They also include two frame-based cameras and inertial measurement units.
While MVSEC only provides grayscale frames, DSEC provides RGB frames in the dataset.
Both datasets use recorded ground-truth data as labels for ML approaches.
Vivid++ <cit.> is a dataset recorded for Visual SLAM with some recordings from automotive scenarios.
It includes modalities similar to MVSEC and DSEC but with only one DVS and RGB sensor.
Vivid++ obtains ground truth data from sensors in use during recording.
The second section of Table <ref> lists datasets designed for learning control tasks.
While there appears to be a larger number of computer vision datasets, the amount of datasets for control tasks is limited.
Moeys et al. <cit.> recorded a dataset for following a target and manually added high-level commands to reach the target.
As high-level commands, they used a direction (left, center, right) in which the target vehicle is located or "Not detected" to indicate that the target had not been detected yet.
The available dataset contains low-resolution DVS event data and the manually annotated action.
DDD17 <cit.> and DDD20 <cit.> comprise multiple recordings of manual driving on public roads in various settings and situations.
DDD17 contains approximately 12 hours of recordings, while DDD20 extends it with an additional 39 hours, resulting in 51 hours.
Both include DVS and RGB data, driving commands, and vehicle telemetry data, such as speed, control position, and the state of various other vehicle components.
§.§ Benchmark Tasks
Many deep learning techniques have been applied to work with DVS data <cit.>. For optical flow estimation and object recognition tasks, methods such as the Synaptic Kernel Inverse Method (SKIM) <cit.>, hierarchical Spiking Neural Networks <cit.>, <cit.>, and LSTM variants <cit.> have been utilized. Building on Convolutional Neural Networks (CNNs) <cit.>, ResNet architectures <cit.> in <cit.> and EV-FlowNet <cit.> were proposed to learn from event-based inputs. Additionally, <cit.> explored depth reconstruction using unsupervised learning with the Evenly-Cascaded Convolutional Network (ECN).
However, previous work related to control tasks using machine learning algorithms, the same way as available datasets for control, is limited. <cit.> employs CNNs to predict control commands for four classes of robot movements based on DVS data. This approach restricts the robot's controllability to discrete values. A setup more similar to our work is described in <cit.>, where ResNet architectures are used for event frames to predict steering angles. In contrast, we aim to explore a broader range of network architectures by employing not only pure CNN-based solutions but also those incorporating recurrent networks. | null | null | null | null |
http://arxiv.org/abs/2409.17596v1 | 20240926072238 | Subjective and Objective Quality-of-Experience Evaluation Study for Live Video Streaming | [
"Zehao Zhu",
"Wei Sun",
"Jun Jia",
"Wei Wu",
"Sibin Deng",
"Kai Li",
"Ying Chen",
"Xiongkuo Min",
"Jia Wang",
"Guangtao Zhai"
] | cs.MM | [
"cs.MM",
"cs.AI",
"eess.IV"
] |
Subjective and Objective Quality-of-Experience Evaluation Study for Live Video Streaming
Zehao Zhu, Wei Sun, Jun Jia,Wei Wu, Sibin Deng, Kai Li, Ying Chen, Xiongkuo Min, Jia Wang, Guangtao Zhai
September 28, 2024
==================================================================================================================================================================================================================================================================================
§ ABSTRACT
In recent years, live video streaming has gained widespread popularity across various social media platforms. Quality of experience (QoE), which reflects end-users' satisfaction and overall experience, plays a critical role for media service providers to optimize large-scale live compression and transmission strategies to achieve a perceptually optimal rate-distortion trade-off. Although many QoE metrics for video-on-demand (VoD) have been proposed, there remain significant challenges in developing QoE metrics for live video streaming. To bridge this gap, we conduct a comprehensive study of subjective and objective QoE evaluations for live video streaming. For the subjective QoE study, we introduce the first live video streaming QoE dataset, TaoLive QoE, which consists of 42 source videos collected from real live broadcasts and 1,155 corresponding distorted ones degraded due to a variety of streaming distortions, including conventional streaming distortions such as compression and stalling, as well as live streaming-specific distortions like frame skipping, variable frame rate, etc. Subsequently, a human study was conducted to derive subjective QoE scores of videos in the TaoLive QoE dataset. For the objective QoE study, we benchmark existing QoE models on the TaoLive QoE dataset as well as publicly available QoE datasets for VoD scenarios, highlighting that current models struggle to accurately assess video QoE, particularly for live content. Hence, we propose an end-to-end QoE evaluation model, Tao-QoE, which integrates multi-scale semantic features and optical flow-based motion features to predict a retrospective QoE score, eliminating reliance on statistical quality of service (QoS) features. Extensive experiments demonstrate that Tao-QoE outperforms other models on the TaoLive QoE dataset, six publicly available QoE datasets, and eight user-generated content (UGC) video quality assessment (VQA) datasets, showcasing the effectiveness and feasibility of Tao-QoE.
quality of experience, optical flow, video quality assessment, streaming.
§ INTRODUCTION
With the rapid growth of mobile devices and advancements in wireless networks in recent years, people can now watch video content on mobile devices anywhere and anytime. Streaming media technologies play an important role in ensuring that users can view such content smoothly and in real-time without waiting for complete file downloads. Specifically, the streaming media content captured by the cameras or the third-party streaming media content is encoded and segmented into data fragments. These data fragments are then transmitted to the server using appropriate transport protocols such as HTTP, HLS, RTMP, or RTSP. Users utilize client devices (e.g., mobile phones, tablets, computers, network TVs) to send requests over the Internet for accessing streaming media content. Upon receiving a client request, the server employs a content distribution network (CDN) to distribute the corresponding data fragment to the requesting client device. After decoding and rendering, the data is converted into audio and video content that the user can watch and listen to <cit.>. Video on Demand (VoD) and live streaming are two prevalent methods of streaming media technology. VoD delivers pre-recorded content that users can request and play back at any time, whereas live streaming involves real-time transmission and display of audio or video content over the Internet, ensuring synchronized delivery for viewers to experience events as they unfold.
Limited network resources and fluctuations in client networks can result in distortions, such as degradation of video quality and stalling events, leading to a decline in the end users' Quality of Experience (QoE).<cit.> Therefore, it is crucial for streaming media content providers to comprehend the factors that influence user QoE and allocate resources appropriately to enhance their satisfaction.<cit.> In the domain of video streaming media, Quality of Experience (QoE) is associated with numerous indicators. Among them, Video Quality Assessment (VQA) plays a pivotal role in perceiving visual quality. However, the user's QoE is highly susceptible to disruptions such as stalling events and bit rate switching caused by network fluctuations. These factors are not evaluated by conventional VQA methods. QoE represents a comprehensive metric that encompasses video quality along with other distortions like stalling events and quality switching.
In contrast to the well-established VoD and QoE industry, the research on live streaming QoE remains insufficient, primarily due to two key factors.
* Limited publicly available live video databases. Current QoE databases like LIVE-NFLX and WaterlooSQoE predominantly resemble VoD setups. Moreover, publicly available databases fail to accurately capture video stalling manifestations in live streaming scenarios, where network issues often result in unexpected fluctuations in frame rate and frame skipping. These distortions are illustrated in Fig. <ref>.
* Unsatisfactory QoE model mechanisms. Publicly available QoE models such as KSQI and GCNN-QoE heavily rely on statistical data (e.g., stalling time and location, bitrate), which are challenging to obtain beforehand in real-life situations, thus rendering them unsuitable for real-time live broadcast scenarios.
To address this issue, we have developed an extensive and authentic live broadcasting database known as the Tao Live QoE Database. We collect live videos from the Tao Live APP and artificially induce stalling events by manipulating the presentation time stamp (PTS) of the videos. It is important to note that our database encompasses various quality degradations commonly encountered in live broadcasts, including compression artifacts, stalling distortions, accelerated frame rates, and frame skipping. Furthermore, all videos in our database undergo rigorous subjective testing to obtain comprehensive retrospective QoE scores which are subsequently validated. Additionally, we introduce TAO-QoE, a pioneering deep learning-based approach capable of directly predicting QoE scores from video inputs without relying on supplementary statistics. This model performs feature extraction and fusion for assessing video presentation quality, quality switching dynamics, and occurrence of stalling events ultimately leading to retrospective QoE score predictions.
The main contributions of this work are summarized as follows:
* We establish a large-scale live video database. The study involved the collection of 42 high-quality videos, which were subsequently subjected to compression artifacts and stalling events by adjusting the Constant Rate Factor (CRF) parameters and presentation time stamp for each video frame. As a result, a total of 1,155 distorted live streaming videos were generated.
* We carry out a well-controlled subjective experiment. We invited 20 participants to take part in the subjective experiment, resulting in a total of 23,100 subjective annotations collected to generate the QoE scores for live videos.
* We propose TAO-QoE, a deep learning-based model for predicting Quality of Experience (QoE) in live video streaming. This model achieves optimal performance on public databases without the need for statistical information.
§ RELATED WORK
§.§ QoE Database
Over the past 15 years, numerous publicly available QoE databases have been developed to tackle QoE challenges. Table <ref> illustrates common QoE databases, including WaterlooSQoE database<cit.> and LIVE-NFLX<cit.>, which serve as comprehensive collections of multimedia content specifically designed for evaluating QoE in diverse multimedia applications. These databases encompass a wide range of multimedia stimuli, such as images and videos, spanning different resolutions, compression levels, and content types. They also incorporate intentionally impaired content to simulate various degradation scenarios like compression artifacts, rebuffering issues, and quality adaptation.
§.§ QoE Models
In early research, video Quality of Experience (QoE) was often determined based on a set of statistical features. These studies attempted to fit certain video transmission-related metrics into a mathematical formula to predict video QoE<cit.>. However, the video QoE is influenced by multiple factors, including presentation quality, smoothness, video quality switching, and video stuttering. These factors are closely related to users' viewing environments, personal preferences, and perceptual abilities. Therefore, relying solely on statistical features makes it difficult to capture users' subjective experiences, and more detailed consideration of user perception and evaluation is needed. In pursuit of a better evaluation of the impact of video presentation quality on the overall Video Quality of Experience (QoE), an increasing number of studies have embraced the integration of Visual Quality Assessment (VQA) within the QoE assessment framework. Depending on the availability of reference videos during the evaluation process, video quality assessment can be classified into three categories: full-reference(FR)<cit.>, reduced-reference(RR)<cit.>, and no-reference(NR) approaches<cit.>. Both Spiteri2016<cit.> and Bentaleb2016<cit.> regard the average bitrate of the video experienced by the user and the duration of the rebuffer events as the influencing factors of QoE. Duanmu et al. devised a QoE algorithm named SQI, which combines the FR VQA algorithm with video stalling quantification information to predict the QoE scores of videos<cit.>. In Video Assessment of Temporal Artifacts and Stalls
(Video ATLAS)<cit.>, Bampis et al. unify modeling of video presentation quality, stall-related features, and memory-related features of video. Subsequently, Duanmu et al. made improvements to the SQI algorithm and developed the KSQI algorithm, which takes video presentation quality (VMAF), rebuffering, and quality adaptation (switching between profiles) into consideration <cit.>. With the vigorous development of deep learning technology, more and more researchers apply convolutional neural networks (CNNs) and recurrent neural networks (RNNs) to the prediction of video QoE. GCNN-QoE <cit.> and DA-QoE <cit.> both perform feature extraction and fusion on statistical features, then use a GRU to process the features, and finally return the QoE score. DeSVQ <cit.> feeds the high-level spatio-temporal features extracted by a CNN and the low-level features measured by VQA to an LSTM in turn, and finally returns the QoE score. These three models share two common traits: they rely on statistical features and an RNN. In <cit.>, Pengfei Chen et al. constructed an end-to-end framework named TRR-QoE, which combines feature extraction, processing, and QoE prediction. In <cit.>, Chunyi Li et al. employ ResNet-50 for frame feature extraction, fuse statistics like resolution and rebuffering, and regress QoE using Support Vector Regression (SVR).
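As a generic illustration of the early statistics-driven formulations discussed at the start of this paragraph, a linear trade-off between average bitrate and rebuffering time can be written as follows; the weights are placeholders, and this is not the exact formulation of any specific cited method.

```python
# Generic illustration of a statistics-driven QoE score that weights average
# bitrate against rebuffering time; the weights are illustrative placeholders.
def simple_qoe(avg_bitrate_kbps, rebuffer_seconds, w_bitrate=1.0, w_rebuffer=3.0):
    return w_bitrate * (avg_bitrate_kbps / 1000.0) - w_rebuffer * rebuffer_seconds

print(simple_qoe(avg_bitrate_kbps=3500, rebuffer_seconds=1.2))
```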
§ LIVE STREAMING SCENE DATABASE CONSTRUCTION
§.§ Motivation:
Despite the abundance of QoE and VQA databases, these databases suffer from certain limitations: i) insufficient diversity in source videos, resulting in a lack of complex human interaction broadcasts; ii) stalling events are predominantly represented as repeated frames, often caused by network issues leading to uneven Presentation Time Stamp (PTS) distribution; iii) live broadcast scenarios typically involve brief rebuffering periods. However, state-of-the-art QoE and VQA databases like WaterlooSQoE-III and WaterlooSQoE-IV do not encompass stalling events lasting less than one second, which is a common occurrence in real live broadcasts. Furthermore, after such stalling events in live scenarios, there is often a transition to accelerated video playback characterized by an increased frame rate or frame skipping. These variations in frame rates are not addressed in publicly available QoE databases that usually maintain a fixed frame rate.
To address these challenges, we established the TaoLive QoE database, which encompasses a larger corpus of source videos and incorporates more authentic setups involving accelerated frame rate playback, frame skipping, and other related techniques. Additionally, we manipulated the PTS of video frames to accurately simulate stalling events, thereby closely resembling real-life streaming scenarios. Fig. <ref> illustrates the occurrence of stalling events, accelerated frame rate playback, and frame skipping in the TaoLive QoE database. The blue video frames represent the frames played according to the source video frame rate, while the red video frames depict the displayed frames during stalling events. Additionally, green video frames indicate fast playback (accelerated frame rate), and yellow video frames signify skipped frames due to prolonged stalling duration. A comparison between the TaoLive QoE database and other QoE databases, including the WaterlooSQoE databases <cit.> and LIVE-NFLX <cit.>, is shown in Table <ref>.
§.§ Database Construction
§.§.§ Source Video
We carefully selected 42 high-quality live videos encoded in H.264 from the Taobao Live app, encompassing various resolutions and frame rates. Each video has a duration of 10 seconds. To ensure optimal video performance, we excluded any videos with stall events. Specifically, we employed two resolutions (1080p and 720p) and three frame rates (20fps, 25fps, and 30fps), resulting in seven source videos for each combination of resolution and frame rate. In total, we collected a comprehensive set of 42 source videos.
§.§.§ Distortion added
The types of distortion we incorporated include compression, stalling events, and accelerated playback following a stall event. Due to the real-time nature of live broadcasting, once the stall event concludes, video playback resumes with certain frames being played at an accelerated rate. Frame skipping occurs when the duration of a stall event exceeds a specific threshold. The speed and duration of fast playback are generally determined by the buffer ratio on the playback side and the length of the stall event. To simulate videos with varying presentation qualities, we compressed these source videos using FFmpeg with Constant Rate Factor (CRF) values set to 15, 22, 27, 32, and 37. The 7 source videos for each frame rate are compressed based on the aforementioned 5 CRFs. Subsequently, we manually introduce stall events to these compressed videos. To ensure that no secondary compression occurs during the addition of stalling events, we utilize FFmpeg to modify the presentation time stamp (PTS) of the video in accordance with the designated stalling mode. This mode encompasses various combinations of stall event duration and frequency. The duration of a stall event is categorized into four levels: short (s) (0.5s or 1s), medium (m) (1.5s, 2s or 2.5s), long (l) (3s, 3.5s, 4s or 4.5s), and extra long (el) (5s, 5.5s or 6s). The maximum limit for stall event occurrences is set to 3. As depicted in Table <ref>, there are a total of 21 combinations observed. The acceleration rate (AR) applied to expedite video playback following the termination of a stall event is configured as 1.1, 1.25, 1.5, 1.75, and 2.25.
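For reference, the compression step can be reproduced along the following lines; the file names, the dropped audio stream, and the default encoder preset are illustrative assumptions and may differ from the exact settings used for the database.

```python
import subprocess

# Sketch of the CRF compression step with FFmpeg/libx264; paths are placeholders.
CRFS = [15, 22, 27, 32, 37]

for crf in CRFS:
    subprocess.run([
        "ffmpeg", "-y", "-i", "source.mp4",
        "-c:v", "libx264", "-crf", str(crf),
        "-an",                               # drop audio (assumption; adjust as needed)
        f"compressed_crf{crf}.mp4",
    ], check=True)
```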
A stall event is generated as follows. F = {f_1, f_2, ..., f_n} are all video frames of the compressed video. P = {p_1, p_2, ..., p_n} are the PTS values of all compressed video frames. n is the number of compressed video frames. L = {l_1, l_2, ..., l_m} are the time points at which the stall events are set to occur. T = {t_1, t_2, ..., t_m} are the durations of the stall events. m is the number of stall events. First, the index of the stall video frame is calculated according to the set time point of occurrence of the stall event and the video frame rate. The index of the stall video frame SF = {sf_1, sf_2, ..., sf_m} is given by
sf_j = l_j × framerate j = 1,2...,m
Secondly, the PTS delay D = {d_1,d_2,...,d_m} introduced by each stalling event, applied to all video frames after the corresponding stalling frame, is calculated by
d_j = t_j/timebase j = 1,2...,m
where timebase is the time base of the compressed video. The accumulated PTS delay of all frames of the compressed video, AD = {ad_1, ad_2, ..., ad_n}, is calculated as
ad_i =
0,                i ≤ sf_1
∑_{j=1}^{k} d_j,  sf_k < i ≤ sf_{k+1}
∑_{j=1}^{m} d_j,  i > sf_m
where i is the frame index of the compressed video. Adjustments to certain PTS values are then necessary to ensure smooth playback after a stall event: the PTS interval of fast-played video frames is reduced according to the predetermined acceleration rate, while the intervals of the remaining frames are left unchanged. The PTS interval is a constant stored in the FFmpeg AVPacket structure (denoted pkt.duration below), and frame index, PTS, and pkt.duration are related by PTS = index × pkt.duration. The number of accelerated frames following a stall event is determined by the duration of that event; it is the cumulative count of fast-played frames required to fully catch up with the live progress delayed by the stall. The total number of fast-played video frames QN = {qn_1,qn_2,...,qn_m} is given by
qn_j = t_j × AR × framerate/AR - 1 j = 1,2...,m
The PTS interval between the current and subsequent frames is reduced according to AR during fast playback, while the intervals of the remaining frames stay equal to pkt.duration; the resulting PTS sequence is denoted SP = {sp_1, sp_2,...,sp_n}. Finally, following Algorithm <ref>, we add the per-frame PTS delays AD = {ad_1, ad_2, ..., ad_n} to SP to obtain the PTS of the output stalled video.
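To make the PTS rewriting concrete, the following Python sketch (variable names are illustrative; this is not the exact script used to build the database) computes delayed and accelerated presentation timestamps for one video:

```python
import math

def rewrite_pts(n_frames, fps, timebase, stalls, ar):
    """Compute output PTS for a compressed video with simulated stalls.

    n_frames : number of frames in the compressed video
    fps      : source frame rate
    timebase : PTS time base (seconds per tick), e.g. 1/90000
    stalls   : list of (start_time_s, duration_s) stalling events
    ar       : acceleration rate used for catch-up playback (> 1)
    """
    pkt_duration = 1.0 / (fps * timebase)           # PTS ticks per frame
    sf = [int(start * fps) for start, _ in stalls]  # stalling frame indices
    d = [dur / timebase for _, dur in stalls]       # per-event PTS delay

    # accumulated delay for every frame (piecewise sum of d)
    ad = [sum(d[k] for k in range(len(sf)) if i > sf[k]) for i in range(n_frames)]

    # number of fast-played frames needed to catch up after each stall
    qn = [math.ceil(dur * ar * fps / (ar - 1)) for _, dur in stalls]

    # rebuild PTS: fast-played frames get a shortened interval
    pts, cur, fast_left = [], 0.0, 0
    for i in range(n_frames):
        pts.append(cur + ad[i])
        if i in sf:                       # a stall ends here: start catch-up
            fast_left = qn[sf.index(i)]
        step = pkt_duration / ar if fast_left > 0 else pkt_duration
        fast_left = max(fast_left - 1, 0)
        cur += step
    return pts

# example: 10 s, 25 fps clip with one 2 s stall at t = 3 s, AR = 1.5
print(rewrite_pts(250, 25, 1 / 90000, [(3.0, 2.0)], 1.5)[:5])
```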
§.§.§ Summary
For the 7 compressed videos at each combination of CRF, frame rate, and resolution, each video is subjected to 3 stalling events of different modes. Note that for compressed 1080p videos with stalling events, AR is set to 1 when generating the first batch of distorted videos; for the second batch of 1080p videos and the first batch of 720p videos, AR is randomly assigned according to the probabilities specified in Table <ref>. The positions of the stalling events are stochastic. In total, 945 videos exhibiting stalling events are generated (21 1080p compressed videos × 3 stalling modes per source video × 5 CRFs × 2 batches + 21 720p compressed videos × 3 stalling modes per source video × 5 CRFs × 1 batch), together with 210 videos without any stalling events, for a total of 1,155 videos in the entire database. Samples are shown in Fig. <ref>.
§.§ Subjective Experiment Methodology
In the following, we present the comprehensive methodology and configuration of the subjective test.
* Method: Various subjective testing methodologies have been established by ITU-R BT.500-11 <cit.> for evaluating visual quality, including single-stimulus (SS), double-stimulus impairment scale (DSIS), and paired comparison (PC). Given the short duration of the videos and the retrospective scoring requirement, we adopted the SS method for our assessment.
* QoE Rating: The QoE scores range from 1 to 5, representing the spectrum of viewing experiences. A higher value indicates a superior quality of viewing.
* Participants: Before the formal subjective test, a training session was conducted in which candidate subjects rated a set of sample videos not included in the database, familiarizing them with the types of distortion it contains. The Mean Opinion Score (MOS) of each sample was computed from the ratings of 40 candidate subjects, and the Spearman Rank-Order Correlation Coefficient (SRCC) between each subject's ratings and the MOS was computed. We then selected the 20 subjects with the highest SRCC values (11 males and 9 females) to participate in the final round of subjective testing.
* Test Device: We developed a Python-based graphical user interface (GUI) that effectively renders videos based on the specified PTS and frame rate, while also automatically collecting subjective quality scores. To mitigate geometric distortion resulting from scaling operations, we ensured playback at the video's original resolution, with the surrounding area grayed out. The GUI was executed on a computer equipped with a 2.4 GHz Intel Core i5 processor and 16 GB of RAM. Our viewing setup comprised a 24" ViewSonic VA 2452 SM display.
§ DATA PROCESSING AND ANALYSIS
Based on the subjective test, we gathered scores from all participants and computed the MOS following the procedure outlined in <cit.>. Let m_ij represent the raw subjective score assigned by participant i to video j. We first calculate the z-scores using
z_ij = m_ij - μ_i/δ_i,
μ_i = 1/N_i∑_j=1^N_i m_ij ,
δ_i = √(1/N_i - 1∑_j=1^N_i (m_ij - μ_i)^2),
where N_i denotes the number of test videos viewed by subject i.
The subject rejection procedure specified in ITU-R BT.500-11 is employed to discard scores from unreliable subjects <cit.>. Let z_ij^' denote the z-scores assigned by subject i to video j that are retained after rejection. The z-scores are then linearly rescaled to the range [1, 5], and the Mean Opinion Score (MOS) of test video j is obtained by averaging the z_ij^' over the M_j valid subjects:
MOS_j = 1/M_j∑_i=1^M_j z_ij^'
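A minimal sketch of this normalization, assuming a subjects-by-videos score matrix and omitting the BT.500 subject-rejection step, could look as follows:

```python
import numpy as np

def compute_mos(raw, rescale=(1.0, 5.0)):
    """MOS from a subjects x videos matrix of raw scores (NaN = not rated)."""
    raw = np.asarray(raw, dtype=float)
    mu = np.nanmean(raw, axis=1, keepdims=True)             # per-subject mean
    sigma = np.nanstd(raw, axis=1, ddof=1, keepdims=True)   # per-subject std
    z = (raw - mu) / sigma                                   # z-scores
    lo, hi = np.nanmin(z), np.nanmax(z)
    z = rescale[0] + (z - lo) * (rescale[1] - rescale[0]) / (hi - lo)
    return np.nanmean(z, axis=0)                             # average over subjects

scores = [[5, 4, 2, 1], [4, 4, 3, 2], [5, 3, 2, 2]]          # 3 subjects, 4 videos
print(compute_mos(scores))
```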
The illustration in Fig.<ref> presents the MOS distributions of the proposed TaoLive QoE database from various perspectives. As depicted in Fig.<ref> and Fig.<ref>, videos with higher resolutions and frame rates exhibit correspondingly elevated QoE scores, aligning with our expectations. Notably, within the range of resolutions and frame rates present in this database, the observed differences are relatively modest. As depicted in Fig.<ref>, the selection of CRF parameters (15 to 22) results in a slight degradation of perceptual quality in live videos. However, when the CRF value is increased from 27 to 32, the decline in presentation quality becomes more pronounced, with most videos receiving QoE scores below 4. Furthermore, raising the CRF to 37 leads to all videos scoring below 4 on QoE assessment. These findings highlight that a CRF value of 32 or higher can significantly impair the viewing experience of the video.
As depicted in Fig.<ref>, the results demonstrate a negative correlation between stalling distortion and QoE score, indicating that an increase in the number of stalling events leads to a decrease in QoE score. Specifically, when the stalling event count doubles, most videos exhibit QoE scores below 2 points. Furthermore, with a threefold increase in this count, there is a further rise in the proportion of videos scoring below 2 points.
As depicted in Fig.<ref>, the rapid playback following a stalling event has a discernible impact on QoE. For brief stalling events (A1), the selection of AR parameters (1.1, 1.25, 1.5) results in only slight QoE degradation compared to AR=1; this loss can be considered negligible due to the short duration of fast-played video clips during such events. However, when AR exceeds 1.75, there is a noticeable decline in QoE score.
Although the stalling duration is brief, excessively fast playback of the catch-up video clips can still result in a suboptimal viewing experience. It is noteworthy that for medium stalling events (A2), when AR exceeds 1.75, the QoE score starts to decline sharply, surpassing the decline observed for short stalling events (A1). We hypothesize that as the stalling duration increases, so does the time spent on fast-played video clips, leading to a significant drop in QoE scores. For long (A3) and extra-long (A4) stalling events, the choice of AR is no longer a crucial factor for the viewing experience; in this regime, the deterioration primarily depends on the duration of the stalling event itself.
The viewing experience, as depicted in Fig.<ref>, exhibits a decline with an increase in the total duration of stalling. It is noteworthy that even a single instance of short stalling lasting 0.5s leads to a significant drop in QoE score (decreasing by 0.8). When the cumulative duration of stalling exceeds 5s, nearly all videos receive QoE scores below 2 points. For videos ranging from 10s to 20s in duration, a total stalling duration surpassing 5s results in an extremely poor viewing experience that viewers find difficult to tolerate.
§ PROPOSED METHOD
The network architecture comprises five components: a video restructuring sub-network, a semantic feature extraction sub-network, a multi-scale feature fusion sub-network, a flow motion feature extraction sub-network, and a feature regression sub-network, as shown in Fig. <ref>. Given a distorted video to evaluate, the video restructuring sub-network first analyzes the input frames to identify any stalling events indicated by discontinuous PTS or accelerated playback. If such an event is detected, the corresponding stalling frames are supplemented according to the video's frame rate and the duration of the stall, and the sub-network outputs the restructured frame sequence together with its presentation timestamps (PTS). The semantic feature extraction sub-network then extracts frame-level features capturing the perceived quality of all restructured frames, which are subsequently processed by the multi-scale feature fusion sub-network. In parallel, the flow motion feature extraction sub-network extracts flow motion features from the restructured frames. Finally, the feature regression sub-network integrates both streams of information to predict the retrospective QoE score. In the following sections, we describe each component in detail.
§.§ Video Restructure
Mathematically, given that the evaluated video consists of N input frames F = {f_1,f_2,...,f_N}, we use FFmpeg to obtain the theoretical PTS interval P̂ between frames and the PTS values of all input frames P = {p_1,p_2,...,p_N}. If the actual PTS interval between frame i and frame i+1 exceeds the theoretical value P̂, we assume that a stalling event occurs between them and define frame i as the stalling frame. The number of times rn that the model needs to repeatedly read the stalling frame is then
rn = ⌊p_i+1-p_i/P̂⌋
The procedure of the video restructuring sub-network is summarized in Algorithm 2.
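As an illustration of this step, the following sketch (with hypothetical names, independent of the actual implementation) detects stalling frames from PTS gaps and duplicates them to rebuild a uniformly sampled frame sequence:

```python
def restructure(frames, pts, expected_interval):
    """Rebuild a frame sequence by repeating stalling frames.

    frames            : list of decoded frames
    pts               : list of presentation timestamps (same length)
    expected_interval : theoretical PTS interval between frames (P_hat)
    """
    out_frames, out_pts = [], []
    for i in range(len(frames) - 1):
        out_frames.append(frames[i])
        out_pts.append(pts[i])
        gap = pts[i + 1] - pts[i]
        # stalling event: duplicate the stalling frame to fill the gap
        extra = int(gap // expected_interval) - 1
        for r in range(max(extra, 0)):
            out_frames.append(frames[i])
            out_pts.append(pts[i] + (r + 1) * expected_interval)
    out_frames.append(frames[-1])
    out_pts.append(pts[-1])
    return out_frames, out_pts

# toy example: a 2-interval gap between the 2nd and 3rd frame
f, p = restructure(["f1", "f2", "f3"], [0, 3600, 10800], 3600)
print(f, p)   # ['f1', 'f2', 'f2', 'f3'] [0, 3600, 7200, 10800]
```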
§.§ Semantic Feature Extraction
We employ the pre-trained Swin Transformer <cit.> as the backbone network. The primary objective of the semantic feature extraction network is to acquire multi-scale semantic features for each frame. It should be noted that different semantic content can exert varying influence on human tolerance towards distinct distortions <cit.>; incorporating semantic information can therefore aid in detecting and measuring perceptual distortions, making it a reasonable addition to the assessment of presentation quality. Moreover, perceived presentation quality is hierarchical in nature, spanning from low-level to high-level features. To account for this hierarchy, we concatenate the multi-scale features extracted by the four stages of the Swin Transformer and use them as frame-level semantic features.
Mathematically, given that the restructured video consists of 2L frames, we feed these RE frames V = {v_1,v_2,...,v_2L} into the semantic feature extraction network, which outputs the multi-scale semantic features SF = {SF_1,SF_2,...,SF_2L}:
SF_i = α_1 ⊕α_2 ⊕α_3 ⊕α_4 i = 1,2...,2L
α_j = GAP(L_j(v_i)) j = 1,2,3,4
where SF_i denotes the semantic features extracted from the i-th frame v_i, GAP(·) represents the global average pooling operation, L_j(v_i) is the output feature of the j-th stage of the Swin Transformer, and α_j denotes the average-pooled feature from L_j(v_i).
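A rough PyTorch sketch of this multi-scale pooling and concatenation is given below; the toy backbone only mimics the four-stage, Swin-T-like channel widths and stands in for the actual pretrained Swin Transformer:

```python
import torch
import torch.nn as nn

class MultiScaleSemanticHead(nn.Module):
    """GAP + concatenation of the four stage outputs of a hierarchical backbone."""
    def __init__(self, backbone):
        super().__init__()
        self.backbone = backbone
        self.gap = nn.AdaptiveAvgPool2d(1)

    def forward(self, frames):                      # frames: [B, 3, H, W]
        stage_feats = self.backbone(frames)         # list of 4 feature maps
        pooled = [self.gap(f).flatten(1) for f in stage_feats]
        return torch.cat(pooled, dim=1)             # [B, sum_j C_j]

class ToyBackbone(nn.Module):
    """Stand-in producing four feature maps with Swin-T-like widths."""
    def __init__(self, widths=(96, 192, 384, 768)):
        super().__init__()
        chans = (3,) + widths
        self.stages = nn.ModuleList(
            [nn.Conv2d(chans[i], chans[i + 1], 3, stride=2, padding=1)
             for i in range(4)])

    def forward(self, x):
        feats = []
        for stage in self.stages:
            x = stage(x)
            feats.append(x)
        return feats

head = MultiScaleSemanticHead(ToyBackbone())
print(head(torch.randn(2, 3, 224, 224)).shape)      # torch.Size([2, 1440])
```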
§.§ Flow Motion Feature Extraction
Live broadcasts are often affected by unstable shooting environments and restricted network conditions, resulting in motion distortions such as jitter and stalling events. Relying solely on frame-level semantic features is therefore insufficient to capture these distortions: even when a video exhibits high presentation quality, the presence of jitter and stalling significantly diminishes the Quality of Experience (QoE). Hence, it is imperative to incorporate motion features into the QoE prediction model. To detect stalling events effectively, we first partition the video into 1-second segments according to the Presentation Time Stamp (PTS) of the reference (RE) frames, and then extract inter-frame optical flow images from each segment using a pretrained PWC-Net <cit.>:
C_k = Γ(V_k), k = 1,2...,M
where V_k denotes the k-th 1-second segment of RE frames, Γ(·) denotes the extraction and clipping of inter-frame optical flow images, C_k is the resulting optical flow clip, and M is the number of clips. The clipping is performed according to the PTS, so in the case of accelerated playback the number of optical flow images contained in C_k may vary.
Subsequently, the inter-frame optical flow images are resampled at a rate of 16fps for each clip, followed by leveraging a pre-trained 3D-CNN backbone ResNet-18<cit.> to capture motion distortion at the clip level.
MF_k = Φ(C_k) k = 1,2...,2L/r
where MF_k denotes the flow motion features extracted from clip C_k and Φ(·) represents the 3D-CNN motion feature extractor.
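The clip-level motion branch can be sketched as follows; we assume 3-channel flow renderings resized to 224×224 and 16 frames per clip, and in practice the backbone would be initialized with Kinetics-400 pretrained weights:

```python
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

class FlowMotionExtractor(nn.Module):
    """Clip-level motion features from optical-flow images via a 3D ResNet-18."""
    def __init__(self):
        super().__init__()
        backbone = r3d_18()                 # swap in Kinetics-400 weights in practice
        backbone.fc = nn.Identity()         # keep the 512-d pooled feature
        self.backbone = backbone

    def forward(self, flow_clip):           # flow_clip: [B, 3, 16, 224, 224]
        return self.backbone(flow_clip)     # [B, 512]

extractor = FlowMotionExtractor()
print(extractor(torch.randn(1, 3, 16, 224, 224)).shape)   # torch.Size([1, 512])
```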
§.§ Multi-scale Feature Fusion
The evidence from <cit.> demonstrates an inverse relationship between perceived quality and the magnitude of quality adaptation: smaller quality fluctuations between frames contribute to a more enjoyable viewing experience. Consequently, the absolute difference between the semantic features of consecutive frames can serve as an indicator of quality adaptation.
SF_2m^' = |SF_2m - SF_2m-1| m = 1,2...,L
where SF_2m^' represent the absolute error between adjacent semantic features. Then the multi-scale fusion can be derived as:
STF_2m = W(φ(SF_2m) ⊕φ(SF_2m^')) m = 1,2...,L
where ⊕ denotes the concatenation operation, φ(·) is a learnable multilayer perceptron (MLP), and W is a learnable linear mapping; this yields the spatio-temporal fused features, denoted STF_k (k = 1,2,...,L) hereafter. We then concatenate the spatio-temporal fused feature with the flow motion feature to obtain the final QoE feature:
QF_k = STF_k ⊕φ(MF_k) k = 1,2...,L
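A possible PyTorch sketch of this fusion (dimensions and hidden sizes are illustrative, not the exact configuration of Tao-QoE) is:

```python
import torch
import torch.nn as nn

class QoEFeatureFusion(nn.Module):
    """Fuses frame-level semantic features with clip-level motion features.

    Adjacent-frame feature differences model quality adaptation, MLPs project
    each stream, and the result is concatenated with the motion feature.
    """
    def __init__(self, sem_dim=1440, mot_dim=512, hidden=128):
        super().__init__()
        self.mlp_sem = nn.Sequential(nn.Linear(sem_dim, hidden), nn.GELU())
        self.mlp_diff = nn.Sequential(nn.Linear(sem_dim, hidden), nn.GELU())
        self.mlp_mot = nn.Sequential(nn.Linear(mot_dim, hidden), nn.GELU())
        self.proj = nn.Linear(2 * hidden, hidden)        # the linear mapping W

    def forward(self, sem, mot):
        # sem: [B, 2L, sem_dim] frame features; mot: [B, L, mot_dim] clip features
        cur, prev = sem[:, 1::2], sem[:, 0::2]           # frames 2m and 2m-1
        diff = (cur - prev).abs()                        # quality-adaptation cue
        stf = self.proj(torch.cat([self.mlp_sem(cur), self.mlp_diff(diff)], -1))
        return torch.cat([stf, self.mlp_mot(mot)], dim=-1)   # QoE features

fusion = QoEFeatureFusion()
out = fusion(torch.randn(2, 8, 1440), torch.randn(2, 4, 512))
print(out.shape)                                         # torch.Size([2, 4, 256])
```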
§.§ Feature Regression
After the aforementioned process of feature extraction and fusion, we employ a fully connected layer to perform regression on the QoE features in order to obtain clip-level QoE scores.
Q_k = FC(QF_k) k = 1,2...,L
where FC(·) denotes the fully-connected layer and Q_k represents the QoE score of clip C_k. Finally, we average the clip-level scores of the input video to obtain its retrospective QoE score.
Q = r/n∑_1^n/r Q_k
where Q is the video QoE score and n/r stands for the number of clips. We simply use the Mean Squared Error (MSE) as the loss function:
Loss = 1/n∑_i=1^n (Q_g^(i) - Q_p^(i))^2
where n indicates the number of videos in a mini-batch, and Q_g^(i) and Q_p^(i) are the MOS and the predicted retrospective QoE score of the i-th video, respectively.
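The regression head and MSE objective can be sketched as follows (feature dimensions and training data are placeholders):

```python
import torch
import torch.nn as nn

class QoERegressor(nn.Module):
    """Clip-level regression head: QoE features -> clip scores -> video score."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.fc = nn.Linear(feat_dim, 1)

    def forward(self, qoe_feats):                       # [B, L, feat_dim]
        clip_scores = self.fc(qoe_feats).squeeze(-1)    # [B, L]
        return clip_scores.mean(dim=1)                  # video-level QoE [B]

# one illustrative training step with the MSE objective
regressor = QoERegressor()
optimizer = torch.optim.Adam(regressor.parameters(), lr=1e-3)
criterion = nn.MSELoss()

feats = torch.randn(4, 10, 256)              # 4 videos, 10 clips each
mos = torch.tensor([3.2, 4.1, 1.8, 2.6])     # ground-truth MOS

pred = regressor(feats)
loss = criterion(pred, mos)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(float(loss))
```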
§ EXPERIMENT
In this section, we initially provide a comprehensive description of the experimental setup. Subsequently, we evaluate the performance of our proposed TAO-QoE model and compare it with other prominent QoE models using our Tao Live QoE Database as well as publicly available QoE and VQA databases. Furthermore, ablation experiments are conducted to investigate the individual contributions of different sub-modules towards enhancing the overall model performance.
§.§ Implementation Details
The Tao-QoE model is implemented in PyTorch <cit.>, with the Swin Transformer backbone using pretrained weights from the ImageNet-1K database <cit.> for semantic feature extraction; the ResNet3D-18 uses pretrained weights from the Kinetics-400 database <cit.>. The weights of the multi-scale feature fusion sub-network and the feature regression sub-network are initialized randomly. The semantic feature extraction sub-network operates at the original resolution (1920 × 1080 or 1280 × 960) of the input video frames. The flow motion feature extraction sub-network extracts optical flow maps from video frames at their original resolution, then resizes each optical flow map to 224 × 224 before feeding it into the ResNet-18 3D-CNN. Our model was trained and tested on a server equipped with an Intel(R) Xeon(R) Platinum 8163 CPU @ 2.50GHz, 128GB RAM, and an NVIDIA Tesla V100 SXM2 GPU. The Adam optimizer <cit.> is used with an initial learning rate of 0.001; if the training loss fails to decrease within 5 epochs, the learning rate is halved. The default number of epochs is 50. During flow motion feature extraction, all videos are down-sampled to a frame rate of 16fps to ensure consistent feature dimensions. Following standard practice, we split the database into train and test sets at an 80%-20% ratio. To assess the stability of the QoE model, we randomly perform 10 content-based splits and report the average result as the final performance. For the WaterlooSQoE-IV database, we perform content-based splitting five times due to the limited availability of only 5 source videos.
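The optimizer and learning-rate schedule described above can be sketched as follows (the model and data are stand-ins, not the actual TAO-QoE training loop):

```python
import torch

# Adam with lr=1e-3, halving the learning rate when the monitored loss has not
# decreased for 5 epochs, trained for 50 epochs in total.
model = torch.nn.Linear(256, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.5, patience=5)

x, y = torch.randn(32, 256), torch.randn(32, 1)
for epoch in range(50):
    pred = model(x)
    loss = torch.nn.functional.mse_loss(pred, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step(loss.item())           # monitor the training loss
print(optimizer.param_groups[0]["lr"])
```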
§.§ Benchmark Databases & Compared Models
We compare the currently available QoE and VQA models on QoE and VQA databases, respectively.
In the field of QoE, we selected TaoLive QoE Database and five other available QoE databases, including LIVE-NFLX-II<cit.>, WaterlooSQoE-I<cit.>, WaterlooSQoE-II<cit.>, WaterlooSQoE-III<cit.> and WaterlooSQoE-IV<cit.>. We compare the proposed model with the following QoE models:
* Traditional models: P.1203<cit.>, SQI<cit.>, Bentaleb2016<cit.>, Spiteri2016<cit.>, VideoATLAS<cit.>, KSQI<cit.>
* Deep learning models: GCNN-QoE<cit.>, ASPECT<cit.>
Unfortunately, the code for the GCNN model is not publicly available, thus hindering our ability to assess its performance on TaoLive QoE database.
In the domain of VQA, we selected 8 UGC VQA databases: LIVE-Qualcomn<cit.>, CVD2014<cit.>, KoNViD-1k<cit.>, VDPVE<cit.>, LIVE-VQC<cit.>, MSU<cit.>, YouTubeUGC<cit.>, LIVE-WC<cit.>.
We compare the proposed method with the following no-reference models: LTVQM<cit.>, VSFA<cit.>, SimpleVQA<cit.>, FastVQA<cit.>.
§.§ Criteria
Two types of evaluation criteria are employed to assess the performance of models. The first criterion, known as the Video Quality Experts Group (VQEG) criteria<cit.>, calculates a series of correlation values between predicted scores and Mean Opinion Scores (MOSs). The second criterion, proposed by Krasula et al.<cit.>, evaluates the classification abilities of models in distinguishing between two videos based on their quality. We refer to the first criterion as VQEG criteria and the second one as classification criteria.
For VQEG criteria, the model prediction scores should be initially mapped using the following function:
f(p) = ξ_1(1/2 - 1/1 + e^ξ_2(p - ξ_3)) + ξ_4 p + ξ_5
where {ξ_i | i = 1,2,3,4,5} are the parameters to be fitted, and p and f(p) denote the predicted score and the mapped score, respectively. The mapped scores are then used to compute four statistical indices against the MOSs, namely the Spearman Rank-Order Correlation Coefficient (SRCC), the Pearson Linear Correlation Coefficient (PLCC), the Root Mean Squared Error (RMSE), and the Kendall Rank-Order Correlation Coefficient (KRCC). These indices capture different aspects of model performance: PLCC reflects the linearity of the predictions, SRCC indicates their monotonicity, and RMSE evaluates prediction consistency. An excellent model should achieve SRCC, PLCC, and KRCC values close to 1 and a low RMSE.
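For reference, these criteria can be computed along the following lines (the initialization of the logistic fit follows common practice and is an assumption, not a prescribed standard):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import spearmanr, pearsonr, kendalltau

def logistic_map(p, xi1, xi2, xi3, xi4, xi5):
    """Five-parameter logistic mapping used by the VQEG criteria."""
    return xi1 * (0.5 - 1.0 / (1.0 + np.exp(xi2 * (p - xi3)))) + xi4 * p + xi5

def vqeg_criteria(pred, mos):
    """SRCC, PLCC, KRCC, and RMSE after the nonlinear mapping."""
    pred, mos = np.asarray(pred, float), np.asarray(mos, float)
    p0 = [np.max(mos) - np.min(mos), 0.1, np.mean(pred), 0.1, np.mean(mos)]
    params, _ = curve_fit(logistic_map, pred, mos, p0=p0, maxfev=20000)
    mapped = logistic_map(pred, *params)
    return {
        "SRCC": spearmanr(pred, mos)[0],     # rank-based, unaffected by mapping
        "PLCC": pearsonr(mapped, mos)[0],
        "KRCC": kendalltau(pred, mos)[0],
        "RMSE": float(np.sqrt(np.mean((mapped - mos) ** 2))),
    }

rng = np.random.default_rng(0)
mos = rng.uniform(1, 5, 50)
pred = 0.8 * mos + rng.normal(0, 0.3, 50)
print(vqeg_criteria(pred, mos))
```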
For the classification criteria, we adhere to the procedures outlined in <cit.> and employ statistical methods from <cit.> to analyze subjective data for determining the significance of differences between each pair of stimuli. A confidence level of 95% is set. The entire dataset is partitioned into subsets based on significant differences and similarities. In a significantly distinct subset, we partition the stimulus pairs into groups based on positive and negative differences in MOS. A higher ability to discriminate dissimilar/similar pairs and superior/inferior stimulus pairs indicates better model performance. Therefore, we employ the area under the ROC curve (AUC) as an evaluation metric for assessing classification performance of models. Furthermore, we compare AUC values obtained from different models to determine if there are statistically significant disparities in their performances<cit.>.
We use the VQEG criteria to analyze the performance of the Tao-QoE model and other VQA models on the UGC databases. Since the VQA databases selected in this paper do not disclose the standard deviation of the annotation scores for each video, we are unable to compute the classification criteria for them. For QoE models, following standard practice <cit.>, we use both the VQEG criteria and the classification criteria to evaluate Tao-QoE and the other QoE models on the QoE databases.
§.§ QoE Performance
§.§.§ VQEG Criteria
The experimental performance on the 6 QoE databases is presented in Table <ref>. The following conclusions can be drawn from the results. (1) Our proposed Tao-QoE model demonstrates the best performance among all models. Specifically, on the largest publicly available database, WaterlooSQoE-IV, SRCC and PLCC improve by 0.012 and 0.010, respectively. (2) Traditional QoE algorithms perform significantly worse than deep learning models on the WaterlooSQoE-III, WaterlooSQoE-IV, and LIVE-NFLX-II databases due to the presence of various distortion types such as quality switching and rebuffering; deep learning models have an advantage in perceiving these distortions. (3) Unfortunately, we were unable to obtain the performance of GCNN-QoE on the TaoLive QoE Database since its code is not publicly available. Nevertheless, it is evident that our model can accurately evaluate the QoE of live videos compared with traditional QoE algorithms.
§.§.§ Classification Criteria
We present the performance evaluation based on the classification criteria for all the aforementioned QoE models on the largest publicly available database, WaterlooSQoE-IV, as shown in Fig. <ref>. From this figure, we can draw conclusions similar to those derived from the VQEG performance. Firstly, our proposed Tao-QoE model outperforms the other QoE models by a significant margin in both the 'Different vs. Similar' and 'Better vs. Worse' classification tasks, and statistical analysis confirms that it is significantly superior to the other models on the WaterlooSQoE-IV database. Secondly, the AUC values for the 'Better vs. Worse' task are consistently higher than those for the 'Different vs. Similar' task, indicating that the latter is more challenging and that there is still room for improvement in this area.
§.§ VQA Performance
We present the VQEG performance on 8 UGC VQA databases in Table <ref>. Several observations can be made. Firstly, our proposed Tao-QoE model achieves the highest performance among all models. Particularly, on the three recently introduced larger-scale UGC databases (MSU, YouTubeUGC, and LIVE-WC), our model demonstrates significantly improved performance compared to other models. Secondly, it is evident that deep learning models hold an advantage over traditional models, which aligns with the findings of the QoE experiment.
§.§ Ablation Study
§.§.§ QoE
To evaluate the contributions of the different features and sub-networks in Tao-QoE, we conduct ablation experiments. The QoE results are shown in Table <ref>. Firstly, combining features yields better performance than using a single group of features, and employing all features leads to the best performance among the tested combinations. In addition, models that use semantic features achieve better performance, while models without them (such as FM) perform poorly, which indicates that semantic features contribute the most to the final performance. Secondly, comparing the final model (ALL: S+FM+F) with the model without optical flow (S+F+M, where M denotes motion features extracted directly from the video clip instead of the optical flow clip) shows that optical-flow motion features significantly improve QoE prediction. The QoE databases contain more inter-frame distortions, such as stalling; optical flow captures pixel movement between consecutive frames, which is very helpful for perceiving stalling distortion. Thirdly, since the multi-scale feature fusion sub-network performs differential operations on the frame-level semantic features, it has a certain ability to perceive quality-switching distortion. The results in Table <ref> show that using the multi-scale feature fusion sub-network significantly improves performance compared with not using it (e.g., S vs. S+F, and S+F vs. ALL).
§.§.§ VQA
The experimental results on the VQA databases are shown in Table <ref>. Firstly, although the VQA databases do not include complex distortions such as stalling and quality switching, the model using all features (ALL) still performs well. Secondly, on the VQA databases, the multi-scale feature fusion sub-network contributes less to the performance than on the QoE databases; this is because the VQA databases contain no quality-switching distortions, so the semantic feature extraction sub-network is largely sufficient for capturing video quality. Thirdly, the flow motion feature still contributes to the prediction of video quality (e.g., S vs. S+FM, and S+F+M vs. ALL).
§ CONCLUSION
In this paper, we construct the TaoLive QoE Database for large-scale live broadcast scenarios. It selects 42 high-quality videos as source videos and introduces distortions by varying the CRF parameter and the PTS of video frames. We also conduct a subjective experiment to collect QoE scores for these videos. Furthermore, we propose a QoE model that evaluates video QoE from both semantic and motion aspects. Extensive experimental results confirm the effectiveness of the proposed method.
ref-1
Hamilton, William A., Oliver Garretson, and Andruid Kerne. "Streaming on twitch: fostering participatory communities of play within live mixed media." Proceedings of the SIGCHI conference on human factors in computing systems. 2014.
ref-2
Akhtar, Zahid, et al. "Why is multimedia quality of experience assessment a challenging problem?." IEEE Access 7 (2019): 117897-117915.
ref-3
Brunnström, Kjell, et al. "Qualinet white paper on definitions of quality of experience." (2013).
brunnstrom2013qualinet
Brunnström, Kjell, et al. "Qualinet white paper on definitions of quality of experience." (2013).
mok2011measuring
Mok, Ricky KP, Edmond WW Chan, and Rocky KC Chang. "Measuring the quality of experience of HTTP video streaming." 12th IFIP/IEEE international symposium on integrated network management (IM 2011) and workshops. IEEE, 2011.
xue2014assessing
Xue, Jingteng, et al. "Assessing quality of experience for adaptive HTTP video streaming." 2014 IEEE International Conference on Multimedia and Expo Workshops (ICMEW). IEEE, 2014.
yin2015control
Yin, Xiaoqi, et al. "A control-theoretic approach for dynamic adaptive video streaming over HTTP." Proceedings of the 2015 ACM Conference on Special Interest Group on Data Communication. 2015.
manasa2016optical
Manasa, K., and Sumohana S. Channappayya. "An optical flow-based full reference video quality assessment algorithm." IEEE Transactions on Image Processing 25.6 (2016): 2480-2492.
wu2019quality
Wu, Jinjian, et al. "Quality assessment for video with degradation along salient trajectories." IEEE Transactions on Multimedia 21.11 (2019): 2738-2749.
bampis2018spatiotemporal
Bampis, Christos G., Zhi Li, and Alan C. Bovik. "Spatiotemporal feature integration and model fusion for full reference video quality assessment." IEEE Transactions on Circuits and Systems for Video Technology 29.8 (2018): 2256-2270.
wang2004video
Wang, Zhou, Ligang Lu, and Alan C. Bovik. "Video quality assessment based on structural distortion measurement." Signal processing: Image communication 19.2 (2004): 121-132.
seshadrinathan2009motion
Seshadrinathan, Kalpana, and Alan Conrad Bovik. "Motion tuned spatio-temporal quality assessment of natural videos." IEEE transactions on image processing 19.2 (2009): 335-350.
soundararajan2012video
Soundararajan, Rajiv, and Alan C. Bovik. "Video quality assessment by reduced reference spatio-temporal entropic differencing." IEEE Transactions on Circuits and Systems for Video Technology 23.4 (2012): 684-694.
wang2005reduced
Wang, Zhou, and Eero P. Simoncelli. "Reduced-reference image quality assessment using a wavelet-domain natural image statistic model." Human vision and electronic imaging X. Vol. 5666. SPIE, 2005.
ma2011reduced
Ma, Lin, et al. "Reduced-reference image quality assessment using reorganized DCT-based image representation." IEEE Transactions on multimedia 13.4 (2011): 824-829.
rehman2012reduced
Rehman, Abdul, and Zhou Wang. "Reduced-reference image quality assessment by structural similarity estimation." IEEE transactions on image processing 21.8 (2012): 3378-3389.
men2017empirical
Men, Hui, Hanhe Lin, and Dietmar Saupe. "Empirical evaluation of no-reference VQA methods on a natural video quality database." 2017 Ninth international conference on quality of multimedia experience (QoMEX). IEEE, 2017.
men2018spatiotemporal
Men, Hui, Hanhe Lin, and Dietmar Saupe. "Spatiotemporal feature combination model for no-reference video quality assessment." 2018 Tenth international conference on quality of multimedia experience (QoMEX). IEEE, 2018.
li2015no
Li, Yuming, et al. "No-reference video quality assessment with 3D shearlet transform and convolutional neural networks." IEEE Transactions on Circuits and Systems for Video Technology 26.6 (2015): 1044-1057.
saad2014blind
Saad, Michele A., Alan C. Bovik, and Christophe Charrier. "Blind prediction of natural video quality." IEEE Transactions on image Processing 23.3 (2014): 1352-1365.
xu2014no
Xu, Jingtao, et al. "No-reference video quality assessment via feature learning." 2014 IEEE international conference on image processing (ICIP). IEEE, 2014.
mittal2015completely
Mittal, Anish, Michele A. Saad, and Alan C. Bovik. "A completely blind video integrity oracle." IEEE Transactions on Image Processing 25.1 (2015): 289-300.
li2019quality
Li, Dingquan, Tingting Jiang, and Ming Jiang. "Quality assessment of in-the-wild videos." Proceedings of the 27th ACM International Conference on Multimedia. 2019.
chen2020rirnet
Chen, Pengfei, et al. "RIRNet: Recurrent-in-recurrent network for video quality assessment." Proceedings of the 28th ACM international conference on multimedia. 2020.
spiteri2020bola
Spiteri, Kevin, Rahul Urgaonkar, and Ramesh K. Sitaraman. "BOLA: Near-optimal bitrate adaptation for online videos." IEEE/ACM transactions on networking 28.4 (2020): 1698-1711.
bentaleb2016sdndash
Bentaleb, Abdelhak, Ali C. Begen, and Roger Zimmermann. "SDNDASH: Improving QoE of HTTP adaptive streaming using software defined networking." Proceedings of the 24th ACM international conference on Multimedia. 2016.
duanmu2016quality
Duanmu, Zhengfang, et al. "A quality-of-experience index for streaming video." IEEE Journal of Selected Topics in Signal Processing 11.1 (2016): 154-166.
ALTAS
Bampis, Christos G., and Alan C. Bovik. "Learning to predict streaming video QoE: Distortions, rebuffering and memory." arXiv preprint arXiv:1703.00633 (2017).
duanmu2019knowledge
Duanmu, Zhengfang, et al. "A knowledge-driven quality-of-experience model for adaptive streaming videos." arXiv preprint arXiv:1911.07944 (2019).
GCNN-QoE
Zhou, Zhiming, et al. "Quality of Experience Evaluation for Streaming Video Using CGNN." 2020 IEEE International Conference on Visual Communications and Image Processing (VCIP). IEEE, 2020.
DA-QoE
Li, Leida, et al. "From Whole Video to Frames: Weakly-Supervised Domain Adaptive Continuous-Time QoE Evaluation." IEEE Transactions on Image Processing 31 (2022): 4937-4951.
DeSVQ
Ghosh, Monalisa, Dr Chetna Singhal, and Rushikesh Wayal. "DeSVQ: Deep learning based streaming video QoE estimation." Proceedings of the 23rd International Conference on Distributed Computing and Networking. 2022.
TRR-QoE
Chen, Pengfei, et al. "Temporal reasoning guided QoE evaluation for mobile live video broadcasting." IEEE Transactions on Image Processing 30 (2021): 3279-3292.
P1203
Raake, Alexander, et al. "A bitstream-based, scalable video-quality model for HTTP adaptive streaming: ITU-T P. 1203.1." 2017 Ninth international conference on quality of multimedia experience (QoMEX). IEEE, 2017.
waterloo1
Duanmu, Zhengfang, et al. "A quality-of-experience index for streaming video." IEEE Journal of Selected Topics in Signal Processing 11.1 (2016): 154-166.
waterloo2
Duanmu, Zhengfang, Kede Ma, and Zhou Wang. "Quality-of-experience for adaptive streaming videos: An expectation confirmation theory motivated approach." IEEE Transactions on Image Processing 27.12 (2018): 6135-6146.
waterloo3
Duanmu, Zhengfang, Abdul Rehman, and Zhou Wang. "A quality-of-experience database for adaptive video streaming." IEEE Transactions on Broadcasting 64.2 (2018): 474-487.
waterloo4
Duanmu, Zhengfang, et al. "Assessing the quality-of-experience of adaptive bitrate video streaming." arXiv preprint arXiv:2008.08804 (2020).
live1
Bampis, Christos George, et al. "Study of temporal effects on subjective video quality of experience." IEEE Transactions on Image Processing 26.11 (2017): 5217-5231.
live2
Bampis, Christos G., et al. "Towards perceptually optimized end-to-end adaptive video streaming." arXiv preprint arXiv:1808.03898 (2018).
luchunyi
Li, Chunyi, et al. "A real-time blind quality-of-experience assessment metric for HTTP adaptive streaming." arXiv preprint arXiv:2303.09818 (2023).
ITI
BT, RIR. "Methodology for the subjective assessment of the quality of television pictures." International Telecommunication Union 4 (2002).
swin
Liu, Ze, et al. "Swin transformer: Hierarchical vision transformer using shifted windows." Proceedings of the IEEE/CVF international conference on computer vision. 2021.
sun2018pwc
Sun, Deqing, et al. "Pwc-net: Cnns for optical flow using pyramid, warping, and cost volume." Proceedings of the IEEE conference on computer vision and pattern recognition. 2018.
resnet
Hara, Kensho, Hirokatsu Kataoka, and Yutaka Satoh. "Can spatiotemporal 3d cnns retrace the history of 2d cnns and imagenet?." Proceedings of the IEEE conference on Computer Vision and Pattern Recognition. 2018.
sunwei
Sun, Wei, et al. "A deep learning based no-reference quality assessment model for ugc videos." Proceedings of the 30th ACM International Conference on Multimedia. 2022.
narwaria2012low
Narwaria, Manish, Weisi Lin, and Anmin Liu. "Low-complexity video quality assessment using temporal quality variations." IEEE Transactions on Multimedia 14.3 (2012): 525-535.
imagenet
Deng, Jia, et al. "Imagenet: A large-scale hierarchical image database." 2009 IEEE conference on computer vision and pattern recognition. Ieee, 2009.
kay2017kinetics
Kay, Will, et al. "The kinetics human action video dataset." arXiv preprint arXiv:1705.06950 (2017).
kingma2014adam
Kingma, Diederik P., and Jimmy Ba. "Adam: A method for stochastic optimization." arXiv preprint arXiv:1412.6980 (2014).
pytorch
Paszke, Adam, et al. "Automatic differentiation in pytorch." (2017).
LIVE-Qualcomm
Ghadiyaram, Deepti, et al. "In-capture mobile video distortions: A study of subjective behavior and objective algorithms." IEEE Transactions on Circuits and Systems for Video Technology 28.9 (2017): 2061-2077.
CVD2014
Nuutinen, Mikko, et al. "CVD2014—A database for evaluating no-reference video quality assessment algorithms." IEEE Transactions on Image Processing 25.7 (2016): 3073-3086.
KoNViD-1k
Hosu, Vlad, et al. "The Konstanz natural video database (KoNViD-1k)." 2017 Ninth international conference on quality of multimedia experience (QoMEX). IEEE, 2017.
VDPVE
Gao, Yixuan, et al. "VDPVE: VQA Dataset for Perceptual Video Enhancement." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
MSU
Antsiferova, Anastasia, et al. "Video compression dataset and benchmark of learning-based video-quality metrics." Advances in Neural Information Processing Systems 35 (2022): 13814-13825.
LIVE-VQC
Sinno, Zeina, and Alan Conrad Bovik. "Large-scale study of perceptual video quality." IEEE Transactions on Image Processing 28.2 (2018): 612-627.
YouTubeUGC
Wang, Yilin, Sasi Inguva, and Balu Adsumilli. "YouTube UGC dataset for video compression research." 2019 IEEE 21st International Workshop on Multimedia Signal Processing (MMSP). IEEE, 2019.
LIVE-WC
Yu, Xiangxu, et al. "Predicting the quality of compressed videos with pre-existing distortions." IEEE Transactions on Image Processing 30 (2021): 7511-7526.
VQEG-1
Zhu, Wenhan, et al. "Multi-channel decomposition in tandem with free-energy principle for reduced-reference image quality assessment." IEEE Transactions on Multimedia 21.9 (2019): 2334-2346.
VQEG-2
Min, Xiongkuo, et al. "Objective quality evaluation of dehazed images." IEEE Transactions on Intelligent Transportation Systems 20.8 (2018): 2879-2892.
VQEG-3
Min, Xiongkuo, et al. "Quality evaluation of image dehazing methods using synthetic hazy images." IEEE Transactions on Multimedia 21.9 (2019): 2319-2333.
critetia-2-1
Krasula, Lukáš, et al. "On the accuracy of objective image and video quality models: New methodology for performance evaluation." 2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX). IEEE, 2016.
critetia-2-2
Hanhart, Philippe, et al. "How to benchmark objective quality metrics from paired comparison data?." 2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX). Ieee, 2016.
critetia-2-3
Krasula, Lukáš, et al. "Quality assessment of sharpened images: Challenges, methodology, and objective metrics." IEEE Transactions on Image Processing 26.3 (2017): 1496-1508.
critetia-2-4
Krasula, Lukáš, et al. "Preference of experience in image tone-mapping: Dataset and framework for objective measures comparison." IEEE Journal of Selected Topics in Signal Processing 11.1 (2016): 64-74.
Brill
Brill, Michael H., et al. "Accuracy and cross-calibration of video quality metrics: new methods from ATIS/T1A1." Signal Processing: Image Communication 19.2 (2004): 101-107.
Hanley
Hanley, James A., and Barbara J. McNeil. "A method of comparing the areas under receiver operating characteristic curves derived from the same cases." Radiology 148.3 (1983): 839-843.
LTVQM
Korhonen, Jari. "Two-level approach for no-reference consumer video quality assessment." IEEE Transactions on Image Processing 28.12 (2019): 5923-5938.
VSFA
Li, Dingquan, Tingting Jiang, and Ming Jiang. "Quality assessment of in-the-wild videos." Proceedings of the 27th ACM International Conference on Multimedia. 2019.
SimpleVQA
Sun, Wei, et al. "A deep learning based no-reference quality assessment model for ugc videos." Proceedings of the 30th ACM International Conference on Multimedia. 2022.
FastVQA
Wu, Haoning, et al. "Fast-vqa: Efficient end-to-end video quality assessment with fragment sampling." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022.
Zehao Zhu
received the B.E. degree in electronic information engineering from Jilin University in 2018 and the M.E. degree in information and communication engineering from Shanghai Jiao Tong University in 2021. He is currently pursuing the Ph.D. degree in electronic engineering with Shanghai Jiao Tong University, Shanghai, China. His research interests include streaming media quality of experience assessment and video quality assessment.
Wei Sun
received the B.E. degree from the East China University of Science and Technology, Shanghai, China, in 2016, and the Ph.D. degree from Shanghai Jiao Tong University, Shanghai, China, in 2023. He is currently a Post-Doctoral Fellow with Shanghai Jiao Tong University. His research interests include image quality assessment, perceptual signal processing and mobile video processing.
Jun Jia
received the B.S. degree in computer science and technology from Hunan University, Changsha, China, in 2018. He is currently pursuing the Ph.D. degree in electronic engineering with Shanghai Jiao Tong University, Shanghai, China.
His current research interests include computer vision and image processing.
Wei Wu
received his Ph.D. in computer application technology from Wuhan University, Wuhan, China, in 2021. He joined Donghai Laboratory in 2023. Before joining Donghai Laboratory, he was a Senior Engineer with Alibaba Group, Hangzhou, China, from 2021 to 2023. His research interests include image and video processing, computer vision, and multi-source heterogeneous data fusion.
Jia Wang
received the B.Sc. degree in electronic engineering, the M.S. degree in pattern recognition and intelligence control, and the Ph.D. degree in electronic engineering from Shanghai Jiao Tong University, China, in 1997, 1999, and 2002, respectively. He is currently a Professor with the Department of Electronic Engineering, Shanghai Jiao Tong University, and also a member of the Shanghai Key Laboratory of Digital Media Processing and Transmission. His research interests include multiuser information theory and mathematics in artificial intelligence.
Ying Chen
(IEEE M’05 - SM’11) received a B.S. in Applied Mathematics and an M.S. in Electrical Engineering & Computer Science, both from Peking University, in 2001 and 2004 respectively. He received his PhD in Computing and Electrical Engineering from Tampere University of Technology (TUT), Finland, in 2010.
Dr. Chen joined Alibaba Group, in 2018 as a Senior Director. Before joining Alibaba. His earlier working experiences include Principal Engineer in Qualcomm Incorporated, San Diego, CA, USA from 2009 to 2018, Researcher in TUT and Nokia Research Center, Finland from 2006 to 2009 and Research Engineer in Thomson Corporate Research, Beijing, from 2004 to 2006. Dr. Chen is currently leading the Audiovisual Technology Group in Taobao, Alibaba, supporting end-to-end multimedia features and applications within Taobao. Dr. Chen has been focusing on algorithm development and commercialization of multimedia technologies. Dr. Chen contributed to three generations of video coding standards, H.264/AVC, H.265/HEVC, and H.266/VVC (earlier stage) and video file format and transport standards, with various technical contribution documents (500+). Dr. Chen has served as an editor and a software coordinator for H.264/AVC and H.265/HEVC (both for Multiview and 3D Video extensions). Dr. Chen’s research areas include video coding, image/video restoration and enhancement, image/video quality assessment and video transmission. Dr. Chen has authored or co-authored 80+ academic papers and over 250 granted US patents in the fields of image/video processing and coding, multimedia transmission and computer vision. His publications have been cited for more than 20000 times. Dr. Chen has served as an associate editor for IEEE Transactions on CSVT.
Guangtao Zhai
(Senior Member, IEEE) received the B.E. and M.E. degrees from Shandong University, Shandong, China, in 2001 and 2004, respectively, and the Ph.D. degree from Shanghai Jiao Tong University, Shanghai, China, in 2009. From 2008 to 2009, he was a Visiting Student with the Department of Electrical and Computer Engineering, McMaster University, Hamilton, ON, Canada, where he was a Post-Doctoral Fellow, from 2010 to 2012. From 2012 to 2013, he was a Humboldt Research Fellow with the Institute of Multimedia Communication and Signal Processing, Friedrich-Alexander-University of Erlangen–Nüremberg, Germany. He is currently a Research Professor with the Institute of Image Communication and Information Processing, Shanghai Jiao Tong University. His research interests include multimedia signal processing and perceptual signal processing. He received the Award of National Excellent Ph.D. Thesis from the Ministry of Education of China in 2012.
| With the rapid growth of mobile devices and advancements in wireless networks in recent years, people can now watch video content on mobile devices anywhere and anytime. Streaming media technologies play an important role in ensuring that users can view such content smoothly and in real-time without waiting for complete file downloads. Specifically, the streaming media content captured by the cameras or the third-party streaming media content is encoded and segmented into data fragments. These data fragments are then transmitted to the server using appropriate transport protocols such as HTTP, HLS, RTMP, or RTSP. Users utilize client devices (e.g., mobile phones, tablets, computers, network TVs) to send requests over the Internet for accessing streaming media content. Upon receiving a client request, the server employs a content distribution network (CDN) to distribute the corresponding data fragment to the requesting client device. After decoding and rendering, the data is converted into audio and video content that the user can watch and listen to <cit.>. Video on Demand (VoD) and live streaming are two prevalent methods of streaming media technology. On the other hand, Live Streaming involves real-time transmission and display of audio or video content over the Internet, ensuring synchronized delivery for viewers to experience events as they unfold.
Limited network resources and fluctuations in client networks can result in distortions, such as degradation of video quality and stalling events, leading to a decline in the end users' Quality of Experience (QoE).<cit.> Therefore, it is crucial for streaming media content providers to comprehend the factors that influence user QoE and allocate resources appropriately to enhance their satisfaction.<cit.> In the domain of video streaming media, Quality of Experience (QoE) is associated with numerous indicators. Among them, Video Quality Assessment (VQA) plays a pivotal role in perceiving visual quality. However, the user's QoE is highly susceptible to disruptions such as stalling events and bit rate switching caused by network fluctuations. These factors are not evaluated by conventional VQA methods. QoE represents a comprehensive metric that encompasses video quality along with other distortions like stalling events and quality switching.
In contrast to the well-established VoD and QoE industry, the research on live streaming QoE remains insufficient, primarily due to two key factors.
* Limited publicly available live video databases. Current QoE databases like LIVE-NFLX and waterlooSQoE predominantly resemble VoD setups. Moreover, publicly available databases fail to accurately capture video stalling manifestations in live streaming scenarios where network issues often result in unexpected fluctuations in frame rate and frame skipping. These distortions as shown in Fig. <ref>.
* Unsatisfied qoe model mechanism. Publicly available QoE models such as KSQI and GCNN-QoE heavily rely on statistical data (e.g., stallinging time and location, bitrate), which are challenging to obtain beforehand in real-life situations, thus rendering them unsuitable for real-time live broadcast scenarios.
To address this issue, we have developed an extensive and authentic live broadcasting database known as the Tao Live QoE Database. We collect live videos from the Tao Live APP and artificially induce stalling events by manipulating the presentation time stamp (PTS) of the videos. It is important to note that our database encompasses various quality degradations commonly encountered in live broadcasts, including compression artifacts, stalling distortions, accelerated frame rates, and frame skipping. Furthermore, all videos in our database undergo rigorous subjective testing to obtain comprehensive retrospective QoE scores which are subsequently validated. Additionally, we introduce TAO-QoE, a pioneering deep learning-based approach capable of directly predicting QoE scores from video inputs without relying on supplementary statistics. This model performs feature extraction and fusion for assessing video presentation quality, quality switching dynamics, and occurrence of stalling events ultimately leading to retrospective QoE score predictions.
The main contributions of this work are summarized as follows:
* We establish a large-scale live video database. The study involved the collection of 42 high-quality videos, which were subsequently subjected to compression artifacts and stalling events by adjusting the Constant Rate Factor (CRF) parameters and presentation time stamp for each video frame. As a result, a total of 1,155 distorted live streaming videos were generated.
* We carry out a well-controlled subjective experiment. We invited 20 participants to take part in the subjective experiment, resulting in a total of 23,100 subjective annotations collected to generate the QoE scores for live videos.
* We propose TAO-QoE, a deep learning-based model for predicting Quality of Experience (QoE) in live video streaming. This model achieves optimal performance on public databases without the need for statistical information. | §.§ QoE Database
Over the past 15 years, numerous publicly available QoE databases have been developed to tackle QoE challenges. Table <ref> illustrates common QoE databases, including WaterlooSQoE database<cit.> and LIVE-NFLX<cit.>, which serve as comprehensive collections of multimedia content specifically designed for evaluating QoE in diverse multimedia applications. These databases encompass a wide range of multimedia stimuli, such as images and videos, spanning different resolutions, compression levels, and content types. They also incorporate intentionally impaired content to simulate various degradation scenarios like compression artifacts, rebuffering issues, and quality adaptation.
§.§ QoE Models
In early research, video Quality of Experience (QoE) was often determined based on a set of statistical features. These studies attempted to fit certain video transmission-related metrics into a mathematical formula to predict video QoE<cit.>. However, the video QoE is influenced by multiple factors, including presentation quality, smoothness, video quality switching, and video stuttering. These factors are closely related to users' viewing environments, personal preferences, and perceptual abilities. Therefore, relying solely on statistical features makes it difficult to capture users' subjective experiences, and more detailed consideration of user perception and evaluation is needed. In pursuit of a better evaluation of the impact of video presentation quality on the overall Video Quality of Experience (QoE), an increasing number of studies have embraced the integration of Visual Quality Assessment (VQA) within the QoE assessment framework. Depending on the availability of reference videos during the evaluation process, video quality assessment can be classified into three categories: full-reference(FR)<cit.>, reduced-reference(RR)<cit.>, and no-reference(NR) approaches<cit.>. Both Spiteri2016<cit.> and Bentaleb2016<cit.> regard the average bitrate of the video experienced by the user and the duration of the rebuffer events as the influencing factors of QoE. Duanmu et al. devised a QoE algorithm named SQI, which combines the FR VQA algorithm with video stalling quantification information to predict the QoE scores of videos<cit.>. In Video Assessment of Temporal Artifacts and Stalls
(Video ATLAS)<cit.>, Bampis et al. unify modeling of video presentation quality, stall-related features, and memory-related features of video. Subsequently, et made improvements to the SQI algorithm and developed the KSQI algorithm, which takes video presentation quality(VMAF), rebuffering, and quality adaptation (switching between profiles) into consideration<cit.>. With the vigorous development of deep learning technology, more and more researchers apply convolutional neural network(CNN) and recurrent neural network(RNN) to the prediction of video QoE. GCNN-QoE<cit.> and DA-QoE<cit.> both perform feature extraction and fusion on statistical features, then uses GRU to process the features and finally returns the QoE score. DeSVQ<cit.> feeds the high-level spatio-temporal features extracted by CNN and the low-level features measured by VQA to LSTM in turn, and finally returns the QoE score. The above three models have two common features that use statistical features and RNN. In <cit.>, Pengfei Chen et al. constructed an end-to-end framework named TRR-QoE, which combines feature extraction, processing and QoE prediction. In Chunyi Li et al.<cit.> employ ResNet-50 for frame feature extraction, fuse statistics like resolution and rebuffering, and regress QoE using Support Vector Regression (SVR). | null | null | null | In this paper, we construct a database Taolive QoE Database for large-scale live broadcast scenes. Taolive QoE Database selects 42 high-quality videos as the original video, and adds distortion by changing the CRF parameters and the PTS of the video frame. Meanwhile we conduct a subjective experiment to collect the QoE scores of these videos. Furthermore, we propose a QoE model to evaluate the QoE of videos from both semantic and motion aspects. Extensive experimental results confirm the effectiveness of the proposed method.
1
IEEEtran
ref-1
Hamilton, William A., Oliver Garretson, and Andruid Kerne. "Streaming on twitch: fostering participatory communities of play within live mixed media." Proceedings of the SIGCHI conference on human factors in computing systems. 2014.
ref-2
Akhtar, Zahid, et al. "Why is multimedia quality of experience assessment a challenging problem?." IEEE Access 7 (2019): 117897-117915.
ref-3
Brunnström, Kjell, et al. "Qualinet white paper on definitions of quality of experience." (2013).
brunnstrom2013qualinet
Brunnström, Kjell, et al. "Qualinet white paper on definitions of quality of experience." (2013).
mok2011measuring
Mok, Ricky KP, Edmond WW Chan, and Rocky KC Chang. "Measuring the quality of experience of HTTP video streaming." 12th IFIP/IEEE international symposium on integrated network management (IM 2011) and workshops. IEEE, 2011.
xue2014assessing
Xue, Jingteng, et al. "Assessing quality of experience for adaptive HTTP video streaming." 2014 IEEE International Conference on Multimedia and Expo Workshops (ICMEW). IEEE, 2014.
yin2015control
Yin, Xiaoqi, et al. "A control-theoretic approach for dynamic adaptive video streaming over HTTP." Proceedings of the 2015 ACM Conference on Special Interest Group on Data Communication. 2015.
manasa2016optical
Manasa, K., and Sumohana S. Channappayya. "An optical flow-based full reference video quality assessment algorithm." IEEE Transactions on Image Processing 25.6 (2016): 2480-2492.
wu2019quality
Wu, Jinjian, et al. "Quality assessment for video with degradation along salient trajectories." IEEE Transactions on Multimedia 21.11 (2019): 2738-2749.
bampis2018spatiotemporal
Bampis, Christos G., Zhi Li, and Alan C. Bovik. "Spatiotemporal feature integration and model fusion for full reference video quality assessment." IEEE Transactions on Circuits and Systems for Video Technology 29.8 (2018): 2256-2270.
wang2004video
Wang, Zhou, Ligang Lu, and Alan C. Bovik. "Video quality assessment based on structural distortion measurement." Signal processing: Image communication 19.2 (2004): 121-132.
seshadrinathan2009motion
Seshadrinathan, Kalpana, and Alan Conrad Bovik. "Motion tuned spatio-temporal quality assessment of natural videos." IEEE transactions on image processing 19.2 (2009): 335-350.
soundararajan2012video
Soundararajan, Rajiv, and Alan C. Bovik. "Video quality assessment by reduced reference spatio-temporal entropic differencing." IEEE Transactions on Circuits and Systems for Video Technology 23.4 (2012): 684-694.
wang2005reduced
Wang, Zhou, and Eero P. Simoncelli. "Reduced-reference image quality assessment using a wavelet-domain natural image statistic model." Human vision and electronic imaging X. Vol. 5666. SPIE, 2005.
ma2011reduced
Ma, Lin, et al. "Reduced-reference image quality assessment using reorganized DCT-based image representation." IEEE Transactions on multimedia 13.4 (2011): 824-829.
rehman2012reduced
Rehman, Abdul, and Zhou Wang. "Reduced-reference image quality assessment by structural similarity estimation." IEEE transactions on image processing 21.8 (2012): 3378-3389.
men2017empirical
Men, Hui, Hanhe Lin, and Dietmar Saupe. "Empirical evaluation of no-reference VQA methods on a natural video quality database." 2017 Ninth international conference on quality of multimedia experience (QoMEX). IEEE, 2017.
men2018spatiotemporal
Men, Hui, Hanhe Lin, and Dietmar Saupe. "Spatiotemporal feature combination model for no-reference video quality assessment." 2018 Tenth international conference on quality of multimedia experience (QoMEX). IEEE, 2018.
li2015no
Li, Yuming, et al. "No-reference video quality assessment with 3D shearlet transform and convolutional neural networks." IEEE Transactions on Circuits and Systems for Video Technology 26.6 (2015): 1044-1057.
saad2014blind
Saad, Michele A., Alan C. Bovik, and Christophe Charrier. "Blind prediction of natural video quality." IEEE Transactions on image Processing 23.3 (2014): 1352-1365.
xu2014no
Xu, Jingtao, et al. "No-reference video quality assessment via feature learning." 2014 IEEE international conference on image processing (ICIP). IEEE, 2014.
mittal2015completely
Mittal, Anish, Michele A. Saad, and Alan C. Bovik. "A completely blind video integrity oracle." IEEE Transactions on Image Processing 25.1 (2015): 289-300.
li2019quality
Li, Dingquan, Tingting Jiang, and Ming Jiang. "Quality assessment of in-the-wild videos." Proceedings of the 27th ACM International Conference on Multimedia. 2019.
chen2020rirnet
Chen, Pengfei, et al. "RIRNet: Recurrent-in-recurrent network for video quality assessment." Proceedings of the 28th ACM international conference on multimedia. 2020.
spiteri2020bola
Spiteri, Kevin, Rahul Urgaonkar, and Ramesh K. Sitaraman. "BOLA: Near-optimal bitrate adaptation for online videos." IEEE/ACM transactions on networking 28.4 (2020): 1698-1711.
bentaleb2016sdndash
Bentaleb, Abdelhak, Ali C. Begen, and Roger Zimmermann. "SDNDASH: Improving QoE of HTTP adaptive streaming using software defined networking." Proceedings of the 24th ACM international conference on Multimedia. 2016.
duanmu2016quality
Duanmu, Zhengfang, et al. "A quality-of-experience index for streaming video." IEEE Journal of Selected Topics in Signal Processing 11.1 (2016): 154-166.
ALTAS
Bampis, Christos G., and Alan C. Bovik. "Learning to predict streaming video QoE: Distortions, rebuffering and memory." arXiv preprint arXiv:1703.00633 (2017).
duanmu2019knowledge
Duanmu, Zhengfang, et al. "A knowledge-driven quality-of-experience model for adaptive streaming videos." arXiv preprint arXiv:1911.07944 (2019).
GCNN-QoE
Zhou, Zhiming, et al. "Quality of Experience Evaluation for Streaming Video Using CGNN." 2020 IEEE International Conference on Visual Communications and Image Processing (VCIP). IEEE, 2020.
DA-QoE
Li, Leida, et al. "From Whole Video to Frames: Weakly-Supervised Domain Adaptive Continuous-Time QoE Evaluation." IEEE Transactions on Image Processing 31 (2022): 4937-4951.
DeSVQ
Ghosh, Monalisa, Dr Chetna Singhal, and Rushikesh Wayal. "DeSVQ: Deep learning based streaming video QoE estimation." Proceedings of the 23rd International Conference on Distributed Computing and Networking. 2022.
TRR-QoE
Chen, Pengfei, et al. "Temporal reasoning guided QoE evaluation for mobile live video broadcasting." IEEE Transactions on Image Processing 30 (2021): 3279-3292.
P1203
Raake, Alexander, et al. "A bitstream-based, scalable video-quality model for HTTP adaptive streaming: ITU-T P. 1203.1." 2017 Ninth international conference on quality of multimedia experience (QoMEX). IEEE, 2017.
waterloo1
Duanmu, Zhengfang, et al. "A quality-of-experience index for streaming video." IEEE Journal of Selected Topics in Signal Processing 11.1 (2016): 154-166.
waterloo2
Duanmu, Zhengfang, Kede Ma, and Zhou Wang. "Quality-of-experience for adaptive streaming videos: An expectation confirmation theory motivated approach." IEEE Transactions on Image Processing 27.12 (2018): 6135-6146.
waterloo3
Duanmu, Zhengfang, Abdul Rehman, and Zhou Wang. "A quality-of-experience database for adaptive video streaming." IEEE Transactions on Broadcasting 64.2 (2018): 474-487.
waterloo4
Duanmu, Zhengfang, et al. "Assessing the quality-of-experience of adaptive bitrate video streaming." arXiv preprint arXiv:2008.08804 (2020).
live1
Bampis, Christos George, et al. "Study of temporal effects on subjective video quality of experience." IEEE Transactions on Image Processing 26.11 (2017): 5217-5231.
live2
Bampis, Christos G., et al. "Towards perceptually optimized end-to-end adaptive video streaming." arXiv preprint arXiv:1808.03898 (2018).
luchunyi
Li, Chunyi, et al. "A real-time blind quality-of-experience assessment metric for HTTP adaptive streaming." arXiv preprint arXiv:2303.09818 (2023).
ITI
ITU-R, "Methodology for the subjective assessment of the quality of television pictures," Recommendation BT.500, International Telecommunication Union, 2002.
swin
Liu, Ze, et al. "Swin transformer: Hierarchical vision transformer using shifted windows." Proceedings of the IEEE/CVF international conference on computer vision. 2021.
sun2018pwc
Sun, Deqing, et al. "Pwc-net: Cnns for optical flow using pyramid, warping, and cost volume." Proceedings of the IEEE conference on computer vision and pattern recognition. 2018.
resnet
Hara, Kensho, Hirokatsu Kataoka, and Yutaka Satoh. "Can spatiotemporal 3d cnns retrace the history of 2d cnns and imagenet?." Proceedings of the IEEE conference on Computer Vision and Pattern Recognition. 2018.
sunwei
Sun, Wei, et al. "A deep learning based no-reference quality assessment model for ugc videos." Proceedings of the 30th ACM International Conference on Multimedia. 2022.
narwaria2012low
Narwaria, Manish, Weisi Lin, and Anmin Liu. "Low-complexity video quality assessment using temporal quality variations." IEEE Transactions on Multimedia 14.3 (2012): 525-535.
imagenet
Deng, Jia, et al. "Imagenet: A large-scale hierarchical image database." 2009 IEEE conference on computer vision and pattern recognition. Ieee, 2009.
kay2017kinetics
Kay, Will, et al. "The kinetics human action video dataset." arXiv preprint arXiv:1705.06950 (2017).
kingma2014adam
Kingma, Diederik P., and Jimmy Ba. "Adam: A method for stochastic optimization." arXiv preprint arXiv:1412.6980 (2014).
pytorch
Paszke, Adam, et al. "Automatic differentiation in pytorch." (2017).
LIVE-Qualcomm
Ghadiyaram, Deepti, et al. "In-capture mobile video distortions: A study of subjective behavior and objective algorithms." IEEE Transactions on Circuits and Systems for Video Technology 28.9 (2017): 2061-2077.
CVD2014
Nuutinen, Mikko, et al. "CVD2014—A database for evaluating no-reference video quality assessment algorithms." IEEE Transactions on Image Processing 25.7 (2016): 3073-3086.
KoNViD-1k
Hosu, Vlad, et al. "The Konstanz natural video database (KoNViD-1k)." 2017 Ninth international conference on quality of multimedia experience (QoMEX). IEEE, 2017.
VDPVE
Gao, Yixuan, et al. "VDPVE: VQA Dataset for Perceptual Video Enhancement." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
MSU
Antsiferova, Anastasia, et al. "Video compression dataset and benchmark of learning-based video-quality metrics." Advances in Neural Information Processing Systems 35 (2022): 13814-13825.
LIVE-VQC
Sinno, Zeina, and Alan Conrad Bovik. "Large-scale study of perceptual video quality." IEEE Transactions on Image Processing 28.2 (2018): 612-627.
YouTubeUGC
Wang, Yilin, Sasi Inguva, and Balu Adsumilli. "YouTube UGC dataset for video compression research." 2019 IEEE 21st International Workshop on Multimedia Signal Processing (MMSP). IEEE, 2019.
LIVE-WC
Yu, Xiangxu, et al. "Predicting the quality of compressed videos with pre-existing distortions." IEEE Transactions on Image Processing 30 (2021): 7511-7526.
VQEG-1
Zhu, Wenhan, et al. "Multi-channel decomposition in tandem with free-energy principle for reduced-reference image quality assessment." IEEE Transactions on Multimedia 21.9 (2019): 2334-2346.
VQEG-2
Min, Xiongkuo, et al. "Objective quality evaluation of dehazed images." IEEE Transactions on Intelligent Transportation Systems 20.8 (2018): 2879-2892.
VQEG-3
Min, Xiongkuo, et al. "Quality evaluation of image dehazing methods using synthetic hazy images." IEEE Transactions on Multimedia 21.9 (2019): 2319-2333.
critetia-2-1
Krasula, Lukáš, et al. "On the accuracy of objective image and video quality models: New methodology for performance evaluation." 2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX). IEEE, 2016.
critetia-2-2
Hanhart, Philippe, et al. "How to benchmark objective quality metrics from paired comparison data?." 2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX). Ieee, 2016.
critetia-2-3
Krasula, Lukáš, et al. "Quality assessment of sharpened images: Challenges, methodology, and objective metrics." IEEE Transactions on Image Processing 26.3 (2017): 1496-1508.
critetia-2-4
Krasula, Lukáš, et al. "Preference of experience in image tone-mapping: Dataset and framework for objective measures comparison." IEEE Journal of Selected Topics in Signal Processing 11.1 (2016): 64-74.
Brill
Brill, Michael H., et al. "Accuracy and cross-calibration of video quality metrics: new methods from ATIS/T1A1." Signal Processing: Image Communication 19.2 (2004): 101-107.
Hanley
Hanley, James A., and Barbara J. McNeil. "A method of comparing the areas under receiver operating characteristic curves derived from the same cases." Radiology 148.3 (1983): 839-843.
LTVQM
Korhonen, Jari. "Two-level approach for no-reference consumer video quality assessment." IEEE Transactions on Image Processing 28.12 (2019): 5923-5938.
VSFA
Li, Dingquan, Tingting Jiang, and Ming Jiang. "Quality assessment of in-the-wild videos." Proceedings of the 27th ACM International Conference on Multimedia. 2019.
SimpleVQA
Sun, Wei, et al. "A deep learning based no-reference quality assessment model for ugc videos." Proceedings of the 30th ACM International Conference on Multimedia. 2022.
FastVQA
Wu, Haoning, et al. "Fast-vqa: Efficient end-to-end video quality assessment with fragment sampling." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022.
Zehao Zhu
received the B.E. degree in electronic information engineering from Jilin University in 2018 and the M.E. degree in information and communication engineering from Shanghai Jiao Tong University in 2021. He is currently pursuing the Ph.D. degree in electronic engineering with Shanghai Jiao Tong University, Shanghai, China. His research interests include streaming media quality of experience assessment and video quality assessment.
Wei Sun
received the B.E. degree from the East China University of Science and Technology, Shanghai, China, in 2016, and the Ph.D. degree from Shanghai Jiao Tong University, Shanghai, China, in 2023. He is currently a Post-Doctoral Fellow with Shanghai Jiao Tong University. His research interests include image quality assessment, perceptual signal processing and mobile video processing.
Jun Jia
received the B.S. degree in computer science and technology from Hunan University, Changsha, China, in 2018. He is currently pursuing the Ph.D. degree in electronic engineering with Shanghai Jiao Tong University, Shanghai, China.
His current research interests include computer vision and image processing.
Wei Wu
received his Ph.D. in computer application technology from Wuhan University, Wuhan, China, in 2021. He joined Donghai Laboratory in 2023. Before joining Donghai Laboratory, he was a Senior Engineer with Alibaba Group, Hangzhou, China, from 2021 to 2023. His research interests include image and video processing, computer vision, and multi-source heterogeneous data fusion.
Jia Wang
received the B.Sc. degree in electronic engineering, the M.S. degree in pattern recognition and intelligence control, and the Ph.D. degree in electronic engineering from Shanghai Jiao Tong University, China, in 1997, 1999, and 2002, respectively. He is currently a Professor with the Department of Electronic Engineering, Shanghai Jiao Tong University, and also a member of the Shanghai Key Laboratory of Digital Media Processing and Transmission. His research interests include multiuser information theory and mathematics in artificial intelligence.
Ying Chen
(IEEE M’05 - SM’11) received a B.S. in Applied Mathematics and an M.S. in Electrical Engineering & Computer Science, both from Peking University, in 2001 and 2004 respectively. He received his PhD in Computing and Electrical Engineering from Tampere University of Technology (TUT), Finland, in 2010.
Dr. Chen joined Alibaba Group in 2018 as a Senior Director. Before joining Alibaba, his working experience includes Principal Engineer at Qualcomm Incorporated, San Diego, CA, USA, from 2009 to 2018, Researcher at TUT and Nokia Research Center, Finland, from 2006 to 2009, and Research Engineer at Thomson Corporate Research, Beijing, from 2004 to 2006. Dr. Chen is currently leading the Audiovisual Technology Group in Taobao, Alibaba, supporting end-to-end multimedia features and applications within Taobao. Dr. Chen has been focusing on algorithm development and commercialization of multimedia technologies. Dr. Chen contributed to three generations of video coding standards, H.264/AVC, H.265/HEVC, and H.266/VVC (earlier stage), as well as video file format and transport standards, with more than 500 technical contribution documents. Dr. Chen has served as an editor and a software coordinator for H.264/AVC and H.265/HEVC (both for Multiview and 3D Video extensions). Dr. Chen's research areas include video coding, image/video restoration and enhancement, image/video quality assessment and video transmission. Dr. Chen has authored or co-authored more than 80 academic papers and over 250 granted US patents in the fields of image/video processing and coding, multimedia transmission and computer vision. His publications have been cited more than 20000 times. Dr. Chen has served as an associate editor for IEEE Transactions on CSVT.
[
< g r a p h i c s >
]Guangtao Zhai
(Senior Member, IEEE) received the B.E. and M.E. degrees from Shandong University, Shandong, China, in 2001 and 2004, respectively, and the Ph.D. degree from Shanghai Jiao Tong University, Shanghai, China, in 2009. From 2008 to 2009, he was a Visiting Student with the Department of Electrical and Computer Engineering, McMaster University, Hamilton, ON, Canada, where he was a Post-Doctoral Fellow, from 2010 to 2012. From 2012 to 2013, he was a Humboldt Research Fellow with the Institute of Multimedia Communication and Signal Processing, Friedrich-Alexander-University of Erlangen–Nüremberg, Germany. He is currently a Research Professor with the Institute of Image Communication and Information Processing, Shanghai Jiao Tong University. His research interests include multimedia signal processing and perceptual signal processing. He received the Award of National Excellent Ph.D. Thesis from the Ministry of Education of China in 2012. |
arXiv:2409.17302v1 [math.NA, cs.NA], 25 September 2024
Riemannian conjugate Sobolev gradients and their application to compute ground states of BECs
[Y. Ai and S. Yuan acknowledge the support by The Chinese University of Hong Kong and P. Henning and M. Yadav acknowledge the support by the German Research Foundation (DFG grant HE 2464/7-1).]
Yueshan Ai[Department of Mathematics, The Chinese University of Hong Kong, Shatin, Hong Kong], Patrick Henning[Department of Mathematics, Ruhr-University Bochum, DE-44801 Bochum, Germany], Mahima Yadav<ref> and Sitong Yuan<ref>
September 28, 2024
§ ABSTRACT
This work considers the numerical computation of ground states of rotating Bose-Einstein condensates (BECs), which can exhibit a multiscale lattice of quantized vortices. This problem involves the minimization of an energy functional on a Riemannian manifold. For this we apply the framework of nonlinear conjugate gradient methods in combination with the paradigm of Sobolev gradients to investigate different metrics. Here we build on previous work that proposed to enhance the convergence of regular Riemannian gradient methods by an adaptively changing metric that is based on the current energy. In this work, we extend this approach to the branch of Riemannian conjugate gradient (CG) methods and investigate the arising schemes numerically. Special attention is given to the selection of the momentum parameter in the search direction update and how this affects the performance of the resulting schemes. As known from similar applications, we find that the choice of the momentum parameter plays a critical role, with certain parameters reducing the number of iterations required to achieve a specified tolerance by a significant factor. Besides the influence of the momentum parameters, we also investigate how the methods with adaptive metric compare to the corresponding realizations with a standard H^1_0-metric. As one of our main findings, the results of the numerical experiments show that the Riemannian CG method with the proposed adaptive metric along with a Polak–Ribiére or Hestenes–Stiefel-type momentum parameter shows the best performance and the highest robustness compared to the other CG methods that were part of our numerical study.
§ INTRODUCTION
An extreme state of matter with remarkable superfluid properties is formed when a dilute bosonic gas condenses at temperatures close to 0 Kelvin into a so-called Bose–Einstein condensate (BEC), cf. <cit.>. Its extraordinary superfluid nature can be checked by verifying the existence of quantized vortices in the rotating BEC. In practical setups, the appearance of such vortices is crucially related to the interplay of a (magnetic or optical) trapping potential V (to confine the condensate) and the angular frequency Ω of a stirring potential (to rotate the condensate). If the angular frequency is too small compared to the strength of the trapping potential, no vortices appear. If the angular frequency is too high, the condensate can be destroyed by centrifugal forces. Only in an intermediate regime can a rich landscape of vortex patterns be observed and studied. In this paper, we are concerned with the numerical computation of such patterns by seeking ground states (i.e. the lowest energy states) of BECs in a rotating frame.
On a given computational domain 𝒟 ⊂ ℝ^d (for d=2,3), a ground state is mathematically described through its quantum state u : 𝒟 → ℂ, whereas the vortices become visible in the corresponding density |u|^2 : 𝒟 → ℝ. The density is usually normalized such that the total mass of the BEC fulfills ∫_𝒟 |u|^2 dx = 1 or, more precisely, it should hold
u ∈ 𝕊 := { v ∈ H^1_0(𝒟,ℂ) | ‖v‖_L^2(𝒟) = 1 },
where H^1_0(𝒟,ℂ) denotes the usual Sobolev space of weakly differentiable, square-integrable and complex-valued functions with a vanishing trace v|_∂𝒟 = 0.
In a given configuration, a corresponding ground state is characterized as a global minimizer of the total energy of the system. This energy is described by the Gross–Pitaevskii energy functional E : 𝕊→ℝ given by
E(u)
:= 1/2∫_𝒟 |∇ u|^2 + V |u|^2 - Ω u̅ ℒ_3 u + κ/2 |u|^4
for u∈𝕊.
Here, u̅ denotes the complex conjugate of u, V represents the external trapping potential, ℒ_3 := -i( x_1 ∂_x_2 - x_2 ∂_x_1) denotes the x_3-component of the angular momentum (hence describing a rotation around the x_3-axis), Ω∈ℝ is the corresponding angular velocity of the stirring potential and κ∈ℝ^+ is a repulsion parameter that encodes the strength of particle interactions. For a comprehensive introduction to the basic theory and mathematical properties of ground states of rotating BECs, we refer to the papers by Bao et al. <cit.>.
In a nutshell, finding a ground state requires minimizing the energy E over the Riemannian manifold 𝕊 (which, we recall, incorporates the mass normalization constraint for u). Hence, we are concerned with a Riemannian minimization/optimization problem, which is precisely the perspective that we are taking in this paper. Alternatively, the problem can also be viewed through the lens of nonlinear eigenvalue problems by examining the corresponding Euler–Lagrange equations, known as the Gross–Pitaevskii eigenvalue problem, associated with the constrained energy minimization problem <cit.>. The links between the two perspectives (nonlinear eigenvalue problem vs. Riemannian minimization problem) are elaborated in the review paper <cit.>. We also note that the Euler–Lagrange equations can be tackled with Newton-type methods <cit.>.
As mentioned above, we adopt in the following the perspective of directly minimizing the energy in an iterative process through discretized Riemannian gradient flows / Riemannian gradient methods, cf. <cit.>. The development of optimization techniques on Riemannian manifolds goes back to Smith et al. <cit.> and has been significantly extended in the past three decades. Modern methods combine the concepts of Sobolev gradients, Riemannian descent directions, Riemannian vector transport, and retractions to the constrained manifold 𝕊, cf. <cit.>. In this paper, we will further explore this path by constructing new metric-adaptive Riemannian conjugate gradient (CG) methods for the considered application of rotating BECs. In particular, we will numerically investigate the performance of the new methods. Our experimental analysis primarily focuses on improving the computational efficiency of the schemes, specifically by (a) selecting appropriate metrics for the constrained manifold (or more precisely, the tangent spaces at the current iterates u^n) and (b) choosing the proper momentum parameter (denoted as β) to update the next search direction. With this, we also want to examine an open question posed in <cit.>: “it remains an open question whether the use of a well-adapted Riemannian metric defined on the constraint manifold could further improve the performance of the approach.”
Note here that the choice of the metric is a crucial ingredient, because it changes the Riemannian gradient of E on the manifold and hence the way how a steepest descent/ascent is characterised. More precisely, the Riemannian gradient of E in a point v∈𝕊 with respect to an X-metric (induced by an inner product (·,·)_X) is obtained in two steps: First, construct the Riesz representation of E^'(v) in X. The representation is called a Sobolev gradient of E (cf. <cit.>) and is denoted by ∇_X E(v). Second, project ∇_X E(v) into the tangent space at v ∈𝕊 with the X-orthogonal projection. The resulting function is the Riemannian gradient of E in v w.r.t. the X-metric. In the context of rotating BECs, the most popular metrics for conjugate Riemannian gradient methods are the L^2-metric (in combination with suitable preconditioners) as studied by Tang et al. <cit.> and an energy-metric (based on an inner product of the form (u,v)_L^2(𝒟) + ( ∇_𝐀 u , ∇_𝐀 v )_L^2(𝒟) with ∇_𝐀 v := ∇ v - iΩ 𝐀^⊤ v and 𝐀=(x_2,-x_1,0)) proposed by Danaila et al. <cit.>.
In this work, we hence close this gap and formulate a corresponding Riemannian conjugate gradient method with adaptively changing metric as sketched above and with various choices for momentum parameters. Selecting a suitable momentum parameter is of great practical importance since different choices can result in significant variations in the number of iterations required to attain a specified tolerance. We will consider four different choices according to the most popular types of momentum parameters, which are Dai–Yuan <cit.>, Fletcher–Reeves <cit.>, Hestenes–Stiefel <cit.> and Polak–Ribiére <cit.>. If the momentum parameter is uniformly zero, the method reduces to a Riemannian gradient method. Besides comparing the performance of the parameters, we also compare the iteration numbers for the adaptive metric with the iteration numbers for the H^1_0-metric. In our numerical experiments we find that the adaptive metric leads to a significant acceleration compared to the H^1_0-method and that the corresponding Polak–Ribiére and Hestenes–Stiefel-type parameters performed significantly better than the other two (including the generic choice β=0).
So far, the mathematical convergence analysis of Riemannian conjugate gradient methods for the Gross–Pitaevskii problem is fully open and hence beyond the scope of this paper. However, in the case β=0 (i.e. the case of Riemannian gradient methods), a convergence analysis was recently established in <cit.>.
Finally we note that a practical application of Riemannian gradient methods also requires a space discretization (of the partial differential equations that have to be solved in each iteration of the gradient method). Typical approaches include spectral and pseudo-spectral methods <cit.>, finite element methods <cit.> or spectral element methods <cit.>. We leave the choice open in this paper and only investigate the outer gradient iteration. The algorithms can be afterwards combined with any preferred spatial discretization that potentially exploits the structure of the considered metric.
Outline: The paper is organised as follows. The model framework is introduced in Section <ref>. In Section <ref>, we recall the concepts of Sobolev gradients and Riemannian steepest descent. With this we propose a class of Riemannian conjugate gradient methods with different realizations based on the choice of the metric and the momentum parameter. Lastly, in Section <ref>, we evaluate the performance of the aforementioned Riemannian CG methods for different model problems and discuss the numerical observations.
§ SETTING AND PROBLEM FORMULATION
In this section, we introduce the basic notation and state the Gross–Pitaevskii equation for rotating Bose–Einstein condensates, including the necessary assumptions for the existence of ground states. These assumptions are valid for the whole manuscript.
§.§ Notation and assumptions
For the subsequent discussion on minimizing the energy E given by (<ref>), we consider a bounded Lipschitz-domain ⊂ℝ^d for d=2,3. Furthermore, we make the following set of assumptions.
* The repulsion parameter κ≥ 0 that characterizes the particle interactions is a real-valued constant. The trapping potential V is non-negative, real-valued and essentially bounded on , i.e., V ∈ L^∞(𝒟,ℝ_≥ 0 ). To ensure that the centrifugal forces to do not exceed the strength of the trapping potential, the angular velocity Ω∈ must be small enough such that there is a constant > 0 with
V(x) - (1 + ε)/4 Ω^2 (x_1^2 + x_2^2) ≥ 0 for almost all x ∈ 𝒟.
Note here that only the trapping frequencies in the (x_1,x_2)-plane are relevant, because the rotation of the BEC is around the x_3-axis.
The above assumptions are not only needed to ensure the existence of minimizers (ground states) but also to ensure well-posedness of the adaptive metric that we construct later on. Regarding the existence of ground states, we note that the positivity of V is not required, but only introduced to avoid technical issues in the presentation of our results. Note that assuming positivity of V is not restrictive since constant shifts of the energy do not change the minimizers. Hence, we can just add a (sufficiently large) constant to V to make it positive. The assumption of positivity of κ can be relaxed to some extent to small negative values. However, this regime of attractive particle interactions is still not yet fully understood, which is why we exclude it here. Finally, the balancing assumption of V and Ω is crucial and there typically exist no ground states if it is violated, cf. <cit.>.
As we are concerned with minimizing a real-valued functional E, it is important to note that we must consider ℝ-Hilbert spaces to ensure Fréchet-differentiability of E (cf. <cit.> for a detailed motivation). For that reason, we equip the Sobolev space H_0^1(𝒟):= H_0^1(𝒟,ℂ), over which we minimize E, with the real inner product (v,w)_H^1_0(𝒟) := Re( ∫_𝒟 ∇ v ·∇w̅ dx ). Recall here that w̅ denotes the complex conjugate of w. Similarly, we define the Lebesgue space L^2(𝒟):= L^2(𝒟,ℂ) as a real Hilbert space with inner product (v,w)_L^2(𝒟) := Re( ∫_𝒟 v w̅ dx ). The corresponding (real) dual space is denoted by H^-1(𝒟) := (H_0^1(𝒟))^* with canonical duality pairing ⟨· , ·⟩ := ⟨· ,·⟩_H^-1(𝒟), H^1_0(𝒟).
§.§ Model Problem and first and second order optimality conditions
With the notation above, we recall the Gross–Pitaevskii energy functional E :H_0^1() →ℝ from (<ref>) as
E(v)
= 1/2∫_𝒟 |∇ v|^2 + V |v|^2 - Ω v̅ ℒ_3v + κ/2 |v|^4
and we recall that a ground state u is defined as a global minimizer of E on the restricted L^2-unit sphere 𝕊 = { v ∈ H^1_0(𝒟) | ‖v‖_L^2(𝒟) = 1 }. Since 𝕊 is a Riemannian manifold, we are hence concerned with a Riemannian optimization problem which reads:
Find u ∈ 𝕊 such that E(u) = min_{v ∈ 𝕊} E(v).
The homogeneous Dirichlet boundary conditions imposed for u through H^1_0() can be practically justified with the exponential decay of ground states outside of any sufficiently large domain (whose necessary diameter can be estimated through the Thomas–Fermi radius of the condensate), cf. <cit.>.
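To make the minimization problem concrete, the following short sketch evaluates a discretized version of E on a uniform grid of 𝒟 = [-L,L]^2 with finite differences and a simple tensor-product quadrature. It is only meant as an illustration: the grid size, the harmonic trapping potential and all parameter values in the usage example are placeholders, and the numerical experiments later in this paper use a finite element discretization instead.

```python
import numpy as np

def gp_energy(v, L, V, Omega, kappa):
    """Discrete Gross-Pitaevskii energy of a complex state v given on a uniform
    (N x N) grid of [-L, L]^2 with (approximately) vanishing boundary values."""
    N = v.shape[0]
    h = 2 * L / (N - 1)
    x = np.linspace(-L, L, N)
    X1, X2 = np.meshgrid(x, x, indexing="ij")

    dv1 = np.gradient(v, h, axis=0)          # approximation of dv/dx_1
    dv2 = np.gradient(v, h, axis=1)          # approximation of dv/dx_2
    L3v = -1j * (X1 * dv2 - X2 * dv1)        # angular momentum term L_3 v

    dens = np.abs(v) ** 2
    integrand = (0.5 * (np.abs(dv1) ** 2 + np.abs(dv2) ** 2)
                 + 0.5 * V(X1, X2) * dens
                 - 0.5 * Omega * np.real(np.conj(v) * L3v)
                 + 0.25 * kappa * dens ** 2)
    return float(h * h * np.sum(integrand))  # simple tensor-product quadrature

# illustration: normalized Gaussian in a (hypothetical) harmonic trap
N, L = 128, 8.0
x = np.linspace(-L, L, N)
X1, X2 = np.meshgrid(x, x, indexing="ij")
v0 = np.exp(-(X1**2 + X2**2) / 2).astype(complex)
v0 /= np.sqrt(np.sum(np.abs(v0)**2) * (2 * L / (N - 1))**2)   # mass constraint
print(gp_energy(v0, L, lambda a, b: 0.5 * (a**2 + b**2), Omega=0.8, kappa=100.0))
```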
As stated in the introduction, ground states are the stable stationary states at the lowest possible energy level, with a constraint on the mass of the condensate, represented by the condition u ∈ 𝕊. Local minimizers or saddle points of E on 𝕊 that are not ground states are called excited states. Under the assumptions stated in <ref>, the energy functional is positively bounded from below on 𝕊 and the existence of global minimizers can be established with standard arguments. In particular, we have the following existence result (cf. <cit.> and <cit.>).
Let 𝒟⊂ℝ^d, d=2,3 be a bounded Lipschitz-domain, and assume <ref> holds. Then there exists at least one ground state u ∈𝕊 to problem (<ref>) and it holds E(u)>0.
Since the energy E is invariant under complex phase shifts, i.e., E(u) = E(e^iω u) for all ω ∈ (0, 2π], we have that e^iω u is a ground state if u is a ground state. Hence, we cannot hope for uniqueness of minimizers in (<ref>). Although the density |u|^2 is independent of such phase shifts, even uniqueness of |u|^2 can only be guaranteed up to some (small) critical frequency of Ω, cf. <cit.>.
In order to set the stage for the Riemannian gradient descent in the next section, we need to consider the derivatives of E. It is easy to see that the energy is infinitely often (ℝ-)Fréchet differentiable on the Hilbert space H^1_0(𝒟) and a calculation of the first two derivatives yields
⟨ E^'(u) , v ⟩ = ( ∇ u , ∇ v)_L^2(𝒟) + ( V u - Ω ℒ_3 u + κ |u|^2 u , v )_L^2(𝒟) ,
⟨ E''(u) v, w ⟩ = ( ∇ v , ∇ w)_L^2(𝒟) + ( V v - Ω ℒ_3 v + κ |u|^2 v , w )_L^2(𝒟) + 2 κ ( Re(u̅ v) u , w )_L^2(𝒟)
for any u,v,w ∈ H^1_0(𝒟). Note that E''(u) is a self-adjoint operator with real and positive eigenvalues if <ref> is fulfilled, cf. <cit.>.
First order optimality condition. Since a ground state u ∈ 𝕊 is a constrained minimizer of E (with constraint ∫_𝒟 |u|^2 dx = 1), we can also formulate the corresponding Euler–Lagrange equations. To be precise, for any ground state u ∈ H^1_0(𝒟), there exists a Lagrange multiplier λ > 0 such that
⟨ E^'(u) , v ⟩ = λ ( u , v )_L^2(𝒟) for all v ∈ H^1_0(𝒟).
The right hand side of (<ref>) should be seen as the ℝ-Fréchet derivative of the constraint functional v ↦ 1/2( ‖v‖_L^2(𝒟)^2 - 1). Since λ can be equivalently interpreted as an eigenvalue of E^', problem (<ref>) is commonly known as the Gross–Pitaevskii eigenvalue problem (GPEVP). By using the explicit representation of E^' and expressing it in the sense of distributions, we can write the GPEVP (<ref>) as: Find u ∈ H^1_0(𝒟) and λ ∈ ℝ such that
-Δ u + V u - Ω ℒ_3u + κ |u|^2u = λ u.
The eigenvalue λ corresponding to a ground state u is called a ground state eigenvalue. Unlike for non-rotating condensates, i.e., Ω=0, the ground state eigenvalue λ may not be the smallest eigenvalue if the angular velocity is sufficiently high. Numerical evidence supporting this was first pointed out in <cit.> and can be also found in Section <ref> of this paper.
Note that (<ref>) (and equivalently (<ref>)) is nothing but the first order optimality condition for minimizers of E on 𝕊. In the case of Ω=0, this condition is even sufficient to characterize global minimizers if λ is the smallest eigenvalue of E^', cf. <cit.>. For the case Ω≠0, this is unfortunately no longer possible since λ can appear anywhere in the spectrum of E^'. However, we can consider the second order optimality conditions for our minimization problem.
Second order optimality condition. For the second order optimality condition, we consider the tangent space at u on the manifold 𝕊 which is given by
T_u𝕊 := { v ∈ H^1_0(𝒟) | (u,v)_L^2(𝒟) = 0 }.
The above tangent space is obtained as the null space of the Fréchet derivative of the mass constraint functional u ↦ ( ∫_𝒟 |u|^2 dx - 1 ) on H^1_0(𝒟). To identify if a given u ∈ 𝕊 is a minimizer, we have to study the spectrum of E''(u)|_T_u𝕊. To be precise, we seek eigenfunctions v_i ∈ T_u𝕊 and corresponding eigenvalues λ_i ∈ ℝ such that
⟨ E''(u) v_i , w ⟩ = λ_i (v_i , w)_L^2(𝒟) for all w ∈ T_u𝕊.
Let the eigenvalues be ordered by size with 0<λ_1 ≤λ_2 ≤…. If u is a local minimizer of E on 𝕊 with Lagrange multiplier λ, then the necessary second order optimality condition states that it must hold
λ_i ≥ λ for all i ≥ 1,
whereas the sufficient second order optimality condition demands
λ_1 = λ and λ_i > λ for all i ≥ 2.
Here, λ denotes the ground state eigenvalue to the ground state u in the sense of the first order condition (GPEVP) (<ref>). Note that λ must always appear at the bottom of the spectrum (if u is a minimizer) since we have λ_1=λ for the eigenfunction v_1 = iu ∈ T_u𝕊, cf. <cit.>. This eigenfunction originates from the aforementioned invariance of E under phase shifts of u, i.e. E(u) = E(e^iω u). Consequently, if we have an eigenfunction u ∈𝕊 of E^' with eigenvalue λ (i.e. it holds (<ref>)) and if we find that λ_1=λ is a simple eigenvalue of E''(u)|_T_u𝕊, then the sufficient second order optimality condition holds and u must be a local minimizer. We call such minimizers quasi-isolated and it is an open conjecture if all minimizers are quasi-isolated.
To give a brief motivation for the first and second order optimality conditions, we can consider arbitrary smooth curves γ(t) on 𝕊. If, for example, γ : (-1,1) → 𝕊 is a two-times differentiable curve with γ(0)=u, then it must hold γ'(0) ∈ T_u𝕊, since the curve is tangential to u in t=0. Furthermore, the map t ↦ E( γ(t) ) must have a minimum in t=0. Consequently, it holds
d/dt E(γ(t)) |_t=0 = 0 and d^2/dt^2 E(γ(t)) |_t=0 ≥ 0.
By computing the corresponding derivatives and defining v:=γ'(0) ∈ T_u𝕊 (which can be arbitrary), we obtain the first order optimality condition d/dt E(γ(t)) |_t=0 = 0 as ⟨ E^'(u) , v⟩ = 0 for all v ∈ T_u𝕊. Equivalently, we can decompose an arbitrary w ∈ H^1_0(𝒟) into w = α u + v for some α ∈ ℝ and v ∈ T_u𝕊. By defining λ:=⟨ E^'(u) , u ⟩ we obtain from ⟨ E^'(u) , v⟩ = 0 that
⟨ E^'(u) , w⟩ = ⟨ E^'(u) , α u ⟩ = α λ = λ ( u , α u )_L^2(𝒟) = λ ( u , w )_L^2(𝒟),
where we used u ∈ 𝕊 and v ∈ T_u𝕊 in the last two steps. This is precisely the GPEVP (<ref>). In a similar fashion, the second order optimality condition is obtained as ⟨ E''(u) v , v⟩ ≥ λ (v,v)_L^2(𝒟) for all v ∈ T_u𝕊 from the condition d^2/dt^2 E(γ(t)) |_t=0 ≥ 0. We refer to <cit.> for further details for these standard calculations.
In Section <ref>, we verify that our numerically computed approximations are indeed local minimizers of E by computing the corresponding ground state eigenvalue λ and verifying afterwards the sufficient second order optimality condition: λ_1 = λ and λ_i > λ for all i ≥ 2, where λ_i are the eigenvalues of E''(u)|_T_u𝕊.
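As a small illustration of the first part of this check, the following sketch computes, for a discrete approximation u with unit L^2-mass on a uniform grid, the Lagrange multiplier λ = ⟨E'(u), u⟩ and the L^2-residual of the eigenvalue equation. The callable apply_Eprime and the uniform grid are assumptions of this sketch, and the eigenvalue computation needed for the second order condition is not shown here.

```python
import numpy as np

def check_first_order(u, apply_Eprime, h):
    """First-order (GPEVP) check for a discrete state u with ||u||_{L^2} = 1.
    apply_Eprime(u) should return the grid values of
        -Delta u + V u - Omega*L3 u + kappa*|u|^2 u,
    i.e. the strong form of E'(u); h is the mesh width of the uniform grid."""
    l2 = lambda a, b: h**2 * np.real(np.vdot(b, a))   # real L^2 inner product
    r = apply_Eprime(u)
    lam = l2(r, u)                                    # candidate eigenvalue lambda
    res = np.sqrt(l2(r - lam * u, r - lam * u))       # || E'(u) - lambda*u ||_{L^2}
    return lam, res
```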
§ RIEMANNIAN CONJUGATE SOBOLEV GRADIENT METHODS
In this section we introduce the Riemannian conjugate Sobolev gradient method to minimize the Gross–Pitaevskii energy functional in a rotating frame.
§.§ Riemannian X-Sobolev gradients
The usage of Sobolev gradients gives us a tool to select different metrics for the gradient method in a natural way. In the following, we shall denote by X a Hilbert space equipped with an inner product (· , ·)_X and with ⊂ X in the sense of continuous embeddings. The space X gives rise to the corresponding Sobolev gradient ∇_X E(v) of E in v and we will investigate the influence of different choices for X on the computational efficiency of the method. We also explore different forms of this gradient method based on the choice of the momentum parameter (denoted as β), which influences the next search direction in our setting.
The basic idea of a Riemannian gradient method is to move, from a given current point on the manifold, into the direction of the Riemannian steepest descent. The steepest descent is given by the negative Riemannian gradient of E in that given point and depends on the metric through the chosen Sobolev gradient. To make this dependency clear, we will call it the Riemannian Sobolev gradient of E. In a point u ∈ 𝕊 on the manifold, the Riemannian Sobolev gradient is given by P_u,X ( ∇_X E(u) ) and is obtained in two steps as follows. First, we construct the (X-)Sobolev gradient ∇_X E(u) of E in u ∈ 𝕊 as the Riesz representation of the regular gradient E^'(u) ∈ H^-1(𝒟) in X. To be precise, we let ∇_X E(u) ∈ X denote the unique function with
(∇_X E(u) , v )_X = ⟨ E^'(u) , v ⟩ for all v ∈ X.
Note that this problem is in general only well-posed if the elements of X are contained in H^1_0(𝒟), so that E^'(u) ∈ X^∗. However, this can often be relaxed in practice. For example, if X=L^2(𝒟) and if u ∈ H^2(𝒟), then E^'(u) can be represented as an L^2-function and problem (<ref>) is still well-posed. Since we will only consider H^1-type metrics in this work, we will not go into further detail here and refer to <cit.> instead. To obtain the Riemannian Sobolev gradient from the Sobolev gradient, the second step involves an X-orthogonal projection into the tangent space T_u𝕊. For v ∈ X, the X-orthogonal projection P_u,X: X → T_u𝕊 is given by
( P_u,X(v) , w )_X = (v,w)_X for any w ∈ T_u𝕊.
Note that the above definition formally requires that we equip the tangent space T_u𝕊 with the X-metric. This is often indicated by using the notation T_u,X𝕊 instead of T_u𝕊. For the sake of readability and since the elements of T_u,X𝕊 do not change with X, we do not use an additional index here and just continue with T_u𝕊.
The explicit expression for the projection P_u,X(v) ∈ T_u𝕊 of v is given by
P_u,X(v) = v - R_X(u)/‖R_X(u)‖^2_X (u,v)_L^2(𝒟).
Here R_X(v) ∈ X is the Riesz representation of v in X, i.e., ( R_X(v) , w )_X = (v,w)_L^2(𝒟) for all w ∈ X.
To summarize, the Riemannian X-Sobolev gradient of E in u ∈ 𝕊 is given by P_u,X ( ∇_X E(u) ), i.e., the X-orthogonal projection of the Sobolev gradient ∇_X E(u) into T_u𝕊. With these notations and operators, we can now formulate the corresponding Riemannian (conjugate) gradient method.
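In code, the projection formula above requires only the Riesz representative R_X(u) and the two inner products; in the following small sketch these are passed in as precomputed data and callables (an assumption of this illustration).

```python
def project_tangent(v, u, Ru, inner_X, inner_L2):
    """X-orthogonal projection of v onto the tangent space T_u(S),
        P_{u,X}(v) = v - R_X(u) * (u, v)_{L^2} / ||R_X(u)||_X^2 ,
    where Ru is the (precomputed) Riesz representative R_X(u) of u in X and
    inner_X, inner_L2 are callables returning the real X- and L^2-inner products."""
    return v - Ru * (inner_L2(u, v) / inner_X(Ru, Ru))
```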
§.§ The conjugate gradient method on the Riemannian manifold 𝕊
To introduce the concept of our method, we let u^0 ∈𝕊⊂ H^1_0() be a given starting point. For n ≥ 0, the sequence of iterates {u^n+1}⊂𝕊 of the Riemannian conjugate Sobolev gradient (RCSG) method are generated by the steps
u^n+1 = (u^n + τ_n d^n) / ‖u^n + τ_n d^n‖_L^2(𝒟).
That is, from u^n ∈ 𝕊 we move by the distance τ_n>0 into the search direction d^n. Ideally, the search direction should be a descent direction, i.e., the Riemannian gradient in direction d^n is negative. The normalization v ↦ v/‖v‖_L^2(𝒟) after each iteration is the canonical retraction to 𝕊. Unlike the Riemannian gradient method where the search direction is selected as the negative Riemannian gradient, i.e. d^n = - P_u^n,X ( ∇_X E(u^n) ) (which is always a descent direction), here also the contribution from the previous search direction is included by setting
d^ n:=
- P_u^n,X ( ∇_X E(u^n) ) if n = 0,
- P_u^n,X ( ∇_X E(u^n) ) + β^n P_u^n,X(d^ n-1) else,
for some momentum parameter β^n∈ℝ that we specify later. Considering the fact that the Riemannian gradient P_u^n,X ( ∇_X E(u^n) ) ∈ T_u^n𝕊 and the previous search direction d^ n-1∈ T_u^n-1𝕊 both lie in different tangent spaces, we must transport the previous direction (in T_u^n-1𝕊) to the tangent space T_u^n𝕊 of the current iteration u^n in order to make d^ n tangential to the current point u^n on 𝕊. This justifies to work with P_u^n,X(d^ n-1) instead of d^ n-1 in (<ref>). This choice is known as projection-based vector transport.
Typically, the step length τ_n in (<ref>) is adaptively computed to ensure optimal energy dissipation in each iteration. This can be achieved by defining:
τ_n := argmin_τ > 0 E( (u^n + τ d^n)/‖u^n + τ d^n‖_L^2(𝒟) ).
Several optimization methods, such as Brent's method, the bisection method, the golden section search, etc., can be used to solve the above optimization problem to obtain the optimal value for τ_n. It is important to note that the stability of the method is only guaranteed within a specific range of τ_n. However, providing a rigorous analytical proof of this stability requires further investigation, which is beyond the scope of this paper. For the case when uniformly β^n=0, the particular domain for searching the sequence of optimal τ_n-values has been established as the interval (0,2) in <cit.> for a Riemannian gradient method with adaptive metric.
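As an illustration of this inner one-dimensional minimization, the following sketch implements a plain golden section search. The bracketing interval (0, 2) and the tolerance are choices of this sketch (the interval is motivated by the result for β^n = 0 cited above), and the energy callable phi is assumed to be available.

```python
import numpy as np

def golden_section_search(phi, a=0.0, b=2.0, tol=1e-6):
    """Minimize a scalar function phi on [a, b] by golden section search;
    here phi(tau) = E( (u^n + tau*d^n) / ||u^n + tau*d^n||_{L^2} )."""
    r = (np.sqrt(5.0) - 1.0) / 2.0            # inverse golden ratio ~ 0.618
    c, d = b - r * (b - a), a + r * (b - a)
    fc, fd = phi(c), phi(d)
    while b - a > tol:
        if fc < fd:                            # minimizer lies in [a, d]
            b, d, fd = d, c, fc
            c = b - r * (b - a)
            fc = phi(c)
        else:                                  # minimizer lies in [c, b]
            a, c, fc = c, d, fd
            d = a + r * (b - a)
            fd = phi(d)
    return 0.5 * (a + b)
```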
As shown below, various realizations of the Riemannian conjugate gradient method can be derived from (<ref>) by making specific choices of (a) the inner product for the Hilbert space X, and (b) the real-valued momentum parameter β^n in definition (<ref>) for the direction d^n. In the next subsection, we start with discussing the influence of the inner product.
§.§ Choices for the X-metric on 𝕊
In the following we discuss two different inner products on the Hilbert space H^1_0(𝒟): the standard H^1_0-inner product and an energy-adaptive inner product which is defined as the linearized approximation of E^' around an arbitrary linearization point u ∈ 𝕊. Accordingly, our choice for the space X is (X,(·,·)_X)=(H^1_0(𝒟),(·,·)_X) equipped with the two aforementioned inner products (·,·)_X respectively. Since 𝕊 ⊂ H^1_0(𝒟) ≅ X, this also induces a metric on the Riemannian manifold 𝕊. For a review on more choices for the metric we refer to <cit.>.
§.§.§ The H^1_0-metric and the a_u-metric
We now make the choices explicit. For arbitrary v,w ∈ H^1_0(𝒟) and a linearization point u ∈ 𝕊, we define
(v,w)_H^1_0 := (∇ v, ∇ w)_L^2(𝒟),
a_u(v,w) := (∇ v, ∇ w)_L^2(𝒟) + (V v, w)_L^2(𝒟) - Ω (ℒ_3 v , w)_L^2(𝒟) + κ (|u|^2 v, w)_L^2(𝒟).
Note here that a_u(u,w)=⟨ E^'(u) , w ⟩ for all u,w ∈ H^1_0(𝒟), hence we can indeed interpret a_u(·,·) as a linearization of E^'. To convince ourselves that a_u(·,·) is an inner product on H^1_0(𝒟), we need to verify that it is positive definite. This crucially requires assumption <ref> and a corresponding proof can be found in <cit.> (for a_0(·,·), but the result for a_u(·,·) is directly implied). In particular, we have equivalence of the norms induced by a_u(·,·) and (·,·)_H^1_0 in the sense that there exist constants 0 < c ≤ C_u (that depend on 𝒟, V, Ω, κ and, for the upper constant, on ‖u‖_L^4(𝒟)) such that
c ( v , v )_H^1_0 ≤ a_u(v,v) ≤ C_u ( v , v )_H^1_0 for all v ∈ H^1_0(𝒟).
Hence, both a_u(·,·) and (·,·)_H^1_0 induce H^1-type metrics on 𝕊 and we make the choices (X,(·,·)_X)=(H^1_0(),a_u(·,·)) and (X,(·,·)_X)=(H^1_0(),(·,·)_H^1_0). Note that the admissible sets of functions in X are equal in both cases and that just the metric changes.
Next, we discuss the resulting Riemannian Sobolev gradients.
§.§.§ Realizations of the Riemannian X-Sobolev gradients of E
In this subsection, we briefly state the Sobolev gradients associated with the H^1_0- and the a_u(·,·)-metric.
* 𝐇^1_0-gradient:
For the Hilbert space equipped with the standard H^1_0-inner product, the H^1_0-Sobolev gradient ∇_H^1_0 E(u) of E at u ∈ H^1_0(𝒟) is given by
(∇_H^1_0 E(u) , v)_H^1_0 = ⟨ E'(u) , v ⟩ for all v ∈ H^1_0(𝒟).
Recalling E^'(u) from (<ref>) and that (u,v)_H^1_0 = (∇ u, ∇ v)_L^2(𝒟), we observe that
⟨ E'(u) , v ⟩ = (u,v)_H^1_0 + (V u - Ω ℒ_3 u + κ |u|^2 u , v)_L^2(𝒟).
This gives us
∇_H^1_0 E(u) = u + R_H^1_0( V u - Ω ℒ_3 u + κ |u|^2 u),
where R_H^1_0 : L^2(𝒟) → H^1_0(𝒟) is the Ritz-projection operator given by (R_H^1_0(u) , v)_H^1_0 = (u,v)_L^2(𝒟) for all v ∈ H^1_0(𝒟).
Note that the practical computation of the Sobolev gradient ∇_H^1_0E(u) for some given u ∈𝕊 according to (<ref>), requires the solving of the Laplace equation of the form: Find q ∈ H^1_0() such that
- Δ q = V u - Ωℒ_3u + κ |u|^2 u.
In this case we have q=R_H^1_0( V u - Ωℒ_3u + κ |u|^2 u) and consequently ∇_H^1_0E(u) = u+q.
From this, we obtain the corresponding Riemannian H^1_0-Sobolev gradient according to (<ref>) as
P_u,H^1_0 ( ∇_H^1_0 E(u) ) = ∇_H^1_0 E(u) - R_H^1_0(u)/‖R_H^1_0(u)‖^2_H^1_0 (u, ∇_H^1_0 E(u))_L^2(𝒟)
= u + q - R_H^1_0(u)/‖R_H^1_0(u)‖^2_H^1_0 (u, u+q)_L^2(𝒟),
where R_H^1_0(u) ∈ H^1_0() solves - Δ R_H^1_0(u) = u. Hence, the assembly of P_u,H^1_0 ( ∇_H^1_0 E(u) ) requires the solution of two Laplace equations.
The fact that Riemannian gradient methods based on the H^1_0-gradient only require solving the Laplace equation (with different right hand sides) can make them very attractive if they are embedded into a software package with a very efficient solver for that equation, cf. <cit.>; a short code sketch of this two-solve construction is given after the second item below.
* a_u(·,·)-gradient:
Let u ∈ 𝕊 be the linearization point and, at the same time, the point in which we want to compute the gradient of E. Equipping the space with the inner product a_u(·,·), we obtain the a_u-Sobolev gradient of E in u ∈ 𝕊 as the unique solution ∇_a_u E(u) ∈ H^1_0(𝒟) to
a_u(∇_a_u E(u) , v) = ⟨ E'(u) , v ⟩ for all v ∈ H^1_0(𝒟).
Since ⟨ E'(u) , v ⟩ = a_u(u,v), we readily conclude ∇_a_u E(u) = u, i.e., the a_u(·,·)-Sobolev gradient of E in u is just the identity.
We can now use again (<ref>) to compute the corresponding Riemannian gradient, which is purely driven by the a_u(·,·)-orthogonal projection into T_u𝕊 and we have
P_u,a_u ( ∇_a_u E(u) )
= P_u,a_u (u) = u - R_a_u(u)/a_u ( R_a_u(u) , R_a_u(u) ) (u,u)_L^2(𝒟)
= u - R_a_u(u)/( u , R_a_u(u) )_L^2(𝒟),
where we used ‖u‖_L^2(𝒟) = 1 and a_u ( R_a_u(u) , R_a_u(u) ) = ( u , R_a_u(u) )_L^2(𝒟). We recall that computing the Ritz projection R_a_u(u) ∈ H^1_0(𝒟) requires solving
a_u ( R_a_u(u) , v ) = (u , v )_L^2(𝒟) for all v ∈ H^1_0(𝒟).
With this, only one linear elliptic problem has to be solved to compute the Riemannian gradient P_u,a_u ( ∇_a_u E(u) ). However, the differential operator changes with u and has to be typically reassembled in each iteration of the (conjugate) gradient method.
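As announced in the first item above, the H^1_0-realization can be illustrated with a standard five-point finite-difference Laplacian: the following sketch performs the two Laplace solves and the subsequent projection. The finite-difference setting and the direct sparse solver are assumptions of this illustration and differ from the finite element discretization used in our experiments.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def fd_laplacian(N, h):
    """Five-point stencil for -Delta on an N x N interior grid (Dirichlet BCs)."""
    D2 = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(N, N), format="csr") / h**2
    I = sp.identity(N, format="csr")
    return (sp.kron(I, D2) + sp.kron(D2, I)).tocsc()

def laplace_solve(A, b):
    """Solve A x = b for a complex right-hand side b with a real sparse matrix A."""
    return spla.spsolve(A, b.real) + 1j * spla.spsolve(A, b.imag)

def riemannian_h1_gradient(u, rhs, A):
    """Riemannian H^1_0-Sobolev gradient of E at u (flattened interior values).
    rhs = V*u - Omega*L3(u) + kappa*|u|^2*u  (strong form, evaluated beforehand),
    A   = discrete -Delta from fd_laplacian.
    Two Laplace solves: q (Sobolev gradient part) and Ru (Riesz representative)."""
    q = laplace_solve(A, rhs)                    # Sobolev gradient: grad = u + q
    Ru = laplace_solve(A, u)                     # R_{H^1_0}(u)
    grad = u + q
    l2 = lambda a, b: np.real(np.vdot(b, a))     # quadrature weight cancels in the ratio
    # ||Ru||^2_{H^1_0} = (u, Ru)_{L^2}, hence the tangent-space projection reads:
    return grad - Ru * (l2(u, grad) / l2(u, Ru))
```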
Before we conclude this subsection, we need to specify the role of the linearization point u in a corresponding Riemannian gradient method. In fact, the point u∈𝕊 is chosen as the approximation from the previous iteration. To make this explicit, consider the corresponding gradient method that we obtain from (<ref>) and (<ref>) for β^n=0. In this case, we select the metric in iteration n+1 adaptively based on u^n from the previous iteration. Hence, we obtain (for β^n=0)
d^n := -P_u^n,a_u^n ( ∇_a_u^n E(u^n) ) = -( u^n - R_a_u^n(u^n)/( u^n , R_a_u^n(u^n) )_L^2(𝒟) )
with gradient step u^n+1 := (u^n + τ_n d^n)/‖u^n + τ_n d^n‖_L^2(𝒟).
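A corresponding sketch of one step of this adaptive-metric gradient method (β^n = 0) is given below. The finite-difference assembly of the operator behind a_{u^n}(·,·), the direct solver and the externally chosen step size are again assumptions of this illustration.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def assemble_a_u(u, x, V_vals, Omega, kappa):
    """Finite-difference matrix of the linearized operator behind a_u(.,.),
        -Delta + V - Omega*L3 + kappa*|u|^2,
    on the interior nodes of a uniform grid (x: 1d interior coordinates, Dirichlet
    BCs); u and V_vals are flattened with the x_1-index running slowest."""
    N, h = x.size, x[1] - x[0]
    I = sp.identity(N, format="csr")
    D2 = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(N, N), format="csr") / h**2
    D1 = sp.diags([-0.5, 0.5], [-1, 1], shape=(N, N), format="csr") / h
    lap = sp.kron(I, D2) + sp.kron(D2, I)               # -Delta
    dx1, dx2 = sp.kron(D1, I), sp.kron(I, D1)           # d/dx_1, d/dx_2
    X1, X2 = sp.diags(np.repeat(x, N)), sp.diags(np.tile(x, N))
    L3 = -1j * (X1 @ dx2 - X2 @ dx1)                    # angular momentum operator
    return (lap + sp.diags(V_vals) - Omega * L3
            + kappa * sp.diags(np.abs(u) ** 2)).tocsc()

def adaptive_gradient_step(u, A_u, h, tau):
    """One Riemannian gradient step in the a_u-metric (beta^n = 0):
    d^n = -(u - R/(u,R)_{L^2}) with a_u(R, v) = (u, v)_{L^2}, then retraction."""
    l2 = lambda a, b: h**2 * np.real(np.vdot(b, a))     # discrete L^2 inner product
    R = spla.spsolve(A_u, u)                            # Riesz representative R_{a_u}(u)
    d = -(u - R / l2(u, R))
    u_new = u + tau * d
    return u_new / np.sqrt(l2(u_new, u_new))            # mass normalization (retraction)
```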
§.§ Choices for the momentum parameter β^n
In the previous subsection, we specified two different metrics and described how the corresponding Riemannian Sobolev gradients are constructed. Next, we will turn towards the momentum parameter. In general, the momentum parameter is chosen in such a way that the arising methods coincide with the conventional conjugate gradient method if applied to a strongly convex quadratic functional (or in other words, if applied to solve a linear problem with a symmetric, invertible operator). The specific realizations obtained e.g. by Fletcher and Reeves <cit.>, Polak and Ribiére <cit.>, Dai and Yuan <cit.>, etc. were derived by considering linearizations of the original problem and by attaining convergence results for specific β choices if the minimizing functional admits certain properties. A comprehensive abstract generalization of the parameters to Riemannian optimization problems was given by Sato <cit.>, which will be also the basis for our formulas for the momentum parameters.
Before presenting them, we want to illustrate the relevance of a proper β-choice as well as revealing a certain orthogonality relation for the Riemannian gradient and the previous direction which is fulfilled in our setting. In the following, we consider the RCSG method given by (<ref>) and (<ref>) for the adaptive metric a_u^n(·,·) as described in the previous subsection. The specific choice of β^n is left open for the moment.
Before we start, note that d^n ∈u^n is a descent direction for E in u^n, if the Riemannian gradient is negative in direction d^n, which we can express as a_u^n( P_u^n,a_u^n ( ∇_a_u^n E(u^n) ) , d^n )<0. However, in the a_u^n-metric this expression simplifies tremendously by exploiting ∇_a_u^n E(u^n) = u^n and the definition of the a_u^n(·,·)-orthogonal projection P_u^n,a_u^n. Using this, we obtain that d^n is a descent direction if it holds
a_u^n( u^n , d^n )<0.
With this, we return to the role of β^n. Let u^n ∈𝕊 denote a current iterate and d^n the corresponding search direction given by (<ref>). As mentioned before, the step length τ to reach the next iterate u^n+1=u^n+τ d^n u^n+τ d^n _L^2(𝒟) is determined in such a way that the function
τ↦ E(u^n+1) := E( u^n+τ d^n u^n+τ d^n _L^2(𝒟) )
is minimized. To achieve this, we compute the derivative of E(u^n+1) with respect to τ and set it to zero to determine the optimal value, i.e., we seek τ such that ddτ E(u^n+1) = 0. By the chain rule, we therefore have for the optimal τ
⟨ E'(u^n+1), d/dτ(u^n+1) ⟩ = 0.
Noting that
d/dτ u^n +τ d^n _L^2(𝒟) = (u^n+τ d^n,d^n)_L^2(𝒟)/ u^n+τ d^n _L^2(𝒟)
we compute the derivative of u^n+1 w.r.t. τ as
d/dτ(u^n+1)
= d/dτ(u^n+τ d^n/ u^n+τ d^n _L^2(𝒟))
= d^n u^n +τ d^n _L^2(𝒟) - (u^n+τ d^n)(u^n+τ d^n,d^n)_L^2(𝒟)/ u^n+τ d^n _L^2(𝒟)/ u^n +τ d^n^2 _L^2(𝒟)
= 1/ u^n+τ d^n _L^2(𝒟)(d^n-u^n+τ d^n/ u^n+τ d^n _L^2(𝒟)( u^n+τ d^n/ u^n+τ d^n _L^2(𝒟),d^n )_L^2(𝒟))
= d^n - (d^n,u^n+1)_L^2(𝒟)u^n+1/ u^n+τ d^n _L^2.
By (<ref>) we can write the condition (<ref>) for the optimal step length as
⟨ E'(u^n+1), d^n-(d^n,u^n+1)_L^2(𝒟)u^n+1⟩ = 0.
By using the definition of the a_u-Sobolev gradient and the L^2-orthogonal projection onto the tangent space u^n+1 which is given by P_u^n+1,L^2(𝒟)(d^n) = d^n-(d^n,u^n+1)_L^2(𝒟)u^n+1, we can rewrite the condition as
a_u^n+1( ∇ _a_u^n+1 E( u^n+1),P_u^n+1,L^2(𝒟) (d^n) ) =0.
Since P_u^n+1,L^2(𝒟) (d^n) ∈u^n+1 we can exploit the a_u^n+1(·,·)-orthogonal projection onto u^n+1 to conclude
a_u^n+1( P_u^n+1,a_u^n+1(∇ _a_u^n+1E(u^n+1)) ,P_u^n+1,L^2(d^n) ) =0.
Note here that P_u^n+1,a_u^n+1(∇ _a_u^n+1E(u^n+1)) is nothing but the Riemannian gradient of E in u^n+1 with respect to the a_u^n+1(·,·)-metric. Hence, property (<ref>) is a natural orthogonality relation between the current Riemannian gradient and the L^2-projection based vector transport of the previous search direction, which is fulfilled for the optimal τ value.
Thanks to our specific choice of the metric, we recall that ∇ _a_u^n+1E(u^n+1)=u^n+1. Exploiting this together with P_u^n+1,L^2(𝒟)(d^n)=d^n-(d^n,u^n+1)_L^2()u^n+1 in (<ref>) and relabelling the index n → n-1, we obtain
0 = a_u^n(P_u^n,a_u^n(u^n),d^n-1-(d^n-1,u^n)_L^2(𝒟)u^n )
=a_u^n( P_u^n,a_u^n(u^n),d^n-1 ) -( d^n-1, u^n )_L^2(𝒟) a_u^n(P_u^n,a_u^n(u^n),u^n)
(<ref>) = a_u^n( P_u^n,a_u^n(u^n),d^n-1 )-( d^n-1, u^n )_L^2(𝒟) P_u^n,a_u^n(u^n)^2_a_u^n
where for brevity v ^2_a_u^n := a_u^n(v,v). We can now identify the requirement for β^n such that d^n is a descent direction in u^n. For this, the Riemannian gradient P_u^n,a_u^n(u^n) in the direction d^n needs to be negative, i.e., according to (<ref>), we require a_u^n ( u^n,d^n )<0. To check this property, we use d^ n = - P_u^n,a_u^n (u^n) + β^n P_u^n,a_u^n(d^ n-1) to get
a_u^n ( u^n ,d^n ) = - a_u^n( u^n, P_u^n,a_u^n(u^n) ) +β^n a_u^n( u^n, P_u^n,a_u^n(d^n-1) )
(<ref>)= - P_u^n,a_u^n(u^n) ^2_a_u^n+β^n a_u^n( P_u^n,a_u^n (u^n),d^n-1 )
(<ref>)= ( -1+β^n(d^n-1,u^n)_L^2(𝒟) ) P_u^n,a_u^n(u^n) ^2 _a_u^n.
Consequently, d^n can only be a descent direction in u^n if β^n fulfills
1-β^n(d^n-1,u^n)_L^2(𝒟) > 0.
The validity of this condition is crucially determined by β^n (and implicitly through optimality by τ_n-1). A proof that shows all the momentum parameters considered here satisfy this condition is still missing in the literature and is, of course, beyond the scope of this paper. However, for the Dai–Yuan version of the parameter, the validity of the property can be made explicit, as we will demonstrate with a simple calculation below. Nevertheless, during our numerical experiments, we observed that all parameter choices produced a descent direction.
We now state the four different choices for the momentum parameter in our setting. We only formulate the parameters in the a_u^n-metric. The changes in the H^1_0-metric are straightforward and can be easily extracted from <cit.>.
§.§.§ Dai–Yuan-type parameter
Picking up on the previous discussion we can now transfer the idea by Dai and Yuan <cit.> to our setting to state the relevant momentum parameter in the energy-adaptive metric a_u^n(·,·). For that, suppose that the previous search direction d^n-1∈ T_u^n-1𝕊 was a descent direction, i.e. (recalling (<ref>))
0< - a_u^n-1 ( u^n-1 , d^n-1 ) .
Adding a_u^n(P_u^n,a_u^n(u^n), d^n-1 )
to both sides, the descent property (<ref>) is equivalent to
a_u^n( P_u^n,a_u^n(u^n) , d^n-1 ) < a_u^n( P_u^n,a_u^n(u^n) , d^n-1 )- a_u^n-1( u^n-1 , d^n-1) .
As we saw in (<ref>), the current search direction d^n is now also a descent direction (that is a_u^n( u^n , d^n ) <0) if β^n is selected such that
-P_u^n,a_u^n(u^n) ^2_a_u^n + β^n a_u^n(P_u^n,a_u^n(u^n), d^n-1 ) <0.
In fact, this can be fulfilled for a positive β^n with the Dai–Yuan choice
β_DY^n := max{ 0, ‖P_u^n,a_u^n(u^n)‖_a_u^n^2 / ( a_u^n( P_u^n,a_u^n(u^n) , d^n-1 ) - a_u^n-1( u^n-1 , d^n-1 ) ) },
where ‖v‖_a_u^n^2 := a_u^n(v,v). To see this, let β_DY^n > 0 (the statement is trivial for β_DY^n = 0). In this case, the positivity of β_DY^n together with the descent property (<ref>) of the previous direction d^n-1 imply
a_u^n(P_u^n,a_u^n(u^n), d^n-1 )/ a_u^n( P_u^n,a_u^n(u^n) , d^n-1 )- a_u^n-1( u^n-1 , d^n-1) <1.
Hence, by using the formula (<ref>) for β_DY^n we have
-P_u^n,a_u^n(u^n) ^2_a_u^n + β^n a_u^n(P_u^n,a_u^n(u^n), d^n-1 )
= ( a_u^n(P_u^n,a_u^n(u^n), d^n-1 )/ a_u^n( P_u^n,a_u^n(u^n) , d^n-1 )- a_u^n-1( u^n-1 , d^n-1) - 1) P_u^n,a_u^n(u^n) ^2_a_u^n (<ref>)< 0,
which verifies (<ref>) and therefore the desired descent property a_u^n( u^n , d^n ) < 0. This gives a justification for the choice (<ref>) in our setting. Note that the “max” in the definition of the Dai–Yuan parameter is a consequence of the required positivity of β_DY^n in our arguments. If β_DY^n = 0, the iteration reduces to a standard Riemannian gradient step (in the a_u-metric).
§.§.§ Fletcher–Reeves-type parameter
In our setting, the Fletcher–Reeves parameter can be directly extracted from the original work by Fletcher and Reeves <cit.>. The formula for constrained optimization problems can be also found in <cit.>
and in particular for Riemannian optimization problems in <cit.>. For our a_u^n-metric, we have
β_FR^n := ‖P_u^n,a_u^n(u^n)‖_a_u^n^2 / ‖P_u^n-1,a_u^n-1(u^n-1)‖_a_u^n-1^2,
where v _a_u^n^2 := a_u^n(v,v).
§.§.§ Polak–Ribière-type parameter
Following Polak and Ribière <cit.> and again Sato <cit.>, the corresponding momentum parameter for the a_u^n-metric is given by
β_PR^n := max{ 0, a_u^n( P_u^n,a_u^n(u^n), P_u^n,a_u^n(u^n)-P_u^n-1,a_u^n-1(u^n-1) )/ P_u^n-1,a_u^n-1(u^n-1) _a_u^n-1^2}.
§.§.§ Hestenes–Stiefel-type parameter
Historically, the invention of the conjugate gradient method goes back to Hestenes and Stiefel <cit.>. Even though their work only considered quadratic minimization problems, their parameter choice can be heuristically transferred to more general Riemannian optimization problems, resulting in a mixture of the parameters by Polak–Ribière and Dai–Yuan. Following the formulation provided in <cit.>, we obtain the Hestenes–Stiefel parameter in the a_u^n-metric as
β_HS^n :=max{ 0, a_u^n ( P_u^n,a_u^n(u^n),P_u^n,a_u^n(u^n)-P_u^n-1,a_u^n-1(u^n-1) ) / a_u^n( P_u^n,a_u^n(u^n) , d^n-1 ) - a_u^n-1( u^n-1 , d^n-1 ) }.
In practice, for nonlinear unconstrained optimization problems, the Hestenes–Stiefel and Polak–Ribière methods typically show similar performance and are often chosen over the Fletcher–Reeves and Dai–Yuan approaches (cf. <cit.>). We have observed the same behavior in our setting as well. Note that several authors have suggested various methods to improve computational efficiency or to reduce the number of iterations, such as restarting the scheme, i.e., repeatedly switching off the β^n-term after a certain number of steps. In our numerical experiments we did not incorporate such steps since they are mainly heuristic and require some tuning.
Looking at the standard choices for β^n (without switching them off in between) therefore ensures a fair comparison of the different methods. Note, however, that we still enforce that the parameters cannot become negative, as ensured through our definitions (<ref>), (<ref>) and (<ref>). Only the Fletcher–Reeves parameter in (<ref>) is positive by default.
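For concreteness, the four choices (together with the generic choice β^n=0) can be collected in a single routine as in the following Python sketch. The interface is hypothetical: a_n and a_prev stand for the inner products a_u^n(·,·) and a_u^n-1(·,·), g_n and g_prev for the Riemannian gradients P_u^n,a_u^n(u^n) and P_u^n-1,a_u^n-1(u^n-1), d_prev for d^n-1 and u_prev for u^n-1; any transport of the previous quantities to the current tangent space is omitted here.

```python
def momentum_parameter(kind, a_n, a_prev, g_n, g_prev, d_prev, u_prev):
    """Return beta^n for the Dai-Yuan (DY), Fletcher-Reeves (FR),
    Polak-Ribiere (PR) and Hestenes-Stiefel (HS) choices; any other
    value of `kind` gives beta^n = 0, i.e. a plain Riemannian gradient step."""
    gn2 = a_n(g_n, g_n)            # squared a_{u^n}-norm of the current Riemannian gradient
    gp2 = a_prev(g_prev, g_prev)   # squared a_{u^{n-1}}-norm of the previous one
    if kind == "FR":
        return gn2 / gp2
    y = g_n - g_prev               # difference of consecutive Riemannian gradients
    denom = a_n(g_n, d_prev) - a_prev(u_prev, d_prev)
    if kind == "DY":
        return max(0.0, gn2 / denom)
    if kind == "PR":
        return max(0.0, a_n(g_n, y) / gp2)
    if kind == "HS":
        return max(0.0, a_n(g_n, y) / denom)
    return 0.0
```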
§ NUMERICAL EXPERIMENTS
In this section we compare the Riemannian conjugate gradient methods from the previous section with regard to their performance in computing ground states of the Gross-Pitaevskii energy functional. All our experiments were conducted on an Apple iMac (2021) with an Apple M1 chip (8 cores) and 16 GB of RAM, running MATLAB 2024a. Our primary focus is to investigate the impact of the two different metrics (inner products) stated in Section <ref> and of the different choices for the β parameter on the convergence speed of the RCSG method. We also compare the Riemannian conjugate gradient method with the standard Riemannian gradient method (β^n=0). One of our main observations is the notably fast convergence of the RCSG method with adaptive metric (a_u^n-metric) and either Polak–Ribière or Hestenes–Stiefel momentum parameter, highlighting the great numerical efficiency of this choice in terms of a comparably low iteration number.
In the following, we consider a square domain 𝒟=[-L,L]^2⊂ℝ^2, where the value of L will be specified later for each experiment.
To conduct our numerical experiments, we use a spatial discretization based on first-order Lagrange finite elements, i.e., we consider the minimization problem (<ref>) over a corresponding ℙ^1-finite element space (on a quasi-uniform triangular mesh) and with an L^2-normalization constraint. Corresponding error estimates of optimal order were established in <cit.>. The mesh size in all our experiments is h = 2L · 2^-8.
In each experiment, we used a golden section search to compute the optimal values for the step length τ_n in each iteration. The reference ground state was computed with the RCSG method with a_u^n-metric and Polak–Ribière parameter and a tolerance of 10^-13 for the difference of two consecutive energies, i.e. E(u^n)-E(u^n+1). The reason for this choice is that this combination gave us the lowest energy value compared to all other choices.
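For reference, a self-contained sketch of such a golden section search is given below; phi stands for the one-dimensional map τ ↦ E of the (normalized) iterate obtained with step length τ along the current search direction, and the bracket [a, b] has to be supplied by the caller. Both are assumptions of this illustration rather than implementation details of our code.

```python
import math

def golden_section_search(phi, a, b, tol=1e-10):
    """Locate the minimizer of a unimodal one-dimensional function phi on [a, b]."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0        # inverse golden ratio
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    fc, fd = phi(c), phi(d)
    while (b - a) > tol:
        if fc < fd:                              # minimizer lies in [a, d]
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = phi(c)
        else:                                    # minimizer lies in [c, b]
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = phi(d)
    return 0.5 * (a + b)
```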
To compare the performance of the Riemannian conjugate gradient method with different parameters, we used the stopping criterion E(u^n)-E(u) < 10^-9, where E(u) denotes the energy of the reference ground state. Note that in all the experiments performed below, the interaction parameter κ, the angular velocity Ω and the trapping potential V were all selected such that assumption <ref> is satisfied. The trapping potential V is always chosen as a harmonic potential
V(x,y):=1/2(γ_x^2 x^2 + γ_y^2 y^2),
where the trapping frequencies γ_x and γ_y are specified in the individual experiments (together with Ω, κ and L).
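As a minimal illustration of this common setup (using, for concreteness, the parameter values of Experiment 1 below and a uniform tensor grid instead of the triangular ℙ^1 mesh actually employed), one may write:

```python
import numpy as np

L, gamma_x, gamma_y = 6.0, 2.0, 1.9        # illustrative values (Experiment 1)
N = 2 ** 8                                 # so that the grid spacing matches h = 2L * 2^-8
x = np.linspace(-L, L, N + 1)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
V = 0.5 * (gamma_x ** 2 * X ** 2 + gamma_y ** 2 * Y ** 2)   # harmonic trapping potential
```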
§.§ Experiment 1
We chose the square domain with L=6 and the trapping frequencies, angular velocity and the interaction parameter as
γ_x=2, γ_y=1.9, Ω=1.9 and κ=500.
We initialize our method with the L^2-normalized interpolation of the function u^0(x,y)=Ω/√(π) (x+ i y) e^-(x^2+y^2)/2 (containing a center vortex) as suggested in <cit.>. The reference ground state contains quantized vortices, and the corresponding density |u|^2 is depicted in Figure <ref> (left).
The reference ground state has an energy of E(u)≈ 7.168961589 and the corresponding ground state eigenvalue is λ≈ 20.516919568 (for minimizers in the selected finite element space). Recalling the second order optimality condition, we also verified that the ground state eigenvalue is the smallest (and simple) eigenvalue of E”(u) |_u, as shown in the table of Figure <ref>. As expected, the corresponding eigenfunction of E”(u) |_u to λ is given by u. This confirms that the sufficient condition for u being a (quasi-isolated) local minimizer is satisfied for our reference ground state.
To compare the performance of the RCSG methods (<ref>)-(<ref>) with adaptive inner product (· ,·)_a_u^n and four different choices of the momentum parameter as stated in Subsection <ref>, along with the standard choice of β^n=0, we plot the corresponding numbers of iterations versus the error E(u^n)-E(u) in Figure <ref> (left).
The plot shows that the Polak–Ribière and Hestenes–Stiefel parameters perform similarly; both perform significantly better than the other two available options and, as expected, also better than the Riemannian gradient method (β^n=0), which took nearly 10,000 iterations to reach an energy error of order 10^-3.
It should be noted that in order to prevent degeneracy of the step size τ_n, we enforced that τ_n should not fall below a value of 0.001. If there is no τ_n ≥ 0.001 such that the energy reduces in the n-th iteration, we default to β^n=0 and hence take a step with the standard Riemannian gradient method, for which τ_n cannot degenerate (cf. <cit.>).
As a result of this strategy, for very few iterations
with the Dai–Yuan and Fletcher–Reeves parameters, we had to switch to β^n=0 in order to avoid any increase in energy. We did not observe this effect for the Polak–Ribière and Hestenes–Stiefel parameters.
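Schematically, this safeguard can be sketched as follows; E and step (returning the normalized iterate for a trial step length) are hypothetical helpers, and the crude probing of the interval merely stands in for the actual line search described above.

```python
import numpy as np

def choose_direction(E, step, u_n, d_cg, g_n, tau_min=1e-3, tau_max=1.0, n_probe=20):
    """If no step length tau >= tau_min decreases the energy along the CG
    direction d^n, discard the momentum term (beta^n = 0) and fall back to
    the negative Riemannian gradient, for which tau_n cannot degenerate."""
    E0 = E(u_n)
    for tau in np.linspace(tau_min, tau_max, n_probe):
        if E(step(u_n, d_cg, tau)) < E0:
            return d_cg                 # keep the conjugate gradient direction
    return -g_n                         # plain Riemannian gradient step instead
```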
Finally, we also compared the performance of the RCSG method with Polak–Ribière parameter in the adaptive metric with the corresponding method in the standard H^1_0-metric (<ref>). The plot is presented in Figure <ref> (right). Although in this particular experiment the RCSG method with (adaptive) a_u^n-metric only required about 100 fewer iterations than the realization with the H^1_0-metric (which is not much considering the total number of iterations), the difference in performance became much more pronounced in our other experiments, where the methods with the H^1_0-metric required a significantly larger number of iterations. However, we also noticed that all methods converged to the same ground state in Experiment 1, regardless of the choice of the β^n parameter in the search direction. We will revisit this perspective in Experiment 3, where we observed that this is not always the case.
§.§ Experiment 2
We select L=8, i.e. 𝒟=[-8,8]^2, and the parameters for trapping frequencies, angular velocity and interaction between the particles as γ_x=1.1, γ_y=1.3, Ω=1.2 and κ=400.
We initialize the methods with the complex conjugate of the L^2-normalized interpolation of the function u^0(x,y) as in Experiment 1. The energy of the reference ground state is computed as E(u)≈ 3.886043618 and the corresponding ground state eigenvalue as λ≈ 10.994845147. The plot of the corresponding density is shown in Figure <ref>, and the table therein shows that u is at least a local minimizer (as explained in Section <ref> and for Experiment 1). The plot in Figure <ref> (left) compares the performance of all five different momentum parameters for the a_u^n-metric. Again, the Polak–Ribière and Hestenes–Stiefel parameters perform better than the other three possibilities by a substantial margin. Moreover, the plot in Figure <ref> (right) shows that the method in the a_u^n-metric requires significantly fewer iterations than the method in the H^1_0-metric.
§.§ Experiment 3
We let L=3 and select the data parameters as γ_x=11, γ_y=10, Ω=9 and κ=1000.
As suggested in <cit.>, the methods are initialized with an interpolation of the following L^2-normalized function,
u^0(x,y)=Ω/√(π) (x+ i y) e^-(x^2+y^2)/2 + (1-Ω)/√(π) e^-(x^2+y^2)/2.
The energy of the reference ground state (in the selected finite element space) is determined as E≈ 55.92566166 and the corresponding ground state eigenvalue as λ≈ 162.8748496. The presence of the ground state eigenvalue at the bottom of the spectrum of E^''(u)_|u proves that the computed u is a local minimizer (see Figure <ref>). Again, when comparing the performance of our method in the a_u^n-metric for the different momentum parameters, the Polak–Ribière and Hestenes–Stiefel parameters outperformed all the others, see Figure <ref> (left).
Moreover, in Figure <ref> (right) we also observe a significant reduction of the iteration numbers when the Riemannian conjugate gradient method is used in combination with the a_u^n-metric compared to the H^1_0-metric.
We also observed that the Riemannian gradient method (β^n=0) did not converge to the ground state, but got stuck near an excited state. This happened not only for Experiment 3, but was also noticed for other settings not included in this article. The given experiment hence serves as an illustrative example to observe this drawback. To be more specific, we noticed that even after more than 10,000 iterations of the Riemannian gradient method (for β^n=0), the difference between the energy of the converged state u_⋆ and the reference ground state u is on the order of 10^-1. We also computed the corresponding residual E^'(u_⋆)- λ_⋆ I u_⋆ and the spectrum of E^''(u_⋆)|_u_⋆ to confirm that the converged state u_⋆ is indeed a saddle point of E on 𝕊 and not a local minimizer.
The corresponding densities of the reference ground state u and the excited state u_⋆ are plotted in Figure <ref>. An important point to notice here is that the eigenvalue corresponding to the excited state found by the Riemannian gradient method is smaller than the ground state eigenvalue. Hence, the ground state eigenvalue is not necessarily the smallest eigenvalue of the Gross-Pitaevskii eigenvalue problem (<ref>).
AHP21NumMath
R. Altmann, P. Henning, and D. Peterseim.
The J-method for the Gross-Pitaevskii eigenvalue problem.
Numer. Math., 148(3):575–610, 2021.
AltPetSty22
R. Altmann, D. Peterseim, and T. Stykel.
Energy-adaptive Riemannian optimization on the Stiefel manifold.
ESAIM Math. Model. Numer. Anal., 56(5):1629–1653, 2022.
APS23Newton
R. Altmann, D. Peterseim, and T. Stykel.
Riemannian Newton Methods for Energy Minimization Problems
of Kohn–Sham Type.
J. Sci. Comput., 101(1):Paper No. 6, 2024.
AnD14
X. Antoine and R. Duboscq.
Robust and efficient preconditioned Krylov spectral solvers for
computing the ground states of fast rotating and strongly interacting
Bose-Einstein condensates.
J. Comput. Phys., 258:509–523, 2014.
ALT17
X. Antoine, A. Levitt, and Q. Tang.
Efficient spectral computation of the stationary states of rotating
Bose-Einstein condensates by preconditioned nonlinear conjugate gradient
methods.
J. Comput. Phys., 343:92–109, 2017.
Bao14
W. Bao.
Mathematical models and numerical methods for Bose-Einstein
condensation.
Proceedings of the International Congress for Mathematicians
2014, 2014.
BaC13b
W. Bao and Y. Cai.
Mathematical theory and numerical methods for Bose-Einstein
condensation.
Kinet. Relat. Models, 6(1):1–135, 2013.
BaD04
W. Bao and Q. Du.
Computing the ground state solution of Bose-Einstein condensates
by a normalized gradient flow.
SIAM J. Sci. Comput., 25(5):1674–1697, 2004.
BWM05
W. Bao, H. Wang, and P. A. Markowich.
Ground, symmetric and central vortex states in rotating
Bose-Einstein condensates.
Commun. Math. Sci., 3(1):57–88, 2005.
Begout22
P. Bégout.
The dual space of a complex Banach space restricted to the field of
real numbers.
Adv. Math. Sci. Appl., 31(2):241–252, 2022.
Bos24
S. N. Bose.
Plancks Gesetz und Lichtquantenhypothese.
Zeitschrift für Physik, 26(1):178–181, 1924.
Can00
E. Cancès.
SCF algorithms for HF electronic calculations.
In Mathematical models and methods for ab initio quantum
chemistry, volume 74 of Lecture Notes in Chem., pages 17–43.
Springer, Berlin, 2000.
CCM10
E. Cancès, R. Chakir, and Y. Maday.
Numerical analysis of nonlinear eigenvalue problems.
J. Sci. Comput., 45(1-3):90–117, 2010.
CaL00
E. Cancès and C. Le Bris.
Can we outperform the DIIS approach for electronic structure
calculations?
Int. J. Quantum Chem., 79(2):82–90, 2000.
CaL00B
E. Cancès and C. Le Bris.
On the convergence of SCF algorithms for the Hartree-Fock
equations.
M2AN Math. Model. Numer. Anal., 34(4):749–774, 2000.
CDLX23
H. Chen, G. Dong, W. Liu, and Z. Xie.
Second-order flows for computing the ground states of rotating
Bose-Einstein condensates.
J. Comput. Phys., 475:Paper No. 111872, 28, 2023.
CGHZ11
H. Chen, X. Gong, L. He, and A. Zhou.
Adaptive finite element approximations for a class of nonlinear
eigenvalue problems in quantum physics.
Adv. Appl. Math. Mech., 3(4):493–518, 2011.
CLLZ24-discrete
Z. Chen, J. Lu, Y. Lu, and X. Zhang.
Fully discretized Sobolev gradient flow for the
Gross–Pitaevskii eigenvalue problem.
ArXiv e-print 2403.06028, 2024.
CLLZ24
Z. Chen, J. Lu, Y. Lu, and X. Zhang.
On the convergence of Sobolev gradient flow for the
Gross-Pitaevskii eigenvalue problem.
SIAM J. Numer. Anal., 62(2):667–691, 2024.
dai1999
Y. H. Dai and Y. Yuan.
A nonlinear conjugate gradient method with a strong global
convergence property.
SIAM J. Optim., 10(1):177–182, 1999.
DaH10
I. Danaila and F. Hecht.
A finite element method with mesh adaptivity for computing vortex
states in fast-rotating Bose-Einstein condensates.
J. Comput. Phys., 229(19):6946–6960, 2010.
DaK10
I. Danaila and P. Kazemi.
A new Sobolev gradient method for direct minimization of the
Gross-Pitaevskii energy with rotation.
SIAM J. Sci. Comput., 32(5):2447–2467, 2010.
DaP17
I. Danaila and B. Protas.
Computation of ground states of the Gross-Pitaevskii functional
via Riemannian optimization.
SIAM J. Sci. Comput., 39(6):B1102–B1129, 2017.
DiC07
C. M. Dion and E. Cancès.
Ground state of the time-independent Gross-Pitaevskii equation.
Comput. Phys. Comm., 177(10):787–798, 2007.
pc22
C. Döding and P. Henning.
Uniform L^∞-bounds for energy-conserving higher-order time
integrators for the Gross-Pitaevskii equation with rotation.
ArXiv e-print 2210.01553, to appear in IMA J. Numer. Anal.,
2024.
Smith_MIT_1998
A. Edelman, T. A. Arias, and S. T. Smith.
The geometry of algorithms with orthogonality constraints.
SIAM J. Matrix Anal. Appl., 20(2):303–353, 1999.
Ein24
A. Einstein.
Quantentheorie des einatomigen idealen Gases, pages 261–267.
Sitzber. Kgl. Preuss. Akad. Wiss., 1924.
EngGianGrub22
C. Engström, S. Giani, and L. Grubišić.
Higher order composite DG approximations of Gross-Pitaevskii
ground state: benchmark results and experiments.
J. Comput. Appl. Math., 400:Paper No. 113652, 15, 2022.
fletcher1964
R. Fletcher and C. M. Reeves.
Function minimization by conjugate gradients.
Comput. J., 7:149–154, 1964.
GN1992
J. C. Gilbert and J. Nocedal.
Global convergence properties of conjugate gradient methods for
optimization.
SIAM J. Optim., 2(1):21–42, 1992.
HLP24
M. Hauck, Y. Liang, and D. Peterseim.
Positivity preserving finite element method for the
Gross–Pitaevskii ground state: discrete uniqueness and global
convergence.
ArXiv e-print 2405.17090, 2024.
HSW21
P. Heid, B. Stamm, and T. P. Wihler.
Gradient flow finite element discretizations with energy-based
adaptivity for the Gross-Pitaevskii equation.
J. Comput. Phys., 436:Paper No. 110165, 15, 2021.
PH24
P. Henning.
The dependency of spectral gaps on the convergence of the inverse
iteration for a nonlinear eigenvector problem.
Math. Models Methods Appl. Sci., 33(7):1517–1544, 2023.
HenJar24
P. Henning and E. Jarlebring.
The Gross–Pitaevskii equation and eigenvector nonlinearities:
Numerical methods and algorithms.
To appear in SIAM Rev., 2024.
HeP23
P. Henning and A. Persson.
On optimal convergence rates for discrete minimizers of the
Gross-Pitaevskii energy in localized orthogonal decomposition spaces.
Multiscale Model. Simul., 21(3):993–1011, 2023.
HeP20
P. Henning and D. Peterseim.
Sobolev gradient flow for the Gross-Pitaevskii eigenvalue
problem: global convergence and computational efficiency.
SIAM J. Numer. Anal., 58(3):1744–1772, 2020.
PHMY24_1
P. Henning and M. Yadav.
Convergence of a Riemannian gradient method for the
Gross–Pitaevskii energy functional in a rotating frame.
ArXiv e-print 2406.03885, 2024.
PHMY24
P. Henning and M. Yadav.
On discrete ground states of rotating Bose–Einstein condensates.
Math. Comp., 2024.
hestenes1952
M. R. Hestenes and E. Stiefel.
Methods of conjugate gradients for solving linear systems.
J. Research Nat. Bur. Standards, 49:409–436, 1952.
JarKM14
E. Jarlebring, S. Kvaal, and W. Michiels.
An inverse iteration method for eigenvalue problems with eigenvector
nonlinearities.
SIAM J. Sci. Comput., 36(4):A1978–A2001, 2014.
KaE10
P. Kazemi and M. Eckart.
Minimizing the Gross-Pitaevskii energy functional with the
Sobolev gradient – analytical and numerical results.
Int. J. Comput. Methods, 7(3):453–475, 2010.
Neu97
J. W. Neuberger.
Sobolev gradients and differential equations, volume 1670 of
Lecture Notes in Mathematics.
Springer-Verlag, Berlin, 1997.
NoceWrig06
J. Nocedal and S. J. Wright.
Numerical optimization.
Springer Series in Operations Research and Financial Engineering.
Springer, New York, second edition, 2006.
PiS03
L. Pitaevskii and S. Stringari.
Bose-Einstein condensation, volume 116 of International
Series of Monographs on Physics.
The Clarendon Press, Oxford University Press, Oxford, 2003.
polak1969
E. Polak and G. Ribière.
Note sur la convergence de méthodes de directions conjuguées.
Rev. Française Informat. Recherche Opérationnelle,
3(16):35–43, 1969.
RingWirth12
W. Ring and B. Wirth.
Optimization methods on Riemannian manifolds and their application
to shape space.
SIAM J. Optim., 22(2):596–627, 2012.
Sato_2022
H. Sato.
Riemannian conjugate gradient methods: general framework and specific
algorithms with convergence analyses.
SIAM J. Optim., 32(4):2690–2717, 2022.
ShuTangZhangZhang2024
Q. Shu, Q. Tang, S. Zhang, and Y. Zhang.
A preconditioned Riemannian conjugate gradient method for computing
the ground states of arbitrary-angle rotating Bose-Einstein condensates.
J. Comput. Phys., 512:Paper No. 113130, 16, 2024.
smith2014
S. T. Smith.
Optimization techniques on Riemannian manifolds.
In Hamiltonian and gradient flows, algorithms and control,
volume 3 of Fields Inst. Commun., pages 113–136. Amer. Math. Soc.,
Providence, RI, 1994.
TCWW20
T. Tian, Y. Cai, X. Wu, and Z. Wen.
Ground states of spin-F Bose-Einstein condensates.
SIAM J. Sci. Comput., 42(4):B983–B1013, 2020.
WWB17
X. Wu, Z. Wen, and W. Bao.
A regularized Newton method for computing ground states of
Bose-Einstein condensates.
J. Sci. Comput., 73(1):303–329, 2017.
XXXY21
F. Xu, H. Xie, M. Xie, and M. Yue.
A multigrid method for the ground state solution of Bose-Einstein
condensates based on Newton iteration.
BIT, 61(2):645–663, 2021.
Zhang2022
Z. Zhang.
Exponential convergence of Sobolev gradient descent for a class of
nonlinear eigenproblems.
Commun. Math. Sci., 20(2):377–403, 2022.
An extreme state of matter with remarkable superfluid properties is formed when a dilute bosonic gas condenses at temperatures close to 0 Kelvin to a so-called Bose–Einstein condensate (BEC), cf. <cit.>. Its extraordinary superfluid nature can be checked by verifying the existence of quantized vortices in the rotating BEC. In practical setups, the appearance of such vortices is crucially related to the interplay of a (magnetic or optical) trapping potential V (to confine the condensate) and the angular frequency Ω of a stirring potential (to rotate the condensate). If the angular frequency is too small compared to the strength of the trapping potential, no vortices appear. If the angular frequency is too high, the condensate can be destroyed by centrifugal forces. Only in an intermediate regime can a rich landscape of vortex patterns be observed and studied. In this paper, we are concerned with the numerical computation of such patterns by seeking ground states (i.e. the lowest energy states) of BECs in a rotating frame.
On a given computational domain 𝒟⊂ℝ^d (for d=2,3) a ground state is mathematically described through its quantum state u : 𝒟→ℂ, whereas the vortices become visible in the corresponding density |u|^2 : 𝒟→ℝ. The density is usually normalized such that the total mass of the BEC fulfills ∫_𝒟 |u|^2 dx =1 or, more precisely, it should hold
u ∈ 𝕊 := { v ∈ H^1_0(𝒟,ℂ) | v _L^2(𝒟) = 1 },
where H^1_0(𝒟,ℂ) denotes the usual Sobolev space of weakly differentiable, square-integrable and complex-valued functions with a vanishing trace v_|∂𝒟=0.
In a given configuration, a corresponding ground state is characterized as a global minimizer of the total energy of the system. This energy is described by the Gross–Pitaevskii energy functional E : 𝕊→ℝ given by
E(u) := 1/2∫_𝒟( |∇ u|^2 + V |u|^2 - Ω u̅ ℒ_3 u + κ/2 |u|^4 ) dx for u∈𝕊.
Here, u̅ denotes the complex conjugate of u, V represents the external trapping potential, ℒ_3 := - i ( x_1 ∂_x_2 - x_2 ∂_x_1) denotes the x_3-component of the angular momentum (hence describing a rotation around the x_3-axis), Ω∈ℝ is the corresponding angular velocity of the stirring potential and κ∈ℝ^+ is a repulsion parameter that encodes the strength of particle interactions. For a comprehensive introduction to the basic theory and mathematical properties of ground states of rotating BECs, we refer to the papers by Bao et al. <cit.>.
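To make the individual energy contributions explicit, the following sketch evaluates a simple finite-difference approximation of E(u) on a uniform tensor grid; it is only an illustration and does not reproduce the spatial discretizations discussed later in the paper.

```python
import numpy as np

def gp_energy(u, x, y, V, Omega, kappa):
    """Finite-difference value of the Gross-Pitaevskii energy E(u) (illustration only)."""
    hx, hy = x[1] - x[0], y[1] - y[0]
    ux, uy = np.gradient(u, hx, hy)                    # central differences
    X, Y = np.meshgrid(x, y, indexing="ij")
    L3u = -1j * (X * uy - Y * ux)                      # L_3 u = -i (x_1 d_{x_2} - x_2 d_{x_1}) u
    dens = np.abs(u) ** 2
    integrand = (np.abs(ux) ** 2 + np.abs(uy) ** 2
                 + V * dens
                 - Omega * np.real(np.conj(u) * L3u)   # imaginary part integrates to zero
                 + 0.5 * kappa * dens ** 2)
    return 0.5 * np.sum(integrand) * hx * hy
```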
In a nutshell, finding a ground state requires finding a minimizer of the energy E on the Riemannian manifold 𝕊 (which, we recall, incorporates the mass normalization constraint for u). Hence, we are concerned with a Riemannian minimization/optimization problem, which is precisely the perspective that we are taking in this paper. Alternatively, the problem can also be viewed through the lens of nonlinear eigenvalue problems by examining the corresponding Euler-Lagrange equations, known as the Gross-Pitaevskii eigenvalue problem, associated with the constrained energy minimization problem <cit.>. The links between the two perspectives (nonlinear eigenvalue problem vs. Riemannian minimization problem) are elaborated in the review paper <cit.>. We also note that the Euler–Lagrange equations can be tackled with Newton-type methods <cit.>.
As mentioned above, we adopt in the following the perspective of directly minimizing the energy in an iterative process through discretized Riemannian gradient flows / Riemannian gradient methods, cf. <cit.>. The development of optimization techniques on Riemannian manifolds goes back to Smith et al. <cit.> and has been significantly extended in the past three decades. Modern methods combine the concepts of Sobolev gradients, Riemannian descent directions, Riemannian vector transport, and retractions to the constrained manifold 𝕊, cf. <cit.>. In this paper, we will further explore this path by constructing new metric-adaptive Riemannian conjugate gradient (CG) methods for the considered application of rotating BECs. In particular, we will numerically investigate the performance of the new methods. Our experimental analysis primarily focuses on improving the computational efficiency of the schemes, specifically by (a) selecting appropriate metrics for the constrained manifold (or more precisely, the tangent spaces at the current iterates u^n) and (b) choosing the proper momentum parameter (denoted as β) to update the next search direction. With this, we also want to examine an open question posed in <cit.>: “it remains an open question whether the use of a well-adapted Riemannian metric defined on the constraint manifold could further improve the performance of the approach.”
Note here that the choice of the metric is a crucial ingredient, because it changes the Riemannian gradient of E on the manifold and hence the way in which a steepest descent/ascent is characterised. More precisely, the Riemannian gradient of E in a point v∈𝕊 with respect to an X-metric (induced by an inner product (·,·)_X) is obtained in two steps: First, construct the Riesz representation of E^'(v) in X. The representation is called a Sobolev gradient of E (cf. <cit.>) and is denoted by ∇_X E(v). Second, project ∇_X E(v) into the tangent space at v ∈𝕊 with the X-orthogonal projection. The resulting function is the Riemannian gradient of E in v w.r.t. the X-metric. In the context of rotating BECs, the most popular metrics for conjugate Riemannian gradient methods are the L^2-metric (in combination with suitable preconditioners) as studied by Tang et al. <cit.> and an energy-metric (based on an inner product of the form (u,v)_L^2(𝒟) + ( ∇_A u , ∇_A v )_L^2(𝒟) with ∇_A v := ∇ v - iΩ 𝐀^⊤ v and 𝐀=(x_2,-x_1,0)) proposed by Danaila et al. <cit.>. The idea of using adaptively changing metrics for (general) optimization problems is discussed by Ring and Wirth <cit.> and Sato <cit.>. In this work, we investigate a particular adaptively changing metric (based on the location on 𝕊) that is selected in such a way that the corresponding Sobolev gradient fulfills ∇_XE(v) =v for all v ∈𝕊 in the sense of optimal preconditioning. Motivated by guaranteed energy dissipation and global convergence for the corresponding gradient flows, this adaptive choice for the metric was first suggested in <cit.> for the Gross–Pitaevskii energy without rotation and later transferred to the case with rotation in <cit.>. However, both works are only concerned with regular Riemannian gradient methods and their analysis and extensions to conjugate gradient versions were left open. Even though the generalization of these concepts to Riemannian conjugate gradients was recently discussed in <cit.>, it was only for Ω=0 and for a Fletcher–Reeves-type momentum parameter, which we found to perform suboptimally in our experiments with rotating BECs.
In this work, we hence close this gap and formulate a corresponding Riemannian conjugate gradient method with adaptively changing metric as sketched above and with various choices for the momentum parameter. Selecting a suitable momentum parameter is of great practical importance since different choices can result in significant variations in the number of iterations required to attain a specified tolerance. We will consider four different choices according to the most popular types of momentum parameters, which are Dai–Yuan <cit.>, Fletcher–Reeves <cit.>, Hestenes–Stiefel <cit.> and Polak–Ribière <cit.>. If the momentum parameter is uniformly zero, the method reduces to a Riemannian gradient method. Besides comparing the performance of the parameters, we also compare the iteration numbers for the adaptive metric with the iteration numbers for the H^1_0-metric. In our numerical experiments we find that the adaptive metric leads to a significant acceleration compared to the H^1_0-method and that the corresponding Polak–Ribière and Hestenes–Stiefel-type parameters performed significantly better than the other choices (including the generic choice β^n=0).
So far, the mathematical convergence analysis of Riemannian conjugate gradient methods for the Gross–Pitaevskii problem is fully open and hence beyond the scope of this paper. However, in the case β=0 (i.e. the case of Riemannian gradient methods), a convergence analysis was recently established in <cit.>.
Finally we note that a practical application of Riemannian gradient methods also requires a space discretization (of the partial differential equations that have to be solved in each iteration of the gradient method). Typical approaches include spectral and pseudo-spectral methods <cit.>, finite element methods <cit.> or spectral element methods <cit.>. We leave the choice open in this paper and only investigate the outer gradient iteration. The algorithms can be afterwards combined with any preferred spatial discretization that potentially exploits the structure of the considered metric.
Outline: The paper is organised as follows. The model framework is introduced in Section <ref>. In Section <ref>, we recall the concepts of Sobolev gradients and Riemannian steepest descent. With this we propose a class of Riemannian conjugate gradient methods with different realizations based on the choice of the metric and the momentum parameter. Lastly, in Section <ref>, we evaluate the performance of the aforementioned Riemannian CG methods for different model problems and discuss the numerical observations. | null | null | null | null | null |
http://arxiv.org/abs/2409.17271v1 | 20240925183734 | Boson-fermion algebraic mapping in second quantization | [
"F. Lingua",
"D. M. Peñafiel",
"L. Ravera",
"S. Salgado"
] | hep-th | [
"hep-th",
"cond-mat.quant-gas",
"quant-ph"
] |
Boson-fermion algebraic mapping in second quantization
F. Lingua ^a, * D. M. Peñafiel ^b, † L. Ravera ^c, d, e, ⋆ S. Salgado ^f,
September 28, 2024
==============================================================================
^a Department of Applied Physics, KTH Royal Institute of Technology – KTH.
SE-10691 Stockholm, Sweden.
^b Instituto de Ciencias Exactas y Naturales, Facultad de Ciencias, Univesidad Arturo Prat – UNAP.
Avda. Arturo Prat 2120, Iquique, Chile.
^c DISAT, Politecnico di Torino – PoliTo.
Corso Duca degli Abruzzi 24, 10129 Torino, Italy.
^d Istituto Nazionale di Fisica Nucleare, Section of Torino – INFN.
Via P. Giuria 1, 10125 Torino, Italy.
^e Grupo de Investigación en Física Teórica – GIFT.
Universidad Católica De La Santísima Concepción, Concepción, Chile.
^f Instituto de Alta Investigación, Universidad de Tarapacá.
Casilla 7D, Arica, Chile.
^* [email protected] ^† [email protected] ^⋆ [email protected] ^ [email protected]
§ ABSTRACT
We present an algebraic method to derive the structure underlying the mapping of bosonic algebras of creation and annihilation operators into fermionic algebras, and vice versa, introducing a suitable identification between bosonic and fermionic generators.
The algebraic structure thus obtained corresponds to a deformed Grassmann algebra, involving anticommuting Grassmann-type variables. The role played by the latter in the implementation of gauge invariance in second quantization within our procedure is then discussed, together with the application of the mapping to the case of the bosonic and fermionic harmonic oscillator Hamiltonians.
Keywords: Bosons and fermions, Second quantization, Grassmann variables, Gauge invariance.
§ INTRODUCTION
Bosons and fermions have different characteristics and are distinguished by their intrinsic properties, particularly their spin and statistics. Bosons have integer spin and obey Bose-Einstein statistics, meaning that multiple bosons can occupy the same quantum state simultaneously – which leads to phenomena such as Bose-Einstein condensation and the behavior of photons in a laser. Fermions have half-integer spin, obey Fermi-Dirac statistics, and are subject to the Pauli exclusion principle, which states that no two fermions can occupy the same quantum state simultaneously.
In quantum field theory (QFT), bosons are typically associated with fields that mediate interactions, while fermions are the building blocks of matter. In particle and nuclear physics, composite particles can be bosons if their constituent fermions pair up in such a way that their total spin is an integer (e.g., mesons, made of one quark and one antiquark), while composite particles can be fermions if their constituent fermions combine to give a half-integer spin (e.g., baryons, such as protons and neutrons, which are made of three quarks).
Despite their differences, systems and theories of bosons and fermions can be mapped into each other and one can write, when certain conditions are met and in well-defined regimes, bosons in terms of fermions, and vice versa, allowing a boson-fermion correspondence. For example, the bosonization technique <cit.> is a well-established analytic tool for investigating the low energy regime of one-dimensional interacting fermionic systems, essentially consisting in linearizing the spectrum around the Fermi points, passing to the continuum limit, and finally expressing the fermionic operators in terms of bosonic fields.
On the other hand, with the so-called fermionization technique, involving Jordan-Wigner transformations <cit.>, it is possible to map spin and bosonic systems into fermionic ones – see <cit.> for a review on bosonization and fermionization.
Furthermore, the Fock space for fermion fields can be identified with the Fock space for boson fields, provided the overall numbers of internal degrees of freedom (d.o.f.) are the same. As a consequence, the respective free field Hamiltonian systems are equivalent, or, as also frequently said, dual.
The underlying principles of connecting bosonic and fermionic descriptions through dualities are applicable across various domains in QFT, see e.g. the pivotal work <cit.>.
Another possible way to relate bosons and fermions may be dictated by supersymmetry. One of the mathematical tools used to formulate and understand supersymmetry is Grassmann-type variables. In particular, supersymmetric theories can be formulated in a way that extends the concept of spacetime into a new structure called superspace, which includes both the usual spacetime coordinates and (anticommuting) Grassmann coordinates.
In this work, we present an algebraic approach to the mapping of the algebra of bosonic creation and annihilation operators (second quantization) into the algebra of fermionic operators, and vice versa, which allows us to systematically derive the algebraic structure underlying the mapping. Our procedure is based on the introduction of a proper identification criterion between bosonic and fermionic generators, inherited from a Lie algebra expansion method, called S-expansion <cit.>, and adapted to our purposes.
The basis of the S-expansion consists in combining the inner multiplication law of a semigroup S with the structure constants of a Lie algebra 𝒢. The new Lie algebra thus obtained is called an S-expanded algebra. From the physical point of view, several theories have been extensively studied using the S-expansion method, enabling numerous results over recent years, especially in the context of gravitational theories – see e.g. <cit.>.
The identification criterion we adopt here is reminiscent of the one described in <cit.>. However, the algebraic structure we obtain is not that of a semigroup. Rather, it corresponds to a graded (deformed) Grassmann algebra, involving anticommuting variables.
It is here derived from the analysis of the (anti)commutation relations of the bosonic and fermionic algebras.
The remainder of the paper is organised as follows: In Section <ref> we briefly review the identification criterion of the S-expansion. This criterion is then adapted to write the fermionic generators in terms of bosonic ones – converting, via the elements of the algebraic structure involved, the bosonic generators into fermionic ones – and vice versa. In Section <ref> we describe our procedure and derive the algebraic structure underlying the mapping, both in the case in which the identification considered preserves the creation and annihilation operations and in the case in which bosonic/fermionic creation operators are mapped into fermionic/bosonic annihilation operators.
Moreover, we discuss the role played by the Grassmann-like variables underlying the mapping in realising gauge invariance
in second quantization, within our procedure. In Section <ref> we provide the application to the case of the bosonic and fermionic harmonic oscillator Hamiltonians.
Section <ref> is devoted to the conclusion.
§ REVIEW OF THE IDENTIFICATION CRITERION
The basis of the S-expansion consists in combining the multiplication law of a semigroup S with the structure constants of a Lie algebra 𝒢
<cit.>. The new (larger) Lie algebra obtained through this procedure is named S-expanded algebra and can be written as 𝒢_S= S ×𝒢.
In <cit.>, an analytic method was developed to find the multiplication table(s) of the set(s) involved in the S-expansion for reaching a target Lie algebra from a starting one, after having properly chosen the partition over subspaces of the considered algebras (see also <cit.>). Let us briefly review the procedure.
In order to derive the multiplication table of the set S, the following identification criterion between the S-expanded generators of the initial Lie algebra and the generators of the target one is introduced:
T̃_A = T_(A,α) = λ_α T_A,
where T_A are the generators in the subspace V_A of the starting algebra, T̃_A are the generators in the subspace Ṽ_A of the target algebra, and where λ_α∈ S is an element of the set S – whose subsets are taken, within the S-expansion method, to be in resonance <cit.> with the 𝒢-partition in subspaces.
One has to perform the identification (<ref>) for each element of the set S, associating each element of each subset with the generators in the subspace related to the considered subset.
The whole procedure of association and identification does not affect the internal structure of the generators of the starting algebra.
Under the identification (<ref>), the commutation relations between the generators of the target algebras are linked with the commutation relations of the S-expanded ones and, factorising the elements of the set S, the multiplication relations between these elements are fixed.
For the target algebra one writes
[ T̃_A, T̃_B ] =C̃_AB^ CT̃_C,
where T̃_A, T̃_B , and T̃_C are the generators in the subspaces Ṽ_A, Ṽ_B, and Ṽ_C of the partition over the target Lie algebra, respectively. With C̃_AB^ C we denote the structure constants of the target Lie algebra.
For the starting algebra we have
[T_A, T_B ]= C_AB^ C T_C ,
with C_AB^ C the corresponding structure constants.
Then, the commutation relations of the expanded algebra are written in terms of the generators of the starting algebra:
[T_(A,α),T_(B,β)]= K_αβ^ γ C_AB^ C T_(C,γ),
or
[λ_α T_A,λ_β T_B ] = K_αβ^ γ C_AB^ C λ_γ T_C,
where the λ's are the elements of the set S and where K_αβ^ γ is the so-called two-selector, defined as
K_αβ^ γ = { 1 , when λ_α·λ_β = λ_γ,
0 , otherwise. .
One can now write the structure constants of the target algebra in terms of the two-selector and of the structure constants of the starting one:
C̃_AB^ C := C_(A,α)(B,β)^ (C,γ)= K_αβ^ γC_AB^ C.
By exploiting the identification (<ref>), one can now write the commutation relations of the target algebra (<ref>) in terms of the commutation relations between the S-expanded generators of the starting one, factorising the elements of the set S out of the commutators.
In this way, the following relations are obtained:
(λ_α·λ_β ) [T_A, T_B ] =K_αβ^ γ C_AB^ Cλ_γ T_C.
Comparing the commutation relations (<ref>) with the ones of the starting algebra in (<ref>), one finds
λ_α·λ_ β = λ_γ .
Repetition of this procedure for all the commutation rules of the target algebra yields the multiplication rules between the elements of the set S, that is its multiplication table.
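As a toy illustration of this machinery (with ingredients chosen purely for the example and not taken from the references), the two-selector of a small abelian semigroup can be combined with the structure constants of su(2) to produce the S-expanded structure constants:

```python
import numpy as np

# Illustrative semigroup S = {l_0, l_1, l_2} with l_a * l_b = l_min(a+b, 2)
def product(a, b):
    return min(a + b, 2)

nS, nG = 3, 3
K = np.zeros((nS, nS, nS))                  # two-selector K_{alpha beta}^{gamma}
for a in range(nS):
    for b in range(nS):
        K[a, b, product(a, b)] = 1.0

# structure constants of the starting algebra (su(2): C_AB^C = epsilon_ABC)
C = np.zeros((nG, nG, nG))
for A in range(nG):
    for B in range(nG):
        for Cidx in range(nG):
            C[A, B, Cidx] = np.linalg.det(np.eye(3)[[A, B, Cidx]])

# expanded structure constants  C_(A,alpha)(B,beta)^(C,gamma) = K_{alpha beta}^{gamma} C_AB^C
C_expanded = np.einsum("abg,ABC->AaBbCg", K, C)
print(C_expanded.shape)                     # (3, 3, 3, 3, 3, 3): generators labelled by pairs (A, alpha)
```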
In the following, we adopt and adapt the above identification criterion and perform an analogous procedure in order to derive the algebraic structure underlying the mapping between bosonic and fermionic algebras.
§ ALGEBRAIC APPROACH TO THE BOSON-FERMION MAPPING
We consider bosonic and fermionic creation and annihilation operators acting on modes, e.g., sites of a lattice. For the sake of clarity, we label bosonic modes with I,J,…=1,…,N_B and fermionic modes with i,j,… = 1, …, N_F.
We consider the bosonic algebra 𝒢_B generated by {a_I,a^†_I}, being a_I and a^†_I the bosonic annihilation and creation operators, respectively, acting on the mode I. They satisfy the following commutation relations:
[a_I,a_J ^†] = a_I a_J ^† - a_J ^† a_I = δ_IJ ,
[a_I,a_J]=[a_I ^† ,a_J ^†]=0.
As target algebra, we consider the fermionic algebra 𝒢_F generated by { c_i,c^†_i}, where c_i and c^†_i are fermionic annihilation and creation operators, respectively, acting on the mode i. These generators satisfy the following anticommutation relations:
{c_i,c_j^†}= δ_ij ,
{c_i,c_j}={c_i^†, c_j ^†}=0.
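Both sets of relations can be checked numerically in finite-dimensional representations, e.g. a truncated Fock space for a bosonic mode and a Jordan-Wigner construction for the fermionic modes; the following sketch is only meant as such a consistency check and plays no role in the mapping itself.

```python
import numpy as np

def truncated_boson(n_max):
    """Annihilation operator a on the truncated Fock basis |0>, ..., |n_max>."""
    return np.diag(np.sqrt(np.arange(1.0, n_max + 1)), k=1)

def jordan_wigner_fermions(n_modes):
    """Fermionic annihilation operators c_i built via the Jordan-Wigner construction."""
    sm = np.array([[0.0, 1.0], [0.0, 0.0]])     # lowers |1> to |0> on a single mode
    sz = np.diag([1.0, -1.0])
    ops = []
    for i in range(n_modes):
        factors = [sz] * i + [sm] + [np.eye(2)] * (n_modes - i - 1)
        c = factors[0]
        for f in factors[1:]:
            c = np.kron(c, f)
        ops.append(c)
    return ops

def anti(x, y):
    return x @ y + y @ x

a = truncated_boson(40)
comm = a @ a.T - a.T @ a                        # [a, a^dagger]
print(np.allclose(np.diag(comm)[:-1], 1.0))     # identity, up to the truncation edge

c1, c2 = jordan_wigner_fermions(2)
print(np.allclose(anti(c1, c2), 0.0))           # {c_1, c_2} = 0
print(np.allclose(anti(c1, c1.T), np.eye(4)))   # {c_1, c_1^dagger} = 1
```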
Now, our goal is to find the algebraic structure linking the bosonic algebra 𝒢_B and the fermionic algebra 𝒢_F. To this aim, we apply the identification criterion developed in the S-expansion context, adapted to our framework.
As we shall see, two main identifications are possible.
§.§ Mapping that preserves the creation/annihilation operation
We write the following identification relations between the fermionic and bosonic generators:
{ c_i=λ_i^I a_I,
c_i^† = λ_i^I† a_I^† ,
.
where λ_i^I and λ_i^I† are the elements of a set Λ. Notice that the λ's carry mode labels, and the contracted indices in (<ref>) are summed over (summation is implied).
We extract the algebraic properties of the set Λ involved in the mapping – which, at this stage, corresponds to consistently writing, using the identification (<ref>), fermions in terms of bosons – by analysing the (anti)commutation relations of the fermionic and of the bosonic algebras.
We start by considering the first anticommutation relation in (<ref>).
Substituting the expressions in (<ref>) for the fermionic generators c_i and c_j^†, we obtain
{c_i,c_j^†} = δ_ij
= {λ_i^I a_I , λ_j^J† a_J^†}= (λ_i^I ·λ_j^J †) a_I a_J^† + (λ_j^J†·λ_i^I ) a_J^†a_I .
Now we see that, requiring, as a consistency relation to be fulfilled by the λ's,
λ_i^I ·λ_j^J† = -λ_j^J†·λ_i^I ,
we get
{c_i,c_j^†} =δ_ij
= λ_i^I ·λ_j^J† (a_I a_J^† - a_J^†a_I)= λ_i^I ·λ_j^J† [a_I , a_J^†] = λ_i^I ·λ_j^J†δ_IJ ,
where in the last step we have used the first commutation relation in (<ref>). Hence, we are left with the following relation:
λ_i^I ·λ_jI^† = δ_ij.
On the other hand, taking the anticommutation relation {c_i,c_j}=0
and using the identification (<ref>), we can write
{c_i,c_j} =0
= {λ_i^I a_I, λ_j^J a_J} =λ_i^I ·λ_j^J [a_I, a_J] = 0,
where we have also introduced and exploited the consistency requirement
λ_i^I ·λ_j^J = -λ_j^J ·λ_i^I
and used the commutation relation [a_I,a_J]=0.
Analogously, considering the anticommutation relation {c_i^† , c_j^†}=0, using the bosonic commutation relation [a_I^†,a_J^†]=0, and implementing the identification (<ref>)
we get
λ_i^I†·λ_j^J† = -λ_j^J†·λ_i^I† .
Hence, we end up with the multiplication rules (<ref>), (<ref>), (<ref>), and (<ref>) between the elements of Λ.
The algebraic structure obtained is thus
{λ_i^I , λ_j^J†} = λ_i^I ·λ_j^J† + λ_j^J†·λ_i^I = 0 ,
{λ_i^I , λ_j^J } = λ_i^I ·λ_j^J + λ_j^J ·λ_i^I = 0 ,
{λ_i^I†, λ_j^J†} = λ_i^I†·λ_j^J† + λ_j^J†·λ_i^I† = 0 ,
which corresponds to a graded Grassmann algebra, involving anticommuting Grassmann-type variables.[The anticommuting elements λ_i^I and λ_i^I† are ℤ_2 odd-graded (fermionic) elements.
The algebra they generate is ℤ_2-graded.] Furthermore, one may call it a “deformed" Grassmann algebra, given that the λ,λ^†'s obey the multiplication rule (<ref>) – which can be seen as an extra condition with respect to those appearing in a standard Grassmann algebra.
Note that the Pauli exclusion principle for fermions is still naturally satisfied – and it will actually be so also in the other mappings we will present, due to the fact that the original (anti)commutation relations are respected.
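The purely anticommuting content of the relations above can be visualised with a faithful matrix model in which each abstract generator is represented by a fermionic creation operator acting on its own two-level slot; note that the additional relation λ_i^I ·λ_jI^† = δ_ij is an extra constraint on the abstract elements (the announced deformation) and is not reproduced by this finite-dimensional illustration.

```python
import numpy as np
from itertools import combinations

def nilpotent_generators(n):
    """Represent n pairwise-anticommuting, nilpotent generators (Grassmann-type
    variables) by fermionic creation operators on n two-level slots."""
    sp = np.array([[0.0, 0.0], [1.0, 0.0]])     # raises |0> to |1>
    sz = np.diag([1.0, -1.0])
    gens = []
    for i in range(n):
        factors = [sz] * i + [sp] + [np.eye(2)] * (n - i - 1)
        g = factors[0]
        for f in factors[1:]:
            g = np.kron(g, f)
        gens.append(g)
    return gens

thetas = nilpotent_generators(4)                 # e.g. two lambdas and two daggered lambdas
ok = all(np.allclose(x @ y + y @ x, 0.0) for x, y in combinations(thetas, 2))
ok = ok and all(np.allclose(t @ t, 0.0) for t in thetas)
print(ok)    # True: all generators anticommute and square to zero
```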
Inverse mapping of fermionic into bosonic operators
One can also consider the inverse mapping of fermions into bosons, by applying an analogous procedure, i.e. by writing an identification
{ a_I=λ_I^i c_i,
a_I^† =λ_I^i † c_i^† ,
.
with λ_I^i and λ_I^i † the elements of the set involved in the mapping. In this case, from the analysis of the first commutation relation in (<ref>) we get
[a_I,a_J ^†] = δ_IJ
= [λ_I^i c_i , λ_J^j† c_j^† ] = (λ_I^i ·λ_J^j †) c_i c_j^† - (λ_J^j†·λ_I^i ) c_j^†c_i
= λ_I^i ·λ_J^j† (c_i c_j^† + c_j^†c_i)= λ_I^i ·λ_J^j†{c_i , c_j^†} = λ_I^i ·λ_J^j†δ_ij,
where in the second line we have implemented the consistency requirement
λ_I^i ·λ_J^j† = -λ_J^j†·λ_I^i ,
and made use of the anticommutation relation {c_i,c_j^†}=δ_ij.
λ_I^i ·λ_Ji^† = δ_IJ.
On the other hand, we have
[a_I,a_J] = 0
= [λ_I^i c_i , λ_J^j c_j ] = (λ_I^i ·λ_J^j) c_i c_j - (λ_J^j·λ_I^i ) c_j c_i
= λ_I^i ·λ_J^j (c_i c_j + c_j c_i)= λ_I^i ·λ_J^j{c_i , c_j } = 0 ,
where we have used, for consistency,
λ_I^i ·λ_J^j = -λ_J^j·λ_I^i .
Analogously, from the analysis of [a^†_I,a^†_J] = 0, via {c^†_i , c^†_j } = 0, we can derive, for consistency, the multiplication rule
λ_I^i†·λ_J^j† = -λ_J^j†·λ_I^i† .
Therefore, in the case of the inverse mapping we end up with the multiplication rules (<ref>), (<ref>), (<ref>), and (<ref>) between the elements of the set underlying the mapping.
Also in this case, the algebraic structure obtained corresponds to a graded, deformed Grassmann algebra, involving anticommuting Grassmann-type variables, and reads
{λ_I^i , λ_J^j†} = 0 ,
{λ_I^i , λ_J^j } = 0 ,
{λ_I^i†, λ_J^j†} = 0 ,
together with the extra relation (<ref>).
Let us observe that, considering the identifications (<ref>) and (<ref>) together,
one may then also write
a_J = λ_J^i c_i = λ_J^i λ_i^I a_I = (λ_J^i ·λ_i^I ) a_I ⇒ λ_J^i ·λ_i^I = δ_J^I ,
a^†_J = λ_J^i† c^†_i = λ_J^i†λ_i^I† a^†_I = (λ_J^i†·λ_i^I†) a^†_I ⇒ λ_J^i†·λ_i^I† = δ_J^I ,
c_j = λ_j^I a_I = λ_j^I λ_I^i c_i = ( λ_j^I ·λ_I^i ) c_i ⇒ λ_j^I ·λ_I^i = δ_j^i ,
c^†_j = λ_j^I† a^†_I = λ_j^I†λ_I^i† c^†_i = ( λ_j^I†·λ_I^i†) c^†_i ⇒ λ_j^I†·λ_I^i†= δ_j^i ,
ending up with (multiplicative inverse) relations between the λ_I^i,λ_I^i† and the λ_i^I,λ_i^I† elements.
§.§ Mapping that exchanges the creation and annihilation operations
Another identification that one may consider, in contrast to (<ref>), is the following:
{ c_i=λ̃_i^I† a^†_I,
c_i^† =λ̃_i^I a_I ,
.
where λ̃_i^I and λ̃_i^I† are the elements of the set involved in this mapping, and where the creation/annihilation of bosons is translated into the corresponding annihilation/creation of fermions.
From the study of the first anticommutation relation in (<ref>) we obtain
{c_i,c_j^†} = δ_ij
= {λ̃_i^I† a_I^† , λ̃_j^J a_J }= (λ̃_i^I†·λ̃_j^J) a_I^† a_J + (λ̃_j^J·λ̃_i^I† ) a_J a_I^†
= λ̃_i^I†·λ̃_j^J (a_I^† a_J - a_J a_I^†)=-λ̃_i^I†·λ̃_j^J [a_J , a_I^†] = λ̃_j^J ·λ̃_i^I†δ_JI ,
where we have required, for consistency, the relation
λ̃_i^I†·λ̃_j^J = - λ̃_j^J·λ̃_i^I†
to hold (which is reminiscent of the relation (<ref>) previously obtained for the λ's and λ^†'s elements),
and where we have also used the first commutation relation in (<ref>). Therefore, we end up with the relation
λ̃_j^I ·λ̃_iI^† = δ_ij
between the λ̃'s and λ̃^†'s (analogous to the relation (<ref>) previously obtained for the λ's and λ^†'s).
Then, considering the other anticommutation relations in the fermionic algebra we get
{c_i,c_j} =0
= {λ̃_i^I† a^†_I, λ̃_j^J† a^†_J} =λ̃_i^I†·λ̃_j^J† [a^†_I, a^†_J] = 0
and
{c^†_i,c^†_j} =0
= {λ̃_i^I a_I, λ̃_j^J a_J} =λ̃_i^I·λ̃_j^J [a_I, a_J] = 0,
where we have also introduced and exploited the consistency requirements
λ̃_i^I†·λ̃_j^J† = -λ̃_j^J†·λ̃_i^I†
and
λ̃_i^I ·λ̃_j^J = -λ̃_j^J ·λ̃_i^I ,
respectively analogous to (<ref>) and (<ref>) previously obtained for the λ^†'s and λ's.
Hence, the algebraic structure underlying the mapping that converts the creation operation into the annihilation one and vice versa is the same we have obtained in the case of the mapping preserving the creation/annihilation operation, namely a (deformed, due to (<ref>)) Grassmann algebra involving anticommuting Grassmann-type variables.
Inverse mapping of fermionic into bosonic operators
In a similar way, we may now introduce the following identification:
{ a_I=λ̃_I^i† c^†_i,
a_I^† =λ̃_I^i c_i ,
.
translating the creation/annihilation of fermions into the annihilation/creation of bosons – which is the inverse mapping of (<ref>). Thus, from the analysis of the bosonic algebra we get
[a_I,a_J ^†] = δ_IJ
= [λ̃_I^i† c^†_i , λ̃_J^j c_j ] = (λ̃_I^i†·λ̃_J^j) c^†_i c_j - (λ̃_J^j·λ̃_I^i† ) c_j c^†_i
= λ̃_I^i †·λ̃_J^j (c_j c_i^† + c_i^†c_j)= λ̃_I^i†·λ̃_J^j{c_j , c_i^†} = λ̃_I^i†·λ̃_J^jδ_ji,
where we have also implemented the consistency requirement
λ̃_I^i ·λ̃_J^j† = -λ̃_J^j†·λ̃_I^i ,
analogous to the relation (<ref>) previously written for the λ's and λ^†'s. Therefore, from (<ref>) we are left with the relation
λ̃_I^i†·λ̃_Ji = δ_IJ ⇒ λ̃_Ji·λ̃_I^i† = - δ_IJ.
Moreover, we have
[a_I,a_J] = 0
= [λ̃_I^i† c^†_i , λ̃_J^j† c^†_j ] = (λ̃_I^i†·λ̃_J^j†) c^†_i c^†_j - (λ̃_J^j†·λ̃_I^i† ) c^†_j c^†_i
= λ̃_I^i†·λ̃_J^j† (c^†_i c^†_j + c^†_j c^†_i)=λ̃_I^i†·λ̃_J^j†{c^†_i , c^†_j } = 0 ,
where we have used
λ̃_I^i†·λ̃_J^j† = -λ̃_J^j†·λ̃_I^i† ,
reminiscent of (<ref>) among the λ^†'s.
Analogously, from the analysis of [a^†_I,a^†_J] = 0, via {c_i , c_j } = 0, we derive the multiplication rule
λ̃_I^i ·λ̃_J^j = -λ̃_J^j·λ̃_I^i ,
reminiscent of (<ref>) for the λ's.
Hence, the algebraic structure obtained for the λ̃'s and λ̃^†'s is, again, a graded (deformed) Grassmann algebra, with the deformation due to the extra relation (<ref>).
Notice that, as previously observed in the case of the mapping that preserves the creation/annihilation operation, one may now consider (<ref>) and (<ref>) together and write
a_J = λ̃_J^i† c^†_i = λ̃_J^i†λ̃_i^I a_I = (λ̃_J^i†·λ̃_i^I) a_I ⇒ λ̃_J^i†·λ̃_i^I= δ_J^I ,
a^†_J = λ̃_J^i c_i = λ̃_J^iλ̃_i^I† a^†_I = (λ̃_J^i·λ̃_i^I†) a^†_I ⇒ λ̃_J^i·λ̃_i^I† = δ_J^I ,
c_j = λ̃_j^I† a^†_I = λ̃_j^I†λ̃_I^i c_i = ( λ̃_j^I†·λ̃_I^i ) c_i ⇒ λ̃_j^I†·λ̃_I^i = δ_j^i ,
c^†_j = λ̃_j^I a_I = λ̃_j^Iλ̃_I^i† c^†_i = (λ̃_j^I·λ̃_I^i†) c^†_i ⇒ λ̃_j^I·λ̃_I^i† = δ_j^i ,
obtaining (multiplicative inverse) relations between the λ̃_I^i,λ̃_I^i† and the λ̃_i^I,λ̃_i^I† elements.
§.§ Gauge invariance in second quantization and role of the Grassmann-type variables
Let us conclude this section with an observation on gauge invariance and symmetry reduction in our context. In second quantization, gauge transformations correspond to phase transformations of the creation and annihilation operators for fermions/bosons. These transformations reflect the invariance of the system under local or global phase changes – in particular, the latter leave physical observables, such as occupation numbers, unchanged.
For fermionic creation and annihilation operators, for a global U(1) gauge transformation the operators transform as
c_i → c'_i = e^iθ c_i ,
c_i^†→c'_i^† = e^-iθ c_i^† ,
where θ is the phase angle of the transformation. This kind of transformation corresponds to a global symmetry where the phase factor is the same for all operators.
Analogously, for bosonic creation and annihilation operators one may write
a_I → a'_I = e^iθ a_I ,
a_I^†→a'_I^† = e^-iθ a_I^† .
On the other hand, for a local U(1) gauge transformation the phase factor can depend on “position" (on the mode index, e.g. site of a lattice model). While global gauge transformations leave physical observables unchanged because they apply a constant phase to all states in the system, local gauge transformations can affect observables unless they are compensated by other modifications in the system (such as the introduction of gauge fields). Let us denote the aforementioned position or mode dependence by θ_i (or θ_I), reflecting a local symmetry.
The fermionic creation and annihilation operators then transform as
c_i → c'_i = e^iθ_i c_i ,
c_i^†→c'_i^† = e^-iθ_i c_i^† ,
while for bosons we have
a_I → a'_I = e^iθ_I a_I ,
a_I^†→a'_I^† = e^-iθ_I a_I^† .
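That occupation numbers are indeed insensitive to such phase rotations can be verified directly on a truncated matrix representation, as in the following minimal check (the truncation level and the phase are arbitrary):

```python
import numpy as np

n_max, theta = 30, 0.73
a = np.diag(np.sqrt(np.arange(1.0, n_max + 1)), k=1)
a_prime = np.exp(1j * theta) * a                 # gauge-transformed annihilation operator
number_op = a.conj().T @ a
number_op_prime = a_prime.conj().T @ a_prime
print(np.allclose(number_op, number_op_prime))   # True: a^dagger a is gauge invariant
```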
Taking this into account and exploiting the identifications (<ref>) and (<ref>) inducing the mappings previously discussed, we find the θ_I gauge transformations of the different λ's and λ^†'s elements to be
λ_i^I →λ'_i^I = e^-iθ_Iλ_i^I , λ_i^I†→λ'_i^I† = e^iθ_Iλ_i^I† , λ_I^i →λ'_I^i = e^iθ_Iλ_I^i , λ_I^i†→λ'_I^i† = e^-iθ_Iλ_I^i† ,
while the θ_i gauge transformations are
λ_i^I →λ'_i^I = e^iθ_iλ_i^I , λ_i^I†→λ'_i^I† = e^-iθ_iλ_i^I†, λ_I^i →λ'_I^i = e^-iθ_iλ_I^i , λ_I^i†→λ'_I^i† = e^iθ_iλ_I^i† .
Correspondingly, the creation and annihilation operators c^†_i,c_i and a^†_I,a_I can be seen as composite objects, invariant under (part of) the gauge symmetry. In other words, one may formally write, omitting mode labels just to lighten the notation,
a^λ:=λ^-1a=λ^-1(λ c)=c , a^†λ^†:=λ^† -1a^†=λ^† -1(λ^† c^†)=c^† ,
and, considering the λ's and λ^†'s involved in the inverse mapping,
c^λ:=λ^-1c=λ^-1(λ a)=a , c^†λ^†:=λ^† -1c^†=λ^† -1(λ^† a^†)=a^† ,
This allows us to interpret the bosonic (fermionic) creation and annihilation operators as dressed fermionic (bosonic) ones,[This can be seen as a self-dressing, as it is extracted from the operators themselves.] where the dressing is implemented by the Grassmann-type variables λ,λ^† – which one may therefore name dressing Grassmannian variables. As shown in (<ref>)-(<ref>), under local gauge transformations the latter have their own phase adjusted to cancel the phase change in the bosonic/fermionic operators, thus creating operators that are gauge-invariant under the symmetry that has been reduced.
Note that, depending on the type of system being considered – i.e. whether it is fermionic or bosonic, part of the above mentioned gauge symmetry transformations of the λ,λ^†'s can be interpreted as residual gauge symmetries of the (partially) dressed operators.
The same arguments above can be applied, in an analogous way, to the mappings that exchange the creation and annihilation operations. In this case we get, taking into account (<ref>)-(<ref>) and considering the mappings (<ref>)/(<ref>),
λ̃_i^I →λ̃'_i^I = e^-iθ_Iλ̃_i^I , λ̃_i^I†→λ̃'_i^I† = e^iθ_Iλ̃_i^I† , λ̃_I^i →λ̃'_I^i = e^-iθ_Iλ̃_I^i , λ̃_I^i†→λ̃'_I^i† = e^iθ_Iλ̃_I^i† ,
and
λ̃_i^I →λ̃'_i^I = e^-iθ_iλ̃_i^I , λ̃_i^I†→λ̃'_i^I† = e^iθ_iλ̃_i^I†, λ̃_I^i →λ̃'_I^i = e^-iθ_iλ̃_I^i , λ̃_I^i†→λ̃'_I^i† = e^iθ_iλ̃_I^i† ,
under θ_I and θ_i gauge transformations, respectively. Therefore, one may formally write, on the one hand,
a^λ̃^†:=λ̃^† -1a=λ̃^† -1(λ̃^† c^†)=c^† , a^†λ̃:=λ̃^-1a^†=λ̃^-1(λ̃ c)=c,
and, on the other,
c^λ̃^†:=λ̃^† -1c=λ̃^† -1(λ̃^† a^†)=a^† , c^†λ̃:=λ̃^-1c^†=λ̃^-1(λ̃ a)=a ,
considering the λ̃'s and λ̃^†'s of the inverse mapping.
§ APPLICATION TO THE HAMILTONIANS OF THE BOSONIC/FERMIONIC HARMONIC OSCILLATORS
We will now apply the algebraic mappings and reasoning presented above to the case of the (free) Hamiltonians of the bosonic and fermionic harmonic oscillators.
Here we explicitly apply the mapping that preserves the creation/annihilation operation; the same reasoning can be performed for the mapping that exchanges the creation/annihilation operation, leading, in a straightforward way, to algebraically analogous results.
The Hamiltonian of the bosonic harmonic oscillator is
H_B = ħω/2{a^†_I,a_I}=ħω/2( a^†_I a_I + a_I a^†_I )=ħω(a^†_I a_I + 1/2),
where summation over I is implied.
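In a truncated Fock space for a single mode, this Hamiltonian can be written down explicitly and its spectrum compared with ħω(n + 1/2); the sketch below sets ħω = 1 and is meant purely as a numerical illustration.

```python
import numpy as np

n_max, hbar_omega = 20, 1.0
a = np.diag(np.sqrt(np.arange(1.0, n_max + 1)), k=1)
H_B = 0.5 * hbar_omega * (a.conj().T @ a + a @ a.conj().T)   # (hbar omega / 2) {a^dagger, a}
levels = np.sort(np.linalg.eigvalsh(H_B))
print(levels[:5])      # 0.5, 1.5, 2.5, 3.5, 4.5 away from the truncation edge
```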
Applying the identification (<ref>) to H_B and working through the induced algebraic mapping, exploiting also the previously derived relations between the associated variables λ,λ^†, we get
H_B = ħω/2λ_I^i†·λ_I^j (c^†_i c_j - c_j c^†_i ) = ħω/2λ_I^i†·λ_I^j [c^†_i,c_j] .
In other words, with our procedure one can consistently derive the Hamiltonian H_B starting from the commutator [c^†_i,c_j] between fermionic operators, properly building, by means of λ,λ^†, an object that is invariant under θ_i gauge transformations – therefore this symmetry is reduced – and also invariant under θ_I gauge transformations by construction (as expected for H_B).
In a similar way, one can consider the Hamiltonian of the so-called fermionic harmonic oscillator
H_F = ħω/2 [c^†_i ,c_i] ,
and write, by applying the identification (<ref>) and using the relations between the λ,λ^†'s underlying the associated mapping,
H_F = ħω/2λ_i^I†·λ_i^J ( a^†_I a_J + a_J a^†_I ) = ħω/2λ_i^I†·λ_i^J {a^†_I,a_J} .
We can therefore see that the Hamiltonian H_F can be derived from the anticommutator {a^†_I,a_J} between bosonic operators. This is done by building, via the Grassmann-type variables λ,λ^†'s appearing in (<ref>), an object that is invariant under θ_I gauge transformations. By construction, it is also invariant under θ_i transformations, as indeed expected for H_F.
Under the above perspective, the bosonic (fermionic) Hamiltonian H_B (H_F) can therefore be seen as a dressed object, derived from bare commutation (anticommutation) relations between creation and annihilation operators.
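For completeness, the fermionic counterpart acts on a two-dimensional space per mode, and its two levels ∓ħω/2 can be checked in the same spirit (single mode, ħω = 1):

```python
import numpy as np

c = np.array([[0.0, 1.0], [0.0, 0.0]])            # single-mode fermionic annihilation operator
H_F = 0.5 * (c.conj().T @ c - c @ c.conj().T)     # (hbar omega / 2) [c^dagger, c] with hbar omega = 1
print(np.linalg.eigvalsh(H_F))                    # [-0.5, 0.5]
```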
§ CONCLUSION
In this work, we have presented an algebraic approach to the mapping of Lie algebras 𝒢_B of bosonic creation and annihilation operators into algebras 𝒢_F of fermionic creation and annihilation operators, and vice versa.
We have introduced a specific identification criterion, inherited from a Lie algebra expansion method – known as S-expansion – and adapted to our purposes, between the bosonic and the fermionic generators. We have then used the (anti)commutation relations of the bosonic and fermionic algebras in order to determine the algebraic structure underlying the mapping.
The latter corresponds to a graded, deformed Grassmann algebra, involving anticommuting Grassmann-type variables.
The deformation is due to an extra relation among the elements of the algebra necessary for the consistency of the construction based on the identification criterion adopted case by case.
We have presented different mappings, all based on a Grassmann-type algebra, that either preserve the creation/annihilation operations or exchange them. In both cases, we have analysed the mapping of bosonic to fermionic operators and its inverse.
We have then discussed the role played by the Grassmann-type variables concerning gauge invariance in second quantization.
We have found that, within our procedure for the various mappings, the bosonic/fermionic creation and annihilation operators can be seen as dressed operators, where the dressing is provided by the Grassmann-type variables underlying the mapping, hence interpretable as dressing Grassmannian variables.
Objects of this kind might also be extracted from the physical content of the model by considering, e.g., the polar decomposition of (complex) eigenvalues, which separates magnitude and phase.
We leave such analysis to future investigations.
We have also provided an example of application to the case of the Hamiltonians of the bosonic and fermionic harmonic oscillators, showing that they can be seen as dressed objects, derived from the – bare – commutation (anticommutation) relations between the fermionic (bosonic) creation and annihilation operators.
Future perspectives include the application/extension of the algebraic method developed here within a field-theoretic context and the analysis of various Hamiltonian systems (e.g., Hubbard models) by means of the mappings proposed here, considering also on-site and inter-site interaction terms.
Finally, it would be interesting to see if and how our approach actually makes contact with supergeometry and/or supersymmetry, in particular concerning the way in which we recover gauge invariance, especially in the context of the so-called unconventional supersymmetry (Ususy) <cit.>, which has been shown to play a relevant role in
the construction of analogue (supergravity) models, providing a macroscopic description
of the electronic properties of graphene-like materials. There, supersymmetry is not manifest, but the description of these kind of systems is still derived starting from a supergeometric setup and exploiting the so-called matter Ansatz, constructed with a bosonic field and a spin-1/2 fermion field, invariant under Nieh-Yan-Weyl transformations <cit.>. Models exhibiting Ususy do not require the matching of bosonic and fermionic degrees of freedom typical of supersymmetric theories; they involve dynamical spin-1/2 fermion fields and are particularly appealing because they are based on a Weyl-invariant action. Future investigations in this direction will be carried out under a field-theoretical perspective, at both classical and quantum level.
§ ACKNOWLEDGMENT
We wish to thank Serena Fazzini for the inspiring discussions during the initial stages of this work.
D.M.P. acknowledges financial support
from the Chilean government through Fondecyt grants Grant #11240533.
| Bosons and fermions have different characteristics and are distinguished by their intrinsic properties, particularly their spin and statistics. Bosons have integer spin and obey Bose-Einstein statistics, meaning that multiple bosons can occupy the same quantum state simultaneously – which leads to phenomena such as Bose-Einstein condensation and the behavior of photons in a laser. Fermions obey Fermi-Dirac statistics and are subject to the Pauli exclusion principle, which states that no two fermions can occupy the same quantum state simultaneously.
In quantum field theory (QFT), bosons are typically associated with fields that mediate interactions, while fermions are the building blocks of matter. In particle and nuclear physics, composite particles can be bosons if their constituent fermions pair up in such a way that their total spin is an integer (e.g., mesons, made of one quark and one antiquark), while composite particles can be fermions if their constituent fermions combine to give a half-integer spin (e.g., baryons, such as protons and neutrons, which are made of three quarks).
Despite their differences, systems and theories of bosons and fermions can be mapped into each other and one can write, when certain conditions are met and in well-defined regimes, bosons in terms of fermions, and vice versa, allowing a boson-fermion correspondence. For example, the bosonization technique <cit.> is a well-established analytic tool for investigating the low energy regime of one-dimensional interacting fermionic systems, essentially consisting in linearizing the spectrum around the Fermi points, passing to the continuum limit, and finally expressing the fermionic operators in terms of bosonic fields.
On the other hand, with the so-called fermionization technique, involving Jordan-Wigner transformations <cit.>, it is possible to map spin and bosonic systems into fermionic ones – see <cit.> for a review on bosonization and fermionization.
Furthermore, the Fock space for fermion fields can be identified with the Fock space for boson fields, provided the overall numbers of internal degrees of freedom (d.o.f.) are the same. As a consequence, the respective free field Hamiltonian systems are equivalent, or, as also frequently said, dual.
The underlying principles of connecting bosonic and fermionic descriptions through dualities are applicable across various domains in QFT, see e.g. the pivotal work <cit.>.
Another possible way to relate bosons and fermions may be dictated by supersymmetry. One of the mathematical tools used to formulate and understand supersymmetry is Grassmann-type variables. In particular, supersymmetric theories can be formulated in a way that extends the concept of spacetime into a new structure called superspace, which includes both the usual spacetime coordinates and (anticommuting) Grassmann coordinates.
In this work, we present an algebraic approach to the mapping of the algebra of bosonic creation and annihilation operators (second quantization) into the algebra of fermionic operators, and vice versa, which allows one to systematically derive the algebraic structure underlying the mapping. Our procedure is based on the introduction of a proper identification criterion between bosonic and fermionic generators, inherited from a Lie algebra expansion method, called S-expansion <cit.>, and adapted to our purposes.
The basis of the S-expansion consists in combining the inner multiplication law of a semigroup S with the structure constants of a Lie algebra 𝒢. The new Lie algebra thus obtained is called an S-expanded algebra. From the physical point of view, several theories have been extensively studied using the S-expansion method, enabling numerous results over recent years, especially in the context of gravitational theories – see e.g. <cit.>.
The identification criterion we adopt here is reminiscent of the one described in <cit.>. However, the algebraic structure we obtain is not that of a semigroup. Rather, it corresponds to a graded (deformed) Grassmann algebra, involving anticommuting variables.
It is here derived from the analysis of the (anti)commutation relations of the bosonic and fermionic algebras.
The remainder of the paper is organised as follows: In Section <ref> we briefly review the identification criterion of the S-expansion. This criterion is then adapted to write the fermionic generators in terms of bosonic ones – converting, via the elements of the algebraic structure involved, the bosonic generators into fermionic ones – and vice versa. In Section <ref> we describe our procedure and derive the algebraic structure underlying the mapping, both in the case in which the identification considered preserves the creation and annihilation operations and in the case in which bosonic/fermionic creation operators are mapped into fermionic/bosonic annihilation operators.
Moreover, we discuss the role played by the Grassmann-like variables underlying the mapping in realising gauge invariance
in second quantization, within our procedure. In Section <ref> we provide the application to the case of the bosonic and fermionic harmonic oscillator Hamiltonians.
Section <ref> is devoted to the conclusion. | null | null | null | null | In this work, we have presented an algebraic approach to the mapping of Lie algebras 𝒢_B of bosonic creation and annihilation operators into algebras 𝒢_F of fermionic creation and annihilation operators, and vice versa.
We have introduced a specific identification criterion, inherited from a Lie algebra expansion method – known as S-expansion – and adapted to our purposes, between the bosonic and the fermionic generators. We have then used the (anti)commutation relations of the bosonic and fermionic algebras in order to determine the algebraic structure underlying the mapping.
The latter corresponds to a graded, deformed Grassmann algebra, involving anticommuting Grassmann-type variables.
The deformation is due to an extra relation among the elements of the algebra necessary for the consistency of the construction based on the identification criterion adopted case by case.
We have presented different mappings, all based on a Grassmann-type algebra, that either preserve the creation/annihilation operations or exchange them. In both cases, we have analysed the mapping of bosonic to fermionic operators and its inverse.
We have then discussed the role played by the Grassmann-type variables concerning gauge invariance in second quantization.
We have found that, within our procedure for the various mappings, the bosonic/fermionic creation and annihilation operators can be seen as dressed operators, where the dressing is provided by the Grassmann-type variables underlying the mapping, hence interpretable as dressing Grassmannian variables.
Objects of this kind might also be extracted from the physical content of the model by considering, e.g., the polar decomposition of (complex) eigenvalues, which separates magnitude and phase.
We leave such analysis to future investigations.
We have also provided an example of application to the case of the Hamiltonians of the bosonic and fermionic harmonic oscillators, showing that they can be seen as dressed objects, derived from the – bare – commutation (anticommutation) relations between the fermionic (bosonic) creation and annihilation operators.
Future perspectives include the application/extension of the algebraic method developed here within a field-theoretic context and the analysis of various Hamiltonian systems (e.g., Hubbard models) by means of the mappings proposed here, considering also on-site and inter-site interaction terms.
Finally, it would be interesting to see if and how our approach actually makes contact with supergeometry and/or supersymmetry, in particular concerning the way in which we recover gauge invariance, especially in the context of the so-called unconventional supersymmetry (Ususy) <cit.>, which has been shown to play a relevant role in
the construction of analogue (supergravity) models, providing a macroscopic description
of the electronic properties of graphene-like materials. There, supersymmetry is not manifest, but the description of these kind of systems is still derived starting from a supergeometric setup and exploiting the so-called matter Ansatz, constructed with a bosonic field and a spin-1/2 fermion field, invariant under Nieh-Yan-Weyl transformations <cit.>. Models exhibiting Ususy do not require the matching of bosonic and fermionic degrees of freedom typical of supersymmetric theories; they involve dynamical spin-1/2 fermion fields and are particularly appealing because they are based on a Weyl-invariant action. Future investigations in this direction will be carried out under a field-theoretical perspective, at both classical and quantum level. |
http://arxiv.org/abs/2409.17239v1 | 20240925180007 | LensWatch: II. Improved Photometry and Time Delay Constraints on the Strongly-Lensed Type Ia Supernova 2022qmx ("SN Zwicky") with HST Template Observations | [
"Conor Larison",
"Justin D. R. Pierel",
"Max J. B. Newman",
"Saurabh W. Jha",
"Daniel Gilman",
"Erin E. Hayes",
"Aadya Agrawal",
"Nikki Arendse",
"Simon Birrer",
"Mateusz Bronikowski",
"John M. Della Costa",
"David A. Coulter",
"Frédéric Courbin",
"Sukanya Chakrabarti",
"Jose M. Diego",
"Suhail Dhawan",
"Ariel Goobar",
"Christa Gall",
"Jens Hjorth",
"Xiaosheng Huang",
"Shude Mao",
"Rui Marques-Chaves",
"Paolo A. Mazzali",
"Anupreeta More",
"Leonidas A. Moustakas",
"Ismael Pérez-Fournon",
"Tanja Petrushevska",
"Frédérick Poidevin",
"Armin Rest",
"Anowar J. Shajib",
"Raphael Shirley",
"William Sheu",
"Louis-Gregory Strolger",
"Sherry H. Suyu",
"Tommaso Treu",
"Yossef Zenati"
] | astro-ph.HE | [
"astro-ph.HE",
"astro-ph.CO",
"astro-ph.GA"
] |
0000-0003-2037-4619]C. Larison
Conor Larison
[email protected]
Department of Physics & Astronomy, Rutgers, State University of New Jersey, 136 Frelinghuysen Road, Piscataway, NJ 08854, USA
0000-0002-2361-7201
]J. D. R. Pierel
Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA
0000-0002-8092-2077]M. J. B. Newman
Department of Physics & Astronomy, Rutgers, State University of New Jersey, 136 Frelinghuysen Road, Piscataway, NJ 08854, USA
0000-0001-8738-6011]S. W. Jha
Department of Physics & Astronomy, Rutgers, State University of New Jersey, 136 Frelinghuysen Road, Piscataway, NJ 08854, USA
0000-0002-5116-7287]D. Gilman
Department of Astronomy & Astrophysics, University of Chicago, Chicago, IL 60637, USA
0000-0003-3847-0780]E. E. Hayes
Institute of Astronomy and Kavli Institute for Cosmology, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK
0009-0008-1965-9012]A. Agrawal
Department of Astronomy, University of Illinois at Urbana-Champaign, 1002 W. Green St., IL 61801, USA
0000-0001-5409-6480]N. Arendse
Oskar Klein Centre, Department of Physics, Stockholm University, SE-10691 Stockholm, Sweden
0000-0003-3195-5507]S. Birrer
Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY 11794, USA
0000-0002-1537-6911]M. Bronikowski
Center for Astrophysics and Cosmology, University of Nova Gorica, Vipavska 11c, 5270 Ajdovščina, Slovenia
0000-0001-6711-8140]S. Chakrabarti
Department of Physics and Astronomy, University of Alabama, Huntsville, Huntsville, Alabama 35899
0000-0003-0928-2000]J. M. Della Costa
NSF NOIRLab, Tucson, AZ 85719, USA
0000-0003-4263-2228]D. A. Coulter
Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA
0000-0003-0758-6510]F. Courbin
ICC-UB Institut de Ciències del Cosmos, Universitat de Barcelona, Martí Franquès, 1, E-08028 Barcelona, Spain
ICREA, Pg. Lluís Companys 23, Barcelona, E-08010, Spain
0000-0002-2376-6979]S. Dhawan
Institute of Astronomy and Kavli Institute for Cosmology, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK
0000-0001-9065-3926]J. M. Diego
Instituto de Física de Cantabria (CSIC-UC), Avda. Los Castros s/n, 39005 Santander, Spain
0000-0002-8526-3963]C. Gall
DARK, Niels Bohr Institute, University of Copenhagen, Jagtvej 155A, 2200 Copenhagen, Denmark
0000-0002-4163-4996]A. Goobar
Oskar Klein Centre, Department of Physics, Stockholm University, SE-10691 Stockholm, Sweden
0000-0002-4571-2306]J. Hjorth
DARK, Niels Bohr Institute, University of Copenhagen, Jagtvej 155A, 2200 Copenhagen, Denmark
0000-0001-8156-0330]X. Huang
Department of Physics & Astronomy, University of San Francisco, San Francisco, CA 94117
0000-0001-5975-290X]J. Johansson
Oskar Klein Centre, Department of Physics, Stockholm University, SE-10691 Stockholm, Sweden
0000-0001-8317-2788]S. Mao
Department of Astronomy, Tsinghua University, Beijing 100084 China
0000-0001-8442-1846]R. Marques-Chaves
Department of Astronomy, University of Geneva, 51 Chemin Pegasi, 1290 Versoix, Switzerland
0000-0001-6876-8284]P. A. Mazzali
Astrophysics Research Institute, Liverpool John Moores University, IC2, Liverpool Science Park, 146 Brownlow Hill, Liverpool L3 5RF, UK
Max-Planck-Institut für Astrophysik, Karl-Schwarzschild Straße 1, 85748 Garching, Germany
0000-0001-7714-7076]A. More
Inter-University Centre for Astronomy and Astrophysics, Ganeshkhind, Pune 411007, India
Kavli Institute for the Physics and Mathematics of the Universe (WPI), University of Tokyo, 5-1-5, Kashiwa, Chiba 277-8583, Japan
0000-0003-3030-2360]L. A. Moustakas
Jet Propulsion Laboratory, California Institute of Technology
0000-0002-2807-6459]I. Pérez-Fournon
Instituto de Astrofísica de Canarias, C/Vía Láctea, s/n, E-38205 San Cristóbal de La Laguna, Tenerife, Spain
Universidad de La Laguna, Dpto. Astrofísica, E-38206 San Cristóbal de La Laguna, Tenerife, Spain
0000-0003-4743-1679]T. Petrushevska
Center for Astrophysics and Cosmology, University of Nova Gorica, Vipavska 11c, 5270 Ajdovščina, Slovenia.
0000-0002-5391-5568]F. Poidevin
Instituto de Astrofísica de Canarias, Vía Láctea, 38205 La Laguna, Tenerife, Spain
Universidad de La Laguna, Departamento de Astrofísica, 38206 La Laguna, Tenerife, Spain
0000-0002-4410-5387]A. Rest
Johns Hopkins University, William H. Miller III Department of Physics and Astronomy, 3400 North Charles St., Baltimore, MD 21218, USA
Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA
0000-0002-5558-888X]A. J. Shajib
Department of Astronomy & Astrophysics, University of Chicago, Chicago, IL 60637, USA
Kavli Institute for Cosmological Physics, University of Chicago, Chicago, IL 60637, USA
0000-0002-1114-0135]R. Shirley
Max-Planck-Institut für extraterrestrische Physik, Giessenbachstr. 1, 85748 Garching, Germany
0000-0002-7756-4440]L. G. Strolger
Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA
0000-0001-5568-6052]S. H. Suyu
Technical University of Munich, TUM School of Natural Sciences, Physics Department, James-Franck-Str. 1, 85748 Garching, Germany
Max-Planck-Institut für Astrophysik, Karl-Schwarzschild Straße 1, 85748 Garching, Germany
0000-0002-8460-0390]T. Treu
Physics and Astronomy Department University of California Los Angeles CA 90095
0000-0002-0632-8897]Y. Zenati
Johns Hopkins University, William H. Miller III Department of Physics and Astronomy, 3400 North Charles St., Baltimore, MD 21218, USA
Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA
§ ABSTRACT
Strongly lensed supernovae (SNe) are a rare class of transient that can offer tight cosmological constraints that are complementary to methods from other astronomical events. We present a follow-up study of one recently-discovered strongly lensed SN, the quadruply-imaged Type Ia SN 2022qmx (aka, “SN Zwicky”) at z=. We measure updated, template-subtracted photometry for and derive improved time delays and magnifications. This is possible because SNe are transient, fading away after reaching their peak brightness. Specifically, we measure point spread function (PSF) photometry for all four images of in three Hubble Space Telescope WFC3/UVIS passbands (F475W, F625W, F814W) and one WFC3/IR passband (F160W), with template images taken ∼ 11 months after the epoch in which the SN images appear. We find consistency to within 2σ between lens model predicted time delays (≲1 day), and measured time delays with colors (≲2 days), including the uncertainty from chromatic microlensing that may arise from stars in the lensing galaxy. The standardizable nature of SNe Ia allows us to estimate absolute magnifications for the four images, with images A and C being elevated in magnification compared to lens model predictions by about 6σ and 3σ respectively, confirming previous work. We show that millilensing or differential dust extinction is unable to explain these discrepancies and find evidence for the existence of microlensing in images A, C, and potentially D, that may contribute to the anomalous magnification.
§ INTRODUCTION
Supernovae (SNe) that have been multiply-imaged by strong gravitational lensing are naturally rare and revealing astronomical events. They require an unlikely alignment along the line of sight between an observer, the background source that is lensed, and the foreground lens (galaxy or cluster). This, combined with the fact that most SNe rise and fade over the course of a few rest-frame weeks, makes strongly lensed SNe extremely elusive.
The geometry of the lensing system and gravitational potential differences across the lens plane determine the delay in arrival of the SN images relative to one another. Measurements of this “time delay” can provide an angular diameter distance, which in turn can constrain the Hubble constant (H_0) and the dark energy equation of state (w) directly <cit.>. Strongly lensed SNe have several advantages over strongly lensed quasars, which have often been used for time-delay measurements <cit.>:
* SNe fade on a short timescale (over weeks to months), so more accurate models of the lensing system can be made post-SN discovery, as the source and lens fluxes would remain highly blended otherwise <cit.>.
* SNe (especially of Type Ia) have predictable light curves, thus simplifying time-delay measurements over a more stochastic system such as an active galactic nucleus (AGN).
* Strongly lensed SNe are also affected by microlensing <cit.>; however, chromatic effects are mitigated given sufficient early-time light curve coverage <cit.>. Microlensing can still be a significant source of uncertainty for SNe with small time delays <cit.> as we will show later.
* Perhaps most important in terms of logistical constraints is that lensed SNe require much shorter observing campaigns than lensed AGN, potentially going from a decade of extended observations for a single lensed AGN down to a few epochs over a year for a strongly lensed SN <cit.>.
These advantages been shown and explored for decades <cit.>, but observations to date have not been optimized for this type of phenomenon, as evidenced by only a few detections being made by recent wide-field astronomical surveys <cit.>. Despite few discoveries, we now have a confirmed sample of eight multiply-imaged SNe (SN Refsdal, SN 2016geu, SN Requiem, AT 2022acfv, SN Zwicky, SN 2022riv, SN H0pe, and SN Encore), including two that are lensed by a galaxy <cit.> and six that have been lensed by a foreground galaxy cluster <cit.>. Our expanding sample size is a direct result of the combined efforts of many dedicated programs to find these rare events <cit.>.
Type Ia SNe (SNe Ia) are especially useful as strongly lensed sources as we possess well-constrained templates of their light curves <cit.>, which can be used to standardize their brightness <cit.>. This standardizability can be used to break the mass-sheet degeneracy, a large systematic effect where one could add additional sheets of mass to the lensing plane without influencing the geometry of the system <cit.>, albeit only when millilensing and microlensing are mostly mitigated <cit.>.
We have been fortunate enough to discover three cluster-scale SN lensing systems with time delays that can precisely measure H_0: SN Refsdal, SN H0pe and SN Encore <cit.>. In contrast, the two existing galaxy-scale systems with lensed SNe do not currently provide such precise cosmological results, but they remain important to investigate, as they are expected to be much more common than these cluster-scale lenses in the era of the Vera C. Rubin Observatory <cit.>. With a larger statistical sample of such systems, it then becomes possible to measure H_0 to a percent level, enough to make it competitive in the current field <cit.>. However, in order to achieve this level of precision, we must first measure the time delays of each of our SN systems in the sample to a similarly high precision. To do this, we must better understand the impacts of microlensing to our error budget, the shortfalls of our current lens models on galaxy-galaxy systems, and the importance of follow-up campaigns to increase the quality and quantity of our photometry.
In this paper, we use additional observations to investigate one such galaxy-scale system, . was discovered in 2022 August by the Zwicky Transient Facility <cit.>[<https://www.wis-tns.org/object/2022qmx>], and was subsequently classified and analyzed by <cit.>. The time delays, magnifications, and lens models of the SN & galaxy-galaxy system were then analyzed and reported in <cit.>. This work represents the second paper in a series of papers for the LensWatch program[<https://www.lenswatch.org>], a direct follow-up to that provides improved time-delay and magnification measurements of from template-subtracted photometry. Section <ref> presents the template observation characteristics of . Our analysis of (including photometry and measurements of time delays and magnifications, as well as analysis investigating microlensing and millilensing) are reported in Section <ref>. Finally, we conclude with a discussion of the implications of these results in Section <ref>.
§ LENSWATCH OBSERVATIONS OF WITH HST
§.§ LensWatch Observations Post-discovery
As summarizes, roughly 12 days after the spectroscopic classification of , we used a non-disruptive target of opportunity (ToO) trigger to obtain WFC3/UVIS and IR images of the lensing system. Observations of were made with WFC3/UVIS (0.04/pix) to resolve the multiple images, specifically in the F475W, F625W, and F814W filters to provide non-overlapping coverage across the full optical wavelength range (∼ 3,500–6,000 in the rest-frame). Additionally, we included WFC3/IR F160W observations to provide overall calibration to ground-based NIR data and potentially useful information about the lensing system.
§.§ Template Observations
As part of the Cycle 28 LensWatch program, we are able to take template observations of lensed SN targets after the SN light has faded. As mentioned in Section <ref>, one of the main advantages of studying strongly lensed SNe is that they are transient events that almost completely fade on the order of months (depending on the SN type and observed filter). Template observations were taken August 10, 2023, approximately 11 months after the LensWatch ToO observations of . This corresponds to approximately 8 months in the supernova rest-frame, when a typical SN Ia has an optical luminosity less than about 1% of its peak <cit.>, a negligible contribution for our measurements.
Templates were obtained in all four filters used and summarized in : WFC3/UVIS F475W, F625W, F814W, and WFC3/IR F160W. To avoid any unnecessary issues with aligning and scaling SN image and template observations, we used the same dithering patterns, pointings, position angles, and exposure times as were used in the initial observations. Each filter had three dither positions, resulting in a total of 12 exposures across all filters.
§ ANALYSIS AND RESULTS
§.§ Template Photometry
For strongly lensed transients discovered in the future, it may not always be possible to obtain template images in all observed filters, although workarounds are being investigated, including using data from Euclid (Akhaury et al., in prep.). Therefore, it is important to examine how well we can measure image fluxes with and without the template photometry we have available.
In order to do PSF photometry, we first subtract our template images from the exposures containing the SN images. Due to careful planning of our follow-up template epochs, we were able to align the images using a simple pixel offset without a need to align to a stellar or SDSS object catalog for the UVIS filters (we will discuss the alignment process for the IR filter later, as it differed in scope). We were also able to make subtractions without scaling our observations due to the stability of the HST instruments and our consistent exposure times.
Our subtractions were done using the SN and template WFC3/UVIS “FLC” images, which are individual exposures that have been bias-subtracted, dark-subtracted, and flat-fielded but not yet corrected for geometric distortion. Because the PSF solution can vary across the detector, it is important to use these images that are still uncorrected for distortion for PSF photometry instead of final “drizzled” products, which can introduce inconsistencies into the modeling of a PSF. We use the standard PSF models[<https://www.stsci.edu/hst/instrumentation/wfc3/data-analysis/psf>] to represent the PSF, which also take into account spatial variation across the detector.
For our WFC3/IR subtractions, we used the “FLT” image, which lacks a charge transfer efficiency (CTE) correction that is necessary for UVIS images, but is not applicable to the IR data.
For each UVIS filter, we use centroiding to determine the positions of all four SN images across the three FLCs. We also estimate a uniform background for each image position, using a mode estimator algorithm. For each UVIS filter, we then perform forced PSF photometry by implementing a Bayesian nested sampling routine[dynesty: <https://dynesty.readthedocs.io/en/stable/>] to constrain the (common) SN flux in all three FLCs for all four SN images using the package, space_phot[<https://space-phot.readthedocs.io/en/latest/>] <cit.>. Each PSF was fit to the multiple SN images within a 5×5 pixel square to limit the contamination from the other SN images, which captures ∼99% of the total SN flux given the FWHM of ∼ 2 pixels.
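The forced-photometry step can be illustrated with a minimal, self-contained sketch of the shared-flux likelihood sampled with dynesty for a single SN image; this is not the space_phot interface itself, and the synthetic 5×5 cutouts, flat PSF stand-ins, sky levels, and prior bound are placeholders for illustration only.

import numpy as np
from dynesty import NestedSampler, utils as dyfunc

rng = np.random.default_rng(0)
# Synthetic stand-ins for the three 5x5 FLC cutouts of one SN image (placeholders)
psfs = [np.full((5, 5), 1.0 / 25.0) for _ in range(3)]   # unit-normalized "PSF" models
skies = [0.5, 0.5, 0.5]                                  # uniform background estimates
stamps = [800.0 * p + s + rng.normal(0.0, 1.0, (5, 5)) for p, s in zip(psfs, skies)]
errs = [np.ones((5, 5)) for _ in range(3)]

def loglike(theta):
    flux = theta[0]   # single SN flux shared by all three exposures
    chi2 = sum(np.sum(((d - (flux * p + s)) / e) ** 2)
               for d, p, s, e in zip(stamps, psfs, skies, errs))
    return -0.5 * chi2

def prior_transform(u):
    return np.array([u[0] * 5000.0])   # flat prior on the flux

sampler = NestedSampler(loglike, prior_transform, ndim=1)
sampler.run_nested(print_progress=False)
res = sampler.results
weights = np.exp(res.logwt - res.logz[-1])
flux_posterior = dyfunc.resample_equal(res.samples, weights)[:, 0]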
The final measured flux is the integral of each PSF model; the corrected fluxes are then converted to AB magnitudes using the time-dependent inverse sensitivity and filter pivot wavelengths provided with each data file. The final measured magnitudes and colors are reported in Table <ref> and compared to the same measurements obtained in . We note that the uncertainties on the values from are underrepresented, as systematic uncertainties due to the variable background from the lens galaxy were not taken into account. Therefore, the differences seen between these and the updated measurements are less statistically significant than they may first appear. The updated fluxes are larger than the previous measurements due to an oversubtraction of the background for the latter, which resulted in large residuals at the image positions, as seen in Figure 7 of .
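As a sketch of that conversion step (assuming the PSF-fit flux is in electrons per second and that the photometric keywords can be read from the FLC science-extension or primary header; the file name and example flux are placeholders):

import numpy as np
from astropy.io import fits

def flc_flux_to_abmag(flc_path, flux_e_per_s):
    """Convert a PSF-fit flux (e-/s) to an AB magnitude using the WFC3 keywords."""
    with fits.open(flc_path) as hdul:
        hdr = hdul[1].header if 'PHOTFLAM' in hdul[1].header else hdul[0].header
        photflam = hdr['PHOTFLAM']   # inverse sensitivity, erg cm^-2 A^-1 per electron
        photplam = hdr['PHOTPLAM']   # filter pivot wavelength in Angstroms
    # Standard HST AB zero point: ZP_AB = -2.5 log10(PHOTFLAM) - 5 log10(PHOTPLAM) - 2.408
    zp_ab = -2.5 * np.log10(photflam) - 5.0 * np.log10(photplam) - 2.408
    return -2.5 * np.log10(flux_e_per_s) + zp_ab

# mag = flc_flux_to_abmag('sn_f625w_flc.fits', 12.3)   # placeholder usage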
We also perform PSF photometry for our WFC3/IR F160W subtractions. We align each original FLT image to its template image using Astroalign and then create subtractions <cit.>. Our fitting method with space_phot does not handle blended sources, as the PSFs are all fit and subtracted individually, so we turn to a new method using <cit.>.
was created to provide accurate PSF photometry for stellar sources in crowded fields in images. Unlike the PSF fitting routine described above, provides iterative PSF photometry, fitting the brighter PSF sources and then subtracting them before fitting fainter nearby sources. We attempt thus to apply to our F160W data. First, we take the inner portion of our subtracted FLT images and replace the centers of our template images with these subtracted “stamps”, so that the sky gradient remains roughly continuous for to accurately measure the sky background. Then, we pedestal that subtracted region to match the background of the template image, so that the sky gradient on the full image is smooth.
In order to run on the F160W frames, in which the SN images are not very well separated, we rely on accurate positions from the original F475W FLT images that have clearly resolved SN images instead. To do this, we do a first-pass at alignment between the F475W drizzled image and the F160W drizzled image using TweakReg <cit.>. We then run on all the F475W frames and the F160W frames simultaneously, with the F475W drizzled image as the reference image for alignment. The PSF photometry obtained for the F475W images agrees with what was obtained by to within 1σ, thus providing a check on the accuracy of the method compared to the method outlined above. Adding the measured PSF fluxes from and using aperture photometry on the full, unresolved flux from the four SN images in the F160W band, we find that the measured fluxes account for ∼98% of the total observed flux.
We show the results of our PSF modelling and subtraction, as well as the template subtractions in Figure <ref>.
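The WFC3/IR alignment and subtraction step described above can be sketched as follows; the file names and extension index are assumptions, and in practice each dithered FLT frame is registered and subtracted individually as in the text.

import numpy as np
import astroalign as aa
from astropy.io import fits

sci = fits.getdata('sn_f160w_flt.fits', ext=1).astype(float)         # epoch containing the SN images
tmpl = fits.getdata('template_f160w_flt.fits', ext=1).astype(float)  # template epoch

# Register the template onto the SN frame; the returned footprint marks
# pixels with no valid data after the transformation.
registered_tmpl, footprint = aa.register(tmpl, sci)

diff = sci - registered_tmpl
diff[footprint] = np.nan   # mask pixels outside the overlap region
fits.writeto('diff_f160w.fits', diff, overwrite=True)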
We postulate that the brighter the SN image is compared to the total flux of the contaminating lens + host galaxy, the less the photometry will change with a template image. Comparing the template-subtracted photometry to the original, we expect the brighter SN images to be affected less by the background light. In Figure <ref> we test this by examining the ratio of the image fluxes after and before template subtraction as a function of the ratio of the old flux to the lens flux. We find that the flux change becomes negligible once the ratio of the old flux to the lens flux reaches 1.28 ± 0.55. Therefore, if the flux of the SN is about equal to, or greater than, the contaminating flux of the lens + host galaxies, the photometry will probably be robust to changes with template imaging. In the case of , we do not have photometry past this critical point, so we cannot test if the bias from the contaminating flux persists, but this would be an important test for future galaxy-scale lens systems.
§.§ Updated Time Delays and Magnifications from Template Photometry
We use the updated photometry from Table <ref> to constrain the time delays and magnifications for the multiple images of in the manner of <cit.>; . As summarized in , measuring the difference in time of peak brightness for each image directly <cit.> is not possible with a single epoch, so we instead constrain the age of each SN image given a single light curve model. The relative age difference for each image is also a measure of the time delay, though we note this method is only possible because we have a reliable model for the light (and color) curve evolution as is a SN Ia.
We fit the photometry of the multiple images simultaneously using the software package <cit.>, where we also include Milky Way dust extinction (E(B-V)=0.16 mag, R_V=3.1) based on the maps of <cit.> and extinction curve of <cit.>. We also include additional uncertainty introduced by chromatic microlensing in the same manner as , which used the simulations of <cit.>. These are ∼0.05, 0.05, and 0.11 mag of additional color uncertainty in rest-frame U-B (∼F475W-F625W), B-V (∼F625W-F814W), and U-V (∼F475W-F814W) respectively. We add these uncertainties in quadrature to the color uncertainties from our photometric measurements for the fitting process. We note that this is by far the largest source of uncertainty in our time-delay measurements. We do not have estimates for chromatic microlensing uncertainty for the colors containing the F160W band; therefore, we fit only with the WFC3/UVIS bands, which is also a more direct comparison to . However, in Section <ref>, we will look at the predictions that an extended SALT2 model fit with the WFC3/UVIS bands makes for the F160W point and what this may tell us about chromatic microlensing.
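A minimal sketch of how such a model can be set up in SNCosmo follows, assuming the Fitzpatrick (1999) law for the Milky Way component; the redshift and the measured color uncertainties below are placeholders, while E(B-V)=0.16, R_V=3.1, and the microlensing terms are the values quoted above.

import numpy as np
import sncosmo

mw_dust = sncosmo.F99Dust(r_v=3.1)
model = sncosmo.Model(source='salt2',
                      effects=[mw_dust], effect_names=['mw'], effect_frames=['obs'])
model.set(z=0.35, mwebv=0.16)   # placeholder redshift; Milky Way E(B-V)=0.16 mag

# Inflate the measured color uncertainties with the chromatic-microlensing terms in quadrature
sigma_micro = {'F475W-F625W': 0.05, 'F625W-F814W': 0.05, 'F475W-F814W': 0.11}
sigma_meas = {'F475W-F625W': 0.03, 'F625W-F814W': 0.03, 'F475W-F814W': 0.04}  # placeholders
sigma_total = {k: np.hypot(sigma_meas[k], sigma_micro[k]) for k in sigma_micro}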
We follow the methods outlined in detail by , originally adapted from the analysis of <cit.>, to measure time delays for . This process uses the SN color curves to constrain the time delay (with the “Color” method), and then fits for relative magnifications using a nested sampling method using SNCosmo and assuming a SALT2 SN model <cit.>. In this case, we will use the most up-to-date version in the literature <cit.>. A well-sampled, unresolved light curve exists for and in was fit with SALT2 to give t_pk=, c=, and x_1=. We therefore follow and allow the t_pk parameter, which here describes the time of peak for image A, to vary only within fifteen days of . We also fix x_1 to the parameter derived by (x_1=), mainly to ensure an accurate light curve standardization. After these time delays have been measured with the Color method, we fix all best-fit parameters and find each image's apparent magnitude parameter, m_B, from a fit to the SALT2 model.
As a check, we also fit for time delays using GausSN, a Gaussian Process-based time delay fitting code <cit.>. We use a version of GausSN that allows us to leverage a template SED; in this case we use SALT2. To minimize systematics from template choice, GausSN allows for chromatic deviations from the SN light curve template. This added flexibility of the model leads to larger uncertainties on the time delay of SN Zwicky, as there is only one epoch of data per image to constrain any such deviations. Therefore, we have to rely on stronger assumptions about the true underlying shape of the light curve, as we do with the Color method, to get a more precise constraint on the time delay and magnification. Although we find time delays with GausSN that are consistent with those from the Color method, we proceed with our Color-method results for comparison with the results from .
Following our above method, we convert the m_B parameter values we obtained to absolute magnitudes by applying a fiducial SN Ia standardization to . Specifically, we apply light curve corrections using the SALT2 parameterization for stretch (x_1=, with luminosity coefficient α=0.14) and our measured color (c = 0.11 with a luminosity coefficient of β=3.1) in the manner of <cit.> to obtain absolute magnitude estimates. We then compare the distance modulus of each image to the value predicted by a flat ΛCDM model (with H_0=70 km s^-1 Mpc^-1, Ω_m=0.3) for an average SN Ia <cit.> at z= converted to the CMB frame. We combine the statistical uncertainties on each measured magnification with a systematic uncertainty based on the intrinsic scatter of SN Ia absolute magnitudes <cit.>. Although different values for each of these coefficients could be used based on the study and sample, we choose these values to maintain consistency with . To be specific, we apply the standardization using the equation:
μ_obs = m_B + α x_1 - β c - M_B
where μ_obs represents the inferred distance modulus, and
α, β, and M_B are the assumed values we discuss above.
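In code, this standardization and the comparison to the unlensed ΛCDM expectation amount to the following sketch; the fiducial M_B and the example inputs are assumptions, while α, β, H_0, and Ω_m are the values adopted above.

import numpy as np
from astropy.cosmology import FlatLambdaCDM

def image_magnification(m_B, x1, c, z_cmb, alpha=0.14, beta=3.1, M_B=-19.36):
    """Tripp-standardized distance modulus versus the unlensed LCDM expectation."""
    mu_obs = m_B + alpha * x1 - beta * c - M_B          # the equation above
    mu_lcdm = FlatLambdaCDM(H0=70, Om0=0.3).distmod(z_cmb).value
    # A magnified image appears brighter, so mu_obs < mu_lcdm and the ratio exceeds unity
    return 10.0 ** (0.4 * (mu_lcdm - mu_obs))

# mu_A = image_magnification(m_B=22.5, x1=-0.5, c=0.11, z_cmb=0.35)   # placeholder inputs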
The updated measured time delays and magnifications (with subscript “updated”) are shown in Table <ref> compared with the measured and lens model-predicted values from (with subscripts “P23 meas” and “P23 pred” respectively). The model-predicted values we present are the joint modeling results from the “Final" column of Table 3 in , which represents the weighted averages of the results from the individual model results, and are not from an individual lens model. The posterior distributions for all parameters fit with (using the conversions listed above) are shown in Figures <ref> and <ref>. While the relative time delays are too small and associated uncertainties too large to provide a useful direct cosmological constraint, these results provide a valuable check on our lens modeling predictions.
§.§ Insights From F160W Photometry
Microlensing simulations in the F160W band are beyond the scope of this project. Moustakas et al. (in prep.) and Arendse et al. (in prep.) will present in-depth microlensing analyses of . Nevertheless, with this new data we can now explore deviations from our lens model and SN light curve predictions in the near-infrared, which was previously not possible.
We can take the value we obtain for the color, c=0.11, from the Color method and create a prediction for the F160W band for all four of our SN images, fitting with the SALT2-extended model, a version of SALT2 trained on SN Ia near-infrared (NIR) photometry <cit.>. We show the results of this method in Figure <ref>. The SALT fits show that the predicted fluxes for all images appear to be higher than what we actually measure for our single epoch. However, there is a large amount of model uncertainty (shown as the gray regions in Figure <ref>). For images A, C, and D, there is a ≳1σ discrepancy; however, for image B the offset is only about 0.5σ. We also find a similar disagreement when fitting SALT3-NIR, another SALT template fitter that includes NIR training <cit.>. Unfortunately, this model also includes significant model uncertainty at the epoch of our observations, thus providing a similar decrement in flux as with SALT2-extended.
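A sketch of that prediction step with SNCosmo, assuming its built-in 'salt2-extended' source and registered 'f160w' bandpass; all parameter values and the observation epoch below are placeholders standing in for the best-fit values of a given image.

import sncosmo

model = sncosmo.Model(source='salt2-extended')
model.set(z=0.35, t0=59800.0, x0=1e-5, x1=-0.5, c=0.11)   # placeholder parameters

t_obs = 59815.0                                            # placeholder MJD of the HST epoch
pred_mag_f160w = model.bandmag('f160w', 'ab', t_obs)       # predicted F160W AB magnitude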
Apart from our SN models, we can also look at the lens model predictions for individual image fluxes based on the total flux of the combined four images in the F160W band. Because we know that our time delays are small, on the order of ∼1 day, we approximate that the de-magnified flux of each image is the same. Therefore, we use the total integrated flux of all four images combined to predict the flux of each image using lens model predicted magnifications. For a total flux, f_tot, a demagnified flux that is equal for all images, f, and predicted magnifications for each image, μ_i: i ∈{ A | B | C | D }, the flux, f, and individual predicted fluxes with magnifications can be approximated as,
f = f_tot/∑_iμ_i
f_i = f μ_i
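As a minimal numerical version of this bookkeeping (all numbers are placeholders):

import numpy as np

f_tot = 1.0                                  # placeholder: total F160W flux of the four images
mu_pred = np.array([4.8, 3.1, 4.1, 2.9])     # placeholder lens-model magnifications for A-D
f_demag = f_tot / mu_pred.sum()              # common de-magnified flux (negligible time delays)
f_pred = f_demag * mu_pred                   # predicted flux of each image, to compare with data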
We measure the total aperture flux of the four combined images in the F160W frames and use the predicted magnifications for each image from the joint lensing predictions in to make predictions for the fluxes of each image. We then compare these predictions to the measured fluxes from our photometry. We show the results in Figure <ref>. We find that there is an agreement/disagreement between the joint modeling predictions and the measured photometry of ∼ 2.67σ, -0.74 σ, 0.57σ, and -1.54σ for images A–D respectively, showing tight agreement for images B and C but a potential offset for images A and D. Each individual lens model also fails to predict the F160W flux for images A and D, while the joint predictions have the least significant offset.
§.§ Millilensing Test
In order to help explain the discrepancies in absolute magnifications, we explore the possible presence of millilensing, where dark matter subhalos along the line of sight (LOS) to the images may be adding additional magnification to our flux estimates <cit.>.
We use pyHalo[<https://github.com/dangilman/pyHalo>] <cit.>, an open-source Python package that generates realistic simulations of dark matter substructure around image positions and along the lines of sight to the SN images and measures magnifications using lenstronomy. Working off a framework made for the cluster analysis of <cit.>, we run three sets of 1,000 realizations to test the overall effect of millilensing. This model assumes a convergence and shear at each image position, which we set based on the results from the four lens models in to test the impact of millilensing on each of them. We also set the bounds of simulated halo and subhalo masses to m_L = 10^5.5 M_⊙ and m_H = 10^9 M_⊙, where m_L is the lower mass limit and m_H the higher. These limits were based on results from <cit.>, who carried out tests with a lower minimum mass threshold and showed that any mass lower than that which we assume would not impact the results. Our upper limit is based on the assumption that halos more massive than 10^9 M_⊙ would host a visible galaxy. The amount of substructure present in a pyHalo simulation is set by the parameter f_sub, which specifies what percentage of the dark matter affecting the image magnifications is in dark matter subhalos. To test three degrees of millilensing in the system, we assume three different values of f_sub: 1%, 5%, and 10% for our three runs, which are based on both theoretical and observational results <cit.>. The simulated halos and subhalos have a Navarro-Frenk-White (NFW: <cit.>) mass profile. For the lens plane subhalos, we assume a power-law subhalo mass function and place subhalos uniformly around each image. For the LOS halos, we assume a Sheth-Tormen mass function and place subhalos in a cylindrical volume around each image <cit.>. The results of the simulations for the “LS1” model from are shown in Figure <ref>, which presents the absolute magnification distributions from the three configurations of f_sub, as well as the smooth model results from lenstronomy. The uncertainties in the magnifications due to millilensing are of similar magnitude across all four models. The largest impact appears to be for image D, which shows a 1σ uncertainty that is a few percent of the median value. For images A and C, the images that have the largest discrepancies between predicted and measured magnifications across all lens models, there are uncertainties due to millilensing of ≲ 1% in the runs with the largest spread, f_sub = 10%. Thus, we find that the effects due to millilensing are negligible compared to the macromagnification uncertainties.
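The quoted spreads can be summarized directly from the stored magnification samples along the following lines; the dictionary of realization outputs is a placeholder for whatever container the pyHalo/lenstronomy runs produce.

import numpy as np

def summarize_millilensing(mu_samples):
    """mu_samples maps each f_sub value to an (n_realizations, 4) array of image magnifications."""
    summary = {}
    for f_sub, mu in mu_samples.items():
        med = np.median(mu, axis=0)
        lo, hi = np.percentile(mu, [16, 84], axis=0)
        summary[f_sub] = {'median': med,
                          'frac_1sigma': (hi - lo) / (2.0 * med)}   # fractional scatter per image
    return summary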
We can make sense of this small effect in the case of , as the redshift of the lensed SN is at . This, combined with the fact that this is a galaxy-scale lensing system with a relatively small Einstein radius (θ_E ∼ 0.168": ), makes a chance alignment of a massive subhalo along the LOS less likely. There is a chance, however, that a dark matter subhalo could fall within one Einstein radius of a SN image, therefore causing a substantial difference in magnification, although our simulations show that this would be very unlikely. Such anomalous magnifications have been used to identify millilenses in the past <cit.>. In the case of , this would have to account for the aberrant magnifications in both images A and C, which both lie close to the center of the lens galaxy, where tidal stripping and heating reduce the possibility of substructures that could act as millilenses.
§ DISCUSSION AND CONCLUSIONS
We have presented updated photometry in the WFC3/UVIS F475W, F625W, and F814W bands for , as well as new WFC3/IR F160W photometry, by using template images obtained by the LensWatch collaboration. We use the resulting colors with an additional uncertainty due to chromatic microlensing to infer time delays of (0.52^+2.11_-1.55, 1.97^+2.28_-1.50, 0.78^+2.03_-1.69 days for images B-D relative to image A). The time delays for images B and D are consistent with both the lens modeling predictions and measured values from to within <1σ. The time delay for image C is slightly elevated compared to , but is still consistent with the previously-measured time delay within 1σ and with the model prediction to within 2σ. Overall, the time delays are still on the order of one day or less. Using the color parameter, c and time delays obtained with the color method, we fit the light curves for their apparent magnitudes and apply a fiducial light curve standardization to obtain absolute magnifications of 9.13^+5.21_-0.85, 4.04^+1.85_-0.50, 7.79^+3.88_-0.81, and 4.22^+1.99_-0.46 for images A, B, C, and D respectively. These values are within 1σ of the measured values from ; however, they reinforce the tension between the formerly-measured absolute magnifications and the lens model predictions. The absolute magnifications of images B and D are within 1σ of lens model predictions, but there is a statistically significant offset for images A and C of 5.91σ and 2.87σ respectively. Thus, it is confirmed with the new template photometry that the magnification predictions of the lens models under-predict the measured magnifications of images A and C significantly.
Before we examine other potential effects that could be adding to this measured discrepancy, we should also inspect what changes to the lens models themselves could close the observed gap in magnifications. For instance, because images A and C lie so close to the center of the galaxy, an added baryonic component of the lens model, with a more stretched light distribution than used in could impact the magnification measurements significantly <cit.>. For instance, if the lens galaxy were a disc galaxy, the disc could affect the lens model predicted magnifications <cit.>. We note, however, that we do not see any clear disc structure in the color image of the lens galaxy from the template images. This, along with the lens galaxy spectrum presented in which shows only absorption features, makes a clear argument that the lens galaxy is elliptical.
Without an updated lens model to potentially explain the differences in magnification measurements and predictions, we must examine the effects of microlensing, millilensing, and/or differential dust extinction to explain our results <cit.>.
With updated template photometry, the evidence for differential dust extinction across the four images becomes more tenuous than in . Image C shows the largest F475W - F814W color difference relative to the other images, about 0.1±0.06 mag, which is consistent with zero at the 2σ level. We also fit each SN image separately with the SALT2 model, fixing only the x_1 parameter and allowing the c parameter to vary, which captures the combined effects of intrinsic color differences between SNe Ia and the extinction from the intervening dust. We find c values of 0.092 ± 0.052, 0.059 ± 0.060, 0.083 ± 0.061, and 0.067 ± 0.061 for images A-D respectively, which are consistent within 1σ. We also fit each image with a SALT2 model that includes an additional extinction parameter of E(B - V) from the lens galaxy. The c values and E(B - V) for these fits are all in very close agreement, well within 1σ, with assumed R_V values around 2.0. We then perform a final test by allowing R_V to vary, and find close agreement among all three parameters: c, E(B - V), and R_V. Based on these tests, there seems to be very little evidence for differential dust extinction affecting our photometry.
In order to test for millilensing as a possible explanation of the added magnification that we measure, we ran three sets of 1,000 realizations of a lensing simulation which included dark matter subhalos using pyHalo. We find that the effect of the subhalos is negligible in the final magnification measurements, owing to the small cosmological volume over which the dark matter halos may affect our measurements. Therefore (unlike for a cluster-scale lensing system), we can assume that millilensing is not affecting our results at the scale of the discrepancy we are seeing. Therefore, we turn to microlensing as the remaining justification for the disagreement.
With new photometric measurements in the F160W filter, we may potentially reveal interesting microlensing effects that could explain the discrepancy between lens model predictions and photometric measurements. As explored in Section <ref>, the F160W flux we measure for all images are lower than the prediction of the best fit SALT2 model to the optical data. For images A, C and D, this is a ≳1σ discrepancy, with image D being about 1.5σ lower than predicted. Image B is only about 0.5σ from the SN Ia model prediction. Due to the model uncertainty in the F160W wavelength regime at the phase of , we cannot conclusively say whether this evidence points to chromatic microlensing or not; however, this analysis hints that image D may be systematically less luminous than expected given the optical data and the SN Ia model. This cannot be due to de-magnification from chromatic microlensing, however, as the macrolensing magnification parity of image D is positive <cit.>. In conjunction with the SN model analysis, we also compare these fluxes to what we would expect from the lens models presented in and find that images B and C are in good agreement with the lens models, to within 1σ, while A is far above the predicted flux by 2.67σ and image D is below the predicted flux by 1.54σ. We also note that images A and C are much closer to the center of the galaxy, thus increasing their chances of being microlensed by stars in the lens plane. Combining these two analyses, we can constrain a possible explanation for the predicted and measured magnifications, summarized below for each image:
* Image A: The absolute magnification we measure for image A is 5.91σ higher than was predicted by the previous joint modeling analysis. The predicted flux in the F160W band based on the lens model predictions was also elevated by 2.67σ. The measured F160W flux was slightly lower than expected given the SALT2 model fit, by about 1σ. Therefore, there is strong support from the data that image A is experiencing significant microlensing, with a possible chromatic effect that is causing an excess of flux in the optical compared to the NIR.
* Image B: The absolute magnifications measured for image B, as well as the flux predictions for B from both the SN Ia model and lens model predictions are all within tight agreement. This indicates that image B is not affected by microlensing at any detectable level.
* Image C: There is a 2.87σ offset between the measured absolute magnification of image C and the lens model prediction from the optical filters. The F160W flux agrees strongly with joint lens model predictions, however. This, along with about a 1σ deficit in the SALT2 predicted flux, show that image C may be experiencing chromatic microlensing that is magnifying the optical bands much more than the NIR (similar to image A).
* Image D: Like image B, the absolute magnification measured for image D is in good agreement with lens model predictions. However, the predicted F160W flux from the lens models is 1.54σ lower than the predicted amount from the joint lens models, and there is a similarly significant decrement of the predicted F160W flux using the SALT model prediction. If the parity of image D was negative, this evidence could tentatively point toward the possibility of chromatic de-magnification. However, since the parity of image D is positive, this is not possible. Therefore, image D is likely not experiencing significant microlensing (similar to image B).
Our analysis in this paper shows that even for a strongly lensed SN Ia system with only one epoch of resolved photometry, such as , we are able to recover uncertainties on our time delays of only 1 - 2 days. Each individual image, as exemplified by this system, may be affected by microlensing differently, as the density and distribution of stars in the lens plane around the locations of the SN images is variable. However, if we are able to sample the light curve of a lensed SN earlier and with much more data, these effects will be greatly mitigated.
The number of known galaxy-galaxy scaled strongly-lensed SN systems will increase by a few orders of magnitude with the upcoming Vera C. Rubin Observatory Legacy Survey of Space and Time <cit.> and Nancy Grace Roman Space Telescope <cit.> observations. This study shows that it will be imperative to maximize the number of systems we find that are in the ∼3 rest-frame week window of time post-explosion, where the effects of microlensing are “achromatic” <cit.>, thus limiting the type of systematic effects we wrestle with here (given that our observations are at ≳ 6 rest-frame weeks). And when such resolved, early-time observations are unavailable, follow-up campaigns from space such as those with LensWatch will be vital to creating the quality datasets we need to perform precise cosmological analyses in the future. A significant sample of these objects will allow for high precision measurements of cosmological parameters, including H_0. Thus, it is imperative to carefully study the sample of strongly lensed SNe that are currently available in preparation for such large-scale efforts.
Acknowledgements
This paper is based in part on observations with the NASA/ESA Hubble Space Telescope obtained from the Mikulski Archive for Space Telescopes at STScI. These observations are associated with program #16264. CL acknowledges support from the National Science Foundation Graduate Research Fellowship under Grant No. DGE-2233066 and DOE award DE-SC0010008 to Rutgers University. JDRP is supported by NASA through a Einstein Fellowship grant No. HF2-51541.001 awarded by the Space Telescope Science Institute (STScI), which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555. This work has been enabled by support from the research project grant ‘Understanding the Dynamic Universe’ funded by the Knut and Alice Wallenberg Foundation under Dnr KAW 2018.0067. SD acknowledges support from a Kavli Fellowship and a JRF at Lucy Cavendish College. JMD acknowledges support from project PID2022-138896NB-C51 (MCIU/AEI/MINECO/FEDER, UE) Ministerio de Ciencia, Investigación y Universidade. CG is supported by a VILLUM FONDEN Young Investigator Grant (project number 25501). DG acknowledges support for this work provided by The Brinson Foundation through a Brinson Prize Fellowship grant. EEH is supported by a Gates Cambridge Scholarship (#OPP1144). This work was supported by research grants (VIL16599, VIL54489) from VILLUM FONDEN. XH acknowledges the University of San Francisco Faculty Development Fund. This work is partly supported by the National Science Foundation of China (Grant No. 11821303 to SM). The work of LAM was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA. IPF acknowledges financial support from the Spanish Agencia Estatal de Investigació n del Ministerio de Cienciae Innovación (AEI–MCINN) under grant PID2022-137779OB-C44. FP acknowledges support from the Spanish Ministerio de Ciencia, Innovación y Universidades (MICINN) under grant numbers PID2022-141915NB-C21. AJS was supported by NASA through the NASA Hubble Fellowship grant HST-HF2-51492 by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555. SHS thanks the Max Planck Society for support through the Max Planck Fellowship. This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771776). This research is supported in part by the Excellence Cluster ORIGINS which is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy – EXC-2094 – 390783311. TT acknowledges support by grant HST-GO-16264. This research made use of Montage. It is funded by the National Science Foundation under Grant Number ACI-1440620, and was previously funded by the National Aeronautics and Space Administration's Earth Science Technology Office, Computation Technologies Project, under Cooperative Agreement Number NCC5-626 between NASA and the California Institute of Technology.
§ DATA AVAILABILITY
The data and scripts behind the figures in this study can be found at <https://github.com/Conor-Larison/lenswatch_ii> and will be available upon request to the corresponding author.
Astroalign <cit.>, Astropy <cit.>, Astroquery <cit.>, Corner <cit.>, Colossus <cit.>, DrizzlePac <cit.>, Dynesty <cit.>, IPython <cit.>, Jupyter <cit.>, lenstronomy <cit.>, Matplotlib <cit.>, NumPy <cit.>, pandas <cit.>, pyHalo <cit.>, SciPy <cit.>, space_phot <cit.>
| Supernovae (SNe) that have been multiply-imaged by strong gravitational lensing are naturally rare and revealing astronomical events. They require an unlikely alignment along the line of sight between an observer, the background source that is lensed, and the foreground lens (galaxy or cluster). This, combined with the fact that most SNe rise and fade over the course of a few rest-frame weeks, makes strongly lensed SNe extremely elusive.
The geometry of the lensing system and gravitational potential differences across the lens plane determine the delay in arrival of the SN images relative to one another. Measurements of this “time delay” can provide an angular diameter distance, which in turn can constrain the Hubble constant (H_0) and the dark energy equation of state (w) directly <cit.>. Strongly lensed SNe have several advantages over strongly lensed quasars, which have often been used for time-delay measurements <cit.>:
* SNe fade on a short timescale (over weeks to months), so more accurate models of the lensing system can be made post-SN discovery, as the source and lens fluxes would remain highly blended otherwise <cit.>.
* SNe (especially of Type Ia) have predictable light curves, thus simplifying time-delay measurements over a more stochastic system such as an active galactic nucleus (AGN).
* Strongly lensed SNe are also affected by microlensing <cit.>; however, chromatic effects are mitigated given sufficient early-time light curve coverage <cit.>. Microlensing can still be a significant source of uncertainty for SNe with small time delays <cit.> as we will show later.
* Perhaps most important in terms of logistical constraints is that lensed SNe require much shorter observing campaigns than lensed AGN, potentially going from a decade of extended observations for a single lensed AGN down to a few epochs over a year for a strongly lensed SN <cit.>.
These advantages have been shown and explored for decades <cit.>, but observations to date have not been optimized for this type of phenomenon, as evidenced by only a few detections being made by recent wide-field astronomical surveys <cit.>. Despite the few discoveries, we now have a confirmed sample of eight multiply-imaged SNe (SN Refsdal, SN 2016geu, SN Requiem, AT 2022acfv, SN Zwicky, SN 2022riv, SN H0pe, and SN Encore), including two that are lensed by a galaxy <cit.> and six that have been lensed by a foreground galaxy cluster <cit.>. Our expanding sample size is a direct result of the combined efforts of many dedicated programs to find these rare events <cit.>.
Type Ia SNe (SNe Ia) are especially useful as strongly lensed sources as we possess well-constrained templates of their light curves <cit.>, which can be used to standardize their brightness <cit.>. This standardizability can be used to break the mass-sheet degeneracy, a large systematic effect where one could add additional sheets of mass to the lensing plane without influencing the geometry of the system <cit.>, albeit only when millilensing and microlensing are mostly mitigated <cit.>.
We have been fortunate enough to discover three cluster-scale SN lensing systems with time delays that can precisely measure H_0: SN Refsdal, SN H0pe and SN Encore <cit.>. In contrast, the two existing galaxy-scale systems with lensed SNe do not currently provide such precise cosmological results, but they remain important to investigate, as they are expected to be much more common than these cluster-scale lenses in the era of the Vera C. Rubin Observatory <cit.>. With a larger statistical sample of such systems, it then becomes possible to measure H_0 to a percent level, enough to make it competitive in the current field <cit.>. However, in order to achieve this level of precision, we must first measure the time delays of each of our SN systems in the sample to a similarly high precision. To do this, we must better understand the impacts of microlensing to our error budget, the shortfalls of our current lens models on galaxy-galaxy systems, and the importance of follow-up campaigns to increase the quality and quantity of our photometry.
In this paper, we use additional observations to investigate one such galaxy-scale system, which was discovered in 2022 August by the Zwicky Transient Facility <cit.> and was subsequently classified and analyzed by <cit.>. The time delays, magnifications, and lens models of the SN and galaxy-galaxy system were then analyzed and reported in <cit.>. This work represents the second paper in a series of papers for the LensWatch program, a direct follow-up that provides improved time-delay and magnification measurements from template-subtracted photometry. Section <ref> presents the template observation characteristics of the system. Our analysis (including photometry and measurements of time delays and magnifications, as well as analysis investigating microlensing and millilensing) is reported in Section <ref>. Finally, we conclude with a discussion of the implications of these results in Section <ref>. | null | null | null | null | null |
http://arxiv.org/abs/2409.17644v1 | 20240926084707 | Model-Based Machine Learning for Max-Min Fairness Beamforming Design in JCAS Systems | [
"Mengyuan Ma",
"Tianyu Fang",
"Nir Shlezinger",
"A. L. Swindlehurst",
"Markku Juntti",
"Nhan Nguyen"
] | eess.SP | [
"eess.SP"
] |
§ ABSTRACT
Joint communications and sensing (JCAS) is expected to be a crucial technology for future wireless systems. This paper investigates beamforming design for a multi-user multi-target JCAS system to ensure fairness and balance between communications and sensing performance. We jointly optimize the transmit and receive beamformers to maximize the weighted sum of the minimum communications rate and sensing mutual information. The formulated problem is highly challenging due to its non-smooth and non-convex nature. To overcome the challenges, we reformulate the problem into an equivalent but more tractable form. We first solve this problem by alternating optimization (AO) and then propose a machine learning algorithm based on the AO approach. Numerical results show that our algorithm scales effectively with the number of communications users and provides better performance with shorter run time compared to conventional optimization approaches.
Max-min fairness, beamforming, joint communications and sensing, machine learning.
§ INTRODUCTION
Joint communications and sensing (JCAS) is a pivotal technology for future wireless communications systems, enabling communications and sensing functionalities on a unified hardware platform. This integration facilitates spectrum sharing and reduces hardware costs <cit.>. However, the shared and limited resources for these dual functionalities pose significant challenges for JCAS transceiver design <cit.>. Beamforming plays a key role in addressing this challenge by enhancing spectral efficiency and improving sensing accuracy <cit.>.
Recent literature on beamforming design for JCAS systems has focused on three main optimization goals: maximizing sensing performance, maximizing communications performance, and balancing the tradeoff between them. Sensing-oriented designs <cit.> optimize sensing while ensuring communication capabilities. In contrast, communication-oriented designs <cit.> prioritize communications performance under sensing constraints. Tradeoff-oriented works <cit.> maximize a weighted sum of sensing and communication utility functions. Communications performance is typically measured by spectral efficiency or SINR, while sensing performance is evaluated using metrics like beampattern matching <cit.>, SCNR <cit.>, sensing mutual information (MI) <cit.>, and the Cramér–Rao lower bound <cit.>.
Despite these advances, achieving fairness among multiple communications users and sensing targets remains a significant challenge due to the inherently non-smooth nature of max-min optimization problems <cit.>. Conventional optimization-based approaches <cit.> typically result in high complexity and involve numerous algorithm parameters that need to be well tuned to ensure performance. In contrast, model-based machine learning (ML) methodologies <cit.> facilitate real-time operation and satisfactory performance for beamforming design <cit.>. For example, beamforming optimizers have been unfolded into deep learning models in <cit.>, achieving improved computational efficiency and performance. However, current unfolded algorithms are not applicable for addressing the fairness problem because of their problem-specific network structures. Furthermore, the current literature lacks unfolded learning models specifically designed for max-min fairness problems, which typically involve complex objective functions.
In this paper, we fill this gap by jointly optimizing beamformers for both transmission and sensing, aiming to maximize the weighted sum of the minimum communications rate and sensing MI under per-antenna transmit power constraints. Given the non-smooth and non-convex nature of the problem, we first reformulate it into an equivalent but more tractable form and solve it using an alternating optimization (AO) procedure. Then, we propose a ML algorithm based on the AO framework. Numerical results demonstrate that our algorithm scales effectively with the number of communications users and achieves better performance with shorter execution times compared to conventional optimization-based methods.
§ SYSTEM MODEL AND PROBLEM FORMULATION
JCAS System:
We consider a downlink monostatic JCAS system, where a base station (BS) is equipped with N_t transmit antennas and N_r radar receive antennas. The same waveform is used for transmission and sensing <cit.>. The BS simultaneously transmits signals to K single-antenna users for communications while probing M targets. Letting 𝐬=[s_1,…, s_K]^T∈ℂ^K× 1∼𝒞𝒩(0,𝐈) be the vector of communications symbols, and 𝐖=[𝐰_1,…,𝐰_K]∈ℂ^N_t× K be the precoding matrix, the transmitted signal is expressed as
𝐱=𝐖𝐬. We assume an equal power allocation for all antennas, such that diag(𝐖𝐖^H)≼ (P_t/N_t)1_N_t, where P_t is the transmit power budget.
The signal received by communications user k is given by
y_k=𝐡_k^H𝐰_k s_k+𝐡_k^H∑_j≠ k^K𝐰_j s_j +n_k,
where 𝐡_k∈ℂ^N_t× 1 denotes the channel between the BS and user k, and n_k∼𝒞𝒩(0,σ_c k^2) represents additive white Gaussian noise (AWGN) with variance σ_c k^2. Thus, the SINR of the intended symbol at user k can be expressed as
γ_c k=|𝐡_k^H𝐰_k|^2/(∑_j≠ k^K|𝐡_k^H𝐰_j|^2+σ_c k^2).
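For concreteness, the per-user SINR above can be evaluated as in the following minimal numpy sketch (our own illustrative code, not part of the paper's implementation; array shapes and variable names are assumptions that follow the notation above):

```python
import numpy as np

def downlink_sinr(H, W, sigma2_c):
    """Per-user SINR gamma_ck = |h_k^H w_k|^2 / (sum_{j!=k} |h_k^H w_j|^2 + sigma_ck^2).

    H: (N_t, K) channel matrix, column k is h_k.
    W: (N_t, K) precoding matrix, column k is w_k.
    sigma2_c: (K,) per-user noise variances.
    """
    G = np.abs(H.conj().T @ W) ** 2        # G[k, j] = |h_k^H w_j|^2
    signal = np.diag(G)                    # desired-signal power of each user
    interference = G.sum(axis=1) - signal  # multi-user interference power
    return signal / (interference + sigma2_c)
```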
After transmission, the BS radar antennas receive echos reflected by the M sensed targets as well as C clutter scatterers around them. The received echo signal at the BS is written as
𝐲_s=∑_m=1^M𝐆_m𝐱+∑_j=M+1^M+C𝐆_j𝐱+𝐧_s,
where 𝐆_i=ζ_s iα_s i𝐚_r(φ_i)𝐚_t^H(φ_i), i∈{1,…,M+C}, and 𝐧_s∼𝒞𝒩(0,σ_s^2𝐈) is AWGN. Here, ζ_s i, α_s i, and φ_i represent the path loss, complex gain, and angle associated with target/clutter i, while 𝐚_r(·) and 𝐚_t(·) are the normalized receive and transmit array steering vectors. The BS employs a receive combining matrix 𝐅=[𝐟_1,…,𝐟_M]∈ℂ^N_r× M, such that the received signal associated with target m is given by
r_m=𝐟_m^H𝐲_s= 𝐟_m^H𝐆_m𝐱+𝐟_m^H∑_j≠ m^M+C𝐆_j𝐱+𝐟_m^H𝐧_s.
Thus, the SCNR can be expressed as
γ_s m=‖𝐟_m^H𝐆_m𝐖‖^2 /(∑_j≠ m^M+C‖𝐟_m^H𝐆_j𝐖‖^2+σ_s^2‖𝐟_m‖^2).
Problem Formulation:
We aim to jointly optimize the transmit precoding matrix 𝐖 and receive combining matrix 𝐅 to 1) ensure fairness among communications users and
sensed targets; and 2) achieve a balanced tradeoff between communications and sensing performance. To achieve these goals, we employ the utility function h(𝐖,𝐅)= min_k∈𝒦{log(1+γ_c k)}+δ min_m∈ℳ{log(1+γ_s m )}, where log(1+γ_c k) and log(1+γ_s m ) represent the communications rate and sensing MI <cit.>, respectively, and δ≥ 0 balances their performance tradeoff. The joint design problem is then formulated as
max_𝐖∈𝒮,𝐅 h(𝐖,𝐅),
where 𝒮 = {𝐖: diag(𝐖𝐖^H)≼ (P_t/N_t)1_N_t}. The max-min objective guarantees fairness while the communications and sensing performance are balanced by adjusting δ. However, problem (<ref>) is challenging, as the
utility function includes the non-smooth point-wise minima in h(𝐖,𝐅), non-convex fractional SINRs and SCNRs, and strongly coupled variables.
§ MODEL-BASED ML OPTIMIZER
We herein propose a learning-aided optimizer for fairness JCAS beamforming. Specifically, we first formulate a surrogate objective to cope with the complex objective in (<ref>). We identify an AO method suitable for the resulting optimization.
Then, we leverage model-based ML, and particularly deep unfolding <cit.>, to enhance performance by converting the optimizer into a ML model.
Surrogate Objective:
Introducing the variables 𝐳_c=[z_c1,…,z_c K]^T and 𝐳_s=[z_s1,…,z_s M]^T, we rewrite (<ref>) as
max_𝐖∈𝒮,𝐅 min_𝐳_c∈𝒵_c, 𝐳_s∈𝒵_s{∑_k=1^K z_c k r_c k+δ∑_m=1^M z_s m r_s m},
where 𝒵_c={𝐳_c:𝐳_c≽0, 1_K^T𝐳_c=1 } and 𝒵_s={𝐳_s:𝐳_s≽0, 1_M^T𝐳_s=1 } are compact convex simplices, and r_c k=log(1+γ_c k), r_s m=log(1+γ_s m ). As the optimal solution is located at a vertex of the simplices, solving (<ref>) directly can lead to oscillation near the optimal point. To address this, we approximate {𝐳_c, 𝐳_s} by
z_c k≈exp(-μ_c r_c k )/∑_j=1^Kexp(-μ_c r_cj) ,
z_s m≈exp(-μ_s r_s m )/∑_j=1^Mexp(-μ_s r_sj) , ∀ m,k,
where μ_s and μ_c are continuous-valued parameters. These approximations become tight as μ_s, μ_c→∞ <cit.>.
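As an illustration, these softmax-style weights can be computed in a numerically stable way as in the sketch below (our own code; subtracting the minimum rate is only a stability trick and does not change the weights):

```python
import numpy as np

def smooth_min_weights(rates, mu):
    """Weights z_k = exp(-mu r_k) / sum_j exp(-mu r_j).

    Larger mu concentrates the weight on the smallest rate,
    recovering the max-min (vertex) solution in the limit.
    """
    rates = np.asarray(rates, dtype=float)
    logits = -mu * (rates - rates.min())   # shift for numerical stability
    w = np.exp(logits)
    return w / w.sum()
```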
To deal with the non-convex objective, we introduce auxiliary variables ξ_c=[ξ_c1,…,ξ_c K]^T∈ℝ^K× 1, ξ_s=[ξ_s1,…,ξ_s M]^T∈ℝ^M× 1, θ_c=[θ_c1,…,θ_c K ]^T∈ℂ^K× 1, and Θ_s=[θ_s1^H,…,θ_s M^H]^T∈ℂ^M× K, and employ the quadratic transformation <cit.> to obtain an equivalent but more tractable problem
max_𝐖∈𝒮,𝐅,ξ,Θ min_𝐳_c∈𝒵_c, 𝐳_s∈𝒵_s{∑_k=1^K z_c k f_c k+δ∑_m=1^M z_s m f_s m},
where ξ={ξ_c,ξ_s}, Θ={θ_c,Θ_s}, f_c k=log(1+ξ_c k)+2√(1+ξ_c k) Re{𝐡_k^H𝐰_k θ_c k^*}
-|θ_c k|^2(∑_j=1^K|𝐡_k^H𝐰_j|^2+σ_c k^2 )-ξ_c k, and f_s m=log(1+ξ_s m)+2√(1+ξ_s m) Re{𝐟_m^H𝐆_m𝐖θ_s m^H}
-‖θ_s m‖^2(∑_j=1^M+C‖𝐟_m^H𝐆_j𝐖‖^2+σ_s^2‖𝐟_m‖^2 )-ξ_s m.
AO Optimizer:
We can solve the problem (<ref>) via AO. Specifically, the auxiliary variables {ξ,Θ} are related to the design variables 𝐅 and 𝐖 via
ξ_c k=γ_c k, ξ_s m=γ_s m, θ_c k=√(1+ξ_c k)𝐡_k^H𝐰_k/(∑_j=1^K|𝐡_k^H𝐰_j|^2+σ_c k^2) ,
θ_s m=√(1+ξ_s m)𝐟_m^H𝐆_m𝐖/(∑_j=1^M+C‖𝐟_m^H𝐆_j𝐖‖^2+σ_s^2‖𝐟_m‖^2) .
Hence, given {ξ,Θ} and 𝐖, 𝐅 is updated by
𝐟_m=(√(1+ξ_s m)/‖θ_s m‖^2)(∑_j=1^M+C𝐆_j𝐖𝐖^H𝐆_j^H+σ_s^2𝐈)^-1𝐆_m𝐖θ_s m^H.
The proposed AO method thus operates in an iterative fashion. In each iteration, AO first uses 𝐅 and 𝐖 from the previous iteration to set the auxiliary variables (<ref>), which in turn are used to update 𝐅 via (<ref>). Then, 𝐖 is updated using projected gradient descent (PGD). Specifically, we define Σ_1=diag{z_c1√(1+ξ_c1)θ_c1,…,z_c K√(1+ξ_c K)θ_c K}, 𝐇=[𝐡_1,…,𝐡_K ], Σ_2=diag{z_c1|θ_c1|^2,…,z_c K|θ_c K|^2 }, 𝐗=δ∑_m=1^M z_s m√(1+ξ_s m)θ_s m^H𝐟_m^H𝐆_m, and 𝐘=δ∑_j=1^M+C𝐆_j^H(∑_m=1^M z_s m‖θ_s m‖^2 𝐟_m𝐟_m^H)𝐆_j. With the other variables fixed, the subproblem with respect to 𝐖 is formulated as min_𝐖∈𝒮 g(𝐖),
where g(𝐖) = tr(𝐖𝐖^H(𝐘+𝐇Σ_2𝐇^H) )-2Re{tr(𝐖 (𝐗+Σ_1^H𝐇^H) ) } is convex with respect to 𝐖. Hence, PGD can solve this problem. The gradient of g(𝐖) is given by
∇_𝐖 g = 2(𝐘+ 𝐇Σ_2𝐇^H)𝐖 -2(𝐇Σ_1+𝐗^H).
Let Π_𝒮(·) denote the projection onto the set 𝒮:
Π_𝒮(𝐖)= √(P_t/N_t)(diag(𝐖𝐖^H))^-1/2𝐖,
where diag(·) here denotes the diagonal matrix whose diagonal entries are taken from the argument. The PGD update is thus
𝐖←Π_𝒮(𝐖̃-β∇_𝐖 g /‖∇_𝐖 g‖_F|_𝐖=𝐖̃),
where β is the step size and 𝐖̃ is the previous iterate of 𝐖.
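A compact numpy sketch of one such PGD step, including the per-antenna power projection Π_𝒮(·), is given below (function and variable names are ours; the gradient of g is assumed to be supplied by the caller according to the expression above):

```python
import numpy as np

def project_per_antenna(W, P_t):
    """Pi_S(W): rescale each transmit-antenna row of W to power P_t / N_t."""
    N_t = W.shape[0]
    row_power = np.sum(np.abs(W) ** 2, axis=1, keepdims=True)
    return np.sqrt(P_t / N_t) * W / np.sqrt(row_power)

def pgd_step(W, grad_W, P_t, beta):
    """One PGD step with a Frobenius-normalized gradient, followed by the projection."""
    W_new = W - beta * grad_W / np.linalg.norm(grad_W)   # norm() of a matrix is Frobenius
    return project_per_antenna(W_new, P_t)
```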
Unfolded Optimizer: In the AO procedure, updating requires a proper step size to ensure fast convergence. However, selecting an appropriate step size using techniques such as the backtracking search algorithm <cit.> introduces additional complexity. Furthermore, the parameters {μ_s, μ_c} also influence performance and typically require manual adjustment based on empirical experience. We overcome these difficulties by converting the AO into a discriminative ML model <cit.>, which can be viewed as a deep neural network (DNN). The DNN is trained to learn iteration-specific step sizes and {μ_s, μ_c} in an unsupervised manner, based on the original objective function h(,).
The proposed DNN consists of L_ out outer layers, and the structure of the ℓth outer layer is illustrated in Fig. <ref>. In each outer layer, {𝐳_c,𝐳_s,ξ,Θ,𝐅} and 𝐖 are updated successively, where the former ones have closed-form expressions given by (<ref>), (<ref>), and (<ref>). To efficiently obtain 𝐖, we unfold the PGD algorithm into L_ in inner layers. Specifically, with the trainable step sizes {β_ℓ,i}_ℓ=1,i=1^L_ out,L_ in, 𝐖^[ℓ, i] is updated by
𝐖^[ℓ, i]←Π_𝒮(𝐖^[ℓ, i-1] -β_ℓ,i∇_𝐖 g /‖∇_𝐖 g‖_F|_𝐖=𝐖^[ℓ, i-1]).
In each iteration, 𝐅 is obtained directly with the closed-form solution (<ref>). In contrast, 𝐖 is updated over the unfolded DNN layers, and its update speed is restricted by the power constraint. Through simulations, we found that the update speed of 𝐖 is much slower than that of 𝐅, causing slow convergence of the overall algorithm. To overcome this, we propose updating 𝐖 over I_ w iterations of the L_ in inner layers. Our simulations show that adjusting I_ w can significantly accelerate the convergence, as will be demonstrated in Section <ref>.
The proposed AO-unfolded ML algorithm for solving (<ref>) is outlined in Algorithm <ref>. The input to the algorithm, denoted by d, is the system configuration d = {𝐇, 𝐆, P_t, σ_c k, σ_s}, where 𝐆 represents {𝐆_m}_m=1^M+C. In step 1, we first set {μ_s,μ_c} with predefined values and initialize 𝐖 and 𝐅 for each channel realization. Steps 2–13 represent the update of {𝐳_c,𝐳_s,ξ,Θ, 𝐅, 𝐖} with L_ out outer layers, while in steps 5–12, the update of 𝐖 is performed with L_ in inner layers and I_ w loops. We obtain {𝐖, 𝐅} from the output of the L_ outth outer layer, as in step 14.
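The loop structure of the forward pass can be summarized by the following sketch (our own pseudocode-style Python; the helper functions update_weights, update_auxiliaries, update_combiner, and grad_g are placeholders for the closed-form updates and the gradient above, pgd_step is the sketch from the previous subsection, and the "P_t" dictionary key is a hypothetical name):

```python
def unfolded_ao_forward(d, W, F, beta, L_out, L_in, I_w):
    """Forward pass of the unfolded AO network (illustrative sketch only)."""
    for l in range(L_out):
        z_c, z_s = update_weights(d, W, F)         # smoothed min-weights
        xi, theta = update_auxiliaries(d, W, F)    # closed-form auxiliary variables
        F = update_combiner(d, W, xi, theta)       # closed-form receive beamformers
        for _ in range(I_w):                       # repeat the W-update I_w times
            for i in range(L_in):                  # unfolded PGD (inner) layers
                grad = grad_g(d, W, F, z_c, z_s, xi, theta)
                W = pgd_step(W, grad, d["P_t"], beta[l][i])
    return W, F
```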
The complexity of Algorithm <ref> is dominated by the matrix multiplications required to update , , and . Computing and results in a complexity of (L_ out(M+C)(^2+^2)). The complexity of updating is (2L_ outL_ inI_ wK^2). Therefore, the overall complexity is approximately L_ out((M+C)(^2+^2)+2L_ inI_ wK^2).
Training:
The trainable parameters of Algorithm <ref> are thus ϕ ={μ_s,μ_c,{β_ℓ,i}_ℓ=1,i=1^L_ out,L_ in}. These are learned from a data set 𝒟 comprised of B potential JCAS settings, i.e., 𝒟 = { d_b}_b=1^B. While the AO optimizer is derived from the proposed surrogate objective, the parameters of the unfolded AO are trained to maximize the original objective in (<ref>).
Specifically, letting 𝐖^[ℓ]( d;ϕ), 𝐅^[ℓ]( d;ϕ) denote the output of the ℓth iteration of Algorithm <ref> with input d and parameters ϕ, the training loss used to guide the tuning of ϕ from 𝒟 is
ℒ_𝒟( ϕ) =-1/B∑_b=1^B∑_ℓ=1^L_ outλ_ℓ h(𝐖^[ℓ]( d_b;ϕ), 𝐅^[ℓ]( d_b;ϕ)),
where λ_ℓ=1/ℓ is the weight associated with the ℓth outer layer. The decreasing weight sequence prioritizes the initial layers for learning, allowing the solution to rapidly approach the optimal point in the beginning before being gradually refined by the later layers. In doing so, the overall convergence speed is slightly improved based on our simulations.
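In a deep learning framework, this layer-weighted unsupervised loss could be accumulated as sketched below (PyTorch-flavored sketch under our own naming; objective_h is a placeholder for an implementation of the original objective h(𝐖,𝐅), and model(d) is assumed to return the per-layer outputs):

```python
import torch

def unfolded_loss(model, batch):
    """Negative layer-weighted objective, cf. the training loss above."""
    loss = torch.zeros(())
    for d in batch:                                   # B system configurations
        W_layers, F_layers = model(d)                 # outputs of all L_out outer layers
        for l, (W, F) in enumerate(zip(W_layers, F_layers), start=1):
            loss = loss - (1.0 / l) * objective_h(W, F, d)   # lambda_l = 1 / l
    return loss / len(batch)
```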
§ NUMERICAL RESULTS
In this section, we evaluate the performance of the proposed model-based ML algorithm[The code source is available online at <https://github.com/WillysMa/JCAS_BF_Design_MaxMin>.]. Throughout the simulations, we set N_t=N_r=16, M=C=2, and K=4 for training and testing unless otherwise stated. We adopt the Rician fading model for communications channels with a Rician factor of 3 dB. The path loss is modeled as ζ_x = ζ_0 d_x^ϵ_x, where ζ_0 = -30 dB is the reference path loss at 1 m, and (·)_x with x∈{c, s} represent the parameters for communications and sensing channels, respectively. Accordingly, we assume the path loss exponents ϵ_c = 3, ϵ_s = 2 and distances from the BS to the kth user and mth target as d_c, k=100+20η_c, k and d_s, m=10+2η_s, m, where η_ck,η_sm∼𝒩(0,1).
The sizes of the training and testing sets are 500 and 100, respectively. We train over 30 epochs with a batch size of 32. For initialization, we set β_ℓ,i=0.01, ∀ℓ,i and μ_s= μ_c=10, and initialize 𝐖 based on the maximum ratio transmission method, i.e., 𝐖^[0]=Π_𝒮(𝐇). We set 𝐅^[0] by maximizing the SCNR in (<ref>) for the M targets, which is a generalized eigenvalue problem whose solution is established in the literature <cit.>. For all results reported below, we set L_ out=150, L_ in=3, I_ w=2 for training. All results are obtained by averaging over 100 channel realizations.
For comparison, we consider AO optimizers employing either fixed (“Fixed β”) or dynamic (“Dynamic β”) step sizes to update 𝐖. For the former, the step size is set to 0.01, which is the same as the initial value for β_ℓ,i ∀ℓ,i. For the latter, the step size for each update is found by the backtracking line search method. The steps of the two algorithms are the same as in Algorithm <ref> except that we set I_ w=1.
We also compare with the heuristic method for updating 𝐖 in <cit.>.
In Figs. <ref> and <ref>, we show the convergence of the considered schemes with δ∈{1,10} and I_ w = {2,4}. It is observed that the proposed algorithm converges to the largest objective values with fewer iterations compared to the benchmark schemes. The gain is more significant for smaller δ. Furthermore, increasing I_ w accelerates the convergence and leads to a higher objective value of the proposed method. However, a large I_ w may cause oscillation near the local optimal point, as can be observed in Fig. <ref> for I_ w = 4.
Fig. <ref> shows the minimum SINRs and SCNRs for K = {2,4,…,12} and δ=1. As K increases, inter-user interference becomes more significant. Thus, the minimum SINR of all the schemes considered decreases, contributing less to the objective function h(𝐖,𝐅) in (<ref>). As a result, the minimum SCNR increases slightly with K. We note here that the DNN model is trained with the dataset obtained for K=4. However, it works for various values of K during online inference because its model structure is independent of K. Among the compared schemes, the proposed method achieves the largest minimum SINR/SCNR, with its gain being more significant for the communications functions. With K=10, the SINR improves by up to 200% compared to the scheme using backtracking line search. We further show the tradeoff between the minimum SINR and SCNR of the considered schemes for δ∈ [0, 100] in Fig. <ref>. It is seen that the proposed unfolded AO scheme achieves the best tradeoff between the minimum SINR and minimum SCNR.
In Table <ref>, we show the average run time of the considered algorithms for δ=1. All the compared schemes are performed on the same platform. Based on the results in Fig. <ref>, we set L_ out= {150, 100} for I_ w={2,4}, respectively, for Algorithm <ref>. Meanwhile, the benchmark schemes are set to run until convergence. It is observed from Table <ref> that the proposed scheme with I_ w=4 performs the fastest, and its run time remains almost constant when K increases. Furthermore, the reduction in run time of the proposed scheme is more significant with larger K. For example, when K = 10, Algorithm <ref> with I_ w=4 achieves a run time reduction of 67%, 75%, and 92% compared to the dynamic step size strategy, the heuristic method, and the fixed step size strategy, respectively. Note that this reduction is achieved while better performance is guaranteed, as seen from Fig. <ref>.
§ CONCLUSIONS
This paper proposed an efficient beamforming design for multi-user multi-target JCAS systems, aiming to ensure fairness and balance between communications and sensing performances. We jointly optimized the transmit and receive beamformers to maximize the weighted sum of the minimum communications rate and sensing MI. To address this challenging problem, we proposed a model-based ML algorithm. Numerical results demonstrate that our algorithm scales well with the number of communications users while delivering better performance with shorter run time compared to conventional optimization-based methods.
| Joint communications and sensing (JCAS) is a pivotal technology for future wireless communications systems, enabling communications and sensing functionalities on a unified hardware platform. This integration facilitates spectrum sharing and reduces hardware costs <cit.>. However, the shared and limited resources for these dual functionalities pose significant challenges for JCAS transceiver design <cit.>. Beamforming plays a key role in addressing this challenge by enhancing spectral efficiency and improving sensing accuracy <cit.>.
Recent literature on beamforming design for JCAS systems has focused on three main optimization goals: maximizing sensing performance, maximizing communications performance, and balancing the tradeoff between them. Sensing-oriented designs <cit.> optimize sensing while ensuring communication capabilities. In contrast, communication-oriented designs <cit.> prioritize communications performance under sensing constraints. Tradeoff-oriented works <cit.> maximize a weighted sum of sensing and communication utility functions. Communications performance is typically measured by spectral efficiency or SINR, while sensing performance is evaluated using metrics like beampattern matching <cit.>, SCNR <cit.>, sensing mutual information (MI) <cit.>, and the Cramér–Rao lower bound <cit.>.
Despite these advances, achieving fairness among multiple communications users and sensing targets remains a significant challenge due to the inherently non-smooth nature of max-min optimization problems <cit.>. Conventional optimization-based approaches <cit.> typically result in high complexity and involve numerous algorithm parameters that need to be well tuned to ensure performance. In contrast, model-based machine learning (ML) methodologies <cit.> facilitate real-time operation and satisfactory performance for beamforming design <cit.>. For example, beamforming optimizers have been unfolded into deep learning models in <cit.>, achieving improved computational efficiency and performance. However, current unfolded algorithms are not applicable for addressing the fairness problem because of their problem-specific network structures. Furthermore, the current literature lacks unfolded learning models specifically designed for max-min fairness problems, which typically involve complex objective functions.
In this paper, we fill this gap by jointly optimizing beamformers for both transmission and sensing, aiming to maximize the weighted sum of the minimum communications rate and sensing MI under per-antenna transmit power constraints. Given the non-smooth and non-convex nature of the problem, we first reformulate it into an equivalent but more tractable form and solve it using an alternating optimization (AO) procedure. Then, we propose a ML algorithm based on the AO framework. Numerical results demonstrate that our algorithm scales effectively with the number of communications users and achieves better performance with shorter execution times compared to conventional optimization-based methods. | null | null | null | null | null |
http://arxiv.org/abs/2409.17109v1 | 20240925172427 | Unveiling Ontological Commitment in Multi-Modal Foundation Models | [
"Mert Keser",
"Gesina Schwalbe",
"Niki Amini-Naieni",
"Matthias Rottmann",
"Alois Knoll"
] | cs.CV | [
"cs.CV",
"cs.AI"
] |
[A,B] Mert Keser (corresponding author, email: [email protected]; equal contribution)
[C] Gesina Schwalbe (corresponding author, email: [email protected])
[D] Niki Amini-Naieni
[E] Matthias Rottmann
[B] Alois Knoll
[A] Technical University of Munich, Germany
[B] Continental AG, Germany
[C] University of Lübeck, Germany
[D] University of Oxford, UK
[E] University of Wuppertal, Germany
§ ABSTRACT
Ontological commitment, i.e., the used concepts, relations, and assumptions, is a cornerstone of qualitative reasoning (QR) models.
The state of the art for processing raw inputs, though, is deep neural networks (DNNs), nowadays often based on multimodal foundation models. These automatically learn rich representations of concepts and respective reasoning.
Unfortunately, the learned qualitative knowledge is opaque, preventing easy inspection, validation, or adaptation against available QR models.
So far, it is possible to associate pre-defined concepts with latent representations of DNNs, but extractable relations are mostly limited to semantic similarity.
As a next step towards using QR for validation and verification of DNNs, we propose a method that extracts the learned superclass hierarchy from a multimodal DNN for a given set of leaf concepts.
Under the hood we
(1) obtain leaf concept embeddings using the DNN's textual input modality;
(2) apply hierarchical clustering to them, using that DNNs encode semantic similarities via vector distances; and
(3) label the such-obtained parent concepts using search in available ontologies from QR.
An initial evaluation study shows that meaningful ontological class hierarchies can be extracted from state-of-the-art foundation models. Furthermore, we demonstrate how to validate and verify a DNN's learned representations against given ontologies.
Lastly, we discuss potential future applications in the context of QR.
§ INTRODUCTION
One of the basic ingredients of QR models is an ontology specifying the allowed concepts, relations, and any prior assumption about them; more precisely, the commitment to (a subset of an) ontology with associated semantic meaning of concepts and relations <cit.>.
Thanks to years of research, large and rich ontologies like Cyc <cit.>, SUMO <cit.>, or ConceptNet <cit.>
are readily available for building or verifying QR models.
Meanwhile, however, DNNs have become the de-facto state of the art for many applications that hardly allow a precise input specification <cit.>, such as processing of raw images (computer vision), e.g., for object detection <cit.>, or processing of unstructured natural language text <cit.>.
This machine learning approach owes its success to its strong representation learning capabilities:
DNNs automatically learn highly non-linear mappings (encoding) from inputs to vectorial intermediate representations (latent representations or vectors) <cit.>,
and reasoning-alike processing rules <cit.>
from these to a desired output.
Availability of large text and image datasets have further sparked the development of multimodal so-called foundation models <cit.>.
These are large general-purpose DNNs trained to develop semantically rich encodings suitable for a variety of tasks <cit.>.
This is often achieved by training them to map textual descriptions and images onto matching vectorial representations (text-to-image alignment) <cit.>,
using multimodal inputs of both images and text.
The prospect.
Foundation models come with some interesting prospects regarding their learned knowledge:
(1) One can expect foundation models to learn a possibly interesting and useful ontology, giving insights into concepts <cit.>
and concept relations <cit.>
prevalent in the training data; and
(2) such sufficiently large models can also develop sophisticated reasoning chains on the learned concepts <cit.>.
From the perspective of QR, this raises the question whether this learned knowledge is consistent with the available high-quality ontologies and QR models.
This opens up well-grounded verification and validation criteria for safety or ethically critical applications. As a first step towards this, this paper defines techniques for extraction and verification of simple class hierarchies.
Future prospects encompass to use the extracted knowledge from DNNs for knowledge retrieval, and
ultimately gain control over the learned reasoning: This would enable the creation of powerful hybrid systems <cit.>
that unite learned encoding of raw inputs like images with QR models.
The problem.
Unfortunately, the flexibility of DNNs in terms of knowledge representation comes at the cost of interpretability <cit.>;
and, being purely statistical models, they may extract unwanted and even unsafe correlations <cit.>.
The opaque distributed latent representations of the input do not readily reveal which interpretable concepts have been learned, nor what reasoning is applied to them for obtaining the output.
This is a pity, not least because that hinders verification of ethical and safety properties.
Take as an example the ontological commitment: Which hierarchical subclass-relations between concepts are considered? An example is shown in Fig. <ref>.
This directly encodes the learned bias, which commonalities between classes are taken into account, and which of these are predominant for differentiating between classes.
The same example also nicely illustrates the issue with wrongly learned knowledge: The models may focus on irrelevant but correlated features to solve a task, such as typical background of an object in object detection <cit.>.
A whole research field, explainable artificial intelligence (XAI), has evolved that tries to overcome the lack of DNN interpretability <cit.>.
To date it is possible to partly associate learned representations with interpretable symbolic concepts (1-ary predicates) <cit.>,
such as whether an image region is a certain object part (e.g., isLeg), or of a certain texture (e.g., isStriped) <cit.>.
However, extraction of learned relations is so far focused on simple semantic similarity of concepts <cit.>;
hierarchical relations that hold across subsequent layers, i.e., across subsequent encoding steps <cit.>;
or hierarchies obtained when subdividing a root concept <cit.>.
And while first works recently pursued the idea of extracting superclass hierarchies from given leaves, these are still limited to simple classifier architectures <cit.>.
A next step must therefore be: Given a set of (hierarchy leaf) concepts, how to extract (1) the unifying superclasses, and (2) the resulting class hierarchy with subclass relationships from any semantically rich intermediate output of a DNN, preferably from the embedding space of foundation models.
Approach.
We here propose a simple yet effective means to get hold of these encoded class hierarchies in foundation models;
thereby taking another step towards unveiling and verifying the ontological commitment of DNNs against known QR models and ontologies.
Building on <cit.> and <cit.>,
our approach leverages two intrinsic properties of the considered computer vision models:
* Vision DNNs generally encode learned concept similarities via distances in their latent representation vector space <cit.>.
This makes it reasonable to find a hierarchy of superclass representations by means of hierarchical clustering <cit.>.
* Foundation models accept textual descriptions as inputs, trained for text-to-image alignment.
This allows to cheaply establish an approximate bijection of textual concept descriptions to representations: A description is mapped by the DNN to a vector representation, and a given representation is assigned to that candidate textual description mapped to the most similar (=close by) vector <cit.>.[
This could be replaced by the mentioned approximate concept extraction techniques for models without decoder and text-to-image alignment.]
Contributions.
Our main contributions and findings are:
* An approach
to extract and complete a simple learned ontology, namely a superclass hierarchy with given desired leaf concepts (<ref>), from intermediate representations of any multimodal DNN, which allows to manually validate DNN-learned knowledge against QR models (see <ref>);
* An approach to test the consistency of multimodal DNNs against a given class hierarchy, e.g., from standard ontologies;
* An initial experimental validation showing that
the approach can extract meaningful ontologies,
and reveal inconsistencies with given ontologies;
* A thorough discussion of potential applications for QR extraction and insertion from / into DNNs.
§ RELATED WORK
Extraction of learned ontologies.
Within the field of XAI <cit.>,
the subfield of concept-based XAI (c-XAI) has evolved around the goal to associate semantic concepts with vectors in the latent representations <cit.>.
For analysis purposes, methods here allow to both extract representations which match given concept specifications (supervised approach) <cit.>
as well as mine meanings for the most prevalent representations used by the DNN (unsupervised approach) <cit.>.
Notably, we here utilize the supervised approach by Yuksekgonul et al. <cit.>
which directly utilizes the text-to-image alignment in multimodal DNNs.
Such associations have found manifold applications in the inspection of DNNs' learned ontology, such as:
Which concepts from a given ontology are learned <cit.>?
And how similar are representations of different concepts <cit.>?
This was extended to questions about the QR of the models, such as
sensitivity of later concept representations (or outputs) to ones in earlier layers <cit.>,
or compliance with pre-defined logical rules <cit.>.
However, very few approaches have so far explored more specific relations between concept representations within the same layer's representation space, in particular relations beyond general semantic similarity, such as class hierarchies. This is a severe gap when trying to understand the learned ontological relations between concepts:
DNNs develop increasing levels of abstraction across subsequent layers <cit.>,
rendering the concepts occurring in their representation spaces hardly comparable.
Notably, Wan et al. <cit.> challenged this gap and applied hierarchical clustering on DNN representations. However, their association of given concepts to latent representations is limited to the last layer's output class representations, a limitation we want to resolve.
Furthermore, existing work was devoted only to single kinds of relations. We here want to show that these efforts can be unified under the perspective of investigating ontological commitment of DNNs.
§ BACKGROUND
§.§ Deep neural network representations
DNNs.
Mathematically speaking, deep neural networks are (almost everywhere) differentiable functions F:R^n→R^m
which can be written in terms of small unit functions, the so-called neurons f:R^n→R, by means of
the standard concatenation operation f∘ g: x↦ f(g(x)),
linear combination x↦ Wx+b,
and product a, b↦ a· b.
Typically, the linear weights W and biases b serve as trainable parameters, which can be optimized in an iterative manner using, e.g., stochastic gradient descent.
Neurons are typically arranged in layers, i.e., groups where no neuron receives outputs from the others.
Due to this Lego-principle, DNNs are theoretically capable of approximating any continuous function (on a compact subspace) up to any desired accuracy <cit.>,
and layers can be processed highly parallel.
In practice, this is a double-edged sword: DNNs of manageable size
show astonishing approximation capabilities for target functions like detection or pixel-wise segmentation of objects in images <cit.>.
However, they also tend to easily extract irrelevant correlations in the data, leading to incorrect <cit.>
or even non-robust <cit.>
generalization respectively reasoning on new inputs.
Latent representations.
In the course of an inference of an input x, each layer L of the DNN produces as intermediate output a vector F_→ L(x)∈R^n, each entry being the output of one of the n neurons of L.
This vectorial encoding of the input is called the latent representation of the input within L, and the vector space R^n hosting the representations is called the latent space.
Interestingly, it was shown that DNNs encode semantically meaningful information about the input in their latent representations, with abstraction increasing the more layers are passed
(e.g., starting with colors and textures, to later develop notions of shapes and objects) <cit.>.
Concept embeddings.
An emergent property of these representations is that in some layers, a concept C
(e.g., color Red, or object part Leg),
can be encoded as prototypical vector e(C) within this latent space. These are called concept (activation) vectors <cit.>
or concept embeddings <cit.>.
The mapping e:𝒞→R^n from a set of human-interpretable concepts to their embeddings even preserves semantic similarities to some extent:
Examples are the reflection of analogical proportions <cit.>
in word vector spaces (DNNs with textual inputs trained for natural language processing),
like
e(King)-e(Queen)=e(Man)-e(Woman)
<cit.>;
and their analogues in standard computer vision architectures trained for object classification or detection: e(Green)+e(Wood)=e(Tree) <cit.>.
Our approach relies on these natural translation of semantic to vector operations/properties.
In particular, we assume that the relation IsSimilarTo[We here assume that IsSimilarTo is reflexive and symmetric, following geometrical instead of psychological models of similarity <cit.>.]
on input instances x is mapped to some distance metric d like Euclidean or cosine distance by the DNN representations:
∀ C, C': IsSimilarTo(C, C') ⇔ d(e(C), e(C'))≈0.[
For optimization, the relative formulation can be more convenient:
∀C,C', C”C more similar to C' than to C”⇒ d(e(C),e(C'))≤ d(e(C),e(C”)).]
Concretely, we use the translation of similarity relations to find a superclass concept representation via interpolation.
Text-to-image alignment.
In the case of multimodal DNNs that accept both textual and image inputs, the training often encompasses an additional (soft) constraint:
Given textual descriptions of an input image, these must be mapped to the same/a similar latent representation as their respective image.
While pure language models suffer from the impossibility to learn the true meaning of language concepts without supervision <cit.>,
this additional supervision might help the model to develop representations that better match the human understanding of the word/concept.
We here leverage this intrinsic mapping to associate textual or graphical descriptions of our concepts with latent representations.
When using textual descriptions, good text-to-image alignment is an
important assumption; but, sadly, even with explicit training
constraints this is not guaranteed <cit.> (cf. distance of image and text embeddings in <ref>).
We show both the influence of text-to-image alignment on our method, how it can be reduced, and how to use our method in order to identify issues with the learned meaning of concepts, which opens up options to fix the representations.
§.§ Ontologies
When modeling any problem or world, a basis of the model is to know
what the model is talking about.
This is exactly answered by the underlying ontology, i.e., a definition of what categories/properties and relations are used in the model.
We here adopt the definition from <cit.>.
An ontology is a pair (𝒱, 𝒜) constituted by
a vocabulary 𝒱=𝒞∪ℛ of
a set of unary predicates 𝒞 (the concepts corresponding to class memberships and other properties) and
a set of binary predicates ℛ (the instance relations)
used to describe a certain reality,
and which are further constrained by a set 𝒜 of explicit assumptions in the form of a first- (or higher-)order logic theory on the predicates.
A relation we will use further is IsSimilarTo∈ℛ. Also spatial relations like IsCloseBy <cit.> and LeftOf, TopOf, etc. <cit.> have been defined and used in literature for latent space representations of objects.
Simple examples of assumptions that relate the concept sets are, e.g., the subclass relationship we investigate in this paper: IsSuperclassOf(C',C) :⇔ (∀ vC(v) ⇒C'(v)) (cf. <ref>).
This can also be seen as a relation between concepts, by interpreting the unary concept predicates C as sets of objects (e.g., classes) via v∈ C :⇔ C(v).
The validity of concept embeddings also gives rise to assumptions about concepts (∀ v C(v) ⇔IsSimilarTo(v, e(C))).
Note that, given embeddings, we can formulate relations between concepts using instance relations R∈ℛ via R(C,C') :⇔ R(e(C), e(C')). An example would be isSimilarTo(cat,dog).
The first challenge in extracting learned QR from DNNs is to find/explain the ontology that is used within the reasoning process of the DNN. Unraveling an ontology as done in <ref> above breaks this step roughly down into:
* Find the concepts 𝒞 (and their embeddings) used by the model.
* Find the relations ℛ that may be formulated on vector instances.
* Simple assumptions 𝒜_s⊆𝒜: How are concepts related.
* Identify further assumptions 𝒜∖𝒜_s that the model applies.
Note that the layer-wise architecture of DNNs partitions the representations into objects (vectors) in the different latent spaces. For a layer L we denote v in the latent space of L as L(v).
This gives rise to a partition of the concept, relation, and assumption definitions, allowing to conveniently split up above steps as follows:
* What concepts 𝒞_i⊂𝒞 are encoded within the ith layer L_i
(∀ C∈𝒞_i, v L_i(v)⇒ C(v))?
* What assumptions 𝒜_i,i hold for which items within the same ith latent space
(∀ A∈𝒜_i,(v^(s))_s⋁_s L_i(v^(s)) ⇒ A(v^(1), …))?
* What assumptions 𝒜_i,j, i ≠ j, hold between items of different latent spaces?
Task <ref> is (somewhat) solved by methods from c-XAI, where both learned concepts <cit.>
as well as their distribution over different layer representation spaces <cit.>
are investigated.
<ref> and <ref> show the yet-to-be-filled gaps: Investigated relations between items, item groups respectively concepts within the same arbitrary latent space (=<ref>). These so far only concern general semantic similarity, and relations across latent spaces only sensitivity. That falls far behind the richness of natural language; in particular it misses out on concept and instance relations of the kind C is similar to C' with respect to feature F respectively C, C' both are F, and counterpart C differs from C' with respect to feature F[
C, C' both are F (∀ x(C(x)∨C'(x))⇒F(x)) rewrites to IsSuperclassOf(F,C)∧IsSuperclassOf(F,C'); the differs-case to IsSuperclassOf(F,C)∧¬IsSuperclassOf(F,C').
].
In other words, the relation IsSuperclassOf is missing, despite known to be learned <cit.>.
This inhibits the expressivity of extracted constraints such as obtained in <cit.>,
as this directly relies on the richness of available vocabulary.
The method proposed in this paper thus sorts in as follows: We extend the extraction of relations relevant to point <ref> (relations amongst concepts within the same layer representation space) by allowing to extract the IsSuperclassOf relation between concepts.
§.§ Hierarchical clustering
Hierarchical clustering <cit.> aims to find for a given set M a chain of partitions ℳ_1≤ℳ_2≤…≤{M} connected by inclusion[
To be precise: ℳ≤ℳ' ⇔∀ M∈ℳ∃ M'∈ℳ' M⊆ M'
], i.e., assign each point in M to a chain of nested clusters M_1,i_1⊆ M_2,i_2…⊆ M, as illustrated in <ref>.
Such a hierarchy can be depicted using a dendrogram as in <ref>.
There are two regimes for hierarchical clustering: Divisive breaks up clusters top-down, while agglomerative starts from the leaves ℳ_1={{p}| p∈ M} and iteratively merges clusters bottom-up <cit.>.
We here employ hierarchical clustering to find a hierarchy of subsets of latent representation vectors. Since we start with given leaf vectors, this work uses standard agglomerative hierarchical clustering <cit.>.[We here use the scikit-learn implementation at <https://scikit-learn.org/stable/modules/generated/sklearn.cluster.AgglomerativeClustering.html>]
This optimizes the partitions for small distance between the single points within a cluster (affinity) and a large distance between the sets of points making up different clusters (linkage),
typically at a complexity of 𝒪(|M|^3).
§ APPROACH
This section details our approach towards extracting a globally valid approximation of a DNN's learned concept hierarchy, given the hierarchy's desired leaf concepts. The goal is to allow manual validation or verification testing against existing ontologies from QR.
Recall that this both requires a guided exploration of the learned concepts (which parent classes did the model learn?), as well as an exploration of the applicability of the superclass relation (which superclasses/features are shared or different amongst given concepts?).
We will start in <ref> by detailing how to obtain the extracted class hierarchy (here simply referred to as ontology). This is followed by an excursion on how to conduct a kind of instance-based inference using the global taxonomy (<ref>), which is then used in <ref>, where we discuss techniques for validation and verification of DNN-learned knowledge.
§.§ Extracting an ontology
Overview
The steps to extract our desired ontology are (explained in detail further below):
(1) obtain the embeddings e(C_i),
(2) apply hierarchical clustering to obtain superclass representations as superclass cluster centers,
(3) decode the obtained superclass representations into a human-interpretable description.
Ingredients.
We need as ingredients
our trained DNN F,
some concept encoder e (in our case defined using the DNN, see Step 1 below),
the finite set (C_i)_i=𝒞_leaf of leaf concepts for which we want to find parent classes, and
the choice of layer L in which we search for them.
Furthermore, to ensure human interpretability of the results, we constrain both our leaf concepts as well as our solution parent concepts to come from a given concept bank 𝒞 of human-interpretable concepts[
The concept bank restriction makes this essentially a search problem.].
We furthermore need per concept C∈𝒞:
A textual description toText(C) of C as textual specification;
optionally a set toImages(C) containing the concept as graphical specification (see Step 1), as available, e.g., from many densely labeled image datasets <cit.>; and
optionally a set Parents(C) of candidates for parent concepts of C (for more efficient search).
The following assumptions must be fulfilled, in order to make our approach applicable:
* Text-to-image alignment:
The DNN should accept textual inputs, and be trained for text-to-image alignment,
such that for a suitable textual description T of any concept C∈𝒞 one can reasonably assume e(C)≈ F_→ L(T).
We use this to find embeddings: The embedding of a visual concept C can be set to the DNN's text encoding F_→ L(T) of a suitable textual description T of C.
* Existence of embeddings:
For all leaf concepts, embeddings e(C_i) of sufficient quality exist in the latent space of L.
* Concentric distribution of subconcepts:
Representations of subconcepts are distributed in a concentric manner around its parent.
Generally, this does not hold <cit.>,
but so far turned out to be a viable simplification as long as semantic similarities are well preserved by the concept embedding function e <cit.>.
I.e. for a superclass concept Parent with children set 𝒞_S we can choose
e(Parent)≈ mean_Child∈𝒞_S e(Child)
* Semantic interpolatability:
Consider a latent representation v that is close to or inbetween (wrt. linear interpolation) some embeddings e(C_i) and e(C_j). We assume that v can be interpreted to correspond to some concept, i.e.,
∃ C∈𝒞: ‖e(C)-v‖_2<ϵ for some admissible error ϵ.
This is needed to make the averaging in the parent identification in (<ref>) above meaningful.
Note that Assumption <ref><ref> is very strong, stating that there is a correspondence between the semantic relations of natural language concepts, and the metric space structure of latent spaces. This is by no means guaranteed, but according to findings in word vector spaces <cit.> and also image model latent spaces <cit.> a viable assumption for the structure of learned semantics in DNNs.
Step 1: Obtain the embeddings e(C_i).
We here leverage the text-to-image alignment to directly define the concept-to-vector mapping e:
e(C) ≔ mean_x∈toDNNInput(C) F_→ L(x).
Following <cit.>, the toDNNInput function can be a mapping from concept to a single textual description <cit.> or to a set of representative images <cit.>.
* Textual concepts:
The naive candidate for a textual description is toDNNInput(C) ≔ toText(C).
However, some additional prompt engineering may be necessary, i.e., manual adjustment and finetuning of the formulation <cit.>.
For example, following <cit.>
we replace C by the prompt “an image of C” for the prompting.
* Visual concepts:
Here we take the graphical toImages(C) specification of our concept.
One could then employ standard supervised c-XAI techniques to find a common representing vector for the given images, e.g., as the weights of a linear classifier of the concept's presence <cit.>.
We here instead simply feed the DNN with each of the images and capture its respective intermediate latent representations, which is valid due to the concentricity assumption.
If the text-to-image alignment is low, we found image representations of concepts to yield more meaningful results.
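The following sketch illustrates Step 1 (our own illustrative code; encode_text and encode_image stand for the foundation model's text and image encoders, which are assumed to be given and to map into the shared latent space of layer L; the unit-norm step is a convenience for later cosine comparisons):

```python
import numpy as np

def concept_embedding(concept, encode_text, encode_image, images=None):
    """Embed a concept either from a prompt or from example images."""
    if images:   # visual concept: average the image representations
        vecs = [encode_image(img) for img in images]
    else:        # textual concept: encode a simple prompt
        vecs = [encode_text(f"an image of {concept}")]
    v = np.mean(np.stack(vecs), axis=0)
    return v / np.linalg.norm(v)
```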
Step 2: Hierarchical clustering.
Employ any standard hierarchical agglomerative clustering technique to find a hierarchy of partitionings of the set of given concept embeddings.
Each partitioning level represents one level of superclasses, with one cluster per class (see the simple example in <ref>).
As of (<ref>), the mean of the cluster's embedding vectors is the embedding of its corresponding superclass (the cluster center).
Note that the hierarchical clustering in principle allows to: (a) start off with more than one vector per leaf concept, e.g., coming from several image representations or from jointly using embeddings from textual and image representations; (b) weight the contribution of each child to the parent. This, however, is only viable together with means to automatically determine the weights, and not further pursued here.
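Step 2 can be implemented directly with the scikit-learn agglomerative clustering referenced above; the sketch below (our own code, linkage choice is an assumption) additionally records, for every merge, the parent embedding as the mean over the member leaf embeddings:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def build_hierarchy(leaf_embeddings, linkage="average"):
    """Full merge tree over the leaf embeddings plus parent cluster centers."""
    X = np.asarray(leaf_embeddings)
    n = len(X)
    tree = AgglomerativeClustering(
        n_clusters=None, distance_threshold=0.0, linkage=linkage
    ).fit(X)

    members = {i: [i] for i in range(n)}     # leaf indices belonging to each node
    parents = []
    for step, (a, b) in enumerate(tree.children_):
        node_id = n + step
        members[node_id] = members[a] + members[b]
        center = X[members[node_id]].mean(axis=0)   # parent embedding as member mean
        parents.append({"id": node_id, "children": (int(a), int(b)), "center": center})
    return parents
```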
Step 3: Decoding of cluster centers.
We here use a two-step search approach to assign each cluster center a concept from the concept bank 𝒞.
Given a cluster center p, the first optional step is to reduce the search space by selecting a subset of candidate concepts from 𝒞. Following <cit.>,
(1a) we collect for every leaf concept C the set of those concepts that, according to the ConceptNet knowledge graph <cit.>,
are related to C by any of the relations in ℛ_concepts={hasA, isA, partOf, HasProperty, MadeOf}:
Parents(C) ≔ {P|⋁_R∈ℛ_concepts R(P, C)} .
(1b) The union 𝒫=⋃_C leaf in clusterParents(C) of these sets serves as candidate set for p.
Note that this is a simplification that allows to capture as superclass any best fitting commonality between the leaf concepts (e.g., background context like indoor or biological relation like mammal for {cat,dog} as in <ref>). Generally, there is a trade-off between very specific relation definitions, and fidelity to the learned knowledge of the model. The trade-off can be controlled by the broadening or narrowing of the candidate set. The here chosen broad definition of the IsSuperClass relationship between concepts favors fidelity to the model's learned knowledge. Investigating effects of more narrow concept candidate sets is future work.
(2) In the second step, the concept for p is then selected from the candidate set 𝒫 to be the one with minimum distance embedding (embeddings again obtained as in Step 1):
e^-1(p) ≔ argmin_P∈𝒫 ‖p-e(P)‖_2.
The final result then is a hierarchy tree, where leaf nodes are the originally provided concepts, inner nodes are the newly extracted superclasses, and the connections represent the IsSuperclassOf relation.
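Step 3 then reduces to a nearest-neighbor search over the candidate set, e.g. (illustrative sketch with our own naming; embed is the concept-to-vector mapping from Step 1):

```python
import numpy as np

def label_cluster_center(p, candidates, embed):
    """Assign the candidate parent concept whose embedding is closest to center p."""
    distances = {c: np.linalg.norm(p - embed(c)) for c in candidates}
    return min(distances, key=distances.get)
```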
In the experimental section we will more closely investigate the influence of the proposed variants with/without prompt engineering and with/without finetuning.
§.§ Inference of an ontology
The such obtained ontology can be used for outlier-aware inference, i.e., classification of new input samples to one of the leaf concepts.
This will be useful not only as an interesting standalone application in safety-relevant classification scenarios,
but in particular for the validation.
The baseline of the inference is the k-nearest neighbor classifier: It directly compares the latent representation of a new input with each available concept embedding; and then assigns the majority vote of the k nearest concept embeddings.
To enrich the inference process with information from the ontology, one instead traverses the ontology tree, at each node branching off towards the closest child node.
Note that this allows to easily insert an outlier criterion: If at a parent class P none of the children nodes is closer than a threshold, the sample is considered an outlier of class P. This neatly preserves the maximum amount of information available about the properties of the sample, and, thus, eases subsequent handling of the unknown input. For example, an outlier of (parent-)class StaticObject should be treated differently than one of (parent-)class Animal.
Hyperparameters of this inference procedure are the choice of similarity, including whether to take into account the size (variance/width) of the cluster, e.g., by favoring wide over near-to-point-estimate clusters; and the threshold for being an outlier.
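A minimal sketch of this inference procedure is given below; it assumes each tree node stores its concept embedding and children, and the node structure, distance function, and threshold value are illustrative choices rather than fixed parts of our method:

import numpy as np

def cosine_dist(a, b):
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

class Node:
    def __init__(self, name, embedding, children=()):
        self.name, self.embedding, self.children = name, embedding, list(children)

def classify(x, root, dist=cosine_dist, threshold=0.35):
    # descend towards the closest child; stop early if no child is close enough
    node = root
    while node.children:
        best = min(node.children, key=lambda c: dist(x, c.embedding))
        if dist(x, best.embedding) > threshold:
            return node.name, True   # outlier of the current (parent) class
        node = best
    return node.name, False          # reached a leaf concept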
§.§ Validating and comparing learned ontologies
We now get to the core goal of this paper: Verify or validate a given DNN using QR. For this we start with validation of an extracted ontology from <ref>, and discuss how to measure its fidelity to DNN learned knowledge, and alignedness to human prior knowledge, which here corresponds to the expected image-to-concept matching. Lastly, we show how one can encode a given ontology as contextualized embeddings to verify a DNN against given prior knowledge from QR.
Human-alignedness.
One main desirable of a DNN's ontology is that it well aligns with the semantics that humans would expect and apply for the respective task. Any mismatch may either bring insights to the human on alternative solutions, or, more probably, indicates a suboptimal solution or even Clever Hans effect of the learned representations.
A straight-forward way to measure the human-alignedness is to test the prediction accuracy of the ontology when used for inference (see <ref>) on human-labeled samples. If human labels deviate often from the predictions, this indicates a bad alignment of the semantics the DNN has learned for the concepts from those a human would expect.
Other means to estimate the human-alignedness (not yet investigated in this work) are direct qualitative user studies, where human evaluators manually check the consistency of the obtained ontology tree with their own mental model;
or automatic checking of consistency against given world knowledge or common sense ontologies like Cyc <cit.>.
Lastly, the improvement in humans' predictions about the behavior of the model, a typical human-grounded XAI metric <cit.>,
could quantify in how far humans can make sense of the ontology.
A different aspect of human-alignedness is how well the ontology, in particular the inference scheme it defines, generalizes to novel concepts (semantic outliers) that so far have not occurred in leaves or nodes. The generalization can be measured as the performance in assigning a correct parent node.
A special case here are blended cases where the novel concept unifies features of very different classes, such as a cat with wheel as walking support. The uncertainty of the model in such blended cases can be qualitatively compared against human one, potentially uncovering a bias.
Text-to-image alignment.
The to-be-expected performance of cross-modal inference of the ontology (i.e., ontology defined using textual concepts, but inference done on images) directly depends on the quality of the text-to-image alignment. This motivates a use as an indicator for suboptimal text-to-image alignment.
Fidelity.
Fidelity of the ontology, respectively shortcomings in the simplified modeling of the ontology, can be measured by the deviation between the baseline inference directly on the leaves, and the ontology inference.
Inference on the leaf concepts C_i means we predict for an image x the output class C for which the textual embedding is closest to the embedding of x, proximity measured with respect to some distance d (here: cosine similarity):
C ≔ argmin_C'∈(C_i)_i d( F_→ L(toText(C')), F_→ L(x) )
This is referred to as naive zero-shot approach, following research on using foundation models on specialized tasks without finetuning (=with training on zero samples) <cit.>.
The reason to choose this as a baseline is that the ideal tree should sort samples into the same leaf neighborhood as direct distance measurement would do. Simplifications that may infringe this equality are unequal covariances (≈ widths) of sibling class clusters; the chosen similarity measure; or assuming perfect text-to-image alignment.
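As a sketch, this naive zero-shot baseline on the leaf concepts can be implemented directly with the publicly available CLIP package; the backbone, prompt template, and class names below are illustrative (matching our CIFAR-10 setting), not the exact evaluation script:

import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

classes = ["airplane", "ship", "car", "truck", "bird",
           "cat", "dog", "deer", "horse", "frog"]
tokens = clip.tokenize([f"a photo of a {c}" for c in classes]).to(device)
with torch.no_grad():
    text_emb = model.encode_text(tokens)
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)

def zero_shot_predict(image_path):
    # assign the leaf concept whose text embedding is closest (cosine similarity)
    image = preprocess(Image.open(image_path)).unsqueeze(0).to(device)
    with torch.no_grad():
        img_emb = model.encode_image(image)
        img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    return classes[int((img_emb @ text_emb.T).argmax())]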
Verification against a given ontology.
The previous extraction techniques yield an inspectable representation of the ontology learned by a model. This allows manual validation of the learned knowledge against models from QR.
Alternatively, one could directly verify a multimodal model against consistency with a given ontology: In short, we propose to modify the leaf concept embeddings from Step 1 such that they additionally encode their local part of the ontology, i.e., information about all desired parents of the leaf, as context.
One can then measure the performance of naive inference (see <ref>) on these contextualized leaf nodes as defined in (<ref>).
A higher performance then means a better alignment of the context of a leaf concept with its image representations. This even would allow to narrow down unalignedness to specific concepts (those with bad inference results).
We suggest the textual encoding as the point of attack for contextualization: Let C be a leaf concept at depth d in the tree with chain of parents (P_i)_i=1^d from root to leaf. We can now follow <cit.> and modify the original tT=toText function of a leaf concept to:
toText'(C) ≔ tT(P_1), …, tT(P_d), tT(C)
E.g., cat may turn into animal, pet, cat.
The effect is that the obtained embedding (possibly after prompt finetuning as above) is shifted towards including the desired context;
and all leaves together encode the complete ontology.
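The contextualization itself only requires rewriting the leaf prompts. A minimal sketch follows, where the toy parent chains and the exact prompt format are assumptions made for illustration:

def to_text(concept):
    return f"a photo of a {concept}"          # baseline leaf prompt

def to_text_contextualized(leaf, parent_chain):
    # toText'(C): prepend the parents from root to direct parent, e.g.
    # ("animal", "pet") + "cat" -> "animal, pet, cat"
    return ", ".join(list(parent_chain) + [leaf])

given_ontology = {"cat": ("animal", "pet"), "truck": ("vehicle",)}   # toy example
prompts = {c: to_text_contextualized(c, ps) for c, ps in given_ontology.items()}

The contextualized prompts are then embedded and used in place of the plain leaf prompts in the naive zero-shot inference described above.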
§ EXPERIMENTS
§.§ Settings
Models under test.
In our experiments, we utilized CLIP <cit.>, one of the first multimodal foundation model families accepting both text and images <cit.>.
For text-to-image alignment CLIP was trained to map an image and its corresponding text descriptions onto a similar (with respect to cosine similarity) latent space representation.
This general-purpose model captures rich semantic information, and achieves impressive performance compared to task-specific models across various applications, including image captioning <cit.>, recognition of novel unseen objects <cit.>, and retrieval tasks <cit.>.
This makes it a common choice as basis for training or distilling more specialized models <cit.>,
and thus a highly interesting target for validation and verification of its learned knowledge and internalized QR.
In our experiments, we explored various CLIP backbones, including ResNet-50, as well as Vision Transformer (ViT) variants featuring different patch sizes and model capacities (e.g., ViT-B/32, ViT-L/14)[Pre-trained models and weights were obtained from: <https://github.com/openai/CLIP>].
Dataset.
The CIFAR-10 dataset <cit.> is a benchmark in the field of computer vision, consisting of 60,000 32×32 color images, split into 50,000 training and 10,000 test images. The images are equally distributed onto the 10 diverse classes airplane, ship, car, truck, bird, cat, dog, deer, horse, frog.
The choice of classes suits our initial study well, as they both exhibit pairs of semantically similar objects (e.g., car, truck), as well as mostly unrelated ones (e.g., car, cat), so we can expect a deep class hierarchy.
In our study, we conduct inference both of the baseline (naive zero-shot) and the proposed method on the CIFAR-10 test dataset <cit.>.
Fidelity baseline.
As discussed in <ref>, the inference on the leaf concepts (naive zero-shot approach) serves as baseline (maximum performance) for fidelity measurements. The closer the tree inference gets to the naive zero-shot performance, the higher the fidelity.
We here choose as distance metric the cosine distance CosDist(a,b) ≔ 1 - (a·b)/(‖a‖ ‖b‖) (0 for a,b parallel, 1 for orthogonal, 2 for a=-b), going along with the training of CLIP.
Metrics.
Any quantitative classification performances are measured in terms of accuracy of the results on CIFAR-10 test images against their respective ground truth label.
§.§ Ablation Study: Influences on Human-Alignedness and Fidelity of Ontology Extraction
As detailed in <ref>, to measure the human-alignedness of the given multi-modal encoder model, we evaluated the performance when using our extracted ontology for inference of class labels on new images.
And as a fidelity indicator, we measure the performance drop between inference on the leaves (naive zero-shot approach) against that of inference on our tree.[
Performance against a ground truth is only a proxy; future experiments should directly compare predictions of the two.]
Both are measured in the course of an ablation study to identify the influence of different settings on the ontology's usefulness and quality.
Investigated influences.
Both the ontology extraction by means of agglomerative hierarchical clustering (see <ref>, as well as later the inference on new samples (see <ref>) rely on measuring similarities between embedding vectors.
However, due to being automatically optimized, the embeddings' optimal similarity metric is unknown.
Hence, we treat each choice of similarity metric as a hyperparameter, and investigate their influence on human-alignedness of the extracted ontology:
* Affinity:
Affinity typically influences which data points are most similar, i.e., closest related, in the final tree structure.
In our experiments, we tested the standard Manhattan (L_1), Euclidean (L_2), and cosine distances.
* Linkage:
This parameter determines the criterion used to merge clusters during the hierarchical clustering process, and in particular affects the shape and compactness of the clusters.
In our experiments, we tested the standard settings of Ward, complete, average, and single linkages.
Ward linkage minimizes the variance within clusters, while complete / average / single linkage focuses on the maximum / average / minimum distance between clusters.
* Inference similarity:
We use the same choices as for affinity.
Next, we compare different settings for obtaining the leaf embeddings. The following variants are considered:
* Prompt tuning:
In case text embeddings are to be obtained, CLIP suggests using text prompts in the form a photo of a classname rather than simply classname, because the model is trained on image captions as text. If applied, this augmentation is done for both leaf and parent node textual embeddings.
* Text encoding vs. few-shot image encoding:
As described in <ref>, Step 1, the two different approaches to obtain leaf embeddings are text encoding and image encoding. We here only consider few-shot image encoding, i.e., specifying the concept via <10 images, which ensures manageable complexity of the hierarchical clustering algorithm[
Standard implementations have a complexity of 𝒪(n^3) for n leaf samples.].
Results.
An illustrative example of an ontology extracted from CLIP (ViT-L/14 backbone) using the prompt a photo of a classname is provided in <ref> for the settings found to be optimal in the ablation study.
Consistently optimal hyperparameter settings with respect to human-alignedness and fidelity turned out to be affinity=Manhattan, linkage=complete, and inference similarity=cosine, which were also used to create the remainder of the ablation studies.
The accuracy results on CIFAR-10 of inference using the extracted ontology versus the naive zero-shot approach as a baseline for fidelity are given in Tabs. <ref> for the prompt engineering settings, and <ref> for the comparison of text and image encodings of the leaves.
Please note that we did not yet conduct a cross-validation,
so the results should primarily serve as a guide for further investigations.
First findings.
In advance we manually validated the assumption of a good text-to-image alignment (Assumption <ref><ref>). For this we visualized the distribution and class separability of text and CIFAR-10 test sample embeddings in the latent spaces of the different CLIP backbones, results shown in <ref>.
The dimensionality-reduced visualizations suggest that with increasing parameter number, the clusters of different classes become more distinctly separated; and transformer-based backbones demonstrate superior separation.
Notably, across all backbones, the text inputs and images are encoded in separate regions of the latent space, indicating a clear distinction between these two modalities in the model's internal representation.
The prompt engineering, i.e., replacing the text prompt classname with a photo of classname, turned out to have a strong positive impact on human-alignedness and fidelity in the case of the worse-aligned CNN-based CLIP backbone, and still a notable one for the already good transformer backbones.
In contrast, using few images instead of text to obtain the leaf embedding resulted in worse performance. However, in our initial tests performance seemed to increase with the number of images: Dropping the few-shot constraint showed competitive results. In the following table, we replaced the leaf node information with the randomly-sampled training images in the respective class.
It should be noted, that a better performance of the textual embedding could possibly be attributed to a sub-optimal text-to-image alignment. This would be consistent with the insights into the distribution and class separability of image and text embeddings in the latent space in <ref> (with respect to Euclidean distance).
It should be further investigated, whether this must be attributed to disparity in metrics, the domain shift to CIFAR-10 inputs, or could serve as an indicator for bad text-to-image alignment wrt. the considered classes.
§.§ Ontology validation and verification
Validation: qualitative results.
A manual inspection of the obtained ontologies (see <ref> for an example) showed that good human-alignedness also coincides with seemingly valid tree structures.
Seemingly valid here means that a human inspector can easily find convincing arguments for the validity of most of the splitting criteria of the nodes.
In <ref>,
two trees which are created with different parameters are compared. The tree on the left, which uses ViT-L/14 as a backbone and agglomerative clustering with a Manhattan affinity, achieves 92% accuracy on the classification task. In contrast, the tree on the right, created with a ResNet-50 backbone and a Euclidean affinity, yields an accuracy of 45%. One of the reasons for the low accuracy score in the classification task for the tree on the right is that its decision process does not align well with human-like decision-making. For example, the structure first checks whether an object is a "vehicle" and then whether it is "meat". This decision process deviates from human-aligned reasoning, which can also be observed through manual inspection.
Furthermore, we identified the tendency that the superior vision transformer backbones also showed the seemingly more valid tree structures.
This possible architectural dependency of good ontological commitment should be further investigated.
Verification against a given ontology.
To exemplify the verification of ontological commitment against a given ontology, we chose the simple tree structure provided by <cit.> for CIFAR-10 dataset. To label the inner nodes of this tree, we utilized two external knowledge sources: WordNet <cit.> and GPT-4 <cit.>, in each case bottom-to-top queried for a textual description of a parent for sibling nodes.
We then used the ontology information to create contextualized leaf embeddings, as described in <ref>, and applied naive zero-shot inference on these contextualized leaves.
For WordNet, we labeled each node with the closest matching superclass. For GPT-4, we queried the model to provide the superclass of the given leaf nodes.
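For the WordNet labelling, a simple bottom-to-top query can be realized via the lowest common hypernym of sibling concepts; the first-sense heuristic used below is a simplifying assumption of this sketch:

from nltk.corpus import wordnet as wn   # requires nltk.download("wordnet")

def wordnet_superclass(concept_a, concept_b):
    # label the parent of two sibling nodes with their lowest common hypernym
    syn_a = wn.synsets(concept_a, pos=wn.NOUN)[0]
    syn_b = wn.synsets(concept_b, pos=wn.NOUN)[0]
    common = syn_a.lowest_common_hypernyms(syn_b)
    return common[0].lemma_names()[0] if common else None

print(wordnet_superclass("cat", "dog"))   # e.g. 'carnivore'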
Initial verification results for the different given ontologies are shown in <ref>:
As expected, using the extracted learned ontology for the contextualization caused no change compared to the baseline of non-contextualized embeddings; this contextualization is supposed to be equivalent to the non-contextualized leaf embeddings from the perspective of the model.
However, the contextualization with external ontologies caused a strong drop in inference accuracy.
A closer look at the results showed that those leaves with parents mentioning technical terms (e.g., non-mammalian vertebrate) were mostly misclassified, indicating that the learned knowledge is inconsistent / not aware of these parts of the given ontologies.
Further research is needed on practical implications (e.g., thus induced error cases), and how to align the ontologies.
§ FUTURE WORK: APPLICATIONS AND NEXT STEPS
§.§ Applications of learned ontology extraction
Our method opens up several further interesting applications for the use of QR in DNN understanding, verification, and improvement.
Optimal learned reasoning representations.
As discussed above, access to the internal ontology of a DNN is key to understand its internal QR. In particular, an open research question is,
what kind of concept representations are DNNs optimized for, and, subsequently,
which kinds of reasoning would be supported by this? For example, qualitative spatial reasoning
would most benefit from a region-based representation of concepts,
while cone-based reasoning from cones as representations <cit.>.
The quantitative measurement of ontological commitment allows to do ablation studies on different representations of concepts and relations, e.g., different similarity measures.
DNN inspection.
The obtained ontologies open up new inspection possibilities for DNNs.
An interesting one could be to generate contrastive examples <cit.>:
Change a given input minimally such that the class/superclass changes, possibly under a constraint to remain within a given superclass.
Also, one could globally test the models for biases towards scenarios or backgrounds.
A bias is uncovered if the commonality of two classes is based on background rather than functionally relevant features;
this could be supported by test samples generated with inpainting techniques.
Unfortunately, the text-to-image alignment training of foundation models may easily introduce such a bias, as concepts occurring in similar image scenarios additionally will occur in similar textual context.
E.g., one may expect cat and dog to be similar, as both often occur indoors.
Knowledge insertion.
The final goal of the introspection discussed above should be to not only be able to verify the learned ontological commitment, but also to control both the commitment, and subsequently the learned reasoning.
This might be achieved by adding penalties during training, determined by iterative ontology extraction and model finetuning.
Thus, a foundation model with acceptable ontological commitment may be obtained.
Lastly, to distill this knowledge of the large model into smaller specialized models,
standard model distillation techniques could be amended <cit.>.
Concretely, regularization terms can be added to (1) enforce correspondences to some/most of the concepts, and to (2) enforce the respective similarities and other relationships between the concepts.
§.§ Next steps
Our initial experiments are clearly limited in their extent, so immediate next steps should encompass more experiments on measuring human-alignedness as well as a larger ablation study on the possible influence of the assumptions made.
These include domain shifts, such as text-to-image and real-to-synthetic image.
Experiments should include user studies, and comparison to existing ontologies;
Similarly, the outlier detection and handling capabilities of ontologies should be further investigated, both for novel as well as novel blended classes.
Lastly,
it can be investigated how to extend the here proposed approach from multimodal models to unimodal ones, allowing to compare the ontologies of large foundation models against that of state-of-practice small and efficient object detectors.
§ CONCLUSION
Altogether, this paper tackles the problem of how to validate and verify a multimodal DNN's learned knowledge using QR. Concretely, we take the step to unveil the ontological commitment of DNNs, i.e., the learned concepts and (here: superclass-)relations.
For this, we proposed a simple yet effective approach to (1) uncover yet undiscovered superclasses of given subclasses as used by the DNN; and to (2) extract a full hierarchical class tree with the IsSuperClass-relationships; together with means to verify and validate the extracted part of the learned ontology.
Even though this initial proof-of-concept still relies on some simplifications, our initial experiments could already extract meaningful class hierarchies from concurrent multimodal DNNs, and reveal inconsistencies with existing ontologies.
These may serve as a basis to gain further insights into the ontological commitment of DNNs, and subsequently validate and verify its learned QR. We are confident that, eventually, this could allow us to control, i.e., correct and integrate, valuable prior knowledge from QR into DNNs, creating powerful yet verifiable and efficient hybrid systems. Thus, we hope to spark further interest in interdisciplinary research on QR for verification of DNNs within the QR community.
The paper was written in the context of the "NXT GEN AI Methods" research project funded by
the German Federal Ministry for Economic Affairs and Climate Action (BMWK). The authors would like to thank the consortium for the successful cooperation.
| One of the basic ingredients of QR models is an ontology specifying the allowed concepts, relations, and any prior assumption about them; more precisely, the commitment to (a subset of an) ontology with associated semantic meaning of concepts and relations <cit.>.
Thanks to years of research, large and rich ontologies like Cyc <cit.>, SUMO <cit.>, or ConceptNet <cit.>
are readily available for building or verifying QR models.
Meanwhile, however, DNNs have become the de-facto state of the art for many applications that hardly allow a precise input specification <cit.>, such as processing of raw images (computer vision), e.g., for object detection <cit.>, or processing of unstructured natural language text <cit.>.
This machine learning approach owes its success to its strong representation learning capabilities:
DNNs automatically learn highly non-linear mappings (encoding) from inputs to vectorial intermediate representations (latent representations or vectors) <cit.>,
and reasoning-alike processing rules <cit.>
from these to a desired output.
Availability of large text and image datasets have further sparked the development of multimodal so-called foundation models <cit.>.
These are large general-purpose DNNs trained to develop semantically rich encodings suitable for a variety of tasks <cit.>.
This is often achieved by training them to map textual descriptions and images onto matching vectorial representations (text-to-image alignment) <cit.>,
using multimodal inputs of both images and text.
The prospect.
Foundation models come with some interesting prospects regarding their learned knowledge:
(1) One can expect foundation models to learn a possibly interesting and useful ontology, giving insights into concepts <cit.>
and concept relations <cit.>
prevalent in the training data; and
(2) such sufficiently large models can also develop sophisticated reasoning chains on the learned concepts <cit.>.
From the perspective of QR, this raises the question whether this learned knowledge is consistent with the available high-quality ontologies and QR models.
This opens up well-grounded verification and validation criteria for safety or ethically critical applications. As a first step towards this, this paper defines techniques for extraction and verification of simple class hierarchies.
Future prospects encompass to use the extracted knowledge from DNNs for knowledge retrieval, and
ultimately gain control over the learned reasoning: This would enable the creation of powerful hybrid systems <cit.>
that unite learned encoding of raw inputs like images with QR models.
The problem.
Unfortunately, the flexibility of DNNs in terms of knowledge representation comes at the cost of interpretability <cit.>;
and, being purely statistical models, they may extract unwanted and even unsafe correlations <cit.>.
The opaque distributed latent representations of the input do not readily reveal which interpretable concepts have been learned, nor what reasoning is applied to them for obtaining the output.
This is a pity, not least because that hinders verification of ethical and safety properties.
Take as an example the ontological commitment: Which hierarchical subclass-relations between concepts are considered? An example is shown in Fig. <ref>.
This directly encodes the learned bias, which commonalities between classes are taken into account, and which of these are predominant for differentiating between classes.
The same example also nicely illustrates the issue with wrongly learned knowledge: The models may focus on irrelevant but correlated features to solve a task, such as typical background of an object in object detection <cit.>.
A whole research field, explainable artificial intelligence (XAI), has evolved that tries to overcome the lack of DNN interpretability <cit.>.
To date it is possible to partly associate learned representations with interpretable symbolic concepts (1-ary predicates) <cit.>,
such as whether an image region is a certain object part (e.g., isLeg), or of a certain texture (e.g., isStriped) <cit.>.
However, extraction of learned relations is so far focused on simple semantic similarity of concepts <cit.>;
hierarchical relations that hold across subsequent layers, i.e., across subsequent encoding steps <cit.>;
or hierarchies obtained when subdividing a root concept <cit.>.
And while first works have recently pursued the idea of extracting superclass hierarchies from given leaves, these are still limited to simple classifier architectures <cit.>.
A next step must therefore be: Given a set of (hierarchy leaf) concepts, how to extract (1) the unifying superclasses, and (2) the resulting class hierarchy with subclass relationships from any semantically rich intermediate output of a DNN, preferably from the embedding space of foundation models.
Approach.
We here propose a simple yet effective means to get hold of these encoded class hierarchies in foundation models;
thereby taking another step towards unveiling and verifying the ontological commitment of DNNs against known QR models respectively ontologies.
Building on <cit.> and <cit.>,
our approach leverages two intrinsic properties of the considered computer vision models:
* Vision DNNs generally encode learned concept similarities via distances in their latent representation vector space <cit.>.
This makes it reasonable to find a hierarchy of superclass representations by means of hierarchical clustering <cit.>.
* Foundation models accept textual descriptions as inputs, trained for text-to-image alignment.
This allows to cheaply establish an approximate bijection of textual concept descriptions to representations: A description is mapped by the DNN to a vector representation, and a given representation is assigned to that candidate textual description mapped to the most similar (=close by) vector <cit.>.[
This could be replaced by the mentioned approximate concept extraction techniques for models without decoder and text-to-image alignment.]
Contributions.
Our main contributions and findings are:
* An approach
to extract and complete a simple learned ontology, namely a superclass hierarchy with given desired leaf concepts (<ref>), from intermediate representations of any multimodal DNN, which allows to manually validate DNN-learned knowledge against QR models (see <ref>);
* An approach to test the consistency of multimodal DNNs against a given class hierarchy, e.g., from standard ontologies;
* An initial experimental validation showing that
the approach can extract meaningful ontologies,
and reveal inconsistencies with given ontologies;
* A thorough discussion of potential applications for QR extraction and insertion from / into DNNs. | §.§ Deep neural network representations
DNNs.
Mathematically speaking, deep neural networks are (almost everywhere) differentiable functions F: R^n→R^m
which can be written in terms of small unit functions, the so-called neurons f: R^n→R, by means of
the standard composition operation f∘g: x↦ f(g(x)),
linear combination x↦ Wx+b,
and product (a, b)↦ a·b.
Typically, the linear weights W and biases b serve as trainable parameters, which can be optimized in an iterative manner using, e.g., stochastic gradient descent.
Neurons are typically arranged in layers, i.e., groups where no neuron receives outputs from the others.
Due to this Lego-principle, DNNs are theoretically capable of approximating any continuous function (on a compact subspace) up to any desired accuracy <cit.>,
and layers can be processed highly parallel.
In practice, this is a double-edged sword: DNNs of manageable size
show astonishing approximation capabilities for target functions like detection or pixel-wise segmentation of objects in images <cit.>.
However, they also tend to easily extract irrelevant correlations in the data, leading to incorrect <cit.>
or even non-robust <cit.>
generalization respectively reasoning on new inputs.
Latent representations.
In the course of an inference of an input x, each layer L of the DNN produces as intermediate output a vector F_→ L(x)∈R^n, each entry being the output of one of the n neurons of L.
This vectorial encoding of the input is called the latent representation of the input within L, and the vector space R^n hosting the representations is called the latent space.
Interestingly, it was shown that DNNs encode semantically meaningful information about the input in their latent representations, with abstraction increasing the more layers are passed
(e.g., starting with colors and textures, to later develop notions of shapes and objects) <cit.>.
Concept embeddings.
An emergent property of these representations is that in some layers, a concept C
(e.g., color Red, or object part Leg),
can be encoded as prototypical vector e(C) within this latent space. These are called concept (activation) vectors <cit.>
or concept embeddings <cit.>.
The mapping e: 𝒞→R^n from a set of human-interpretable concepts to their embeddings even preserves semantic similarities to some extent:
Examples are the reflection of analogical proportions <cit.>
in word vector spaces (DNNs with textual inputs trained for natural language processing),
like
e(King)-e(Queen)=e(Man)-e(Woman)
<cit.>;
and their analogues in standard computer vision architectures trained for object classification or detection: e(Green)+e(Wood)=e(Tree) <cit.>.
Our approach relies on these natural translation of semantic to vector operations/properties.
In particular, we assume that the relation IsSimilarTo[We here assume that IsSimilarTo is reflexive and symmetric, following geometrical instead of psychological models of similarity <cit.>.]
on input instances x is mapped to some distance metric d like Euclidean or cosine distance by the DNN representations:
∀ C, C': IsSimilarTo(C, C') ⇔ d(e(C), e(C')) ≈ 0.[
For optimization, the relative formulation can be more convenient:
∀ C, C', C”: C more similar to C' than to C” ⇒ d(e(C), e(C')) ≤ d(e(C), e(C”)).]
Concretely, we use the translation of similarity relations to find a superclass concept representation via interpolation.
Text-to-image alignment.
In the case of multimodal DNNs that accept both textual and image inputs, the training often encompasses an additional (soft) constraint:
Given textual descriptions of an input image, these must be mapped to the same/a similar latent representation as their respective image.
While pure language models suffer from the impossibility to learn the true meaning of language concepts without supervision <cit.>,
this additional supervision might help the model to develop representations that better match the human understanding of the word/concept.
We here leverage this intrinsic mapping to associate textual or graphical descriptions of our concepts with latent representations.
When using textual descriptions, good text-to-image alignment is an
important assumption; but, sadly, even with explicit training
constraints this is not guaranteed <cit.> (cf. distance of image and text embeddings in <ref>).
We show both the influence of text-to-image alignment on our method, how it can be reduced, and how to use our method in order to identify issues with the learned meaning of concepts, which opens up options to fix the representations.
§.§ Ontologies
When modeling any problem or world, a basis of the model is to know
what the model is talking about.
This is exactly answered by the underlying ontology, i.e., a definition of what categories/properties and relations are used in the model.
We here adopt the definition from <cit.>.
An ontology is a pair (𝒱, 𝒜) constituted by
a vocabulary 𝒱=𝒞∪ℛ of
a set of unary predicates 𝒞 (the concepts corresponding to class memberships and other properties) and
a set of binary predicates ℛ (the instance relations)
used to describe a certain reality,
and which are further constraint by a set 𝒜 of explicit assumptions in the form of a first- (or higher-)order logic theory on the predicates.
A relation we will use further is IsSimilarTo∈ℛ. Also spatial relations like IsCloseBy <cit.> and LeftOf, TopOf, etc. <cit.> have been defined and used in literature for latent space representations of objects.
Simple examples of assumptions that relate the concept sets are, e.g., the subclass relationship we investigate in this paper: IsSuperclassOf(C',C) :⇔ (∀ v: C(v) ⇒ C'(v)) (cf. <ref>).
This can also be seen as a relation between concepts, by interpreting the unary concept predicates C as sets of objects (e.g., classes) via v∈ C :⇔ C(v).
The validity of concept embeddings also gives rise to assumptions about concepts (∀ v: C(v) ⇔ IsSimilarTo(v, e(C))).
Note that, given embeddings, we can formulate relations between concepts using instance relations R∈ℛ via R(C,C') :⇔ R(e(C), e(C')). An example would be isSimilarTo(cat,dog).
The first challenge in extracting learned QR from DNNs is to find/explain the ontology that is used within the reasoning process of the DNN. Unraveling an ontology as done in <ref> above breaks this step roughly down into:
* Find the concepts 𝒞 (and their embeddings) used by the model.
* Find the relations ℛ that may be formulated on vector instances.
* Simple assumptions 𝒜_s⊆𝒜: How are concept related.
* Identify further assumptions 𝒜∖𝒜_s that the model applies.
Note that the layer-wise architecture of DNNs partitions the representations into objects (vectors) in the different latent spaces. For a layer L we denote v in the latent space of L as L(v).
This gives rise to a partition of the concept, relation, and assumption definitions, allowing to conveniently split up above steps as follows:
* What concepts 𝒞_i⊂𝒞 are encoded within the ith layer L_i
(∀ C∈𝒞_i, v: L_i(v)⇒ C(v))?
* What assumptions 𝒜_i,i hold for which items within the same ith latent space
(∀ A∈𝒜_i,i, (v^(s))_s: ⋁_s L_i(v^(s)) ⇒ A(v^(1), …))?
* What assumptions 𝒜_i,j, i ≠ j, hold between items of different latent spaces?
Task <ref> is (somewhat) solved by methods from c-XAI, where both learned concepts <cit.>
as well as their distribution over different layer representation spaces <cit.>
are investigated.
<ref> and <ref> show the yet-to-be-filled gaps: Investigated relations between items, item groups respectively concepts within the same arbitrary latent space (=<ref>). These so far only concern general semantic similarity, and relations across latent spaces only sensitivity. That falls far behind the richness of natural language; in particular it misses out on concept and instance relations of the kind C is similar to C' with respect to feature F respectively C, C' both are F, and counterpart C differs from C' with respect to feature F[
C, C' both are F (∀ x: (C(x)∨C'(x))⇒F(x)) rewrites to IsSuperclassOf(F,C)∧IsSuperclassOf(F,C'); the differs-case to IsSuperclassOf(F,C)∧¬IsSuperclassOf(F,C').
].
In other words, the relation IsSuperclassOf is missing, despite known to be learned <cit.>.
This inhibits the expressivity of extracted constraints such as obtained in <cit.>,
as this directly relies on the richness of available vocabulary.
The method proposed in this paper thus sorts in as follows: We extend the extraction of relations relevant to point <ref> (relations amongst concepts within the same layer representation space) by allowing to extract the IsSuperclassOf relation between concepts.
§.§ Hierarchical clustering
Hierarchical clustering <cit.> aims to find for a given set M a chain of partitions ℳ_1≤ℳ_2≤…≤{M} connected by inclusion[
To be precise: ℳ≤ℳ' ⇔∀ M∈ℳ∃ M'∈ℳ' M⊆ M'
], i.e., assign each point in M to a chain of nested clusters M_1,i_1⊆ M_2,i_2…⊆ M, as illustrated in <ref>.
Such a hierarchy can be depicted using a dendrogram as in <ref>.
There are two regimes for hierarchical clustering: Divisive breaks up clusters top-down, while agglomerative starts from the leaves ℳ_1={{p}| p∈ M} and iteratively merges clusters bottom-up <cit.>.
We here employ hierarchical clustering to find a hierarchy of subsets of latent representation vectors. Since we start with given leaf vectors, this work uses standard agglomerative hierarchical clustering <cit.>.[We here use the scikit-learn implementation at <
This optimizes the partitions for small distance between the single points within a cluster (affinity) and a large distance between the sets of points making up different clusters (linkage),
typically at a complexity of 𝒪(|M|^3). | null | null | null | Altogether, this paper tackles the problem how to validate and verify a multimodal DNN's learned knowledge using QR. Concretely, we take the step to unveil the ontological commitment of DNNs, i.e., the learned concepts and (here: superclass-)relations.
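For concreteness, the bottom-up merge tree over leaf embeddings can be obtained in a few lines. The sketch below uses SciPy's linkage function for brevity, whereas our experiments rely on the scikit-learn implementation; the leaf embeddings here are random placeholders standing in for the text or image encodings of Step 1:

import numpy as np
from scipy.cluster.hierarchy import linkage

leaf_names = ["airplane", "ship", "car", "truck", "bird",
              "cat", "dog", "deer", "horse", "frog"]
leaf_embeddings = np.random.randn(len(leaf_names), 512)   # placeholder vectors

# 'cityblock' = Manhattan affinity, 'complete' linkage; each row of Z merges two
# clusters, whose center (superclass embedding) is the mean of its members
Z = linkage(leaf_embeddings, method="complete", metric="cityblock")
for left, right, dist, size in Z:
    print(int(left), int(right), round(float(dist), 2), int(size))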
For this, we proposed a simple yet effective approach to (1) uncover yet undiscovered superclasses of given subclasses as used by the DNN; and to (2) extract a full hierarchical class tree with the IsSuperClass-relationships; together with means to verify and validate the extracted part of the learned ontology.
Even though this initial proof-of-concept still relies on some simplifications, our initial experiments could already extract meaningful class hierarchies from concurrent multimodal DNNs, and reveal inconsistencies with existing ontologies.
These may serve as a basis to access further insights into the ontological commitment of DNNs, and subsequently validate and verify its learned QR. We are confident that, eventually, this could allow to control, i.e., correct and integrate, valuable prior knowledge from QR into DNNs, creating powerful yet verifiable and efficient hybrid systems. Thus, we hope to spark further interest into interdisciplinary research of QR for verification of DNNs within the QR community.
The paper was written in the context of the "NXT GEN AI Methods" research project funded by
the German Federal Ministry for Economic Affairs and Climate Action (BMWK), The authors would like to thank the consortium for the successful cooperation. |
http://arxiv.org/abs/2409.17220v1 | 20240925180000 | Spatially correlated stellar accretion in the Lupus star forming region: Evidence for ongoing infall from the interstellar medium? | [
"Andrew J. Winter",
"Myriam Benisty",
"Carlo Manara",
"Aashish Gupta"
] | astro-ph.EP | [
"astro-ph.EP",
"astro-ph.GA",
"astro-ph.SR"
] |
Evidence for ongoing infall from the interstellar medium?
Winter et al.
Max-Planck Institute for Astronomy (MPIA), Königstuhl 17, 69117 Heidelberg, Germany Université Côte d'Azur, Observatoire de la Côte d'Azur, CNRS, Laboratoire Lagrange, 06300 Nice, France
European Southern Observatory, Karl-Schwarzschild-Str. 2, 85748 Garching bei München, Germany
Growing evidence suggests that protoplanetary discs may be influenced by late stage infall from the interstellar medium (ISM). The degree to which infall shapes disc populations at ages ≳ 1 Myr remains unclear.
We explored possible spatial correlations between stellar accretion rates in the Lupus star-forming region, which would support the hypothesis that infall can regulate stellar accretion.
We considered both the `clustered' stars towards the center of Lupus 3, and the `distributed' stars that are more sparsely distributed across the Lupus complex. We took the observed accretion rates from the literature and explored spatial correlations. In particular, we tested whether the clustered stars exhibit a radial gradient in normalised accretion rates, and whether the distributed stars have spatially correlated accretion rates.
We found statistically significant correlations for both the clustered and distributed samples. The clustered sample exhibits higher accretion rates in the central region, consistent with the expected Bondi-Hoyle-Lyttleton accretion rate. Stars that are spatially closer among the distributed population also exhibit more similar accretion rates. These results cannot be explained by the stellar mass distribution for either sample. Age gradients are disfavoured, though not discounted, because normalised disc dust masses are not spatially correlated across the region.
Spatially correlated stellar accretion rates within the Lupus star-forming region argue in favour of an environmental influence on stellar accretion, possibly combined with internal processes in the inner disc. Refined age measurements and searches for evidence of infalling material is a potential way to further test this finding.
Spatially correlated stellar accretion in the Lupus star-forming region
Andrew J. Winter 1, [email protected]
Myriam Benisty,1, 2
Carlo Manara,3
Aashish Gupta3
===============================================================================================================
§ INTRODUCTION
The timescale for planet formation corresponds to the first few million years of stars' lifetimes, while they still host a `protoplanetary disc' of dust and gas <cit.>. This timescale is comparable to the typical timescale over which stars form from overdensities in the galactic interstellar medium <cit.>. As a result, planet formation in the protoplanetary disc occurs simultaneously with processes governing star formation, in regions of enhanced stellar and ISM density. Theoretical and observational evidence indicates that several processes, including dynamical encounters <cit.> and external photoevaporation <cit.> can feedback on the planet formation process. As a result, the external star formation environment may contribute to the diversity of exoplanets. However, how exactly the environment may sculpt planetary systems remains poorly understood.
One process which has been suggested as an important driver of disc evolution is late stage infall of gas from the ISM via the Bondi-Hoyle-Lyttleton (BHL) accretion mechanism, which would explain the observed scaling between accretion rates and stellar mass <cit.>. However, a lack of apparent correlation between stellar accretion rates and the environment (based on substantial accretion rates in Trumpler 37), as well as some theoretical problems in redistributing material to the inner disc, led to the widely-held conclusion that BHL accretion is not a primary driver of stellar accretion <cit.>. Recent years have resulted in a wealth of new observational evidence that motivates a second look at this conclusion. In particular, a recent surge in discoveries of examples of extended structures in molecular line emission <cit.> and infrared scattered light around class II discs <cit.> have provoked renewed interest in late stage infall. Theoretical studies <cit.> have suggested infall is a viable driver of observed structures such as spirals and streamers <cit.>, which are often associated with reflection nebulosity <cit.>, accretion outbursts <cit.> and misaligned inner discs <cit.>. The latter misalignments are apparent among a high fraction of discs <cit.>. Ongoing infall may also help explain an apparent mass budget problem for protoplanetary discs compared with the observed exoplanet population <cit.>, and the non-monotonic evolution of disc dust masses with time <cit.>. BHL accretion was also suggested as a possible solution for the presence of old but still accreting discs in star-forming regions <cit.>. From a theoretical perspective, recent simulations have suggested that BHL accretion is a substantial source of mass and angular momentum for discs up to ∼ 1 Myr after the formation of the host star <cit.>. Warps instigated by late-stage infall may propagate throughout the disc <cit.>, possibly driving instabilities and short-lived high levels of turbulence in the outer disc <cit.>. Shadows resulting from misalignment of the inner disc may also drive spirals and turbulence in the outer disc <cit.>. Even if inner disc material loses angular momentum by some other mechanism, such as magnetic winds <cit.>, infall appears a plausible mechanism to replenish this gas, possibly regulating stellar accretion. This replenishment seems particularly plausible given that the near infrared excess, probing inner disc material, is correlated with outer disc substructures such as spirals and shadows <cit.>, suggesting that processes that influence one also influence the other.
The question of the timescale over which late-stage accretion occurs is of critical importance. If the infall onto the disc is only substantial in the first < 1 Myr of disc evolution, then it may be reasonable to assume that this simply sets initial conditions, as conventionally assumed in theoretical studies of planet formation. On the other hand, if substantial infall occurs over the entire disc lifetime, disc replenishment may be a critical ingredient for planet formation models. Observationally, examples of infall have not only been uncovered for very young stars that are embedded in their natal environment <cit.>, but also stars older than 1 Myr such as AB Aur <cit.>, DR Tau <cit.>, S CrA <cit.> and SU Aur <cit.>. <cit.> find evidence of interaction between the star-disc system and the ISM around 16 percent of their sample of discs in Taurus observed in scattered light. From a theoretical perspective, <cit.> recently estimated that ∼ 20-70 percent of discs in the age range 1-3 Myr should be mostly composed of recently accreted material, based on the timescale for turbulent fluctuations of the ISM (although this finding may be contingent on stellar feedback). Given the potential critical importance to planet formation models, these studies motivate urgent empirical tests as to whether protoplanetary discs are undergoing significant environmental replenishment.
One approach for testing the importance of replenishment is to correlate observed stellar accretion rates with the external environment. In a medium where the gas relative velocity Δ v_gas≫ c_s, the sound speed, the BHL accretion rate Ṁ_BHL is proportional to the cross-section carved out by the radius R_BHL = 2 G m_*/Δ v_gas^2 within which the gas is captured, where m_* is the stellar mass. Then Ṁ_BHL∝ m_*^2 ρ_gas /Δ v_gas^3, where ρ_gas is the local gas density. Thus if BHL accretion is substantially altering disc evolution, then the stellar accretion rate Ṁ_acc corrected for the stellar mass dependence – i.e. Ṁ_acc/m_*^2 – may correlate with local density and anti-correlate with relative velocity of gas. However, finding evidence of such correlations is not necessarily a straight forward task for several reasons. Most importantly, we lack complete (3D) position and velocity information for both the stars and gas. For example, we cannot directly map ISM overdensities along the line-of-sight. The effective noise that this introduces into any statistical signal, alongside numerous other potential complicating factors, is problematic when we consider the modest sample sizes of homogeneously determined accretion rates in most star-forming regions.
In this work, we investigated whether spatial correlations in stellar accretion are evident in existing observational samples. There remain some approaches for searching for such correlations, even in the absence of very large, homogeneous samples of measured accretion rates (and stellar masses) in individual star-forming regions. One approach is to identify overdensities in tracers for ISM dust or gas that coincide with stellar overdensities. Barring improbable projection effects, such alignment would indicate that the local ISM from which the local stars form has not yet dispersed, and therefore stars and gas share a common physical location, not just on the plane of the sky. A second approach for a more dispersed stellar population is to correlate the spatial proximity of neighbouring stars with relative accretion rates. In a sub-structured star-forming region, stars that are closer together in projected separation are also more likely to be close together in (6D) position-velocity space. If the turbulent scale that dominates the fluctuations in local density and velocity of gas on the scale at which they accrete is greater than the interstellar spacing, then stars that are closer together spatially should also have more similar BHL accretion rates.
In the remainder of this work, we present our investigation into possible correlations between stellar accretion and spatial location in the Lupus star-forming region by adopting these approaches. In Section <ref> we discuss our approach, including the reasons for choosing Lupus and the datasets we employ. In Section <ref> we report on our findings and assess whether there is evidence of spatially-dependent stellar accretion within Lupus. We summarise our findings and steps for the future in Section <ref>.
§ APPROACH AND DATA
§.§ Normalised accretion rate
As discussed in Section <ref>, the BHL accretion rate is dependent both on the stellar mass and the properties of the local ISM. In order to search for correlations between stellar accretion Ṁ_acc and the environment it is necessary to isolate the environmental contribution. We therefore define the normalised stellar accretion rate:
ℳ̇≡Ṁ_acc m_*^-2.
This factors out the stellar mass dependence of the accretion rate which is expected theoretically (if BHL accretion regulates stellar accretion and until feedback cuts off the accretion flow – see Appendix G of ) and found empirically <cit.>. We are interested in determining whether ℳ̇ is randomly distributed or is correlated with position. We therefore require a measurement of both stellar mass and accretion rate for a sufficient sample in a local star-forming region with similar ages.
§.§ Data
In this work, we use the database of homogeneously compiled star/disc properties for the PP7 chapter by <cit.>. This sample is described in detail in Section 2.5 of that chapter, to which we refer the interested reader. In particular, for this work we required the stellar masses and accretion rates for stars in the Lupus region, which are compiled from the data of <cit.> and <cit.>. We note that these properties do not have associated individual uncertainties, which are only provided as typical values related to the method used to derive the properties. However, in the context of this work we are interested in inferring correlations, which statistically should only be weakened by the scatter introduced by ignored uncertainties. We limited our sample to stars that have measurements or upper limits for the standardised PP7 accretion rates and stellar mass estimates. This leaves a sample of 74 stars across the whole of Lupus. We further cut this sample by cross-matching with the <cit.> catalogue of 191 kinematically identified Lupus member candidates. Cross-matching with the PPVII catalogue left a sample of 61. We further reduced this to 57 stars that were described as `on cloud' by <cit.>, which are those that are spatially co-located with the Lupus 1–4 clouds.
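Schematically, this sample selection amounts to a simple table join and cut, followed by computing the normalised accretion rate defined above; the file and column names in the sketch below are illustrative placeholders, not the actual headers of the PP7 or membership tables:

import pandas as pd

pp7 = pd.read_csv("pp7_lupus.csv")           # hypothetical export: m_star, Macc, ...
members = pd.read_csv("lupus_members.csv")   # hypothetical kinematic membership table

sample = pp7.dropna(subset=["m_star", "Macc"])
sample = sample.merge(members, on="source_id", how="inner")   # kinematic members only
sample = sample[sample["on_cloud"]]                           # keep 'on cloud' stars

# normalised accretion rate: Mdot_acc / m_*^2
sample["Mdot_norm"] = sample["Macc"] / sample["m_star"] ** 2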
§.§ Lupus star-forming region
The Lupus star-forming region is one of the closest low mass star-forming regions, at a distance of ∼ 160 pc <cit.>. It is an ideal laboratory to explore whether accretion rates are correlated with local stellar gas, both because its young stellar population is well-studied, and because the discs have intermediate ages <cit.>. If these discs are still substantially influenced by BHL accretion, this would strongly support the importance of this process throughout the disc lifetime.
Lupus is composed of several molecular clouds, among which the physical conditions of star formation vary. In many of the clouds, star formation is low density and the stellar population is dispersed. However, Lupus 3 is a much denser region, appearing to be a young, low mass cluster <cit.>. <cit.> find that the ages of stars in Lupus 3 are typically in the range 2.5-3 Myr, commensurate with the stars throughout the complex. However, <cit.> argue that star formation is also more advanced in Lupus 3 with respect to the other clouds, based on the high star formation efficiency and slowing star formation rate inferred from the comparative dearth of proto- and prestellar sources.
We show the distribution of stars surrounding the Lupus clouds in Figure <ref>. <cit.> demonstrate that this field contains stars associated with not only the Lupus clouds, but also stars associated with V1062 Sco and Upper Centaurus-Lupus. However, the PP7 sample already contains disc-hosting stars that have been previously identified as being associated with one of the Lupus clouds. Indeed, the vast majority of the sample are defined as `on cloud' (i.e. spatially coinciding with the Lupus clouds), with only four stars identified as `off cloud' by <cit.>. In our fiducial sample we retain only the `on cloud' stars, but this does not affect our results.
For this work, we performed a separate analysis of the clustered star-forming region centered on Lupus 3. We show a zoom-in panel on the central region of Lupus 3 in the bottom right of Figure <ref>. The stars within the solid red circle (radius 0.5^∘) we label the `clustered' sample in this work. In the clustered sample, there are 32 stars with stellar mass and accretion rate constraints in the PP7 catalogue with an associated membership inferred from Gaia kinematics by <cit.>. This region is not only densely populated with young stars with respect to the other parts of the cloud complex, but those young stars are centered on a peak in the 100 μm IRIS map <cit.>. This would suggest that the stellar population shares the same spatial location with residual gas from the star formation process. We may therefore expect enhanced BHL accretion in the dense core, with respect to the lower density outer regions — i.e. a radial gradient in ℳ̇.
The stars with accretion rate and stellar mass measurements in the less dense parts of the Lupus complex are distributed among the clouds Lupus 1-4. We label these the `distributed' stars, of which 25 kinematically identified `on cloud' Lupus members have measured masses and accretion rates in the PP7 sample. Most of the distributed stars appear to be similar ages ∼ 2-3 Myr <cit.>, but none are clearly associated with a stellar overdensity that is coincident with a peak in 100 μm emission. Even though we cannot determine the local gas density on a star-by-star basis among the distributed stars, we can assume that the local properties of the turbulent cloud are spatially correlated. In this case, if BHL is contributing to accretion rates, we should expect close neighbouring stars in the distributed population to have more similar ℳ̇.
Apart from Lupus, we could also have chosen to study correlations in Taurus or Chameleon I (Cham I), which are also nearby regions that host intermediate age stellar populations. Taurus does not presently contain a sufficient sample of homogeneously derived stellar masses and accretion rates, ruling it out. Cham I does contain such a sample in the PP7 database, however interpreting spatial correlations of accretion rates in this region is not straight forward. Stars in Cham I are distributed along a filament, but there is no clear stellar overdensity (cluster) associated with a monolithic ISM overdensity. Instead, the geometry is complex, and the cloud may be experiencing stellar feedback and dispersal. It is therefore not possible to define `clustered' and `distributed' populations as we have done for Lupus. Such a distinction is necessary since strong local gas density gradients in a bound region may confound efforts to spatially correlate accretion rates. We therefore restricted our attention to the Lupus region.
We focus the remainder of this work on the hypothesised correlations within the `clustered' and `distributed' samples in the Lupus star-forming region.
§ RESULTS AND DISCUSSION
§.§ Clustered stars
We first considered whether there is a radial gradient in ℳ̇ among the clustered population. To refine the nominal centre of our population, we adopted the median RA and Dec from the Lupus sample within our chosen spatial region (red circle, Figure <ref>). We then calculated the angular distance from the centre for each of the stars inside this circle. We show the distribution of normalised accretion rates as a function of separation in Figure <ref>. We exclude from our analysis an outlier, Sz106, which we discuss in Section <ref>. We justified our choice to normalise the accretion rate by m_*^2 based on the theoretical expectation. However, if this scaling is not the `true' physical scaling, this choice may also introduce spurious correlations if the distribution of stellar masses with measured disc and star properties in the PPVII sample are not spatially homogeneous. To ensure the robustness of our findings, we constructed a fiducial sample by restricting the stellar mass range 0.05 M_⊙ < m_* < 0.4 M_⊙, as shown in Figure <ref>. These limits are somewhat arbitrary, but motivated by an attempt to maximise the sample while minimising the range in stellar masses. We show in Figure <ref> that stellar masses in this fiducial sample are not spatially correlated.
Among our fiducial sample, a Spearman rank correlation test yields a significant anti-correlation between the separation from the centre and ℳ̇, with significance p_S = 0.014 (p_S = 0.023 if we remove the stellar mass restriction). We also found a similar result by fitting a power-law using Linmix[<https://linmix.readthedocs.io/en/latest/>] <cit.>, of the form:
ℳ̇ = A ·(d/1')^b,
where d is the angular separation from the centre of the cluster. We find normalisation A = 3.5 × 10^-8 M_⊙^-1 yr^-1 with standard deviation 1.8 × 10^-8 M_⊙^-1 yr^-1 and power-law index
b = -0.56 ± 0.24 (pink line in Figure <ref>). We also find a similar result when including stars of all masses (cyan line in Figure <ref>) with b=-0.71± 0.23.
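The correlation test and an unweighted version of the power-law fit can be reproduced with the short Python sketch below; the array names are placeholders, and the values quoted above were obtained with Linmix, which additionally propagates the measurement uncertainties on ℳ̇.

import numpy as np
from scipy import stats

# sep_arcmin : angular separation from the cluster centre [arcmin] (placeholder array)
# mdot_norm  : normalised accretion rates Mdot_acc / m_*^2 [Msun^-1 yr^-1] (placeholder array)
rho, p_spearman = stats.spearmanr(sep_arcmin, mdot_norm)

# Unweighted log-log least-squares fit of Mdot_norm = A * (d / 1')**b;
# np.polyfit returns the slope (the power-law index b) first.
b, logA = np.polyfit(np.log10(sep_arcmin), np.log10(mdot_norm), 1)
A = 10**logA
print(f"Spearman p = {p_spearman:.3f}, A = {A:.1e} Msun^-1 yr^-1, b = {b:.2f}")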
We may also ask whether the ISM conditions in Lupus 3 can feasibly drive BHL accretion onto the disc at a rate comparable to the observed stellar accretion rates. While we cannot directly observe the local density and relative gas velocity for individual stars, we can appeal to existing observational constraints to make order of magnitude estimates. The H_2 column density in the central regions of Lupus 3 is N_0 ∼ 10^22 cm^-2, over a region of radius ∼ 1 pc <cit.>. Although we do not know the true density profile, from these constraints and by adopting a Plummer density profile we can analytically estimate a reasonable local density and virialised velocity dispersion throughout the cloud. We estimated the normalised BHL accretion rate:
ℳ̇_BHL≈πG^2 ρ_gas/σ_v^3,
where the velocity dispersion σ_v and ISM density ρ_gas are both functions of separation from the centre of the cluster. The result of this exercise is shown by the red lines in Figure <ref>, where we fix N_0 ∼ 10^22 cm^-2 and adopt Plummer scale radii a=0.5 pc and a=1 pc. From this simple estimate, we infer that the expected BHL accretion rate is comparable to or slightly exceeds the typical observed stellar accretion rates. This suggests that, even if material accreted onto the disc does not accrete onto the star with 100 percent efficiency, environmental infall is a plausible driver of disc evolution and stellar accretion.
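This order-of-magnitude estimate can be reproduced with the sketch below. The conversion from the central column density to a central volume density and the simple virial velocity dispersion (σ_v^2 ≈ G M(<r)/r) are illustrative assumptions made here; the red curves in the figure may use a slightly different prescription.

import numpy as np
from astropy import units as u, constants as const

mu = 2.8                   # mean molecular mass per H2 molecule, in units of m_p (assumed)
N0 = 1e22 * u.cm**-2       # central H2 column density of Lupus 3
a  = 0.5 * u.pc            # Plummer scale radius (0.5 or 1 pc in the text)

# For a Plummer sphere the column density through the centre is N0 = (4/3) n0 a,
# which fixes the central number and mass densities:
n0   = (3 * N0 / (4 * a)).to(u.cm**-3)
rho0 = mu * const.m_p * n0

def mdot_bhl_norm(r):
    """Normalised BHL rate pi G^2 rho / sigma_v^3 at separation r from the centre."""
    x     = (r / a).decompose().value
    rho   = rho0 / (1 + x**2)**2.5                 # Plummer density profile
    m_tot = (4 * np.pi / 3) * rho0 * a**3
    m_enc = m_tot * x**3 / (1 + x**2)**1.5         # enclosed mass of the Plummer sphere
    sigma = np.sqrt(const.G * m_enc / r)           # crude virial velocity dispersion
    return (np.pi * const.G**2 * rho / sigma**3).to((u.Msun * u.yr)**-1)

print(mdot_bhl_norm(0.2 * u.pc))   # of order 10^-7 Msun^-1 yr^-1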
Our findings suggest that stars in the higher density central regions of Lupus 3 accrete at a higher rate than stars in the outer regions. The BHL accretion rate implied by the approximate density and velocity dispersion in the central regions is comparable to the observed accretion rate. Taken together, these findings support the hypothesis that BHL accretion plays a role in regulating stellar accretion, although the substantial scatter may indicate that internal angular momentum transport processes in the inner disc also play an important role. We discuss caveats to this conclusion in Section <ref>.
§.§ Distributed stars
Outside of the clustered region the stellar population does not have a regular geometry. However, we would still expect stars that are close together to experience more similar BHL accretion rates, since the ISM properties should also be spatially correlated. If BHL accretion influences stellar accretion rates, then such a spatial correlation may also be evident across the measured ℳ̇ distribution. We therefore explored the ratio between the normalised accretion rates between stars i and j:
logℳ̇_r, ij = | logℳ̇_i - logℳ̇_j |,
such that ℳ̇_r, ij = ℳ̇_r, ji ≥ 1 for all i, j. A large (small) value for ℳ̇_r, ij means that the normalised accretion rates are very different (similar). We calculated this metric for all the pairs (counting each only once). For this calculation, we exclude upper limits, leaving 26 accretion rate measurements, although we confirmed that our conclusions are not affected if we take upper limits as measurements. We then divided up the sample based on whether the pairs are closer than Δθ_th=1^∘. This choice is arbitrary (we subsequently varied it), but must be larger than the typical interstellar spacing while substantially smaller than the scale of the whole region. The outcome of this experiment is shown in Figure <ref>. We find that close pairs have significantly smaller ℳ̇_r, i.e. more similar normalised accretion rates, with significance from a Kolmogorov-Smirnov (KS) test of p_KS = 5.7 × 10^-3.
Strictly, since each measurement of separation is not independent, the KS test may somewhat overestimate significance. However, we performed a direct bootstrapping experiment by shuffling accretion rates between all individual star locations, measuring the probability of obtaining the observed KS test statistic among these randomly distributed synthetic samples. From this exercise we obtain a similarly significant result p_boot = 8.0× 10^-3, with the bootstrap experiments shown as faint dots in Figure <ref>. We note that the dispersion among the close stars (blue dots) is larger than the distant stars (red dots) because the subsample size is smaller.
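The pair statistic and the bootstrap significance test can be summarised as follows; the coordinate and accretion-rate arrays are placeholders, and upper limits are assumed to have been removed beforehand as described above.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def pair_ks_statistic(ra, dec, log_mdot, theta_th=1.0):
    """KS statistic comparing |log Mdot_i - log Mdot_j| for close versus distant pairs."""
    i, j = np.triu_indices(len(ra), k=1)           # each pair counted once
    # flat-sky angular separation in degrees, adequate for a field of a few degrees
    dra    = (ra[i] - ra[j]) * np.cos(np.radians(0.5 * (dec[i] + dec[j])))
    dtheta = np.hypot(dra, dec[i] - dec[j])
    log_ratio = np.abs(log_mdot[i] - log_mdot[j])
    close, distant = log_ratio[dtheta < theta_th], log_ratio[dtheta >= theta_th]
    return stats.ks_2samp(close, distant).statistic

obs_stat = pair_ks_statistic(ra, dec, log_mdot)

# Bootstrap: shuffle the accretion rates over the fixed stellar positions and record
# how often a statistic at least as extreme arises among the synthetic samples.
boot = np.array([pair_ks_statistic(ra, dec, rng.permutation(log_mdot))
                 for _ in range(10000)])
p_boot = np.mean(boot >= obs_stat)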
We can further ask if there is a bias in terms of the stellar mass distribution within the observational sample. However, we found no significant correlation between stellar mass and position (Figure <ref>). The (insignificant) differences observed between the stellar mass ratios actually go in the wrong direction, with slightly larger differences in stellar masses among nearest neighbours.
Finally, we confirmed that our (arbitrary) choice of Δθ_th is not special. In Figure <ref> we show the KS test results as a function of Δθ_th and obtain a significant result over a wide range of Δθ_th.
We conclude that the normalised accretion rates among the distributed population are correlated, and that this correlation is not related to an inhomogeneous distribution of stellar masses. We discuss the degree to which this finding supports BHL accretion as a driver of stellar accretion in Section <ref>.
§.§ Age versus environment
§.§.§ Age estimates
If angular momentum transport is mediated by internal processes, either viscous or via MHD winds, we expect stellar accretion to decrease with stellar age <cit.>. Stellar age gradients have been inferred in other star-forming regions, such as the Orion Nebula cluster <cit.>. It is therefore plausible that stars on the outskirts of Lupus 3 are somewhat older than those in the core. Age differences may be even more substantial among the distributed population, which may form from separate gravitationally unstable regions of the ISM at different times. Age gradients generally may therefore partially or fully explain spatial correlations in accretion rates.
We estimate what kind of age gradients across Lupus might be needed to explain our results as follows. We first consider Upper Sco, in which stars have typical ages, as inferred by <cit.>, of ∼ 10-12 Myr <cit.>. The typical normalised accretion rate for a star of log (m_*/M_⊙) ∼ -0.5 in Upper Sco is ℳ̇≈ 1.3 × 10^-9 M_⊙^-1 yr^-1, compared to ℳ̇≈ 7.9 × 10^-9 M_⊙^-1 yr^-1 in Lupus, based on the fits in Figure 7 of that work <cit.>. If disc evolution proceeds somewhat self-similarly in this age range, then we would need a substantial age gradient (a factor ≳ 2) between the inner and outer regions to reproduce the order of magnitude difference in ℳ̇ across our clustered sample (Figure <ref>). The factor of a few systematic differences in ℳ̇ across the distributed population (Figure <ref>) are more comparable to the differences between Upper Sco and Lupus, but would still require age gradients comparable to the age differences between the two regions. Overall, it would appear that the systematic variation in stellar age across Lupus would have to be of order the age itself in order to explain the observed correlations without appealing to an environmental origin.
Stellar ages inferred from isochrone fitting, for example, are notoriously uncertain for individual stars. It is therefore challenging to test whether age gradients may drive our results by directly estimating these ages. Based on the relative number of stars at different evolutionary stages (defined by the spectral energy distribution – SED – classification), <cit.> suggested that Lupus 3 is more evolved than Lupus 1, which is in turn older than Lupus 4. However, <cit.> used photometry from Gaia DR2, 2MASS, AllWISE and Spitzer c2d to fit SEDs to different stellar evolution models, and found that for Lupus 3 and Lupus 4 stellar ages were similar, approximately 2.5 -3.5 Myr, while sample sizes were prohibitively small across the other clouds. <cit.> and <cit.> used Gaia optical photometry to estimate ages averaged over all the clouds, both inferring an older age ∼ 6 Myr across clouds 1–4. The typical (model-dependent) uncertainty quoted based on the larger sample by <cit.> is approximately ± 0.5 Myr, although this is a statistical uncertainty and not a physical spread. The large scale systematic spatial age correlations across the Sco-Cen region hint that age differences of several Myr between stellar populations typically correspond to spatial distances of several tens of parsec.[<https://homepage.univie.ac.at/sebastian.ratzenboeck/wp-content/uploads/2023/05/scocen_age.html>] It therefore seems unlikely that age differences of several Myr exist across the Lupus clouds (on scales of ∼ 10-20 pc). To yield the correlations we observe, this would also suggest that the intra-cloud age dispersion for the distributed population must be comparatively small, while in Lupus 3 the dispersion is large. These conditions, while not impossible, appear contrived.
§.§.§ Disc mass as a proxy
We also performed a perhaps more direct test of age differences by considering the properties of our sample. To do so, we reason that if a stellar age gradient were responsible for our findings, we would expect a spatial correlation of the disc masses. Observationally, disc (dust) masses decrease on few-Myr timescales <cit.>. We may therefore assume that normalised dust masses are a reasonable proxy for stellar age. Based on this premise, we tested for spatial dust mass correlations across Lupus as follows.
Analogously to accretion rates, we define the normalised dust mass:
ℳ_dust = M_dust/m_*^2,
where M_dust is the dust mass in the disc as in the PPVII dataset. We have used the same normalisation factor m_*^2 in this case, despite there being no clear theoretical motivation for such a scaling with stellar mass. However, observationally the accretion timescale (or M_dust/Ṁ_acc) is not strongly dependent on stellar mass <cit.>.
To test for correlations among the clustered sample, we restricted the range of stellar masses in our fiducial sample, as for stellar accretion rates. We show the outcome of this exercise in Figure <ref>. We found that, regardless of whether we consider the stellar mass-restricted or entire sample, there is no significant correlation between normalised dust mass and distance from the centre of Lupus 3. Fitting power-laws yielded indices b= -0.14 ± 0.42 and b = -0.25 ± 0.33
respectively. We conclude that disc masses do not appear correlated with separation from cluster centre.
For the distributed population, as for the clustered population, if age were responsible for the spatial correlations of the normalised accretion rates, we would expect this correlation to be evident also among the normalised disc masses. Performing a spatial correlation test for the normalised disc dust masses among the distributed population, we find no such evidence (Figure <ref>).
We conclude that in Lupus normalised disc (dust) masses do not correlate strongly with spatial location, in contrast to normalised stellar accretion rates. We interpret this as evidence that infall rather than age differences drive the latter correlation. This is because:
* Among the broader sample of protoplanetary discs, there is no evidence that accretion rates evolve substantially faster than disc (dust) masses. In fact, based on Figure 8 of <cit.>, ℳ_dust for log (m_*/M_⊙) = -0.5 decreases from 1.6× 10^-2 at the age of Lupus to 1.8× 10^-3 at the age of Upper Sco (a factor 9 compared to a factor 6 in ℳ̇). This hints that accretion rates may even evolve marginally slower than dust masses <cit.>. Therefore, if age differences were responsible for differences in stellar accretion, they should also result in differences in disc (dust) masses.
* If the origin of our findings is environmental infall, we may not expect such a strong correlation between spatial location and disc mass. This is because disc mass is a time-integrated quantity, which is dependent on the whole disc evolution history. The present-day BHL accretion rate may not be the same as the typical historic rate. By contrast, stellar accretion may be enhanced by turbulence driven by infall onto the outer disc, which is short-lived <cit.>. The accretion rate is therefore a more direct probe of the local environment than disc mass.
Despite the above arguments, we do not categorically rule out age as a factor in our findings. Interpreting our findings as having an environmental origin rests on the assumption that normalised disc dust masses (continuum fluxes) are at least as closely correlated with stellar age as stellar accretion rates are. While present data would suggest this is a safe assumption, large samples with homogeneous age determinations, or correlation experiments with different probes of the total disc mass, may help to strengthen or refute the infall scenario. If age is responsible for our findings, this suggests that dust masses must evolve substantially more slowly than accretion rates. In either case, benchmarking models without environmental effects at a specific age to individual star-forming regions may result in incorrect conclusions.
§.§ Sample choice and outliers
Although our results are statistically significant, the sample size remains small. Our approach has also required some choices, some of which can alter our results. In particular, we have chosen a cut-off radius of 0.5^∘ within which to define our clustered sample. This choice is arbitrary, and is made simply to contain the majority of the stars that appear to be approximately spherically distributed around the 100 μm peak.
We therefore tested our conclusions regarding the correlation with separation from the cluster centre against this cut-off radius. First, we find that if we choose instead to limit the sample to those stars within ≲ 0.4^∘, the significance drops below 2 σ due to the reduced sample size. The power-law fit between the separation and ℳ̇ remains similar regardless of the cut-off radius down to ∼ 0.2^∘.
If we instead increase the cut-off radius which defines the clustered sample to 0.7^∘, as shown by the dashed red circle in Figure <ref>,
then we include the curious case of J16092697-3836269, with stellar mass 0.15 M_⊙. This is an interesting outlier, which has the largest ℳ̇ across the entire region. It clearly does not fit the trend followed by the majority of the clustered sample, as shown in Figure <ref>.
It also has an estimated dust mass of M_d = 1.34 M_⊕ and an accretion rate of 7.9 × 10^-9 M_⊙ yr^-1.
Assuming a gas-to-dust ratio of 100, this implies an accretion timescale of τ_acc∼ 5 × 10^4 years, which does not seem compatible with the ∼ 3 Myr age of the majority of the cluster. Given the anomalously short τ_acc, we are justified in excluding it from the clustered sample. In this case, we retain a significant correlation for our broader cluster sample (Spearman rank test p_S = 0.010). If we instead include the outlier, the significance according to a Spearman test is erased. This highlights that, while we have justified our choices, we are limited by a relatively small sample size, meaning that a small number of outliers can alter our results.
There is a second outlier in the clustered region, which we have excluded in our statistical test due to our choice of mass cut. This is Sz 106, which accretes at a very low level despite being positioned close to the center of the clustered region. Interestingly, this object has been designated `sub-luminous' by <cit.> and <cit.>, who pointed out that the star is significantly less luminous than expected for its spectral type. <cit.> hypothesise that this is an extinction effect, either due to circumstellar material or an edge on disc. An alternative explanation is an episodic accretion event that could produce a smaller radius, higher temperature, and lower luminosity <cit.>, but this would also imply enhanced surface gravity which is not observed. Whether or not extinction is the origin for the luminosity dip, the accretion rate may be underestimated for this star and we are therefore justified in considering this anomalous. In fact, the known exceptional state of the star in a sense further supports the apparent trend inferred from Figure <ref>, which similarly identifies Sz 106 as an outlier.
§.§ Systems with evidence of infall
In Figure <ref> we have highlighted a number of systems for which there is some evidence of infall in the literature. While this list may not be complete, these systems represent interesting case studies with plausible evidence of ongoing infall:
* IM Lup has a very extended (∼ 400 au) gas disc with a high degree of outer disc turbulence, with a turbulent velocity ∼ 0.2-0.6 c_s <cit.>. IM Lup also has a relatively high normalised accretion rate ℳ̇ = 2.7× 10^-8 M_⊙^-1 yr^-1. While IM Lup has not previously been suggested as a candidate for infall, it does exhibit an extended and asymmetric halo in ^12CO out to ∼ 1000 au <cit.>. This has been previously suggested to be evidence of an externally-driven weak photoevaporative wind <cit.>, although such asymmetric extended structures could also be interpreted as infalling material. Such infall may also explain the large scale m=2 spiral structures observed in the mm-continuum <cit.>. Modelling is required to examine if infall is a plausible driver of the observed structure.
* RU Lup has a disc that exhibits large-scale spiral-like structures, extending out to ∼ 1500 au, far beyond the ∼ 120 au Keplerian disc <cit.>, as well as reflection nebulosity <cit.>. The spiral structures may be signatures of infalling material <cit.>. Interestingly, this large scale is comparable to the extent of the halo around IM Lup. Interpreted in the context of BHL accretion, this would suggest a similar accretion radius, corresponding to similar local gas relative velocity. RU Lup also has a very high ℳ̇ = 3.3 × 10^-7 M_⊙^-1 yr^-1.
* HD 142527 also exhibits both reflection nebulosity and ∼700 au spiral structures <cit.>, exceeding the disc radius of about 200–300 au <cit.>. HD 142527 also shows a misalignment between the inner and outer discs <cit.>, which may also be a consequence of late infall <cit.>. HD 142527 is not included in the PP7 catalogue of star/disc properties, and we therefore did not consider it in the statistics we present in this work. However, based on the accretion rate inferred by <cit.>, the normalised accretion rate (ℳ̇∼ 6 × 10^-8 M_⊙^-1 yr^-1) is high.
* HT Lup exhibits reflection nebulosity, as well as crescent-shaped emission extending beyond the disc observed in near-IR polarimetric observations <cit.> and Herschel maps <cit.>, which may also be a signpost of infall <cit.>. However, HT Lup is in fact a triple system <cit.>, where the overall luminosity is dominated by HT Lup A <cit.>, with a mass 1.3± 0.2 M_⊙ <cit.> or 1.32 M_⊙ in the PP7 database. Interestingly, HT Lup B hosts a disc that is apparently counter-rotating (or possibly perpendicular) with respect to the disc around HT Lup A <cit.>. In the PP7 database, HT Lup also only has an upper limit constraint on the stellar accretion rate, such that ℳ̇ < 4.4 × 10^-9 M_⊙^-1 yr^-1. However, recent observations of HT Lup with VLT/MUSE by <cit.> have uncovered evidence of variability in the accretion rate onto HT Lup B (which has a dynamical mass ∼ 0.09 M_⊙) by a factor of several, corresponding to Ṁ_acc∼ 0.7- 2.9 × 10^-9 M_⊙ yr^-1, or a very large ℳ̇∼ 0.9 -3.7 ×10^-7 M_⊙^-1 yr^-1. Accretion onto HT Lup A is only somewhat variable (∼ 30 percent), with ℳ̇∼ 3 × 10^-9 M_⊙^-1 yr^-1. The variable accretion rates in this system may be due to mutual gravitational perturbations between the stellar components. However, this would suggest a small physical (line-of-sight) separation that would make explaining the counter-rotating discs challenging. HT Lup is also very close to 2MASS J15450887-3417333, which has one of the largest normalised accretion rates (ℳ̇ = 8.2 × 10^-7 M_⊙^-1 yr^-1) across the PP7 sample.
* HN Lup exhibits the least compelling evidence of infall due to the absence of the necessary ALMA observations. However, it exhibits reflection nebulosity, suggesting that it is embedded in clouds and is likely to accrete material from them <cit.>. Like RU Lup, HN Lup also has a relatively large ℳ̇ = 5.1 × 10^-8 M_⊙^-1 yr^-1.
To summarise, two of the three cases where we have constraints on ℳ̇ from the PP7 database and evidence of infall have particularly large normalised accretion rates, and one component of the third system (HT Lup B) exhibits a high normalised accretion rate and variability. Added to these three systems, HD 142527 has a high ℳ̇, and IM Lup has a more moderate but still substantial ℳ̇. All of these cases are in the distributed population. This is probably a selection bias. For example, reflection nebulae catalogues used in <cit.> are expected to be highly incomplete, particularly in high-density regions (such as Lupus 3) that suffer from higher extinction. A more complete sample of infall candidates is required to statistically correlate the environment with stellar accretion. Nonetheless, the evidence of ongoing infall onto discs around several young stars in Lupus is compelling. Given the short free-fall timescale of material at ∼ 1000 au scales (≲ 10^4 yr), this would suggest that this process is either relatively common or sustained by the broader environment even up to the ∼ 1-3 Myr age of Lupus.
§ CONCLUSIONS
In this work, we have investigated the possibility that accretion rates in the Lupus star-forming region are spatially correlated, which may be evidence of environmentally regulated stellar accretion. To do so, we divided stars into the clustered population in Lupus 3 centered on a substantial remaining gaseous reservoir, and the distributed population across the rest of the Lupus cloud complex, for which stars are not clearly associated with a particular gaseous overdensity. We found statistically significant correlations both within the clustered population and among the distributed population. In the clustered population, the stellar accretion rates are commensurate with the BHL accretion rate we estimate based on the local gas density. Our results cannot be explained by spatial variations of the stellar mass distribution among the observational sample. Age gradients are also disfavoured, since unlike the normalised accretion rates, the normalised disc masses do not exhibit any similar spatial gradient. However, we cannot categorically rule out the role of age gradients, which may be present particularly among the distributed population.
Our results suggest that late stage infall from the ISM onto the protoplanetary disc plays a role in regulating stellar accretion, probably alongside internal processes that drive angular momentum transport in the inner part of the disc. The apparent dependence of stellar accretion on external environment would not be expected if these latter internal processes were solely responsible for disc evolution. Future studies focused on accurate and homogeneous age determinations, or on correlating infall occurrence with disc properties, may strengthen or refute this interpretation.
Regardless of their origin, our findings are strong evidence that disc populations should not be considered the result of isolated disc evolution from a well-defined initial condition. Even for star-disc systems at the typical age of Lupus (∼ 1-3 Myr), stellar accretion rates systematically vary across the region.
We thank the referee for a considered report, which substantially improved the discussion presented in this paper. AJW has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 101104656. This project has received funding from the European Research Council (ERC) under the European Union’s Horizon research and innovation programme for the `PROTOPLANETS' project, grant No. 101002188, and `WANDA', grant No. 101039452. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency. Neither the European Union nor the granting authority can be held responsible for them. This work has made use of data from the European Space Agency (ESA) mission Gaia (<https://www.cosmos.esa.int/gaia>), processed by the Gaia Data Processing and Analysis Consortium (DPAC, <https://www.cosmos.esa.int/web/gaia/dpac/consortium>). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement.
| The timescale for planet formation corresponds to the first few million years of stars' lifetimes, while they still host a `protoplanetary disc' of dust and gas <cit.>. This timescale is comparable to the typical timescale over which stars form from overdensities in the galactic interstellar medium <cit.>. As a result, planet formation in the protoplanetary disc occurs simultaneously with processes governing star formation, in regions of enhanced stellar and ISM density. Theoretical and observational evidence indicates that several processes, including dynamical encounters <cit.> and external photoevaporation <cit.> can feedback on the planet formation process. As a result, the external star formation environment may contribute to the diversity of exoplanets. However, how exactly the environment may sculpt planetary systems remains poorly understood.
One process which has been suggested as an important driver of disc evolution is late stage infall of gas from the ISM via the Bondi-Hoyle-Lyttleton (BHL) accretion mechanism, which would explain the observed scaling between accretion rates and stellar mass <cit.>. However, a lack of apparent correlation between stellar accretion rates and the environment (based on substantial accretion rates in Trumpler 37), as well as some theoretical problems in redistributing material to the inner disc, led to the widely-held conclusion that BHL accretion is not a primary driver of stellar accretion <cit.>. Recent years have resulted in a wealth of new observational evidence that motivates a second look at this conclusion. In particular, a recent surge in discoveries of examples of extended structures in molecular line emission <cit.> and infrared scattered light around class II discs <cit.> has provoked renewed interest in late stage infall. Theoretical studies <cit.> have suggested that infall is a viable driver of observed structures such as spirals and streamers <cit.>, which are often associated with reflection nebulosity <cit.>, accretion outbursts <cit.> and misaligned inner discs <cit.>. The latter misalignments are apparent among a high fraction of discs <cit.>. Ongoing infall may also help explain an apparent mass budget problem for protoplanetary discs compared with the observed exoplanet population <cit.>, and the non-monotonic evolution of disc dust masses with time <cit.>. BHL accretion was also suggested as a possible solution for the presence of old but still accreting discs in star-forming regions <cit.>. From a theoretical perspective, recent simulations have suggested that BHL accretion is a substantial source of mass and angular momentum for discs up to ∼ 1 Myr after the formation of the host star <cit.>. Warps instigated by late-stage infall may propagate throughout the disc <cit.>, possibly driving instabilities and short-lived high levels of turbulence in the outer disc <cit.>. Shadows resulting from misalignment of the inner disc may also drive spirals and turbulence in the outer disc <cit.>. Even if inner disc material loses angular momentum by some other mechanism, such as magnetic winds <cit.>, infall appears a plausible mechanism to replenish this gas, possibly regulating stellar accretion. This replenishment seems particularly plausible given that the near infrared excess, probing inner disc material, is correlated with outer disc substructures such as spirals and shadows <cit.>, suggesting that processes that influence one also influence the other.
The question of the timescale over which late-stage accretion occurs is of critical importance. If the infall onto the disc is only substantial in the first < 1 Myr of disc evolution, then it may be reasonable to assume that this simply sets initial conditions, as conventionally assumed in theoretical studies of planet formation. On the other hand, if substantial infall occurs over the entire disc lifetime, disc replenishment may be a critical ingredient for planet formation models. Observationally, examples of infall have not only been uncovered for very young stars that are embedded in their natal environment <cit.>, but also stars older than 1 Myr such as AB Aur <cit.>, DR Tau <cit.>, S CrA <cit.> and SU Aur <cit.>. <cit.> find evidence of interaction between the star-disc system and the ISM around 16 percent of their sample of discs in Taurus observed in scattered light. From a theoretical perspective, <cit.> recently estimated that ∼ 20-70 percent of discs in the age range 1-3 Myr should be mostly composed of recently accreted material, based on the timescale for turbulent fluctuations of the ISM (although this finding may be contingent on stellar feedback). Given the potential critical importance to planet formation models, these studies motivate urgent empirical tests as to whether protoplanetary discs are undergoing significant environmental replenishment.
One approach for testing the importance of replenishment is to correlate observed stellar accretion rates with the external environment. In a medium where the gas relative velocity Δ v_gas≫ c_s, the sound speed, the BHL accretion rate Ṁ_BHL is proportional to the cross-section carved out by the radius R_BHL = 2 G m_*/Δ v_gas^2 within which the gas is captured, where m_* is the stellar mass. Then Ṁ_BHL∝ m_*^2 ρ_gas /Δ v_gas^3, where ρ_gas is the local gas density. Thus if BHL accretion is substantially altering disc evolution, then the stellar accretion rate Ṁ_acc corrected for the stellar mass dependence – i.e. Ṁ_acc/m_*^2 – may correlate with local density and anti-correlate with the relative velocity of the gas. However, finding evidence of such correlations is not necessarily a straightforward task for several reasons. Most importantly, we lack complete (3D) position and velocity information for both the stars and gas. For example, we cannot directly map ISM overdensities along the line-of-sight. The effective noise that this introduces into any statistical signal, alongside numerous other potential complicating factors, is problematic when we consider the modest sample sizes of homogeneously determined accretion rates in most star-forming regions.
In this work, we investigated whether spatial correlations in stellar accretion are evident in existing observational samples. There remain some approaches for searching for such correlations, even in the absence of very large, homogeneous samples of measured accretion rates (and stellar masses) in individual star-forming regions. One approach is to identify overdensities in tracers for ISM dust or gas that coincide with stellar overdensities. Barring improbable projection effects, such alignment would indicate that the local ISM from which the local stars form has not yet dispersed, and therefore stars and gas share a common physical location, not just on the plane of the sky. A second approach for a more dispersed stellar population is to correlate the spatial proximity of neighbouring stars with relative accretion rates. In a sub-structured star-forming region, stars that are closer together in projected separation are also more likely to be close together in (6D) position-velocity space. If the turbulent scale that dominates the fluctuations in local density and velocity of gas on the scale at which they accrete is greater than the interstellar spacing, then stars that are closer together spatially should also have more similar BHL accretion rates.
In the remainder of this work, we present our investigation into possible correlations between stellar accretion and spatial location in the Lupus star-forming region by adopting these approaches. In Section <ref> we discuss our approach, including the reasons for choosing Lupus and the datasets we employ. In Section <ref> we report on our findings and assess whether there is evidence of spatially-dependent stellar accretion within Lupus. We summarise our findings and steps for the future in Section <ref>. | null | null | null | null | null |
http://arxiv.org/abs/2409.17237v1 | 20240925180003 | JOYS+ study of solid state $^{12}$C/$^{13}$C isotope ratios in protostellar envelopes: Observations of CO and CO$_2$ ice with JWST | ["N. G. C. Brunken", "E. F. van Dishoeck", "K. Slavicinska", "V. J. M. le Gouellec", "W. R. M. Rocha", "L. Francis", "L. Tychoniec", "M. L. van Gelder", "M. G. Navarro", "A. C. A. Boogert", "P. J. Kavanagh", "P. Nazari", "T. Greene", "M. E. Ressler", "L. Majumdar"] | astro-ph.SR | ["astro-ph.SR", "astro-ph.GA"] |
Observations of CO and CO_2 ice with JWST
Leiden Observatory, Leiden University, 2300 RA Leiden, The Netherlands
[email protected]
Max-Planck-Institut für Extraterrestrische Physik, Gießenbachstraße 1, 85748 Garching, Germany
Laboratory for Astrophysics, Leiden Observatory, Leiden University, PO Box 9513, 2300 RA Leiden, The Netherlands
NASA Ames Research Center, Space Science and Astrobiology Division M.S 245-6 Moffett Field, CA 94035, USA
Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109, USA
INAF-Osservatorio Astronomico di Capodimonte, Salita Moiariello
16, 80131 Napoli, Italy
Institute for Astronomy, University of Hawai’i at Manoa, 2680 Woodlawn Drive, Honolulu, HI 96822, USA
Department of Experimental Physics, Maynooth University, Maynooth, Co. Kildare, Ireland
European Southern Observatory, Karl-Schwarzschild-Strasse 2, 85748 Garching bei München, Germany
School of Earth and Planetary Sciences, National Institute of Science Education and Research, Jatni 752050, Odisha, India
Homi Bhabha National Institute, Training School Complex, Anushaktinagar, Mumbai 400094, India
N.G.C. Brunken et al.
The carbon isotope ratio is a powerful tool for studying the evolution of stellar systems due to its sensitivity to the local chemical environment. Recent detections of CO isotopologues in disks and exoplanet atmospheres revealed high variability in the isotope abundance, pointing towards significant fractionation in these systems. In order to fully understand the evolution of this quantity in stellar and planetary systems, however, it is crucial to trace the isotope abundance from stellar nurseries to the time of planet formation. During the protostellar stage, the multiple vibrational modes of CO_2 and CO ice, which peak in the near- and mid-infrared, provide a unique opportunity to examine the carbon isotope ratio in the solid state. Now with the sensitivity and wide spectral coverage of the James Webb Space Telescope, the multiple weak and strong absorption features of CO_2 and CO have become accessible at high signal-to-noise in Solar-mass systems.
We aim to study the carbon isotope ratio during the protostellar stage by deriving column densities and ratios from the various absorption bands of CO_2 and CO ice, and comparing our results with ratios derived in other astronomical environments.
We quantify the ^12CO_2/^13CO_2 and the ^12CO/^13CO isotope ratios in 17 class 0/I low mass protostars from the ^12CO_2 ν_1 + ν_2 and 2ν_1 + ν_2 combination modes (2.70 μm and 2.77 μm), the ^12CO_2 ν_3 stretching mode (4.27 μm), the ^13CO_2 ν_3 stretching mode (4.39 μm), the ^12CO_2 ν_2 bending mode (15.2 μm), the ^12CO 1-0 stretching mode (4.67 μm) and the ^13CO 1-0 stretching mode (4.78 μm) using JWST NIRSpec and MIRI observations. We also report a detection of the 2-0 overtone mode of ^12CO at 2.35 μm.
The column densities and ^12CO_2/^13CO_2 ratios derived from the various CO_2 vibrational modes are in agreement within the reported uncertainties, and we find mean ratios of 85 ± 23, 76 ± 12 and 97 ± 17 for the 2.70 μm band, the 4.27 μm band and the 15.2 μm band, respectively. The main source of uncertainty on the derived values stems from the error on the band strengths; the observational errors are negligible in comparison. Variation of the ^12CO_2/^13CO_2 ratio is observed from source to source, which indicates that there could be genuine differences in the chemical conditions of their envelopes. The ^12CO/^13CO ratios derived from the 4.67 μm and 4.78 μm bands are mutually consistent, albeit elevated with respect to the ^12CO_2/^13CO_2 ratios, and we find a mean ratio of 165 ± 52.
These findings indicate that ices leave the pre-stellar stage with elevated carbon isotope ratios relative to the overall values found in the interstellar medium and that fractionation becomes a significant factor during the later stages of star and planet formation.
JOYS+ study of solid state ^12C/^13C isotope ratios in protostellar envelopes
N. G. C. Brunken1, E. F. van Dishoeck1,2, K. Slavicinska1,3, V. J. M. le Gouellec4, W. R. M. Rocha1,3, L. Francis1, L. Tychoniec1, M. L. van Gelder1, M. G. Navarro6, A. C. A. Boogert7, P. J. Kavanagh8, P. Nazari9, T. Greene 4, M. E. Ressler5 and L. Majumdar10,11
Received; Accepted
§ INTRODUCTION
The ingredients constituting interstellar material, stars and eventually planets originate in dense molecular clouds where atoms and small molecules are accreted from the gas-phase onto cold dust grains forming icy mantles <cit.>. The abundance and composition of these ice mantles are therefore a direct probe of the chemical complexity of the molecular cloud at the time of ice formation.
Susceptible to changes in these chemical conditions are isotope ratios that have been well studied both in the gas-phase and solid state in multiple celestial bodies <cit.>. Variability of the isotope abundance across different astronomical environments is therefore indicative of the chemical evolution of these systems and the physicochemical processes that the pristine material has been subjected to.
The carbon isotope ratio in particular is an attractive contender for isotope chemistry studies for numerous reasons. First, carbonaceous ices such as ^12CO and ^12CO_2 are abundant and readily detected in infrared observations <cit.> and if the sensitivity allows, their weaker isotopologue bands ^13CO and ^13CO_2 are also observable in the same spectral range, thus enabling carbon isotope analysis in the solid state <cit.>. Additionally, CO and CO_2 ice have multiple vibrational modes across the infrared spectrum that facilitate comparison between ratios derived from the different absorption bands.
CO is also the second most abundant gas-phase molecule after H_2 in the interstellar medium (ISM) and its isotopologues ^12C^16O, ^13C^16O, and ^12C^18O therefore have strong gas-phase molecular lines in the sub-mm and infrared making them ideal for gas-phase studies in various environments <cit.>, including exo-planet atmospheres <cit.>. This plethora of observational possibilities enables us to draw comparisons between gas-phase and solid state abundances and further elucidate the origin of interstellar material and its evolution from stellar nurseries to the time of planet formation.
Enrichment of interstellar material with ^13C initially occurs during the CNO-cycle of stellar nucleosynthesis (M_⋆ > 1.3 M_⊙) when ^13N atoms are converted into ^13C atoms. A fraction of these ^13C atoms remains in the star and is later released into the ISM at the end of the star's life-cycle <cit.>.
Once formed, the main fractionation processes responsible for affecting regional carbon isotope ratios are isotope exchange reactions, isotope-selective photodissociation, and possibly gas-ice partitioning <cit.>.
The yardsticks against which all observations are usually measured are the ratios derived for the local ISM by <cit.> from CO studies and by <cit.> from CN studies as well as the Solar abundance ratio <cit.>.
In diffuse clouds, <cit.> derived ratios from optical line absorption studies of CH^+ that were consistent with the ISM standard (^12CH^+/^13CH^+ ∼ 74). Conversely, the effect of selective photodissociation was observed in other diffuse clouds where higher ratios were extracted from CO measurements (<cit.> and <cit.>).
In dense star forming regions <cit.> found mainly sub-ISM ratios from gas-phase studies of complex organic molecules (COMs) with considerable deviations between the different species (^12C/^13C ∼ 27-67). Large discrepancies were also observed in studies of protoplanetary disks where the values varied drastically depending on the line of sight and the region of the disk in which the ratio was measured <cit.>. Finally, cometary ratios and ratios derived from chondrites were found to be similar to the standard Solar value (∼ 89) <cit.>.
In the solid state, <cit.> examined a number of massive young stellar objects (MYSOs), three low mass young stellar objects (LYSOs), and two clouds and found ^12CO_2/^13CO_2 ∼ 52 - 113, consistent with the standard ISM ratio. Additional studies of CO ice for one MYSO <cit.> and one LYSO <cit.> yielded ^12CO/^13CO ∼ 71 and 69, respectively. The sample of solar-mass protostars remained limited however since past observational facilities lacked the sensitivity required to observe most of these low mass objects. Moreover, oftentimes the partial spectral coverage of these observatories also meant that CO and CO_2 ice could not be examined simultaneously.
Now with the exquisite sensitivity of the James Webb Space Telescope (JWST) <cit.>, we are able to study the various vibrational modes of CO and CO_2, including their weaker isotopologue absorption features, for the first time in solar-mass protostars. Moreover, the high spectral resolution of the JWST offers a unique opportunity to perform detailed profile analysis of these strong and weak ice features. <cit.> for instance, utilized this unprecedented sensitivity to quantify solid state ^12CO_2/^13CO_2 and ^12CO/^13CO ratios in the Cha I dark molecular cloud (^12CO_2/^13CO_2 ∼ 69 - 87 and ^12CO/^13CO∼ 99 - 169). <cit.> also used the high spectral resolution of the JWST to perform detailed profile analysis of the 4.39 μm band of ^13CO_2 and showed that the band can be decomposed in five principal components <cit.> that are representative of the chemical and thermal environment of the CO_2 ice.
In this paper we present high resolution JWST NIRSpec and MIRI observations of the ^12CO_2 ν_1 + ν_2 and the 2ν_1 + ν_2 combination modes (2.70 μm and 2.77 μm), the ^12CO_2 ν_3 stretching mode (4.27 μm), the ^13CO_2 ν_3 stretching mode (4.39 μm), the ^12CO_2 ν_2 bending mode (15.2 μm), the ^12CO 2-0 overtone mode (2.35 μm), the ^12CO 1-0 stretching mode (4.67 μm) and the ^13CO 1-0 stretching mode (4.78 μm) for 17 low mass Class 0/I protostars observed as part of the JWST Observations of Young protoStars (JOYS+) program <cit.>. We derive column densities of ^12CO_2, ^13CO_2, ^12CO and ^13CO for each vibrational mode and determine the carbon isotope ratio for the sources in this sample. We build on the work of <cit.> by expanding the sample of solar-mass objects and examining their isotope reservoir.
This paper is structured as follows. In Section <ref> we present our data and describe the data reduction process and the methods for analyzing the spectra. In Section <ref> we derive column densities and isotope ratios for the bands as a whole, and we discuss the results in Section <ref>. Finally, in Section <ref> we provide a summary and the concluding remarks.
§ OBSERVATIONS AND METHODS
§.§ Observations
Our observations were taken as part of the JWST Observations of Young protoStars (JOYS+) Cycle 1 Guaranteed Time Observation (GTO) programs NIRSpec (PI: E. F. van Dishoeck, ID: 1960) and MIRI (PI: M. E. Ressler, ID: 1236). The data consist of NIRSpec and MIRI spectra of 17 Class 0/I sources, including binaries, observed using the G235M, G235H, G395M and G395H modes (R = λ/Δλ = 1000 and 2700 for the NIRSpec observations, respectively).
The NIRSpec data cubes were reprocessed using the JWST pipeline (version 1.13.4) and the corresponding CRDS context jwst_1231.pmap (CRDS_VER = `11.17.19'). Additional corrections outside the JWST pipeline were applied to enhance the data quality. The calwebb_detector1 step was executed with standard parameters. Subsequently, NSClean <cit.> was utilized to address faint vertical banding and "picture frame noise" in the rate files. Following this, calwebb_spec2 was run to generate the cal files. A systematic search was conducted in the cal files to identify warm pixels for flagging, as detected in the MAST final cubes. Warm pixels labeled as UNRELIABLE_FLAT and NO_SAT_CHECK were flagged to prevent their usage in the final product production. The remaining not-flagged warm pixels were identified using sigma clipping with high sigma values to prevent the inadvertent clipping of real data. Subsequently, the DQ extension of the cal files was updated to incorporate these newly flagged pixels. Finally, calwebb_spec3 was executed, configuring the outlier detection in the JWST outlier_detection step with a threshold_percent = 99.9 and a kernel_size = 7 7. As a result, the final cubes exhibit improved cleanliness and a significantly higher signal-to-noise ratio compared to those obtained directly from the MAST.
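As an illustration, the outlier-detection settings quoted above can be passed to the pipeline as step overrides, roughly as in the sketch below; the association file name is a placeholder.

from jwst.pipeline import Spec3Pipeline

# 'l3_asn.json' stands in for the stage-3 association of the NIRSpec IFU exposures.
Spec3Pipeline.call(
    "l3_asn.json",
    steps={"outlier_detection": {"threshold_percent": 99.9,
                                 "kernel_size": "7 7"}},
    save_results=True,
)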
For the MIRI cubes, a two-point dither pattern in the negative direction was performed for the majority of the sources on the source position. One exception is B1-c for which a four-point dither pattern was used. For each star-forming region (i.e., Taurus, Perseus, Serpens), one dedicated background was taken in a two-point dither pattern, except for in Taurus where a single dither was used, in order to properly subtract the telescope background and to better remove detector artifacts. All observations were carried out using the FASTR1 readout mode and use all three MIRI-MRS gratings (A, B, C), providing the full 4.9-28.6 μm wavelength coverage.
The data were processed through all three stages of the JWST calibration pipeline version 1.13.4 <cit.> using the same procedure as described by <cit.>. The reference context jwst_1188.pmap of the JWST Calibration Reference Data System (CRDS; Greenfield & Miller 2016) was used. In short, the raw data were processed through the Detector1Pipeline using the default settings. The dedicated backgrounds were subtracted on the detector level in the Spec2Pipeline. This step also included the correction by the fringe flat for extended sources (Crouzet et al. in prep.) and a residual fringe correction (Kavanagh et al. in prep.). In order to remove any remaining warm and bad pixels from the calibrated detector data, an additional bad pixel routine was applied using the Vortex Imaging Processing (VIP) package <cit.>. Lastly, the final datacubes were constructed for each band of each channel separately using the Spec3Pipeline.
The properties of the Perseus objects studied in this work can be found in <cit.>. An overview of the ice bands discussed in this work is presented in Figure <ref> for the two sources EDJ183-b and Per-emb35.
§.§ Spectral extractions
Spectral extractions were done for both the NIRSpec and MIRI range at central position using a cone diffraction extraction method with a 3λ/D cone aperture. We opted for this aperture size to ensure that a maximum amount of flux was being included while avoiding overlap between the binary sources as much as possible. Overlap in the long wavelength MIRI channels was unavoidable due to the increasing size of the point spread function. The identifying names and spectral extraction coordinates are given for all sources in Table <ref>. Binary systems are denoted with `a' and `b'.
§.§ Continuum subtraction
Prior to the analysis the spectra are converted from flux scale (F^obs_λ) to optical depth scale using equation <ref>:
τ^obs_λ = -ln(F^obs_λ/F^cont_λ),
where F^cont_λ is the flux of the continuum. In the following sections we will discuss the continuum placement for each vibrational mode.
§.§.§ The 2 μm region
The NIRSpec G235M and G235H modes covering the CO and CO_2 bands in the 2 μm region are available for 9 of the 17 sources. Local continuum subtractions were performed for the ^12CO overtone mode (2.35 μm) and the ^12CO_2 combination modes (2.70 μm and 2.77 μm) by fitting a third order polynomial to the data as illustrated in Figure <ref>. For the overtone mode we selected points between 2.30 - 2.34 μm and between 2.36 - 2.38 μm while being mindful of the noise spikes and gas phase lines in this region. Local continua for the 2.70 μm bands were drawn using a similar method as <cit.>.
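In practice, the optical depth conversion described above combined with a third-order polynomial continuum can be implemented along the following lines, with the wavelength windows quoted above for the 2.35 μm band; wave and flux are placeholder arrays, and noise spikes and gas-phase lines are assumed to have been masked beforehand.

import numpy as np

# wave [micron] and flux [Jy] hold the extracted spectrum around the 2.35 micron band.
windows = [(2.30, 2.34), (2.36, 2.38)]               # continuum windows quoted in the text
mask = np.zeros(wave.shape, dtype=bool)
for lo, hi in windows:
    mask |= (wave >= lo) & (wave <= hi)

coeffs = np.polyfit(wave[mask], flux[mask], deg=3)   # third-order polynomial continuum
f_cont = np.polyval(coeffs, wave)
tau    = -np.log(flux / f_cont)                      # optical depth scale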
§.§.§ The 4 μm region
The ^12CO_2 stretching (4.27 μm), the ^13CO_2 stretching mode (4.39 μm), the ^12CO stretching mode (4.67 μm) and the ^13CO stretching mode (4.78 μm) all have absorption features in the 4 μm spectral region. For many sources this region is also heavily dominated by the CO gas-phase rotation-vibrational lines that in some cases even overlap with the ^13CO_2 ice bands <cit.>. For this region we performed a local continuum subtraction by selecting points between 4.30 - 4.36 μm, 4.41 - 4.60 μm, 4.71 - 4.80 μm and 4.91 - 5 μm. We opted to place a local continuum over the 4.30 - 4.9 μm range since the shape of the continuum could be better traced once we expanded the wavelength range, especially for sources with a strong gas-line forest <cit.>. In Figure <ref> we show the continuum fittings for two sources B1a-b (strong CO gas-line forest) and L1527 (weak CO gas-line forest).
We performed separate local continuum subtractions for the ^12CO_2 4.27 μm stretching mode due to the sensitivity of this band to scattering effects <cit.>. Continuum points were placed between 4.17 - 4.19 μm and 4.31 - 4.35 μm following the methods described in <cit.>. The results are presented in Figure <ref>.
§.§.§ The 15 μm region
Continuum subtraction in the 15 μm region is particularly challenging due to several deep absorption and, in some cases, emission features that collectively alter the shape of the continuum. Two significant features for instance are the silicate bending and stretching modes that have long been suspected of distorting the wing of the ^12CO_2 band. The two strong absorption features are located at 9.7 and 18 μm, with the latter likely pushing down the long-wavelength wing of the 15.2 μm CO_2 band when the silicate is in absorption. This produces a band profile that is eerily similar to the grain shape effects observed for the CO_2 stretching mode (4.27 μm) where the short-wavelength wing of the band is raised relative to the long-wavelength wing creating a `negative' absorption feature <cit.>.
Due to this striking resemblance we briefly considered the possibility that the distortion at 15.2 μm is also the result of scattering effects, but we argue that in order to produce such a strong scattering feature at these long wavelengths, grain sizes of the order of ∼10 μm are required, which is an unlikely scenario in dense protostellar envelopes where the grain sizes are of the order of ∼ 0.9 μm <cit.>. Furthermore, for sources where the silicate band is in emission (Figure <ref>), we see the opposite effect where the short-wavelength wing of the CO_2 band is lowered with respect to the long-wavelength wing since the silicate band is pushing the long-wavelength wing `up' instead of `down'. In addition to the silicate absorption (and emission) band, water has a broad libration mode that peaks at 13.6 μm and extends over the 15 - 28 μm region which could further contribute to the total absorption.
Taking all of these factors into account, we first placed a local continuum by selecting points between 12 - 14.5 μm and 25 - 28 μm and converted the spectra to optical depth scale (Figures <ref>, <ref>, <ref> and <ref>, first column). We subsequently modelled the asymmetric shape of the silicate bending mode with two Gaussian profiles centered at 18.3 μm and 22 μm respectively, and subtracted this from the optical depth spectra (Figures <ref>, <ref>, <ref> and <ref>, second column). After subtraction, the bands still bear an extended long-wavelength wing (Figures <ref>, <ref>, <ref> and <ref>, third column) but the short-wavelength wing is no longer significantly raised with respect to the long-wavelength wing. This method of modelling the silicate band with Gaussian profiles was also used by <cit.> and <cit.>. For the two sources where the silicate band is in emission, we fitted an additional Gaussian profile centered at 17 μm to simulate the shape of the silicate band.
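A sketch of this silicate correction is given below. The Gaussian centres are fixed at the wavelengths quoted above while the amplitudes and widths are left free; the fitting window excluding the 15.2 μm CO_2 band and the initial guesses are illustrative assumptions. For the sources where the silicate band is in emission, a third Gaussian centred at 17 μm would be added in the same way.

import numpy as np
from scipy.optimize import curve_fit

def silicate_model(wave, a1, s1, a2, s2):
    """Two Gaussians (in optical depth) mimicking the asymmetric 18 micron silicate band."""
    return (a1 * np.exp(-0.5 * ((wave - 18.3) / s1)**2) +
            a2 * np.exp(-0.5 * ((wave - 22.0) / s2)**2))

# Fit only outside the 15.2 micron CO2 band so that the ice feature does not bias the fit.
fit_mask = (wave < 14.8) | (wave > 16.5)
popt, _  = curve_fit(silicate_model, wave[fit_mask], tau[fit_mask],
                     p0=[0.5, 1.5, 0.5, 2.0])
tau_co2  = tau - silicate_model(wave, *popt)         # silicate-subtracted optical depth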
Finally, we investigated the contribution of the water libration mode but found that the wing of this water band is almost identical to the local continuum we placed between 12 - 28 μm, and that it therefore has no additional contribution once we subtract the continuum from the spectra.
§.§ Column densities
Column densities for the vibrational modes are derived using
N = ∫τ dν/A,
where ∫τ dν is the integrated optical depth of the absorption feature, integrated over wavenumber, and A is the corresponding band strength of the vibrational mode.
For the band strengths we used the values determined by <cit.> and <cit.> and corrected by <cit.>. All band strengths used in this work are listed in Table <ref> and are further discussed along with the error analysis on the column densities in Appendix <ref>. The column densities derived in this work can be multiplied by the correction factors given in Table <ref> to compare with values derived using the band strengths reported in <cit.> and <cit.>. We note that the recent band strengths <cit.> derived for amorphous CO_2 are similar to the corrected values we are using in this work. Changes in the band strength due to the ice mixture and the temperature are accounted for in the error analysis. Lastly, the ^12C/^13C ratios are determined for the bands as a whole and not for the individual components.
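In practice, the integral is evaluated on a wavenumber grid; a minimal sketch, where the band strength shown is only a representative literature value and the adopted values are those listed in Table <ref>:

import numpy as np

# wave [micron] and tau hold the continuum-subtracted optical depth over the feature.
nu  = 1.0e4 / wave                        # wavenumber [cm^-1]
idx = np.argsort(nu)                      # integrate with increasing wavenumber

A_band = 7.6e-17                          # band strength [cm molecule^-1], representative value only
N = np.trapz(tau[idx], nu[idx]) / A_band  # column density [cm^-2]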
§ ANALYSIS
We determined column densities and derived the carbon isotope ratio for the ^12CO_2 ν_1 + ν_2 combination mode (2.70 μm), the ^12CO_2 ν_3 stretching mode (4.27 μm), the ^13CO_2 ν_3 stretching mode (4.39 μm), the ^12CO_2 ν_2 bending mode (15.2 μm), the ^12CO 2-0 overtone mode (2.35 μm), the ^12CO 1-0 stretching mode (4.67 μm) and the ^13CO 1-0 stretching mode (4.78 μm). We also detect the ^12CO_2 2ν_1 + ν_2 combination mode (2.77 μm) in a number of sources. The list of sources, their luminosities, distances and the column densities for each individual vibrational mode are presented in Tables <ref> and <ref> for CO_2 and CO, respectively.
§.§ CO_2
The 2.70 μm, 4.27 μm and 15.2 μm vibrational modes of ^12CO_2 are collectively detected in a number of sources. For these sources we derive ^12CO_2/^13CO_2 ratios and find values that are in agreement within the reported errorbars (Table <ref>). The largest source of uncertainty in the derived ratios stems from the error on the band strengths; the observational errors, in contrast, are negligible (∼ 3%). For the error on the band strengths we account for the change in the band strength due to the temperature and due to the ice mixture, more specifically a water-rich ice mixture, since it is known that ∼ 50% of the CO_2 resides in a water-rich matrix <cit.>.
The CO_2-H_2O binary ices affect each vibrational mode of CO_2 differently. The band strengths of the 2.70 μm, 4.27 μm and 4.39 μm bands, for instance, decrease relative to the band strength of pure CO_2 when CO_2 is mixed with water. This subsequently results in higher ^12CO_2 and ^13CO_2 column densities <cit.>. The band strength of the 15.2 μm band in contrast increases by a significant amount relative to the band strength of pure CO_2, resulting in lower ^12CO_2 column densities. This therefore introduces a systematic error when calculating ratios from the different vibrational modes, since ratios derived from the 15.2 μm band will in general be lower than ratios derived from the other ^12CO_2 vibrational modes if a band strength for a CO_2-H_2O mixture is used. In this study we use the band strength of pure CO_2 and account for this systematic error in the error analysis. For a detailed analysis of these uncertainties we refer the reader to Appendix <ref>.
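Assuming that the band-strength uncertainties of the two isotopologue bands and the ∼ 3% observational errors are independent, the uncertainty on a ratio follows from adding the fractional errors in quadrature; the snippet below is a simplified version of this propagation and not the full treatment of Appendix <ref>.

import numpy as np

def isotope_ratio(n12, n13, ferr_a12, ferr_a13, ferr_obs=0.03):
    """12CO2/13CO2 ratio with fractional errors added in quadrature (simplified)."""
    ratio = n12 / n13
    ferr  = np.sqrt(ferr_a12**2 + ferr_a13**2 + 2 * ferr_obs**2)
    return ratio, ratio * ferr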
§.§.§ The ^12CO_2 2.70 μm and 2.77 μm combination modes
The assignment of the 2.70 μm band has been the topic of multiple studies. <cit.> for instance first fitted this feature with laboratory spectra of CO-H_2O (100:1) ices but later assigned this feature to CO_2 since the fraction of CO in the binary CO-H_2O ices was inconsistent with the amount of CO present in the other bands. The band has also been assigned to the OH-dangling mode of porous water ice since water has multiple weak features in this spectral region <cit.>. In our sample we detect this 2.70 μm feature in 9 out of 17 sources on the short-wavelength side of the OH-dangling mode situated at 2.75 μm (Figure <ref>) and we assign this band to the combination mode of CO_2 for the following reasons.
First, we fitted several spectra of pure CO_2 ice and CO_2 in mixtures with CO, H_2O and CH_3OH <cit.> and we find an absorption feature at 2.70 μm for every mixture containing CO_2 as illustrated in Figure <ref>. The dangling modes of pure porous water ice and water ice mixed with other ice species do not sufficiently fit the observed profile of this feature. Additionally, we detect the OH-dangling mode of water ice interacting with other species at 2.75 μm in every source where the combination mode is present except in Per-emb35 where it disappears (Figure <ref>). This source shows clear signs of thermal processing with double-peak profiles at 4.39 μm (^13CO_2) <cit.> and 15.2 μm (^12CO_2) <cit.> (Figures <ref>, <ref>, <ref> and <ref>).
OH-dangling modes originate from non-H bonded OH-groups of cold amorphous water ice that is either porous or in mixtures with other species that disrupt the H-bonding network <cit.> and experimental studies have shown that the strength of these bands decreases with increasing temperature <cit.>. This is because the elevated temperatures will cause the molecules to restructure and form bonds which in turn will lead to a decrease in the number of free OH-groups and the strength of the OH-dangling modes. Gradual decrease of OH trapping sites due to ice heating was also shown in experimental results by <cit.> where the H_2O-CO shoulder observed at 4.64 μm disappeared once the amorphous H_2O ice structure was annealed. Since the 4.39 μm and 15.2 μm bands of Per-emb35 show clear signs of thermal processing, the absence of the 2.75 μm band in this source is therefore likely the result of the amorphous porous water structure disappearing due to the elevated temperatures in the envelope.
If both the 2.70 μm and the 2.75 μm bands were indeed water OH-dangling modes however, then the 2.70 μm feature should also disappear when the ices undergo thermal processing. Instead what we observe is a sharpening of the profile and the appearance of a sharp peak on the short-wavelength side of the band which we successfully reproduced with the spectrum of pure CO_2 and the alternative spectral decomposition for warm ices described in <cit.> (Figure <ref>). This behavior is similar to the behavior observed at 4.39 μm and 15.2 μm when pure CO_2 ice begins to segregate from the ice matrix. This strongly indicates that the 2.70 μm feature cannot be primarily the product of OH-dangling modes. We do note that the 2.75 μm OH-dangling mode is detected in the extended sources Ser-S68N-N and Ser-SMM3 (Figure <ref>), where the ices are also processed, but this could be due to the fact that we might be tracing a fraction of cold ice components in the extended envelopes of these sources.
Finally, while there have been laboratory spectra showing OH-dangling bonds at 2.70 μm for water ice with high porosity, we note that the method of depositing gas-phase water to form very porous water is inconsistent with the mechanism of interstellar ice formation through atom addition which produces more compact ices. Therefore, in this work we assign the 2.70 μm feature to the combination mode of CO_2 and quantify the column density of ^12CO_2 from this band. The ratios extracted from this band are presented in Table <ref>.
We note that the column densities we derive from the 2.70 μm band are consistent with the column densities extracted from the other two ^12CO_2 vibrational modes in both our cold and heated sources except for Ser-SMM3. This is further evidence that this band is indeed for the most part the ^12CO_2 combination mode.
We also observe the 2ν_1 + ν_2 combination mode at 2.77 μm in Per-emb35 and Ser-SMM3 (Figure <ref>). The band strength of this feature is however a factor ∼ 3 lower than that of the 2.70 μm and it is also highly susceptible to changes in the temperature, even more so than its counterpart at 2.70 μm. Because of these high uncertainties we will refrain from quantifying carbon isotope ratios from this weak feature.
§.§.§ The ^12CO_2 4.27 μm stretching mode
In Figure <ref> we present the 4.27 μm stretching mode for 6 out of 17 sources. While this band is detected in the remaining sources, they all suffer from heavy saturation (τ > 5 ). As a test for using the 4.27 μm band for saturated sources, we fitted the laboratory spectrum of CO_2:H_2O 1:10 at 10 K <cit.> to the wings of the 4.27 μm band in Per-emb35 using the 15.2 μm bending mode as an anchor.
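As a sketch of how such a wing fit can be set up, the following Python fragment scales a laboratory optical-depth template to the unsaturated wing channels of an observed band by linear least squares. This is an illustration only: the variable names and the saturation threshold are assumptions, and the fit in this work is additionally anchored with the 15.2 μm bending mode rather than relying on the wings alone.

import numpy as np

def fit_band_wings(obs_tau, lab_tau, tau_max=5.0):
    # Exclude saturated channels and fit a single multiplicative scale factor s
    # that minimizes sum((obs - s * lab)^2) over the remaining (wing) channels.
    use = np.isfinite(obs_tau) & (obs_tau < tau_max)
    s = np.sum(obs_tau[use] * lab_tau[use]) / np.sum(lab_tau[use] ** 2)
    return s, s * lab_tau  # scale factor and the scaled laboratory template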
Overall the band profiles of the selected 6 sources appear to be very similar with only the slope of the short-wavelength wing varying depending on the source. Additionally, the strength of the scattering feature on the short-wavelength wing differs per source. The column densities derived from this band are in good agreement with the column densities derived from the 2.70 μm combination mode for each source.
For the fitted band of Per-emb35 we find a ^12CO_2/^13CO_2 ratio of ∼ 80 ± 21 (Figure <ref>). This value is consistent within the reported errorbars with the ratios derived from the other two vibrational modes in this same source. Due to the ambiguity of the true band profiles and optical depths of these saturated bands however, we will omit the band of these sources from this study.
§.§.§ The ^12CO_2 15.2 μm bending mode
The 15.2 μm bending mode is detected in all 17 sources, each showing distinctive band profiles (Figures <ref>, <ref>, <ref> and <ref>). Most notable are the sources that have the double-peak profile, a telltale sign of ice heating <cit.>.
In Section <ref> we discussed the continuum subtraction method and the extended wing-profile after subtracting the silicate model. We investigated the effect of this extended wing in comparison to the wing profile of <cit.> that extends to ∼ 16 μm instead of ∼ 18 μm, and found a difference in column density of ∼ 9%, which is negligible compared to the uncertainties of the band strengths (see also Appendix <ref>). The column densities derived from the bending mode are presented in Table <ref>.
§.§.§ The ^13CO_2 4.39 μm stretching mode
^13CO_2 column densities were derived from the 4.39 μm band for all 17 sources (Table <ref>). Consistent with the 15.2 μm band, we observe a double-peak profile at 4.39 μm for every source that displays peak splitting at 15.2 μm. One hindrance is the CO line forest that bleeds into the ^13CO_2 band of several sources in this sample, potentially causing a slight underestimation of the ^13CO_2 column densities. We briefly investigated the effect of the gaseous CO lines in EDJ183-b by fitting a Gaussian profile to the bottom of the ^13CO_2 band and quantifying the column density from this curve. We find that the ^12CO_2/^13CO_2 ratio decreases by 35% if the integrated optical depth is determined from the Gaussian curve instead of the band directly. In Figure <ref> we show the Gaussian curves that were ultimately used to determine the integrated optical depths for the ^13CO_2 bands most affected by the gaseous CO lines. For the sources where the ^13CO_2 features were well isolated from the line forest the integrated optical depths were determined from the ice bands directly.
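To illustrate this approach, the fragment below fits a single Gaussian to the band and integrates it analytically; the initial guesses are placeholders, and in practice the fit should follow the lower envelope of the profile so that the narrow gas-phase CO lines do not bias the result.

import numpy as np
from scipy.optimize import curve_fit

def gaussian(nu, amp, nu0, sigma):
    return amp * np.exp(-0.5 * ((nu - nu0) / sigma) ** 2)

def integrated_od_from_gaussian(nu, tau, p0=(0.3, 2278.0, 3.0)):
    # Fit a Gaussian (amplitude, centre and width in cm^-1) to the ice band and
    # return its analytic area, amp * |sigma| * sqrt(2 pi), in cm^-1.
    popt, _ = curve_fit(gaussian, nu, tau, p0=p0)
    amp, nu0, sigma = popt
    return amp * abs(sigma) * np.sqrt(2.0 * np.pi)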
§.§.§ ^12CO_2/^13CO_2
In Table <ref> we compare the column densities and ratios derived from the different CO_2 ice features. For the sources in which the 2.70 μm combination mode, the 4.27 μm stretching mode and the 15.2 μm bending mode are detected, we find consistent ratios except for TMC1-W where the ratio derived from the 4.27 μm stretching mode is ∼ 25% lower than the values extracted from both the 2.70 μm and the 15.2 μm bands. This deviation still lies within our reported errorbars however. We also find a discrepancy of ∼ 50% between the ratio derived from the 2.70 μm and the 15.2 μm bands in Ser-SMM3.
For the 2.70 μm band, the 4.27 μm band and the 15.2 μm band we find mean ^12CO_2/^13CO_2 ratios of 85 ± 23, 76 ± 12 and 97 ± 17, respectively. These are similar to the median values we derived from these vibrational modes (see also Section <ref>). We note that the variation in ratios derived for the different sources and the uncertainty margin on the mean ratios could in fact be due to genuine differences that reflect the different chemical conditions of these systems.
§.§ CO
§.§.§ The ^12CO 2.35 μm overtone mode
We detect a weak absorption feature at 2.35 μm above the 3σ level for 6 out of the 17 sources (TMC1-E, TMC1-W, Per-emb55-b, EDJ183-a, EDJ183-b and Ser-S68N-N). In previous experimental work, this band was assigned to the ν = 2-0 overtone mode of CO that peaks at this wavelength in the near-infrared <cit.>. We used experimental data of pure CO to analyze the band profiles of these absorption features and found that the peak positions of all sources except for Per-emb55-b were significantly red-shifted (Figure <ref>).
The sources in which the 2.35 μm features are red-shifted also display strong gas-phase CO overtone photospheric absorption lines that peak in the 2 μm spectral region. These gaseous CO overtone modes originate from the central protostellar embryo and appear in absorption due to the strong thermal gradient of the gas surrounding the protostar <cit.>. Among these gas-phase absorption lines is the ν = 4-2 line at 2.3525 μm that overlaps with the 2.35 μm CO ice absorption band, causing the `red-shifted' ice bands we are observing in most of these sources.
In order to disentangle and subtract the gaseous CO overtone modes, the 2 μm hot dust emission was first modelled with a blackbody emission. The photosphere was modelled using the BT-Settl grid of photospheric models <cit.>. The Starfish modeling framework from <cit.> was used (see also Figure <ref>). Further details on these modeling procedures are presented in Le Gouellec et al. (in prep.). The findings show that the models converged well in TMC1-E, TMC1-W, EDJ183-a, EDJ183-b and Ser-S68N-N and after subtraction no residual ice features were observed in these sources. A faint photosphere is detected in Per-emb55-a and a weak absorption feature is visible at 2.35 μm. This feature is however faint and hard to separate from the noise spikes in the spectrum. In Per-emb55-b no photosphere was detected and we can conclude that the 2.35 μm feature is indeed the CO overtone ice absorption band.
Because of the gas contamination, we discarded the other five sources where these gas signatures are observed and extracted the ^12CO column density from the band observed in Per-emb55-b only (Figure <ref>). The column density we derive from this band is a factor ∼ 1.6 lower than the column density derived from the 4.67 μm band in this same source. One possible explanation for the discrepancy between these two vibrational modes is the band strength of this relatively weak absorption feature.
§.§.§ The ^12CO 4.67 μm stretching mode
The bands of the ^12CO 4.67 μm symmetric stretching mode are presented in Figures <ref>, <ref>, <ref> and <ref> and the column densities in Table <ref>. We find variations of more than an order of magnitude between the different sources and ascribe this to the volatility of CO. Because of its lower desorption temperature, CO ice is highly sensitive to the temperature structure of the protostellar envelope which in turn can significantly impact the CO ice budget and cause variations depending on the source.
§.§.§ The ^13CO 4.78 μm stretching mode
The ^13CO 4.78 μm stretching mode is observed in 14 out of 17 sources of which only two are well isolated from the CO gas-phase rotation-vibrational lines (B1-b and B1-c). The strong line forest that dominates most of the sources in this sample poses a challenge to the study of this weak feature since the actual shape and optical depth of this band are likely affected by the gas lines. We briefly investigated the effect of the emission lines by fitting Gaussian profiles and subtracting the two gaseous CO lines closest to the bands and we find that the ^13CO column density increases by a factor 1.4 compared to the non-subtracted band which is still within our reported uncertainties. We note however that the gas-subtracted ^13CO ice bands are significantly broader than non-subtracted bands and also much broader than the band of the laboratory spectrum of pure CO which could be the main contributor of this band <cit.>. On the other hand, a broad band profile is possible if the ^13CO ice is residing in a polar CH_3OH-rich ice matrix <cit.>. A careful modelling and removal of the gaseous CO lines is needed in order to properly isolate the ice feature and quantify the ^13CO ice column density. Such CO gas modeling is however outside the scope of this paper. In Figures <ref> and <ref> we show the Gaussian curves that were fitted to the ^13CO features in order to determine the integrated optical depths.
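As an illustration of this gas-line correction, the sketch below fits and removes two narrow Gaussian gas lines adjacent to the ice band before the feature is integrated. The function and parameter names are assumptions for illustration, and a full treatment would require modelling the complete CO rovibrational spectrum rather than individual lines.

import numpy as np
from scipy.optimize import curve_fit

def two_gas_lines(nu, a1, c1, w1, a2, c2, w2):
    # Two narrow Gaussian gas lines (amplitudes positive for emission,
    # negative for absorption), centres and widths in cm^-1.
    return (a1 * np.exp(-0.5 * ((nu - c1) / w1) ** 2) +
            a2 * np.exp(-0.5 * ((nu - c2) / w2) ** 2))

def subtract_gas_lines(nu, tau, p0):
    # p0 holds the initial guesses for the two lines; in practice the fit
    # should be restricted to channels outside the ice band itself.
    popt, _ = curve_fit(two_gas_lines, nu, tau, p0=p0)
    return tau - two_gas_lines(nu, *popt)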
§.§.§ ^12CO/^13CO
The carbon isotope ratios derived from the CO ice absorption bands are presented in Table <ref>. In general we find high ^12CO/^13CO ratios. The sources B1-a-S, Per-emb22, Per-emb33, Ser-S68N-N and Per-emb27 in particular have very high ratios, all of which also have very weak ^13CO ice absorption features. Moreover, the ^13CO ice bands of some of these sources are buried in the gas line forest which could be the reason behind some of these extreme ratios. We derive a sub-ISM ratio for the source B1-c but we note that the spectrum of this source was taken at medium resolution unlike the other sources in this sample. Consequently, the narrow ^13CO band might not be fully resolved. Additionally, the gaseous CO lines in this source are in absorption and could be blended with the unresolved ice feature further contributing to its width.
Finally, we derive a ^12CO/^13CO ratio of 96 ± 14 from the 2.35 μm overtone mode in Per-emb55-b, which is a factor ∼ 1.6 lower than the ratio of 152 ± 23 derived in this same source from the 4.67 μm band. For the 4.67 μm band we find a mean ^12CO/^13CO ratio of 165 ± 52 for the sample as a whole when we exclude the three extreme outliers Per-emb22, Per-emb33 and Per-emb27. Due to the fluctuating values derived from the CO vibrational modes, we advise the reader to refer to the ratios derived from the CO_2 vibrational modes rather than the CO ice bands.
§.§ Limitations and future work
As shown in the previous sections, we examined the carbon isotope ratios for multiple CO and CO_2 vibrational modes for a large sample of low mass sources. We find, for the most part, consistent values especially between the ratios derived from the CO_2 ice bands. One source of uncertainty is the band strengths, especially for the weaker ^12CO overtone mode. The 4.40 - 4.5 μm spectral region is also heavily dominated by gaseous CO rotation-vibrational lines which poses a challenge when quantifying ^13CO column densities from the absorption feature at 4.78 μm. Therefore, a careful modeling and subtraction of these gas lines is needed to properly isolate the ^13CO ice features and determine ^13CO column densities. Alternatively, spectral extractions at locations where the gas-phase lines are weak or absent could also aid in the analysis of the ^13CO ice features. Finally, future work should also focus on finding similarities or variations between close binaries as well as performing spatial ice mapping, a feat that is now possible with the JWST.
§ DISCUSSION
The carbon isotope ratio is a crucial link between the evolutionary phases of star and planet formation because of its sensitivity to the chemical and physical conditions that reign during each stage. In the following section we will examine how the results presented in this work relate to the trends observed across different astronomical environments both in the solid state and gas phase.
§.§ Protostars: Solid state
§.§.§ CO_2
The ^12C/^13C ratios we derived from solid CO_2 for the JOYS+ sample are presented in Figure <ref> along with isotope ratios derived in the solid state for the ISO-SWS study of young stellar objects <cit.> and two highly extincted lines of sight <cit.> in the Cha I molecular cloud. We note that although previous studies used the uncorrected <cit.> band strengths, the corrections have no effect on the relative band strengths and consequently on the ^12C/^13C ratios. For this reason, the ratios determined in these previous studies can be directly compared to the ratios derived in this work. The light error bars of the JOYS+ data points illustrate the uncertainties including the error on the band strengths, while the solid error bars illustrate the uncertainties excluding the error on the band strengths. The errors in the band strengths account for the change in the band strength due to the temperature and ice mixture. The small error bars when the uncertainty on the band strengths is excluded demonstrate the great capabilities of the JWST and the high quality of these data. Note that previous studies generally do not include detailed analysis of the uncertainty in the band strengths in their error bars.
The results show that the ratios obtained from the 15.2 μm bands are clustered slightly above the ISM ratio of 68 <cit.>. This suggests that the ices in our sources are slightly deficient in ^13C in comparison to the local ISM. In the cases where we detect all three ^12CO_2 vibrational modes, we find consistent column densities and ratios (Figure <ref>).
This is further evidence that the 2.70 μm feature primarily consists of the ^12CO_2 combination mode. These consistent values also show that the 4.27 μm stretching mode can be used to determine ^12CO_2 column densities in cases where the other ^12CO_2 bands are not available and where the stretching mode is not saturated.
When compared to other solid state studies of carbon isotope ratios, the ^12CO_2/^13CO_2 ratios measured in our sample are fairly consistent with those found in the ISO-SWS high mass sources by <cit.> (ranging from 52 to 113). Similar to the JOYS+ sources, the authors report values that are scattered around the local ISM ratio though they also find sub-ISM values as low as 52. These sub-ISM ratios still agree with our values within the reported error bars. Figure <ref> illustrates the distribution of the ^12CO_2/^13CO_2 ratio for the JOYS+ sample and the ISO high mass sources <cit.>. While a large fraction of the ISO MYSOs also display elevated ratios with respect to the ISM standard, the ratios of the JOYS+ protostars are in general higher with median values of 81, 75 and 100 for the 2.70 μm combination mode, the 4.27 μm stretching mode and the 15.2 μm bending mode, respectively. These are similar to the mean values of 85 ± 23, 76 ± 12 and 97 ± 17 derived for these bands respectively. The agreement between the ratios derived from the different CO_2 vibrational modes of the JOYS+ protostars again shows the great capability of the JWST to accurately probe different lines of sight and observe the weak ice features in the 2 μm region, the generally saturated strong stretching mode at 4.27 μm as well as the bending mode at 15.2 μm.
Our values are also consistent with, albeit a bit higher than, the values reported in <cit.> for the two lines of sight in the Cha I molecular cloud (^12CO_2/^13CO_2 ∼ 69 - 87). We note however that the ^12CO_2 column densities derived for Cha I were extracted from the 4.27 μm bands which are saturated in both lines of sight. Consequently, the ^12CO_2 column density could be somewhat underestimated in those sources. Furthermore, the uncertainties reported in <cit.> do not include errors on the band strengths. Nonetheless, the consistency between the ratios derived from the protostellar stage and the dense cloud stage paints a harmonious picture in which the CO_2 was likely inherited from its parent molecular cloud.
§.§.§ CO
The ^12CO/^13CO ratios derived from solid CO for the JOYS+ sample are shown in Figure <ref> along with isotope ratios derived from various studies both in the gas and solid state. Similar to CO_2, the corrections on the CO band strengths <cit.> have no effect on the ^12C/^13C ratio and values derived in previous solid state studies can therefore be directly compared to the findings in this work. Overall our results show that the isotope ratios obtained from the CO 4.67 μm stretching mode are considerably elevated with respect to the local ISM value. The ratios are also higher than the ratios previously derived for one MYSO <cit.> and one LYSO <cit.>, both of which are very close to the ISM standard. This discrepancy is likely a result of our ^13CO column density being underestimated due to the weak ^13CO ice absorption feature and the strong CO gas line forest that dominates this spectral region. This is particularly the case for the three extreme outliers Per-emb22, Per-emb33 and Per-emb27 (shown only in Table <ref>), all of which exhibit strong gas-phase lines and weak ^13CO ice bands (Figure <ref>).
The ratio derived from the 2.35 μm feature in Per-emb55-b is also elevated with respect to the local ISM standard. When compared to the ratios derived in molecular clouds however, our values are consistent with those found by <cit.> where they also report super-ISM ^12CO/^13CO ratios (99-184) (Figure <ref>).
§.§ CO versus CO_2 isotope ratios
The ratios derived from CO and CO_2 from the JOYS+ sample are presented in Figure <ref> along with gas phase and solid state ratios derived from various studies. Normally, the carbon isotope ratios of solid CO and CO_2 are expected to bear similarities since the formation route of CO_2 is tied to that of CO through equation <ref> <cit.>:
CO + OH -> HO-CO -> CO_2 + H
In the ices of our sources however, we see that the ^12CO/^13CO ratios are in general higher compared to the ^12CO_2/^13CO_2 ratios derived from the 2.70 μm, 4.27 μm and 15.2 μm bands, though they still agree within the error bars for a number of our sources. As noted in Section <ref> a reason for this deviation could be that the CO ratios are less accurate due to the weak ^13CO ice feature. Overall, we consider the CO_2-derived isotope ratios to be more reliable given the multiple methods we used to quantify this ratio.
§.§ Protostars: Gas phase
In the gas phase, <cit.> reports significant ^12CO/^13CO variability for several YSOs with an overall trend of elevated ratios relative to the local ISM standard (Figure <ref>). One possible scenario they propose is gas-ice partitioning (Section <ref>) with an additional mechanism for removing ^13CO from the solid state through the formation of larger complex organic molecules (COMs) from ^13C-rich CO ices. Consequently, the COMs will be enriched in ^13C and exhibit low carbon isotope ratios, such as the sub-ISM values found in COMs by <cit.>. We note however that the gas-phase values reported by <cit.> are similar to those derived in this study which suggests that the ices might have sublimated from the grains, delivering the carbon isotope ratio to the gas during the pre-stellar stage.
Quantifying gas-phase and solid state isotope ratios in the same source was in fact proposed by <cit.> as a way of tracing the origin of organic material. Therefore, observations of gas-phase CO isotopologues for the JOYS+ protostars will be crucial for enabling this method of posterior isotopic labeling and linking the solid state chemistry with the gas-phase chemistry in these systems. In the following section we will take a look at the different mechanisms that can affect the carbon isotope ratio in both the solid and gas phase in the early stages of star and planet formation.
§.§ Fractionation processes
§.§.§ Isotope exchange reactions
Isotope exchange reactions are exothermic reactions that are efficient at low temperatures and that prompt preferential incorporation of ^13C^+ in CO (Equation <ref>) <cit.>. As a result, molecular ^13CO is enhanced with respect to ^12CO in the gas phase and this low ^12CO/^13CO ratio is subsequently conveyed to the solid state upon CO freeze-out. The ^12C^+ in contrast is enhanced in the gas phase leading to high ^12C^+/^13C^+ gas-phase ratios.
^13C^+ + ^12CO <-> ^12C^+ + ^13CO + ΔE, with ΔE = 34 K
§.§.§ Selective photodissociation
Selective photodissociation of ^13CO is a destruction mechanism that occurs because self-shielding of the less abundant ^13CO takes place at higher extinctions in comparison to ^12CO <cit.>. This leaves a large portion of the ^13CO gas vulnerable to ultraviolet radiation. As a result, ^13CO is destroyed, increasing the gaseous ^12CO/^13CO ratio and decreasing the ^12C^+/^13C^+ ratio as the molecules dissociate and the gas is enriched with ^13C^+ and O atoms, which counters the effect of isotope selective photodissociation (Section <ref>). This high molecular ^12CO/^13CO ratio translates to the solid state upon CO freeze-out, producing ices that are depleted in ^13C. Although we expect that under dark cloud conditions most of the radiation is attenuated by the high density environment, thus limiting the effect of selective photodissociation, studies by <cit.> have shown that turbulence can transfer selectively photodissociated material to the dark inner regions of clouds.
§.§.§ Gas-ice partitioning
A third mass-dependent mechanism was proposed by <cit.> where a small difference in the binding energies of ^12CO and the slightly heavier ^13CO (ΔE_bind ∼ 10 K) results in ^12CO desorbing at a slightly lower temperature from the grains. Consequently, the ^12CO/^13CO ratio in the gas phase is enhanced while the ices become enriched with ^13CO, leading to low carbon isotope ratios in the solid state. It is worth noting however that gas-ice partitioning is only effective during a very narrow temperature window <cit.>.
Of the mechanisms discussed above, selective photodissociation could be the culprit behind the ^13C deficiency observed in this sample, though its efficiency under these conditions is open to question.
§.§ Evolution of the ^12C/^13C ratio
As the stellar system matures, different processes will begin to dominate, slowly eroding the initial carbon isotope ratio of the pristine material. In the following sections we will discuss the evolution of the carbon isotope ratio during the later epochs of star formation.
§.§.§ Proto-planetary disks
Studies of carbon isotope ratios in disks reveal that at this stage the conditions in the system are irrevocably changed and the initial carbon isotope imprint can be effectively erased in certain regions of the disk. <cit.> and <cit.> for instance found a dichotomy in the TW Hya disk where two different carbon isotope ratios were measured for CO and C_2H (21 ± 5 and 65 ± 20, respectively). In the same disk <cit.> finds a ratio of 86 ± 4 from HCN rotational lines.
The variability of the carbon isotope ratio in protoplanetary disks is a consequence of the ratio being contingent on the location in which the molecules were formed. This is because different processes dominate in different regions of the disk. Photodissociation for instance will play a significant role in the upper layers of the disk, where the high ratio derived from C_2H is found, while isotope exchange reactions are more efficient in the mid-plane and at larger radii where the temperatures are low and where the low value for CO is measured <cit.>.
§.§.§ Exoplanets
The large variability of the carbon isotope abundance is also observed in the atmospheres of exo-planets where the formation mechanism of the planet plays a crucial role in its carbon isotope reservoir.
Recent observational efforts by <cit.>, for instance, revealed a significant enrichment of ^13CO in the atmospheres of hot Jupiters (31 ± 17, 10.2 - 42.6 with 68% confidence, and 62 ± 2, respectively). This ^13C enrichment is likely because the planet was formed beyond the CO snowline and accreted most of its material from ^13C-rich ices. These ^13C-rich ices are likely the result of fractionation processes such as isotope-exchange reactions and possibly gas-ice partitioning that must have occurred after the protostellar stage.
<cit.> in contrast measured very high ^12C/^13C ratios (97 ± 15 and 184 ± 61, respectively) for brown dwarfs, pointing to a formation route similar to that of stars where the object forms from cloud fragmentation and gravitational collapse and inherits most of its gaseous material from a parent cloud that is also deficient in ^13C and ^13C ices, as is found here. This shows that the carbon isotope ratio is potentially a powerful tool for tracing the formation pathways of extrasolar planets.
§.§.§ Comets and meteorites
The chemical complexity of the infant planetary system is at last fossilized in the remains of its building blocks, the comets. Observations of C_2, CN and HCN in numerous comets for instance showed that Solar system objects all have carbon isotope ratios similar to the Solar abundance (∼ 89) with little variation between the ratios <cit.>. Similarly, <cit.> finds ratios consistent with the Solar abundance from a large study of 75 chondrites. Finally, <cit.> also derived a ratio of 84 ± 4 from CO_2 measurements in the coma of comet 67P/Churyumov-Gerasimenko. It is worth noting that the carbon isotope ratio found in comets and chondrites does bear some resemblance with the values derived in the protostellar stage. This could be an indication that while fractionation processes erode the initial carbon isotope imprint during the later stages, a fraction of the pristine material still survives this journey and is eventually incorporated into the planetary system.
§ CONCLUSIONS
We have analyzed JWST NIRSpec and MIRI data of 17 Class 0/I low mass protostars and determined the ^12CO, ^13CO, ^12CO_2 and ^13CO_2 column densities and ^12C/^13C isotope ratios from the ^12CO_2 ν_1 + ν_2 and 2ν_1 + ν_2 combination modes (2.70 μm and 2.77 μm), the ^12CO_2 ν_3 stretching mode (4.27 μm), the ^13CO_2 ν_3 stretching mode (4.39 μm), the ^12CO_2 ν_2 bending mode (15.2 μm), the ^12CO 2-0 overtone mode (2.35 μm), the ^12CO 1-0 stretching mode (4.67 μm) and the ^13CO 1-0 stretching mode (4.78 μm). The most significant finding is that the ratios are consistent, albeit slightly elevated, in comparison to the local ISM value. These results show that the ices leave the protostellar stage with high ^12C/^13C ratios, after which a series of fractionation processes during the later stages could modify the initial isotope abundance.
* The absorption feature observed at 2.70 μm in 9 out of 17 sources is assigned to the combination mode of ^12CO_2. We also detect the 2.77 μm combination mode of ^12CO_2 in two sources.
* We find consistent ^12CO_2/ ^13CO_2 ratios between the 4.27 μm, 15.2 μm and the 2.70 μm bands. Our values are slightly elevated with respect to the standard ISM value but consistent with carbon isotope ratios observed in other protostars and in dark molecular clouds. We observe variations of the ^12CO_2/ ^13CO_2 ratio from source to source which could be pointing towards genuine differences in the chemical complexity of their protostellar envelopes.
* We report a detection of the 2.35 μm ν 2-0 ^12CO overtone mode in one source.
* The ^12CO/^13CO ratios derived from the 4.67 μm band are elevated with respect to the local ISM ratio and the ^12CO_2/^13CO_2 ratios. The ratio derived from the overtone mode is a factor ∼ 1.6 lower compared to the ratio extracted from the 4.67 μm band in the same source.
Our findings show that fractionation processes begin to dominate after the protostellar stage. Future work should be focused on ice spatial mapping of close binary pairs and extended protostellar envelopes as well as obtaining observations of gas-phase CO isotopologues for the JOYS+ sample in order to compare solid state and gas-phase carbon isotope ratios.
Astrochemistry in Leiden is supported by the Netherlands Research School for Astronomy (NOVA), by funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 101019751 MOLDISK), and by the Dutch Research Council (NWO) grant 618.000.001. Support by the Danish National Research Foundation through the Center of Excellence “InterCat” (Grant agreement no.: DNRF150) is also acknowledged.
This work is based on observations made with the NASA/ESA/CSA James Webb Space Telescope. The data were obtained from the Mikulski Archive for Space Telescopes at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127 for JWST.
The following National and International Funding Agencies funded and supported the MIRI development: NASA; ESA; Belgian Science Policy Office (BELSPO); Centre National d’Etudes Spatiales (CNES); Danish National Space Centre; Deutsches Zentrum für Luft- und Raumfahrt (DLR); Enterprise Ireland; Ministerio de Economía y Competitividad; Netherlands Research School for Astronomy (NOVA); Netherlands Organisation for Scientific Research (NWO); Science and Technology Facilities Council; Swiss Space Office; Swedish National Space Agency; and UK Space Agency.
NB thanks K. Pontoppidan for his contribution to the NIRSpec data reduction and helpful discussions. P.J.K. acknowledges support from the Science Foundation Ireland/Irish Research Council Pathway program under grant No. 21/PATH-S/9360. L.M. acknowledges the financial support of DAE and DST-SERB research grant (MTR/2021/000864) of the Government of India.
§ ERROR ANALYSIS
The uncertainties on the derived isotope ratios are mostly sensitive to the uncertainties in the band strengths. Minor contributors are also the error on the placement of the continuum and the noise level. To ultimately set constraints on the ratios we must take all these factors into account when calculating column densities.
The ratios for the CO and CO_2 bands are calculated as follows:
ratio = N_^12CO_2/N_^13CO_2 = (∫τ dν_^12CO_2/A_^12CO_2)/(∫τ dν_^13CO_2/A_^13CO_2) = (∫τ dν_^12CO_2/∫τ dν_^13CO_2) × (A_^13CO_2/A_^12CO_2),
where N_^12CO_2 and N_^13CO_2 are the column densities of ^12CO_2 and ^13CO_2 respectively, ∫τ dν_^12CO_2 and ∫τ dν_^13CO_2 are the integrated optical depths and A_^12CO_2 and A_^13CO_2 are the band strengths of the ^12CO_2 and ^13CO_2 vibrational modes respectively.
From equation <ref>, the uncertainty on the ratio is determined following the error propagation rules:
σ_ratio/ratio = √((σ_A_^12CO_2/A_^12CO_2)^2 + (σ_A_^13CO_2/A_^13CO_2)^2 + (σ_∫τ dν_^12CO_2/∫τ dν_^12CO_2)^2 + (σ_∫τ dν_^13CO_2/∫τ dν_^13CO_2)^2),
where σ_∫τ dν_^12CO_2 and σ_∫τ dν_^13CO_2 are the uncertainties on the integrated optical depth for ^12CO_2 and ^13CO_2 respectively.
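The relation above can be evaluated with a small helper such as the following sketch (Python); the function and argument names are illustrative rather than part of the actual reduction pipeline.

import numpy as np

def ratio_uncertainty(ratio, int_od_12, sig_od_12, int_od_13, sig_od_13,
                      A_12, sig_A_12, A_13, sig_A_13):
    # Propagate the relative uncertainties on the two integrated optical depths
    # and the two band strengths onto the isotopologue ratio.
    rel = np.sqrt((sig_A_12 / A_12) ** 2 + (sig_A_13 / A_13) ** 2 +
                  (sig_od_12 / int_od_12) ** 2 + (sig_od_13 / int_od_13) ** 2)
    return ratio * rel  # absolute 1-sigma uncertainty on the ratio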
The uncertainty on the integrated optical depth is influenced by the uncertainty in the placement of the continuum and the noise level and is determined as follows:
(σ_∫τ dν/∫τ dν)_^12CO_2 = √((σ_int. OD, cont/int. OD, cont)^2_^12CO_2 + (σ_noise/int. OD, noise)^2_^12CO_2),
where σ_int. OD, cont is the uncertainty due to the placement of the continuum, int. OD, cont is the integrated optical depth with the continuum of choice, σ_noise is the uncertainty due to the noise level and int. OD, noise is the integrated optical depth. These parameters are calculated in a similar manner for ^13CO_2.
§.§ Continuum placement
We determine the uncertainty on the placement of the continuum by fitting the bands with two different baselines and using the difference in integrated optical depth as the uncertainty.
§.§ Noise level
The uncertainty due to the noise level is determined by calculating the rms in line-free regions on the flux scale (σ_f,λ) and propagating this to the optical depth scale (σ_τ,λ) following the error propagation rules for logarithms:
σ_τ,λ = σ_f,λ/F_cont,λ,
since the observed fluxes F^obs_λ are converted to optical depth scale following equation (<ref>),
τ^obs_λ = -ln(F^obs_λ/F^cont_λ),
where F^cont_λ is the flux of the continuum.
We take F_cont,λ as constant since the uncertainty for that parameter is determined separately.
We use σ_f,λ to generate a noise level following a normal distribution and resample the spectrum with this added noise. We calculate the integrated optical depth for these resampled spectra which produces a sampling distribution that we can fit a Gaussian curve to (Figure <ref>). From the Gaussian curve we can extract σ_noise and int.OD,noise. This approach is similar to bootstrapping statistics and the contribution of this uncertainty is very small.
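A minimal sketch of this resampling step is given below (Python). It takes the mean and standard deviation of the resampled distribution, which for a well-behaved distribution is equivalent to the Gaussian fit described above; all variable names are illustrative.

import numpy as np

def noise_uncertainty(wavenumber, flux, flux_cont, sigma_flux, n_draws=1000, seed=0):
    # Perturb the observed flux n_draws times with Gaussian noise of rms sigma_flux,
    # convert each realisation to optical depth with tau = -ln(F/F_cont), and
    # integrate over the feature; low-S/N data should be guarded against
    # non-positive perturbed fluxes before taking the logarithm.
    rng = np.random.default_rng(seed)
    samples = np.empty(n_draws)
    for i in range(n_draws):
        perturbed = flux + rng.normal(0.0, sigma_flux, size=flux.size)
        tau = -np.log(perturbed / flux_cont)
        samples[i] = np.abs(np.sum(0.5 * (tau[1:] + tau[:-1]) * np.diff(wavenumber)))
    return samples.mean(), samples.std()  # int. OD, noise and sigma_noise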
§.§ Band strengths
The column density for each vibrational mode is determined as follows:
N = ∫τ dν/A,
where A is the band strength of the corresponding vibrational mode.
As mentioned, we use the <cit.> and <cit.> band strengths corrected by <cit.> to calculate the column densities.
The contributing factors to the uncertainty on the band strength for A_CO_2 are the placement of the continua (since we are considering relative errors), the uncertainty due to changes in the temperature of the ice and the uncertainty due to the ice mixtures. Since we know that the water component has a large contribution to the total band, we took the uncertainties determined for the CO_2:H_2O 1:24 binary ice mixture. All values used in this work are reported in Table <ref>.
For the error due to the continuum we assume an uncertainty of 5% since <cit.> only reports uncertainties >5% and no uncertainty is reported for this band strength. The uncertainty due to change in temperature is ∼ 10% <cit.>. Uncertainties for A_CO_2-H_2O are reported in <cit.>.
Taking all these assumptions into consideration, we determine the uncertainties on the band strengths as follows:
σ_A = √(σ_A,cont^2 + σ_A,temp^2 + σ_A, H_2O-CO_2^2).
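For completeness, a trivial numerical version of this quadrature sum is shown below; the input values are illustrative relative uncertainties, not the adopted ones.

import numpy as np

def band_strength_uncertainty(sig_cont, sig_temp, sig_mix):
    # Combine the continuum, temperature and mixture terms in quadrature.
    return np.sqrt(sig_cont ** 2 + sig_temp ** 2 + sig_mix ** 2)

# e.g. a 5% continuum term, a 10% temperature term and a 10% mixture term:
print(band_strength_uncertainty(0.05, 0.10, 0.10))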
The uncertainties on the CO column densities are determined in a similar manner with the following exceptions:
* We do not have a mixture component for the error on the band strengths since we expect CO to be mainly in its pure form <cit.>.
* We do not have a temperature component for the error on the band strengths since CO desorbs above 40 K.
* We take a 10% error on the band strength for both the ^12CO and ^13CO bands due to the placement of the continuum in the laboratory spectra.
* The uncertainty on the integrated optical depth of the ^13CO ice feature between gas-subtracted spectra and non-gas subtracted spectra is ∼ 50 %. This is not included in the error bars of the figures.
§ ADDITIONAL FIGURES
In Figures <ref>, <ref> and <ref> we present the continuum subtractions for the 2 μm region, the 4 μm region and the ^12CO_2 stretching mode at 4.27 μm.
§.§ Continuum subtractions
§.§ Band Overview
§.§.§ NIRSpec 4 μm region
In Figures <ref>, <ref>, <ref>, <ref>, <ref> <ref>, <ref> and <ref> we present an overview of the CO_2 and CO ice features in the 4 μm and 15 μm region.
§.§.§ MIRI 15 μm region
§.§ Additional features
Figure <ref> shows the 2.70 μm and 2.77 μm combination modes of CO_2 with the laboratory spectrum of pure CO_2 fitted to the data. Figure <ref> shows the silicate bands in emission in the source EDJ183-a. In Figure <ref> we show the ^12CO_2 4.27 μm band of Per-emb35 and finally in Figure <ref> we show the photospheric model fittings for TMC1-E.
§ SOURCE COORDINATES
In Table <ref> we present the coordinates of the apertures used to extract the spectra.
| The ingredients constituting interstellar material, stars and eventually planets originate in dense molecular clouds where atoms and small molecules are accreted from the gas-phase onto cold dust grains forming icy mantles <cit.>. The abundance and composition of these ice mantles are therefore a direct probe of the chemical complexity of the molecular cloud at the time of ice formation.
Susceptible to changes in these chemical conditions are isotope ratios that have been well studied both in the gas-phase and solid state in multiple celestial bodies <cit.>. Variability of the isotope abundance across different astronomical environments is therefore indicative of the chemical evolution of these systems and the physicochemical processes that the pristine material has been subjected to.
The carbon isotope ratio in particular is an attractive contender for isotope chemistry studies for numerous reasons. First, carbonaceous ices such as ^12CO and ^12CO_2 are abundant and readily detected in infrared observations <cit.> and if the sensitivity allows, their weaker isotopologue bands ^13CO and ^13CO_2 are also observable in the same spectral range, thus enabling carbon isotope analysis in the solid state <cit.>. Additionally, CO and CO_2 ice have multiple vibrational modes across the infrared spectrum that facilitate comparison between ratios derived from the different absorption bands.
CO is also the second most abundant gas-phase molecule after H_2 in the interstellar medium (ISM) and its isotopologues ^12C^16O, ^13C^16O, and ^12C^18O therefore have strong gas-phase molecular lines in the sub-mm and infrared making them ideal for gas-phase studies in various environments <cit.>, including exo-planet atmospheres <cit.>. This plethora of observational possibilities enables us to draw comparisons between gas-phase and solid state abundances and further elucidate the origin of interstellar material and its evolution from stellar nurseries to the time of planet formation.
Enrichment of interstellar material with ^13C initially occurs during the CNO-cycle of stellar nucleosynthesis (M_⋆ > 1.3 M_⊙) when ^13N atoms are converted into ^13C atoms. A fraction of these ^13C atoms remains in the star and is later released into the ISM at the end of its life cycle <cit.>.
Once formed, the main fractionation processes responsible for affecting regional carbon isotope ratios are isotope exchange reactions, isotope-selective photodissociation, and possibly gas-ice partitioning <cit.>.
The yardsticks against which all observations are usually measured are the ratios derived for the local ISM by <cit.> from CO studies and by <cit.> from CN studies as well as the Solar abundance ratio <cit.>.
In diffuse clouds, <cit.> derived ratios from optical line absorption studies of CH^+ that were consistent with the ISM standard (^12CH^+/^13CH^+ ∼ 74). Conversely, the effect of selective photodissociation was observed in other diffuse clouds where higher ratios were extracted from CO measurements (<cit.> and <cit.>).
In dense star forming regions <cit.> found mainly sub-ISM ratios from gas-phase studies of complex organic molecules (COMs) with considerable deviations between the different species (^12C/^13C ∼ 27-67). Large discrepancies were also observed in studies of protoplanetary disks where the values varied drastically depending on the line of sight and the region of the disk in which the ratio was measured <cit.>. Finally, cometary ratios and ratios derived from chondrites were found to be similar to the standard Solar value (∼ 89) <cit.>.
In the solid state, <cit.> examined a number of massive young stellar objects (MYSOs), three low mass young stellar objects (LYSOs), and two clouds and found ^12CO_2/^13CO_2 ∼ 52 - 113, consistent with the standard ISM ratio. Additional studies of CO ice for one MYSO <cit.> and one LYSO <cit.> yielded ^12CO/^13CO ∼ 71 and 69, respectively. The sample of solar-mass protostars remained limited however since past observational facilities lacked the sensitivity required to observe most of these low mass objects. Moreover, oftentimes the partial spectral coverage of these observatories also meant that CO and CO_2 ice could not be examined simultaneously.
Now with the exquisite sensitivity of the James Webb Space Telescope (JWST) <cit.>, we are able to study the various vibrational modes of CO and CO_2, including their weaker isotopologue absorption features, for the first time in solar-mass protostars. Moreover, the high spectral resolution of the JWST offers a unique opportunity to perform detailed profile analysis of these strong and weak ice features. <cit.> for instance, utilized this unprecedented sensitivity to quantify solid state ^12CO_2/^13CO_2 and ^12CO/^13CO ratios in the Cha I dark molecular cloud (^12CO_2/^13CO_2 ∼ 69 - 87 and ^12CO/^13CO∼ 99 - 169). <cit.> also used the high spectral resolution of the JWST to perform detailed profile analysis of the 4.39 μm band of ^13CO_2 and showed that the band can be decomposed in five principal components <cit.> that are representative of the chemical and thermal environment of the CO_2 ice.
In this paper we present high resolution JWST NIRSpec and MIRI observations of the ^12CO_2 ν_1 + ν_2 and the 2ν_1 + ν_2 combination modes (2.70 μm and 2.77 μm), the ^12CO_2 ν_3 stretching mode (4.27 μm), the ^13CO_2 ν_3 stretching mode (4.39 μm), the ^12CO_2 ν_2 bending mode (15.2 μm), the ^12CO 2-0 overtone mode (2.35 μm), the ^12CO 1-0 stretching mode (4.67 μm) and the ^13CO 1-0 stretching mode (4.78 μm) for 17 low mass Class 0/I protostars observed as part of the JWST Observations of Young protoStars (JOYS+) program <cit.>. We derive column densities of ^12CO_2, ^13CO_2, ^12CO and ^13CO for each vibrational mode and determine the carbon isotope ratio for the sources in this sample. We build on the work of <cit.> by expanding the sample of solar-mass objects and examining their isotope reservoir.
This paper is structured as follows. In Section <ref> we present our data and describe the data reduction process and the methods for analyzing the spectra. In Section <ref> we derive column densities and isotope ratios for the bands as a whole, and we discuss the results in Section <ref>. Finally, in Section <ref> we provide a summary and the concluding remarks.
§.§ Protostars: Solid state
§.§.§ CO_2
The ^12C/^13C ratios we derived from solid CO_2 for the JOYS+ sample are presented in Figure <ref> along with isotope ratios derived in the solid state for the ISO-SWS study of young stellar objects <cit.> and two highly extincted lines of sight <cit.> in the Cha I molecular cloud. We note that although previous studies used the uncorrected <cit.> band strengths, that the corrections have no affect on the relative band strengths and consequently the ^12C/^13C ratios. For this reason, the ratios determined in these previous studies can be directly compared to the ratios derived in this work. The light error bars of the JOYS+ data points illustrate the uncertainties including the error on the band strengths, while the solid errorbars illustrate the uncertainties excluding the error on the band strengths. The errors in the band strengths account for the change in the band strength due to the temperature and ice mixture. The small error bars when the uncertainty on the band strengths are excluded demonstrate the great capabilities of the JWST and the high quality of these data. Note that previous studies generally do not include detailed analysis of the uncertainty in the band strengths in their errorbars.
The results show that the ratios obtained from the 15.2 μm bands are clustered slightly above the ISM ratio of 68 <cit.>. This suggest that the ices in our sources are slightly deficient in ^13C in comparison to the local ISM. In the cases where we detect all three ^12CO_2 vibrational modes, we find consistent column densities and ratios (Figure <ref>).
This is further evidence that the 2.70 μm feature primarily consists of the ^12CO_2 combination mode. These consistent values also show that the 4.27 μm stretching mode can be used to determine ^12CO_2 column densities in cases where the other ^12CO_2 bands are not available and where the stretching mode is not saturated.
When compared to other solid state studies of carbon isotope ratios, the ^12CO_2/^13CO_2 ratios measured in our sample are fairly consistent with the those found in the ISO-SWS high mass sources by <cit.> (ranging 52 - 113). Similar to the JOYS+ sources, the authors report values that are scattered around the local ISM ratio though they also find sub-ISM values as low as 52. These sub-ISM ratios still agree with our values within the reported errorbars. Figure <ref> illustrates the distribution of the ^12CO_2/^13CO_2 ratio for the JOYS+ sample and the ISO high mass sources <cit.>. While a large fraction of the ISO MYSOs also display elevated ratios with respect to the ISM standard, the ratios of the JOYS+ protostars are in general higher with median values of 81, 75 and 100 for the 2.70 μm combination mode, the 4.27 μm stretching mode and the 15.2 μm bending mode, respectively. These are similar to the mean values of 85 ± 23, 76 ± 12 and 97 ± 17 derived for these bands respectively. The agreement between the ratios derived from the different CO_2 vibrational modes of the JOYS+ protostars again shows the great capability of the JWST to accurately probe different lines of sight and observe the weak ice features in the 2 μm region, the generally saturated strong stretching mode at 4.27 μm as well as the bending mode at 15.2 μm.
Our values are also consistent, albeit a bit higher, compared to the values reported in <cit.> for the two lines of sight in the Cha I molecular cloud (^12CO_2/^13CO_2 ∼ 69 - 87). We note however that the ^12CO_2 column densities derived for Cha I were extracted from the 4.27 μm bands which are saturated in both lines of sight. Consequently, the ^12CO_2 column density could be somewhat underestimated in those sources. Furthermore, the uncertainties reported in <cit.> do not include errors on the band strengths. Nonetheless, the consistency between the ratios derived from the protostellar stage and the dense cloud stage paint a harmonious picture that the CO_2 was likely inherited from its parent molecular cloud.
§.§.§ CO
The ^12CO/^13CO ratios derived from solid CO for the JOYS+ sample are shown in Figure <ref> along with isotope ratios derived from various studies both in the gas and solid state. Similar to CO_2, the corrections on the CO band strengths <cit.> have no effect on the ^12C/^13C ratio and values derived in previous solid state studies can therefore be directly compared to the findings in this work. Overall our results show that the isotope ratios obtained from the CO 4.67 μm stretching mode are considerably elevated with respect to the local ISM value. The ratios are also higher than the ratios previously derived for one MYSO <cit.> and one LYSO <cit.>, both of which are very close to the ISM standard. This discrepancy is likely a result of our ^13CO column density being underestimated due to the weak ^13CO ice absorption feature and the strong CO gas line forest that dominates this spectral region. This is particularly the case for the two extreme outliers Per-emb22, Per-emb33 and Per-emb27 (shown only in Table <ref>), both of which exhibit strong gas phase lines and weak ^13CO ice bands (Figure <ref>).
The ratio derived from the 2.35 μm feature in Per-emb55-b is also elevated with respect to the local ISM standard. When compared to the ratios derived in molecular clouds however, our values are consistent with those found by <cit.> where they also report super-ISM ^12CO/^13CO ratios (99-184) (Figure <ref>).
§.§ CO versus CO_2 isotope ratios
The ratios derived from CO and CO_2 from the JOYS+ sample are presented in Figure <ref> along with gas phase and solid state ratios derived from various studies. Normally, the carbon isotope ratio of solid CO and CO_2 are expected to bear similarities since the formation route of CO_2 is tied to that of CO through equation <ref> <cit.>:
CO + OH -> HO-CO -> CO_2 + H
In the ices of our sources however, we see that the ^12CO/^13CO ratios are in general higher compared to the ^12CO_2/^13CO_2 ratios derived from the 2.70 μm, 4.27 μm and 15.2 μm bands, though they still agree within the error bars for a number of our sources. As noted in Section <ref> a reason for this deviation could be that the CO ratios are less accurate due to the weak ^13CO ice feature. Overall, we consider the CO_2-derived isotope ratios to be more reliable given the multiple methods we used to quantify this ratio.
§.§ Protostars: Gas phase
In the gas-phase <cit.> reports significant ^12CO/^13CO variability for several YSOs with an overall trend of elevated ratios relative to the local ISM standard (Figure <ref>). One possible scenario they propose is gas-ice partitioning (Section <ref>) with an additional mechanism for removing ^13CO from the solid state through the formation of larger complex organic molecules (COMs) from ^13C-rich CO ices. Consequently, the COMs will be enriched ^13C and exhibit low carbon isotope ratios, such as the sub-ISM values found in COMs by <cit.>. We note however that the gas-phase values reported by <cit.> are similar to those derived in this study which suggests that the ices might have sublimated from the grains, delivering the carbon isotope ratio to the gas during the pre-stellar stage.
Quantifying gas-phase and solid state isotope ratios in the same source was in fact proposed by <cit.> as a way of tracing the origin of organic material. Therefore, observations of gas-phase CO isotopologues for the JOYS+ protostars will be crucial for enabling this method of posterior isotopic labeling and linking the solid state chemistry with the gas-phase chemistry in these systems. In the following section we will take a look at the different mechanisms that can affect the carbon isotope ratio in both the solid and gas phase in the early stages of star and planet formation.
§.§ Fractionation processes
§.§.§ Isotope exchange reactions
Isotope exchange reactions are exothermic reactions that are efficient at low temperatures and that prompt preferential incorporation of ^13C^+ in CO (Equation <ref>) <cit.>. As a result, molecular ^13CO is enhanced with respect to ^12CO in the gas phase and this low ^12CO/^13CO ratio is subsequently conveyed to the solid state upon CO freeze-out. The ^12C^+ in contrast is enhanced in the gas phase leading to high ^12C^+/^13C^+ gas-phase ratios.
^13C^+ + ^12CO <-> ^12C^+ + ^13CO + Δ E = 34 K
§.§.§ Selective photodissociation
Selective photodissociation of ^13CO is a destruction mechanism that occurs because self-shielding of the less abundant ^13CO takes place at higher extinctions in comparison to ^12CO <cit.>. This leaves a large portion of the ^13CO gas vulnerable to ultraviolet radiation. As a result, ^13CO is destroyed increasing the gaseous ^12CO/^13CO ratio and decreasing the ^12C^+/^13C^+ ratios as the molecules dissociate and the gas is enriched with ^13C^+ and O atoms, which counters the effect of isotope selective photodissociation (Section <ref>). This high molecular ^12CO/^13CO ratio translates to the solid state upon CO freeze-out producing ices that are depleted in ^13C. Although we expect that under dark cloud conditions most of the radiation is attenuated by the high density environment, thus limiting the effect of selective photodissociation, studies by <cit.> have shown that turbulence can transfer selective photodissociated material to the dark inner regions of clouds.
§.§.§ Gas-ice partitioning
A third mass-dependent mechanism was proposed by <cit.> where a small difference in the binding energies of ^12CO and the slightly heavier ^13CO (ΔE_bind ∼ 10 K) results in ^12CO desorbing at a slightly lower temperature from the grains. Consequently, the ^12CO/^13CO in the gas phase is enhanced while the ices become enriched with ^13CO, leading to low carbon isotope ratios in the solid state. It is worth nothing however that gas-ice partitioning is only effective during a very narrow temperature window <cit.>.
Of the mechanism discussed above, selective photodissociation could be the culprit of the ^13C deficiency observed in this sample though its efficiency under these conditions is open to question.
§.§ Evolution of the ^12C/^13C ratio
As the stellar system matures, different process will begin to dominate, slowly eroding the initial carbon isotope ratio of the pristine material. In the following sections we will discuss the evolution of the carbon isotope ratio during the later epochs of star formation.
§.§.§ Proto-planetary disks
Studies of carbon isotope ratios in disks reveal that at this stage the conditions in the system are irrevocably changed and the initial carbon isotope imprint can be effectively erased in certain regions of the disk. <cit.> and <cit.>, for instance, found a dichotomy in the TW Hya disk, where two different carbon isotope ratios were measured for CO and C_2H (21 ± 5 and 65 ± 20, respectively). In the same disk, <cit.> found a ratio of 86 ± 4 from HCN rotational lines.
The variability of the carbon isotope ratio in protoplanetary disks is a consequence of the ratio being contingent on the location in which the molecules were formed. This is because different processes dominate in different regions of the disk. Photodissociation for instance will play a significant role in the upper layers of the disk, where the high ratio derived from C_2H is found, while isotope exchange reactions are more efficient in the mid-plane and at larger radii where the temperatures are low and where the low value for CO is measured <cit.>.
§.§.§ Exoplanets
The large variability of the carbon isotope abundance is also observed in the atmospheres of exoplanets, where the formation mechanism of the planet plays a crucial role in setting its carbon isotope reservoir.
Recent observational efforts by <cit.>, for instance, revealed a significant enrichment of ^13CO in the atmospheres of hot Jupiters (31 ± 17, 10.2-42.6 with 68% confidence, and 62 ± 2, respectively). This ^13C enrichment is likely because the planet formed beyond the CO snowline and accreted most of its material from ^13C-rich ices. These ^13C-rich ices are likely the result of fractionation processes, such as isotope-exchange reactions and possibly gas-ice partitioning, that must have occurred after the protostellar stage.
<cit.> in contrast measured very high ^12C/^13C ratios (97 ± 15 and 184 ± 61, respectively) for brown dwarfs, pointing to a formation route similar to that of stars, where the object forms through cloud fragmentation and gravitational collapse and inherits most of its gaseous material from a parent cloud that is also deficient in ^13C and in ^13C ices, as is found here. This shows that the carbon isotope ratio is potentially a powerful tool for tracing the formation pathways of extrasolar planets.
§.§.§ Comets and meteorites
The chemical complexity of the infant planetary system is at last fossilized in the remains of its building blocks, the comets. Observations of C_2, CN and HCN in numerous comets, for instance, showed that Solar system objects all have carbon isotope ratios similar to the Solar abundance (∼ 89) with little variation between the ratios <cit.>. Similarly, <cit.> finds ratios consistent with the Solar abundance from a large study of 75 chondrites. Finally, <cit.> also derived a ratio of 84 ± 4 from CO_2 measurements in the coma of comet 67P/Churyumov-Gerasimenko. It is worth noting that the carbon isotope ratio found in comets and chondrites does bear some resemblance to the values derived in the protostellar stage. This could be an indication that while fractionation processes erode the initial carbon isotope imprint during the later stages, a fraction of the pristine material still survives this journey and is eventually incorporated into the planetary system.
http://arxiv.org/abs/2409.18016v1 | 20240926162353 | Relating Superconducting Optoelectronic Networks to Classical Neurodynamics | [
"Jeffrey M. Shainline",
"Bryce A. Primavera",
"Ryan O'Loughlin"
] | cs.NE | [
"cs.NE"
] |
Jeffrey M. Shainline^*, Bryce A. Primavera, and Ryan O'Loughlin
National Institute of Standards and Technology
325 Broadway, Boulder, CO, USA, 80305
^*[email protected]
Relating Superconducting Optoelectronic Networks to Classical Neurodynamics
September 28, 2024
===========================================================================
§ ABSTRACT
The circuits comprising superconducting optoelectronic synapses, dendrites, and neurons are described by numerically cumbersome and formally opaque coupled differential equations. Reference shainline2023phenomenological showed that a phenomenological model of superconducting loop neurons eliminates the need to solve the Josephson circuit equations that describe synapses and dendrites. The initial goal of the model was to decrease the time required for simulations, yet an additional benefit of the model was increased transparency of the underlying neural circuit operations and conceptual clarity regarding the connection of loop neurons to spin systems, pulse-coupled oscillators, and biological spiking neurons. Whereas the original model simplified the treatment of the Josephson-junction dynamics, essentially by only considering low-pass versions of the dendritic outputs, the model resorted to an awkward treatment of spikes generated by semiconductor transmitter circuits that required explicitly checking for threshold crossings and distinct treatment of time steps wherein somatic threshold is reached. Here we extend that model to simplify the treatment of spikes coming from somas, again making use of the fact that in neural systems the downstream recipients of spike events almost always perform low-pass filtering. The phenomenological model is derived from a Lagrangian, providing a formalism for superconducting optoelectronic networks on the foundation of least-action principles. We provide comparisons between the first and second phenomenological models, quantifying the accuracy of the additional approximations. We identify regions of circuit parameter space in which the extended model works well and regions where it works poorly. For some circuit parameters it is possible to represent the downstream dendritic response to a single spike as well as coincidences or sequences of spikes, indicating the model is not simply a reduction to rate coding. A simple algorithm is given for numerical analysis amenable to parallelized implementation on graphics or tensor processing units. The governing equations are shown to be nearly identical to those ubiquitous in the neuroscience literature for modeling leaky-integrator dendrites and neurons. This formal similarity and the overall transparency of the model provide a concise framework in which to leverage substantial prior work in algorithm development, including artificial neural networks and Hopfield models.
§ INTRODUCTION
A promising route to better understand the brain and to leverage its principles to perform useful computations is to construct technological systems that embody neural principles. This field of research is known as neuromorphic computing <cit.>. Within this domain, a specific approach is being pursued that aims to leverage light for spike-based communication at the single-photon level in conjunction with superconducting circuitry to perform fast, energy efficient computation <cit.>. Such systems are referred to as superconducting optoelectronic networks (SOENs). Much progress toward realization of these systems has been made in hardware <cit.>, but to make them a useful technology, appreciable theoretical development and modeling is necessary. For theoretical advances to be possible, an accurate, numerically efficient, and conceptually transparent formalism is required.
The most physically realistic models of biological neurons are based on the Hodgkin-Huxley equations, a system of four coupled, first-order, ordinary differential equations that capture the dynamics of four types of ion channels with voltage-dependent conductances through the cell membrane <cit.>. Similarly, the microscopic components that constitute SOENs are based on superconducting quantum interference devices (SQUIDs, <cit.>), in which the interplay between two Josephson junctions (JJs) and an output low-pass filter can be expressed as a system of five first-order equations <cit.>. The correspondence between the superconducting and biological systems is remarkable <cit.>. In computational neuroscience, the full Hodgkin-Huxley equations are not commonly used to represent large circuits and networks due to the numerical overhead. Instead, simplified models that capture the relevant functionality with more efficient numerical implementation are used in practice with a great deal of success in explaining measured data. This type of phenomenological model was developed for SOENs in Ref. shainline2023phenomenological to strike a compromise between accuracy and numerical speed. That model reduced the problem to a single, first-order, ordinary differential equation representing each synapse, dendrite, or soma.
In addition to numerical speed, phenomenological models can also aid in the interpretability of behavior. The complexity of four interacting, voltage-dependent ion channels can be simplified to a single leaky integrator equation. In neuroscience, the communication of action potentials between neurons is another example of complicated behavior that is often not explicitly treated in models, particularly models that are intended to scale to very large numbers of interacting elements. In many cases, the discrete and discontinuous behavior of spikes is important, such as for detecting sharp transitions or coincidences, but in other cases the encumbrance of calculating and recording spike times is not worth the added insight, and the extra detail may even be counterproductive, like tracking the positions and momenta of all atoms in a gas when thermodynamic quantities will suffice. The phenomenological model of Ref. shainline2023phenomenological greatly simplified the analysis of the computational aspects of synapses, dendrites, and somas, but the model explicitly included spikes as an essential mechanism of interneuronal communication. In that sense, the model made it only halfway to the goal of simplifying the formalism.
Here we extend that framework by eliminating the need to model spikes. We do not claim that spikes are irrelevant to the computations or communication protocols implemented by neurons, but rather that a model without spikes can provide a helpful treatment of a system that can serve as a foundation for development of algorithms and elucidation of connections to neuroscience and other complex systems. In the simplest theoretical treatments of neural systems, a neuron's activity is reduced to a rate. In one treatment of such a model <cit.>, Hopfield and Tank made the analogy between a picture that includes all discrete spikes and their arrival times as the “quantum” picture, with the rate-based treatment as the continuous, “classical” counterpart. While classical, rate-coding models can explain much of observed neural activity in behaving animals, significant information is omitted, and such models fail to capture many of the benefits of the manner in which neurons leverage the time domain for efficient computation and communication. Here we provide a model that is intended to play this role of a “classical” model of SOENs in the sense that discrete spikes are not explicitly treated. We find that while the model derived is capable of rate coding, it is not limited to rate coding. Numerical experiments indicate that it is possible to treat the entire system without reference to spikes in a manner that remains capable of representing rapid (although still continuous), pulsatile activity. The model can accurately capture a dendrite or neuron's response to a single synapse event as well as to coincidences and sequences between two events. The simple governing equations facilitate formal analysis and provide opportunities for algorithm development for artificial intelligence, while the ability to represent abrupt activity allows numerical simulations to remain closely connected to actual hardware performance. We do not take a novel position on the rate-versus-spike debate <cit.>; our bias toward the value of spike timing is implicit in the numerical examples chosen, and our appreciation of the value of a rate-based formalism is clear from our choice to seek such a model in the first place.
In Sec. <ref> of this work, we derive the phenomenological model of dendrites from the circuit Lagrangian. Through inspection of the properties of the dendritic model, we motivate and perform the extension to the full phenomenological model of neurons communicating to dendrites via synapses and show that the form of the equations modeling neurons in this way is identical to that of the dendrite, thereby unifying the treatment of all components of the system in a common framework. We assess the accuracy relative to the model that includes spikes in Sec. <ref>. We compare the formalism to that of classical neurodynamics in Sec. <ref>, and we discuss several implications in Sec. <ref>.
§ THE PHENOMENOLOGICAL MODEL
We begin by summarizing the model of Ref. shainline2023phenomenological to provide a framework for its extension to spiking neurons. Reference shainline2023phenomenological motivated the model with a rate-equation picture, while here we begin from a Lagrangian formalism, as this provides a more rigorous foundation as well as a pathway to extension to energy models and treatment of large systems with the tools of statistical mechanics.
§.§ Derivation of the Phenomenological Dendrite Model
The circuit instantiation of a dendrite in the SOENs platform is shown in Fig. <ref>. In part (a) of the figure, the integration loops (labeled I) from two input dendrites are shown feeding into the active dendrite under consideration through a collection coil (labeled C). The signals in the loops (s) and the coupled flux between dendrites (ϕ) are labeled in terms of dimensionless quantities that will be derived shortly. Part (b) of Fig. <ref> shows the active core of the dendrite with circuit parameters labeled using conventional circuit quantities, which will be used in the derivation. The objective of the phenomenological model is to capture the essential behaviors of this circuit without resorting to the full solution of the Josephson equations that describe the superconducting wave function on the picosecond timescale. The full circuit equations are numerically intensive and provide little insight into the computations of the dendrite. We would like to abstract the circuit to an input-output device where the output, I, can be efficiently computed in terms of the inputs, Φ and I_b. To this end, we consider just the right half of the circuit, the LRC branch, which we refer to as the integration loop. The equivalent circuit is shown in Fig. <ref>(c).
We can write down the Lagrangian of the LRC branch as <cit.> ℒ = 𝒯-𝒱 with 𝒯 = Lq̇^2/2 and 𝒱 = q^2/2C, where q is the charge on the capacitor, and q̇ is its time derivative (I=q̇ is the current). The resistor introduces a damping term, ℱ = rq̇^2/2 <cit.>. The inductance, L, capacitance, C, and resistance, r, are assumed constant. The central concept of the phenomenological model is to treat the left half of the circuit, the SQUID, as an external electromotive force, ℰ. Once the proper form of this electromotive force is found, the equation of motion can be obtained from the Euler-Lagrange equation <cit.>:
d/dt( ∂ℒ/∂q̇) - ∂ℒ/∂ q = ℰ - ∂ℱ/∂q̇.
To motivate the form for ℰ, we begin with the Josephson equation relating the voltage across a JJ to the rate of change of the phase of the superconducting wavefunction:
V(t) = Φ_0/2π dφ/dt,
where Φ_0 = h/2e is the magnetic flux quantum. Integrating this equation gives
∫_0^t_fq V(t) dt = Φ_0/2π ∫_0^2πdφ = Φ_0.
The limits of the integrals indicate that the phase of the superconducting wavefunction advances by 2π in a time that we define as t_fq, which is the time required to produce a single flux quantum for a given temporal form of V(t). In general, the voltage across each JJ in the SQUID will be a complicated function of the applied flux, the bias current, and the phase of the wavefunction across the other JJ. For the cases of neuromorphic interest considered here, the output of the SQUID will be coupled to an LRC branch that will perform a low-pass filter on these high-speed dynamics. We thus take the crucial step that defines the phenomenological model, which is to replace V(t) in Eq. <ref> with V̅(t̅), which we take to be constant over the short duration t_fq. The quantity V̅ does vary in time, but over timescales much longer than t_fq. We denote the timescale over which V̅ varies as t̅. With this simplification, Eq. <ref> reduces to
V̅(t̅) = Φ_0 G_fq(t̅),
where G_fq(t̅) = 1/t_fq(t̅) is the rate of flux-quantum production defined on timescales long relative to t_fq. The t̅ notation serves to remind us that the model can only be expected to provide an accurate representation of the circuit over timescales long compared to flux-quantum production. It cannot be relied upon for picosecond dynamics, but it may be accurate on timescales of hundreds of picoseconds or longer, provided the LRC branch performs as a low-pass filter, which is exactly the type of integrating behavior we desire from a dendrite. The important function is then G_fq(t̅), which we will discuss shortly.
To complete the phenomenological model, we make the identification ℰ = V̅ = Φ_0 G_fq, and from Eq. <ref> we obtain
d^2q/dt^2 + r/L dq/dt + 1/LC q = Φ_0/L G_fq.
We define the characteristic frequency of the Josephson junctions as ω_c = 2π r_jj I_c/Φ_0, where r_jj is the shunt resistance of a JJ in the resistively and capacitively shunted junction model <cit.>, and I_c is the critical current of a single JJ. In terms of these quantities we can define a dimensionless charge, ξ = (ω_c/I_c) q, and a dimensionless time, t' = ω_c t. Moving to t' gives d^nξ/dt^n = ω_c^n d^nξ/dt'^n, so that Eq. <ref> becomes
β d^2ξ/dt'^2 + α dξ/dt' + β(ω_LC/ω_c)^2 ξ = g_fq,
where we have defined the dimensionless quantities β = 2π L I_c/Φ_0, α = r/r_jj, the angular frequency of the LC oscillator is ω_LC = (L C)^-1/2, and g_fq = G_fq/(ω_c/2π). Equation <ref> is the governing equation for a general SOEN dendrite, and it describes a damped, driven harmonic oscillator. The oscillatory component is useful for achieving resonances, subthreshold oscillations, and population synchronization. However, many dendrites will not include a capacitor and will be described instead in terms of a first order equation. In this case, C→∞ and ω_LC→ 0. Introducing the dimensionless current s = dξ/dt' = I/I_c and the dimensionless flux ϕ = Φ/Φ_0, we can write the phenomenological model for a first-order dendrite in the preferred form:
ds/dt = γ g_d(ϕ,s; i_b) - s/τ.
Here we dropped the prime on the dimensionless time variable as it is the only temporal variable in the remainder of the study. Equation <ref> introduced alternative dimensionless quantities γ = 1/β, and τ = β/α for convenience. The replacement g_fq→ g_d was made to indicate that this source function applies to dendrites (as opposed to neurons). The reason for this replacement will be evident in Sec. <ref>. The source function depends on the dimensionless applied flux, ϕ, and the dimensionless current, s, because these two quantities both affect the instantaneous electromotive force experienced by the circuit. g_d also depends on the dimensionless bias current, i_b, as a parameter, as discussed in Ref. shainline2023phenomenological.
To fully specify the model, we must identify the form of g_d(ϕ,s). This can be accomplished with a series of numerical calculations, as described in Ref. shainline2023phenomenological. In brief, the full JJ circuit equations are solved for many values of ϕ and i_b, and the rate of flux-quantum production is monitored as signal s accumulates in the integration loop. A table of g_d is assembled as a function of ϕ and s for multiple values of i_b, which were referred to as rate arrays in Ref. shainline2023phenomenological. Here we change the terminology to refer to the same objects as source functions to avoid confusing the model with rate coding. In numerical implementation, the source function can be used as a look-up table or its form can be fit. Examples of the dendrite source functions are shown in Fig. <ref>(a) and (b).
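To make the numerical use of Eq. <ref> concrete, the sketch below advances a single dendrite with forward Euler. The source function used here is only a hypothetical placeholder with a flux threshold and a saturating dependence on s, standing in for the tabulated source functions just described; actual simulations would interpolate those tables (or a fit to them) instead.

```python
import numpy as np

def g_d_toy(phi, s, i_b=1.7):
    """Hypothetical stand-in for the dendritic source function g_d(phi, s; i_b).
    Real simulations interpolate the tabulated source functions instead."""
    phi_th = 0.5 - i_b / 4.0               # toy flux threshold that decreases with bias
    drive = np.maximum(phi - phi_th, 0.0)  # no signal generation below threshold
    return drive * (1.0 - s)               # generation slows as the loop fills

def simulate_dendrite(phi_of_t, dt=1.0, gamma=1e-3, tau=500.0):
    """Forward-Euler integration of ds/dt = gamma*g_d(phi, s) - s/tau (dimensionless units)."""
    s, trace = 0.0, []
    for phi in phi_of_t:
        s = s + dt * (gamma * g_d_toy(phi, s) - s / tau)
        trace.append(s)
    return np.array(trace)

# Step-function flux drive, loosely analogous to the step-drive example discussed below
t = np.arange(5000)
phi_in = np.where((t > 500) & (t < 3000), 0.3, 0.0)
signal = simulate_dendrite(phi_in)
print(f"peak integrated signal: {signal.max():.4f}")
```

With the toy parameters chosen here (γ = 10^-3, τ = 500), the signal rises toward a steady value while the drive is on and decays exponentially once it is removed, mirroring the leaky-integrator behavior described above.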
The final aspect of the model relates to interactions. Dendrites are coupled to each other through flux. The coupling flux from dendrites indexed by j to dendrite i is given by
ϕ_i = ∑_j=1^n J_ij s_j,
where J_ij is a static coupling term determined by the mutual inductance that includes contributions from all the transformers present on the collection coil in Fig. <ref>(a). The explicit form of J_ij is given in Ref. shainline2023phenomenological. Equation <ref> shows that coupling between dendrites is due to the signal in the integration loop of one dendrite being communicated as flux into the receiving loop of a subsequent dendrite. The signal in the subsequent dendrite is then obtained through the evolution of Eq. <ref> with the signal from the first dendrite providing the flux ϕ and the signal from the second dendrite providing the s term in the function g_d(ϕ,s).
Given the approximation made in this model (replacing the true JJ voltages with V̅ represented by the source function), it is not immediately clear under which circumstances the model will be useful or how accurate the result will be. Reference shainline2023phenomenological analyzed numerous cases and found an accuracy of 10^-4 as calculated with a distance metric (χ^2) relative to the full circuit equations while achieving a speedup in simulation time of 10^4. An illustrative example is shown in Fig. <ref>. A step-function input is delivered to a dendrite, as seen in Fig. <ref>(a). The accumulated signal in the dendritic integration loop is shown in Fig. <ref>(b), with the black, solid trace showing the result of the full JJ circuit equations, and the green, dotted line showing the phenomenological model of Eq. <ref>. The zoom in Fig. <ref>(c) reveals the difference. The circuit equations give minute ripples, owing to the dynamics of the phase of the superconducting wave function across the two JJs, while the phenomenological model approximates the result with a smooth response. The discrepancy is tolerable in the present context because we care about the low-pass-filtered, integrated signal, which is accurate on the time scales of interest.
§.§ Extension of the Phenomenological Model to Neurons
The phenomenological model described to this point has proven useful for simulating systems of superconducting optoelectronic neurons. However, each neuron's soma behaves in a manner that is not captured in this model. A soma is essentially a dendrite, with two important differences. First, the integration loop of a soma has a threshold, and when this threshold is reached, the integrated current is evacuated, and a refractory dendrite temporarily provides inhibitory flux to the receiving loop. Second, upon reaching threshold, an amplifier chain is activated that produces a pulse of light from a semiconductor transmitter circuit. The photons in this pulse then drive synaptic single-photon detectors to generate flux as input to downstream dendrites. These extensions of the dendrite circuit result in pulsatile, spiking behavior from the neuron. In the original model, all of these behaviors were modeled with circuit equations treating the transmitter circuit transistors and light source that provided faithful representation of the full spike response of the soma and the receiving synapses.
Here we develop an alternative approach to modeling neurons that avoids treating spikes in much the same way we avoided treating JJ dynamics in the first phenomenological model. The concept is depicted in Fig. <ref>. Figure <ref>(a) represents a neuron with two dendrites feeding flux into a passive, linear collection coil, which couples into the soma. The schematic illustrates that the transmitter produces spike events that deliver photons to downstream synapses coupled to dendrites. By contrast, in Fig. <ref>(b) the circuit is the same up to the collection coil, but the entire soma, transmitter, and synapses are omitted. We seek a model in which the signal generated in downstream dendrites can be obtained exclusively through consideration of the applied flux to the neuronal receiving loop, ϕ_n. Importantly, this flux is an analog, continuous function of time that is not erased upon neuronal firing, just as the flux applied to any dendrite provides a persistent input drive, independent of the output signal, s. This analog signal is indicated in Fig. <ref>(b) where the flux input to the neuronal receiving loop, ϕ_n becomes also the flux to each of the downstream dendritic receiving loops, ϕ_d. This signal is shown schematically as a continuously varying signal in place of a train of discrete spikes. This concept, coupled with the image of the underlying approximation captured in Fig. <ref>(c) motivates us to seek a phenomenological model for the soma, transmitter, and synapses in a nearly identical fashion to the model constructed for the dendrites, except targeting yet a slower time scale. While the first phenomenological model accepts ignorance of dynamics faster than 100 ps by ignoring Josephson behavior on the 1 ps to 10 ps timescale, here we accept ignorance of dynamics faster than 50 ns by ignoring the precise synaptic response on the 1 ns to 10 ns timescale.
To construct such a model, we repeat the steps described in Sec. <ref>. We again seek a differential equation describing the rate of change of the signal in a dendrite, but instead of taking the driving term to be a function of the flux applied to that dendrite, we let the flux to the neuronal receiving loop of the upstream soma be the input driving argument. We conjecture that this will be possible with an equation of the form
ds/dt = γ g_n(ϕ,s) - s/τ.
We now must find the new g_n, which is related to but distinct from g_d. We determine g_n in exactly the same way as we did for the dendrite: we apply a constant flux to the neuronal receiving loop of a soma, and we observe the rate of flux-quantum production in a downstream dendrite that is coupled to the neuron via a transmitter and synaptic receiver. We use the first phenomenological model of Ref. shainline2023phenomenological, complete with transmitter and synaptic dynamics, to construct this source function.
Due to the added complexity of the full neuronal circuit resulting from the circuit values describing the soma, the resulting source function is specified by additional parameters, including the value of the somatic threshold, s_th, the bias current to the soma, i_n, properties of the refractory dendrite, and properties of the synaptic receiver. Most of these parameters can be fixed at reasonable values. For the present study, we consider only s_th = 0.2, one fixed refractory dendrite design, and a default synapse configuration. The source function takes as arguments ϕ_n, s, and the slowly varying or constant parameters i_n and i_d. Examples of these neuronal source functions are shown in Fig. <ref>(c)-(f) for two values of i_n and two values of i_d. Increasing the bias to the neuron shifts the flux threshold to lower values, while increasing the bias to the receiving dendrite increases the signal saturation value. These behaviors are slightly different from the source function for dendrites, where increasing the bias in that case reduces the flux threshold and also increases the signal saturation value. While the specific form of the source function g depends on whether the dendrite in question is receiving flux from other dendrites (g_d) or from a neuron (g_n), the form of the equation is identical, which we write in general as
ds/dt = γ g(ϕ,s) - s/τ.
The specific values of γ and τ can be chosen independently for each dendrite across a wide range of values <cit.>.
The result is exemplified in Fig. <ref>, which shows a soma being driven by a step function of input flux. The partial phenomenological model of Ref. shainline2023phenomenological is used to calculate the complete soma dynamics, shown in Fig. <ref>(a) and (b). The output signal in a downstream dendrite is shown in Fig. <ref>(c) for both the partial and full phenomenological models. Schematics for the spiking and phenomenological neural circuits used to make this figure are shown in the insets. In the spiking case, the step-function applied flux to the neuronal receiving loop, ϕ_n, is input to the soma, which produces a train of spikes at a well-defined rate based on the dynamics of the soma itself, the refractory dendrite, and the transmitter circuit. In the full phenomenological case, this step-function input is applied directly to the downstream dendrite so that ϕ_n = ϕ_d, and the only difference between a dendrite receiving spikes from a neuron and a dendrite receiving flux from another dendrite is the specific form of the source term, g_d versus g_n.
For the spiking case, circuit operation can be understood as follows. If the applied flux exceeds the flux threshold of the somatic dendrite [Fig. <ref>(a)], signal will begin to accumulate in the neuronal integration loop [Fig. <ref>(b)], and if that signal reaches the current threshold of the integration loop, the neuron will produce spikes that deliver impulses to downstream synapses, and the neuron will also activate the refractory dendrite. The sum of the external input flux and the refractory inhibitory flux can be seen in Fig. <ref>(a), labeled “total flux”. The saw-tooth behavior of the neuronal integration loop seen in Figs. <ref>(a) and (b) results from the steady drive paired with reset dynamics upon reaching threshold. It is precisely these fast oscillations that we avoid modeling with the full phenomenological model, analogous to the treatment in which we glossed over the fast JJ dynamics in a dendrite. The results of the two approaches are compared in Fig. <ref>(c), which is strikingly similar to Fig. <ref>(c) except that the time scale is slower by nearly four orders of magnitude. We are applying the phenomenological concept, based on low-pass filtered outputs, hierarchically across scales of the network, first to individual dendrites, and then between neurons. It may eventually prove necessary to employ the same trick at the level of populations.
While this model does give up some accuracy in tracking the fast, saw-tooth response of dendrites, it provides a helpful unification across the system: only dendrites must be considered; neither synapses nor neurons are explicitly modeled; plasticity functions can be treated using dendrites that obey the exact same equation; and many benefits of spiking neurons are retained, while the complications of spike discontinuities are not present in the mathematics. Before exploring various cases to test the accuracy and limits of the model, let us further describe why this unification is conceptually and numerically helpful.
§.§ Modeling Systems with the Phenomenological Framework
Consider a neural system implemented in superconducting optoelectronic hardware. The network comprises a set of N dendrites, with the state of dendrite i given by s_i(t), which obeys Eq. <ref> with the appropriate choice of g_d or g_n depending on whether the dendrite is a common dendrite in a neuron's arbor or a synaptic dendrite that receives signal directly based on the flux applied to the upstream soma. The flux applied to dendrite i that generates signal is calculated from
ϕ_i = ∑_j J_ij s_j + ϕ_i^ext,
where J_ij is an element of the static coupling matrix established by a transformer that can be learned in design but is fixed upon fabrication, and ϕ_i^ext is the external drive that excites dendrite i. Dendrites for which ϕ_i^ext 0 are referred to as input dendrites. We can place all s_i in a column vector, 𝐬(t) ≡( s_1(t) ⋯ s_N(t) )^T, which is the state vector of the system. 𝐬(t) describes the system as a point in configuration space, and its time evolution traces a trajectory. We place all J_ij in a square coupling matrix, 𝐉. The flux vector is given by
ϕ(t) = 𝐉 𝐬(t) + ϕ_ext(t).
We can similarly organize the elements γ_i and τ_i in column vectors γ and τ, and write the equation of flow through configuration space as
d𝐬/dt = γ⊙𝐠(ϕ,𝐬) - 𝐬/τ,
where 𝐠(ϕ,𝐬) is a vector function that takes two vector inputs and returns a vector output computed component-wise. The “⊙” notation refers to component-wise multiplication (Hadamard product), and 𝐬/τ is component-wise division.
With this unified model and notation, the numerical algorithm becomes simple, transparent, and highly parallelizable. The initial condition, 𝐬(t_0), must be provided, and 𝐬(t_0) = 0 is common. The network is inactive prior to the introduction of driving flux. The state of the network at subsequent time steps indexed by t_p is obtained as follows:
* With known ϕ_ext and 𝐉, calculate ϕ(t_p+1) = 𝐉 𝐬(t_p) + ϕ_ext(t_p+1). This operation is just the matrix-vector multiplication of 𝐉 𝐬, where 𝐉 is an extremely sparse matrix.
* Call a function 𝐠(t_p+1) = 𝐠[ϕ(t_p+1),𝐬(t_p)] that computes the signal to be added to all dendrites at the next time step. Each of these is independent, so the operation is highly parallelizable.
* Obtain 𝐬(t_p+1) using a discrete form of Eq. <ref> and the values of 𝐠(t_p+1), 𝐬(t_p), γ, and τ.
* Return to first step and repeat forward Euler through the time grid.
This algorithm is efficient for several reasons. Step 1 involves matrix-vector multiplication, 𝐉 𝐬, but 𝐉 is extremely sparse, so fast algorithms that leverage graphics or tensor processing units can be used <cit.>. Step 2 requires calculation of 𝐠[ϕ(t_p+1),𝐬(t_p)], and this can be distributed over as many cores as one can access, because the computation for each element is independent from the other elements at each time. Step 3 is simply arithmetic—element-wise multiplication and subtraction—which again can be easily parallelized. In this form, the algorithm reduces to one sparse matrix-vector multiplication and a few parallelizable operations. The entire algorithm is well-matched to graphics or tensor processing units. In addition to being faster than the spiking algorithm, which requires special treatment on steps in which an action potential is produced, it is also significantly more transparent, with no thresholds, if statements, synaptic response functions, or transmitter models. The model only references dendrites that all evolve with the same differential equation. The entire state of the network at time t is specified by 𝐬(t), which is a list of real-valued numbers between zero and one. There is no need to track discontinuous quantities or tabulate spike times. We next assess the accuracy of the model in several cases of relevance to neural systems.
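A minimal vectorized sketch of this loop is given below. It uses a toy source function in place of the tabulated g_d/g_n arrays and an arbitrary sparse random coupling matrix; in an actual simulation, 𝐉 would come from the designed network and g from interpolation of the source functions (and the dense matrix product would be replaced by a sparse or GPU-backed one).

```python
import numpy as np

def g_toy(phi, s):
    """Placeholder for the source functions g_d / g_n (normally table lookups)."""
    return np.maximum(phi - 0.1, 0.0) * (1.0 - s)

def run_network(J, phi_ext, gamma, tau, dt=1.0):
    """Forward-Euler flow of ds/dt = gamma * g(J s + phi_ext, s) - s / tau.

    J          : (N, N) coupling matrix (sparse in practice)
    phi_ext    : (n_t, N) external flux drive on the time grid
    gamma, tau : (N,) per-dendrite constants
    """
    n_t, N = phi_ext.shape
    s = np.zeros(N)
    history = np.zeros((n_t, N))
    for p in range(n_t):
        phi = J @ s + phi_ext[p]              # step 1: coupling flux
        g = g_toy(phi, s)                     # step 2: source term, element-wise
        s = s + dt * (gamma * g - s / tau)    # step 3: Euler update
        history[p] = s
    return history

rng = np.random.default_rng(0)
N, n_t = 100, 2000
J = (rng.random((N, N)) < 0.02) * rng.normal(0.0, 0.2, (N, N))  # sparse random coupling
phi_ext = np.zeros((n_t, N))
phi_ext[100:1500, :5] = 0.3                   # drive five input dendrites
traces = run_network(J, phi_ext, gamma=np.full(N, 1e-3), tau=np.full(N, 500.0))
print("final mean signal:", traces[-1].mean())
```

Because each dendrite's update is independent once ϕ is known, the loop body maps directly onto batched array operations, which is the property that makes the formalism attractive for graphics or tensor processing units.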
§ QUANTIFYING ACCURACY AND DOMAIN OF APPLICABILITY
Based on the premise of the model, we expect it to be accurate only for dendrites that integrate over time scales relatively long compared to the inter-spike interval. Nevertheless, it is worth assessing the accuracy of the model when attempting to capture activity ranging from a single synapse event up to long trains of pulses. We begin by considering a monosynaptic neuron under various drive conditions and then consider a disynaptic neuron to assess the ability of the model to capture coincidences and sequences. To assess the accuracy, we compare the full phenomenological model to the partial phenomenological model that includes spiking, which was compared to circuit equations in Ref. shainline2023phenomenological. We quantify accuracy with a χ^2 of the form
χ^2 = ∑_p=1^n_t| s_spike(t_p) - s_phenom(t_p) |^2 /∑_p=1^n_t| s_spike(t_p) |^2 ,
where p indexes the discrete numerical time steps, n_t is the number of time steps, s_spike is the calculated value of the signal in the output dendrite using the first phenomenological model that faithfully represents the spiking activity of the soma, transmitter, and receiving synapse, while s_phenom is the calculated signal in the output dendrite using the full phenomenological model that omits spiking and instead generates signal using a source function driven by ϕ_n. This χ^2 is not defined when s_spike = 0.
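For reference, this metric is straightforward to evaluate on discretized traces; the snippet below is a direct transcription of Eq. <ref>, assuming the two signals are sampled on the same time grid.

```python
import numpy as np

def chi_squared(s_spike, s_phenom):
    """Normalized squared error between the spiking and phenomenological traces."""
    s_spike = np.asarray(s_spike, dtype=float)
    s_phenom = np.asarray(s_phenom, dtype=float)
    norm = np.sum(np.abs(s_spike) ** 2)
    if norm == 0.0:
        raise ValueError("chi^2 is not defined when s_spike is identically zero")
    return np.sum(np.abs(s_spike - s_phenom) ** 2) / norm
```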
For all simulations investigated with the full phenomenological model in this work, the soma was specified by i_b = 1.7, β_ni/2π = 10^3, τ_ni = 50 ns, and s_th = 0.2. The refractory dendrite had the same values of i_b, β, and τ.
§.§ Monosynaptic Neuron
The monosynaptic neuron under consideration is shown schematically in Fig. <ref>. The full spiking version is shown in Fig. <ref>(a), and the phenomenological version is shown in Fig. <ref>(b).
§.§.§ Single Synapse Event
To begin assessing the utility of the model, we compare the spiking version to the full phenomenological version when a single synapse event is input to a receiving dendrite on the neuron. For all spiking simulations in this section, the neuron circuit parameters are chosen such that the single input synapse event pushes the neuron above threshold, and the neuron produces one output spike, which is delivered to a downstream synapse coupled to a dendrite. The goal of the phenomenological model in this case is to accurately represent the output dendritic signal when all the model has access to is the time trace of input flux to the soma due to the input synapse event.
In biology, the temporal envelope of the post-synaptic signal is determined by the properties of neurotransmitter receptors (AMPA and NMDA for excitatory glutamate and GABA_A and GABA_B for inhibitory GABA), which have time constants in the range of 5 ms to 100 ms (shorter for AMPA and GABA_A, longer for NMDA and GABA_B). In SOENs, the analogous time constants are determined by the decay of the dendritic integration loop, which can be very fast (less than 1 ns), or extended to biological time scales, with 5.8 ms demonstrated in Ref. khan2022superconducting. The simulations in this section consider dendrites with a time constant in the range of 50 ns to 6.25 µs responding to the arrival of a single action potential, intended to cover the primary range of circuit parameters used in common SOENs applications.
Figure <ref>(a) shows χ^2 calculated as a function of the time constant of the output dendrite, τ_out, for several values of τ_in, β_out, and i_d. The common trend is that the representation becomes more accurate as τ_out increases, which is to be expected, as this model is intended to capture low-pass dynamics. Example time traces are shown corresponding to the data points labeled b-g in Fig. <ref>(a). In the first time trace, Fig. <ref>(b), both the input dendrite time constant and the output dendrite time constant are 50 ns, which is close to the minimum interspike interval of SOENs. In this case we would expect the accuracy to be poor, as almost no low-pass filtering is occurring in the circuit, and indeed the agreement is relatively poor, with χ^2 = 0.07. However, inspection of the temporal response reveals that while the detailed structure of the signal in the output dendrite does not accurately match that of the spiking model, the qualitative behavior is similar: an input synapse event to the neuron evokes a pulse of activity in the downstream dendrite. For some modeling purposes, this numerical error may be intolerable, and the full spiking model must be used. But for many modeling applications and algorithm development, such accuracy may be sufficient. Moreover, this limit of rapidly decaying signals on the downstream dendrite is not likely to be common in practice, as the primary function of most dendrites is to integrate signals over time, for which purpose time constants longer than the most rapid interspike interval are required. For example, Fig. <ref>(c) shows the response when the decay time constant of the downstream dendrite is 427 ns, which is about 10 times the minimum interspike interval. In this case the error is 5.7× 10^-3, and the qualitative representation of the resulting dendritic response is excellent. The representation further improves with τ_di^out = 6.25 µs, as shown in Fig. <ref>(d), with χ^2 = 3.9× 10^-4.
The data in Fig. <ref>(b)-(c) are from a circuit in which the input dendrite had a low-capacity integration loop, with β_in/2π = 10^2. The data in Fig. <ref>(e)-(g) are from a circuit in which the input dendrite had a higher-capacity integration loop, with β_in/2π = 10^3, so the loop can integrate the inputs from several synapse events before saturating. The time traces in Fig. <ref>(e)-(g) are from scenarios in which the time constant of the dendrite providing input to the soma varies from 50 ns to 200 ns. The quantitative agreement for the 100 ns case is much better than the other two, with the phenomenological model underestimating the response for the 50 ns drive and overestimating the response for the 200 ns drive. Nevertheless, the qualitative response is in accord in all three cases.
It is useful to understand how accuracy may depend on bias current, as this circuit parameter may be used to realize different dendrite or neuron classes or as a plasticity mechanism for training and learning. Figure <ref> compares the phenomenological and spiking models for various neuron and dendrite bias currents. Here we fix the input time constant to 50 ns and the output time constant to 250 ns. The output loop had β_out/2π = 10^2, so it saturates quickly. Similarly to Fig. <ref>(e)-(g), the phenomenological model may overshoot or undershoot the spiking model. Panels (a)-(c) of Fig. <ref> each show four values of the bias current to the soma. For the spiking model, this has very little effect on the output in this specific instance of a single input synapse event; the latency of spike production is slightly impacted. However, this bias current does affect the phenomenological model, as it determines the subtleties of the source function [Fig. <ref>(c)-(f)]. It is not clear why the model is more accurate for some bias currents than others.
Because this model abstracts away spike activity, it may be tempting to consider it a rate-coded model. In a rate-coded model, the rate is typically defined either as the reciprocal of the inter-spike interval (which requires two or more spikes to be defined) or as the number of spikes in a moving-window average (which can be defined for a single spike). In the present case of Figs. <ref> and <ref> generated with Eq. <ref>, neither of these definitions of rate coding are applicable. Only one spike is present, so the first definition of a rate cannot apply, and no information about an average number of spikes in a moving time window is provided to the model. However, Eqs. <ref> and <ref> are unambiguously rate equations, so we must understand their proper interpretation. In the case of a dendrite, the phenomenological model given by Eq. <ref> represents the rate of change of signal in the integration loop of that dendrite due to flux applied to the receiving loop of that dendrite as well as passive, exponential signal decay in the integration loop. In the case of a neuron, the phenomenological model given by Eq. <ref> represents the rate of change of signal in the integration loop of a downstream dendrite due to flux applied to the receiving loop of the neuron's soma as well as passive, exponential signal decay in the downstream dendrite's integration loop. Information regarding spike rates is not provided to or generated by the model. The phenomenological model is not a model of neuronal rate coding, but rather a model of dendritic signal rates of change due to driving fluxes and passive leaks.
§.§.§ Pulse Trains
Next consider a train of synapse events input to the monosynaptic neuron of Fig. <ref>. The response to a neuronal pulse train evoked by a constant flux input to the soma was shown in Fig. <ref>. Here we consider a temporally extended input drive due to a train of input pulses delivered to a synapse coupled to a dendrite with output fed into the soma. In this scenario, the neuron is acting as a repeater, with each input synapse event evoking a spike, and we monitor the response of the output dendrite to those spikes induced in the neuron.
First consider the response to an input train of seven synapse events with uniform interspike interval, as shown in Fig. <ref>. Here the responses to multiple input trains of different numbers of pulses are shown overlaid so one can see how the accuracy changes as the pulse train proceeds. In Fig. <ref>(a)-(c) the input dendrite has a small loop capacity, with β_in/2π = 10^2 and relatively short input time constant of τ_in = 150 ns. Responses are shown for output loops of varying inductance parameter from β_di^out/2π = 10^2 to β_di^out/2π = 10^4. For the smaller output loops, the strength of the initial synapse events is slightly overestimated by the phenomenological model, while for the largest loop the initial effect is slightly underestimated, but in all cases the overall accuracy in response to the train becomes reasonably high as the train proceeds.
Figure <ref>(d)-(f) shows responses in the case of a larger input integration loop, β_in/2π = 10^3, capable of integrating more synapse events, with τ_in = 250 ns. Here a train of 20 input synapse events is considered. The responses are shown for output loops of varying inductance parameter from β_out/2π = 10^3 to β_out/2π = 10^5. While the response of the initial synapse events is undervalued in all cases, the signal catches up through the pulse train for the case of β_out/2π = 10^3, while it remains slightly low for β_out/2π = 10^4, and the error is even more pronounced for β_di^out/2π = 10^5. While a loop of β_out/2π = 10^5 is so (spatially) large as to rarely be used in practice, it is important to be aware that in this region of circuit parameter space the phenomenological model does underestimate the response to the entire pulse train. Still, the qualitative agreement is likely sufficient for much theoretical analysis and algorithm development.
Next consider a train of 20 aperiodic input synapse events with interspike intervals drawn randomly with uniform probability between 50 ns and 1 µs. Figure <ref> shows similar data to Fig. <ref> with χ^2 plots accompanied by illustrative time traces. As in Fig. <ref>, χ^2 drops markedly for longer output time constant, consistent with the principle of the model as a low-pass filter. Figure <ref>(b) is the case with the worst accuracy, wherein the input dendrite had β_in/2π = β_out/2π = 10^3, and like Fig. <ref>(b), the agreement is qualitatively reasonable but quantitatively poor. The phenomenological model tracks the true spike model as individual synapses perturb the state of the neuron, but the output signal is much more rounded than the sharp peaks of the spiking model. Figures <ref>(c) and (d) also show the case of β_in/2π = β_out/2π = 10^3. In (c) the input time constant was 250 ns, while for (d) it was 1.25 µs, and for both cases the output time constant was 1.25 µs. Here the χ^2 values are much better, well under 1 %. For the faster input circuit of (c), the phenomenological model accurately tracks the rising and falling response of the spiking model, with low-pass filtering evident in the temporal zoom. For the slower input circuit of (d), the phenomenological model closely matches the time averaged behavior, and in this case the model closely matches what would be expected from rate coding. The final specific instance shown in (e) is the case of long integration times (τ_in = τ_out = 6.25 µs) and high-capacity loops (β_in/2π = β_out/2π = 10^4). The accuracy is high at χ^2 = 4.4× 10^-4. Significant low-pass filtering occurs, and again we are in the rate-coding domain. Due to the large input loop capacity, the signal from multiple synapse events must be integrated before the input dendrite drives the soma above threshold and signal begins to accumulate in the output dendrite, just after 2 µs. Upon reaching threshold, both the spiking and phenomenological models begin accumulating signal at nearly identical times and comparable rates. The inset to Fig. <ref>(e) shows the rising signal in the initial integration region. After roughly 2 µs of integration, the output dendrite is saturated, and signal changes are minute. The average spiking activity is sufficient to keep the dendrite saturated despite the variability in interspike interval. The temporal zoom shows that the phenomenological model traces the average signal level of the spiking model, as we would expect from a rate-coding picture with instantaneously updated rate term, but the phenomenological model does not track the full saw-tooth response of the spiking model. From another perspective, one may argue the phenomenological model does an excellent job of representing the smoothly varying input to the soma, while the spiking model does a reasonable job approximating the smooth function, given the limitation that it can only send discrete events from the soma to downstream synapses.
Considering all examples presented here, we conclude that the phenomenological model is capable of but not limited to rate coding. Rate coding arises in the specific cases where the drive signal is extended in time and the output dendrite has a long time constant relative to the interspike interval. However, the model is still qualitatively representative even on shorter time scales than an interspike interval, and it is sufficiently accurate for many purposes, even on shorter time scales than may be expected due to the low-pass picture that motivated the model in the first place. For example, as we see in Fig. <ref>(b), it is possible to design a receiver that accurately tracks the full temporal response to a single synapse event, indicating that this “classical” neurodynamic model is useful for representing discrete, spike-induced “quantum” effects.
In general, it appears possible to design dendrites that are accurately represented by the phenomenological model to serve a variety of roles. If one wishes to have a receiver that responds rapidly to a single synapse event, the loop inductance should be relatively small. For a dendrite to play the role of a long-term integrator, the loop capacity should be large, and the time constant should be long. The phenomenological model can be accurate in either case, but the same dendrites should not be expected to serve both purposes.
§.§ Disynaptic Neuron
The examples thus far have been of monosynaptic neurons connected to a single downstream dendrite. Much of the power of spiking neurons results from their ability to make use of information related to timing between input spikes on different synapses. To assess the ability of the phenomenological model to capture these effects, we now consider neurons with two input synapses, as shown schematically in Fig. <ref>(a), beginning with coincidence detection. We study a disynaptic neuron with synapses of the same time constant, τ_in = 100 ns. Results are shown in Fig. <ref>. Figure <ref>(b) summarizes the data from the coincidence detector, plotting the peak of the downstream dendritic signal versus the relative delay between a synapse event on the first synapse and one on the second for several values of i_d. The most striking feature of Fig. <ref>(b) is that the spiking model has a square response, while the phenomenological model is graded. These shapes occur because the spiking neuron communicates a binary response to the downstream dendrite: the synapse events are either close enough in time to drive the soma above threshold and produce a spike, or they are not. The same is not true for the phenomenological model, where analog information about the state of flux into the soma evokes a graded response in the downstream dendrite. Like the case of the low-pass-filtered spike train in Figs. <ref>(d) and (e), this is another instance where the downstream dendrite in the phenomenological model provides a better representation of the state of the soma because it has more information to work with, while the spiking neuron is seen to provide the best approximation possible to the full analog dendritic response given the restriction of only being able to communicate with binary spikes. While it may be appropriate to consider this to be a weakness or shortcoming of the phenomenological model as a tool to simulate spiking neurons, the time traces in Fig. <ref>(c) and (d) show that even when the phenomenological model departs from the spiking case, the agreement is still quantitatively decent and qualitatively excellent. As a tool for the study of network dynamics and algorithm development, the ability of the phenomenological model to capture coincidence detection supports the conjecture that a model without spikes has practical utility. Like the case of representing responses to single synapse events, this coincidence example confirms that the phenomenological model is not a reduction to rate coding, as times of individual spikes are not treated in rate codes.
The straightforward extension of the coincidence detector is to sequence detection using two input synapses on dendrites with different decay times, as shown in Fig. <ref>(e). Here we simulated τ_in 1 = 250 ns and τ_in 2 = 50 ns. The asymmetrical downstream dendrite response is shown in Fig. <ref>(e), with comparison time traces in Fig. <ref>(f) and (g). Just as in the case of coincidence detection, the spiking neuron has a binary response, while the phenomenological case produces a graded, analog response. Again, the time traces reveal reasonable qualitative accuracy, even at large time delays.
§ COMPARISON TO CLASSICAL NEURODYNAMICS
Reference hopfield1986computing introduced the terminology of “classical neurodynamics”, which the authors used to refer to a point-neuron model in which spikes are not considered and only average activity is represented. The term “classical” was used to mean non-discrete, in contrast to “quantum” phenomena characterized by discrete energy levels, particles, or events. It is important to disambiguate that the words “classical” and “quantum” are used in this context only for analogy, and we are not discussing quantum neural networks in which superposition and entanglement are leveraged for computation. The paper by Hopfield and Tank included a conventional model of leaky-integrator point neurons in the rate-coding regime to represent classical neurodynamics. To compare the SOENs phenomenological model with classical neurodynamics, we refer to that model, where the equation of motion for the membrane potential on neuron i, u_i, is
C_idu_i/dt = ∑_j=1^N T_ij f_j(u_j) - u_i/R_i + I_i.
The nonlinear source term, f_j(u_j), maps the membrane potential to an output firing rate. C_i and R_i are the membrane capacitance and resistance. T_ij represents coupling, and I_i is an external current drive. Equation <ref> is also the first equation in the Hodgkin-Huxley model, with the other three equations capturing the dynamics of the gating variables that enter the coupling term. The analogy with SOENs follows from making the associations
u_i →ϕ_i,
f_i(u_i) → g(ϕ_i,s_i).
To make the correspondence between the two models clear, we need a differential equation describing the time evolution of ϕ_i. Taking the time derivative of Eq. <ref>, using Eq. <ref>, and considering static inputs (dϕ_i^ext/dt = 0), we have
dϕ_i/dt = ∑_j J_ijγ_j g(ϕ_j,s_j) - ∑_j J_ijs_j/τ_j.
To move closer to the model of classical neurodynamics, we assume γ_j = γ, and τ_j = τ; we reduce the generality of the model by assuming these constants do not vary with j. This allows us to use Eq. <ref> to make the replacement ∑_j J_ij s_j = ϕ_i - ϕ_i^ext, which gives
βdϕ_i/dt = ∑_j J_ij g(ϕ_j,s_j) - αϕ_i + αϕ_i^ext.
Recall that α = β/τ. Comparing Eq. <ref> with Eq. <ref>, we see the forms are nearly identical. The dimensionless inductance, β, plays the role of the capacitance, and the dimensionless resistance, α, plays the role of the conductance. It is a relatively superficial difference that in SOENs these constants apply on the pre-synaptic side of the interaction, while in classical neurodynamics they apply on the post-synaptic side. They correspond when all such constants are taken to be equal. The more consequential difference is the fact that s_i enters the source term, g. s_i is a low-pass-filtered memory trace of the firing rate. The classical neurodynamic model has an output V_i = f_i(u_i) which is not an input to f_i. Therefore, f_i is a monotonically increasing function of only one input. In the case of SOENs with the simple dendrite comprising only a receiving and integrating loop <cit.>, g(ϕ_i,s_i) is a monotonically increasing function of the input ϕ_i (assuming total input flux is limited to Φ_0/2, <cit.>), but it is a monotonically decreasing function of s_i, which means for constant or increasing ϕ, the source term can have a positive or negative derivative. However, the main consequence of s_i entering the source term is to cause the total response to saturate, leading to transfer functions that are close to sigmoidal, as was measured in Ref. khan2022superconducting (see also Fig. <ref> in Sec. <ref> of the present work). The presence of s in g has the effect of shaping the response function into a nonlinear transfer function with a saturating nonlinearity. But the transfer function of classical neurodynamics, f, is typically taken to be exactly the same type of function, with a threshold and a saturating nonlinearity. Thus, the presence of s in g serves to render the analogy between the phenomenological SOENs model and classical neurodynamics quite close. In classical neurodynamics, f is asserted to be a sigmoid, whereas in SOENs g is made to have a similar shape through circuit engineering.
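The claim that the s-dependence of g effectively supplies the saturating nonlinearity can be illustrated with a toy steady-state calculation. The sketch below assumes a hypothetical g(ϕ, s) that increases with ϕ and decreases with s, and solves γ g(ϕ, s^*) = s^*/τ for the steady-state signal; the resulting s^*(ϕ) is thresholded and saturating even though the toy g itself is piecewise linear in ϕ, qualitatively mirroring the sigmoidal transfer function f asserted in classical neurodynamics.

```python
import numpy as np

def steady_state_signal(phi, gamma=1e-3, tau=500.0, phi_th=0.1):
    """Steady state of ds/dt = gamma*g(phi, s) - s/tau with the toy
    g(phi, s) = max(phi - phi_th, 0) * (1 - s):
        gamma*r*(1 - s) = s/tau  =>  s = gamma*tau*r / (1 + gamma*tau*r)."""
    r = np.maximum(phi - phi_th, 0.0)
    return gamma * tau * r / (1.0 + gamma * tau * r)

phi = np.linspace(0.0, 1.0, 11)
for p, s in zip(phi, steady_state_signal(phi)):
    print(f"phi = {p:.2f} -> steady-state s = {s:.3f}")
```

In this toy, the threshold comes from the flux dependence of g, while the saturation comes entirely from the (1 - s) factor, which is precisely the role attributed above to the presence of s in the source term.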
Considering how closely SOENs can map onto the classical neurodynamical model (Eq. <ref> compared to Eq. <ref>) and how well the SOENs phenomenological model can capture discrete events due to single spikes (Figs. <ref> and Fig. <ref>) and spike timing (Fig. <ref>), perhaps the assumption that the classical model forces us to neglect impulses of activity and the timing between these impulses gives up more than the mathematics requires. It is usually stated in words that f represents the average firing rate of a neuron, which implies the model only applies to time-averaged spiking activity and cannot be defined to represent discrete firing events. But another interpretation is possible. As in the case of the g term in SOENs, we can think of f as a source term that adds charge to the neuronal membrane in a continuous and sometimes rapid fashion. As we have seen, this interpretation allows the membrane potential to respond quickly and to capture, qualitatively and often quantitatively, the response to a single spike input. A generating function with a sharper turn-on than a sigmoid may be required of f to capture this behavior, but that does not appreciably change the formulation of classical neurodynamics.
Perhaps the major limitation of classical neurodynamics is not from the math itself, but rather the interpretation. Equation <ref> is assumed to model neurons, when important computations can be achieved by interpreting this equation as a model of both dendrites and neurons, with possibly different generating functions employed for each, as is necessary for SOENs. If only neurons are considered, dendritic substructures of the full 𝐉 weighted, directional, adjacency matrix are implicitly omitted, and we forfeit an important stage of processing complexity. To make the best use of temporal and spatial information across the network, it is not as important that we extend Eq. <ref> to explicitly track discrete spikes as it is that we structure the network adjacency matrix, 𝐉, and the dendritic time constants, τ, heterogeneously across scales, giving each neuron an input dendritic tree and an output axonal arbor, just as we find in pyramidal neurons. Computational neuroscience literature abounds with models of active dendrites <cit.>; with SOENs we build them in artificial hardware, and their behavior is quite straightforward to model. With the partial phenomenological model, soma dynamics were still treated by modeling spikes. It was evident that dendrites followed the equations of classical neurodynamics, but only with the full phenomenological model presented here is it apparent that full superconducting optoelectronic neurons also map directly onto classical neurodynamics.
§ DISCUSSION
The pragmatic motivation for this work was that the model, without reference to spikes, provides new avenues for numerical treatment. In this context, it is important that the phenomenological model successfully captures the qualitative features of the downstream dendritic response for a wide range of circuit parameters and neuronal stimuli, and in many cases the quantitative agreement is quite good. Appreciable additional work leveraging parallelized numerical implementation is required to determine exactly how much of a computational speedup results from this formalism.
The model has important limitations. For example, entorhinal grid cells are critical in the formation of cognitive maps. One model of their behavior involves a grid cell responding to two input spiking neurons with spike rates that precess with an animal's velocity in different directions <cit.>. The grid cell then responds to coincidences between spikes from these two input neurons. As shown in Fig. <ref>, coincidences can be handled by the phenomenological model, but only if the spikes generated by the two upstream neurons are represented as discrete, highly localized temporal events. In the phenomenological framework, if these input neurons to the grid cell are deep in a complex network, they will not propagate discrete, highly localized events unless all upstream neurons and dendrites have short time constants to retain spike timing information. Accurate modeling of networks that produce grid cells is therefore one example of an instance in which a full spiking model is likely to provide benefits over the phenomenological approximation.
Given that the phenomenological model provides a qualitative and often quantitative representation of the dynamics of spiking neurons without any reference to spikes, and in some cases this model provides more accurate information about the state of the soma than is contained in the output spike train, we may ask the question: Why do neurons spike? There are many reasons <cit.>. From a mathematical perspective, spikes are a unique basis set in that they can achieve translation and scale invariance, making spiking neural networks capable of representing a wide range of temporal and spatial phenomena, spanning length and time scales <cit.>. Examples from the neuroscience canon argue that spikes enable neurons to compute and communicate not just with rates but also precise coincidences <cit.>, sequences <cit.>, in bursts <cit.>, and relative to a background phase <cit.> to achieve nested oscillations <cit.> and rhythms <cit.> that direct attention <cit.>, and enable rapidly adaptive processing through multistability <cit.>. Across these examples, the computations performed by dendrites play a crucial role. Spikes have low latency and timing jitter <cit.>, particularly when the stimuli require precision timing <cit.>. They provide a unique means to send information accurately, rapidly, and over long distances, all of which enable highly interconnected networks to exchange information in a robust manner, which equips organisms for survival in a variety of circumstances <cit.>. When a stimulus changes rapidly, neurons can represent that change rapidly with spikes. When understanding a stimulus requires precise timing, such as in the auditory cortex, where timing differences between the two cochleae map to the spatial location of a source, spikes are excellent for representing that precise timing. Yet in different circumstances or in different brain regions when stimuli are slowly varying, spikes can still do an excellent job of representing these stimuli as well. It is possible that spikes are important for all the above reasons related to energy, computation, and communication, but that spikes are still not the most important quantity for the network designer to track in their model.
One need not choose globally between a spike-timing representation of information or a slower, rate-based representation. Spikes can do both, and the model presented here can capture both modalities. In typical modeling formalisms, the observables are either delta-function spike trains or time-averaged spike rates. With this formalism for SOENs, we dispense with both; the observables are the signals in all dendrites of the network. The vector 𝐬 uniquely identifies a point in configuration space, and its temporal evolution, Eq. <ref>, provides a complete description of the system. Sometimes 𝐬 varies quickly, resembling spikes. Other times it varies slowly, resembling rates. In all cases, the neurons are adapting dynamically in response to the stimulus. This picture is consistent with the perspective from a dendrite's point of view. At any moment, a dendrite cannot convey the rate of afferent synapse events it has recently experienced, nor can it report the precise time of the most recent or any other input spike. It can only communicate the signal it has stored in its integration loop. With properly configured dendritic arbors and neural network graph structure, these signals are sufficient to represent the input, whether that input is changing rapidly or slowly. The phrase “properly configured” refers to the choices of all elements of the 𝐉 matrix, the loop-filling rates γ, and the time constants, τ, all of which must be chosen based on prior information regarding the statistical structure of the input stimuli the network is designed to represent.
In the specific case of SOENs, we conjecture that the objective of a soma is to communicate information about its state to its downstream synapses. From the physical arguments detailed in Refs. shainline2021optoelectronic and shainline2019superconducting we conclude the optimal medium to communicate this information is light at the few-photon level. But this could be done in at least two ways. One way is to keep a light source on continuously, and to have the luminosity of that source (set by the current through the diode) be an analog function of the state of the soma. The downstream synapses would then receive a signal that, on average, reflected the state of the soma, and the time variability due to the Poisson statistics of the diode would be a source of noise. Another way is to use spikes. The soma produces discrete, short pulses at times that indicate it crossed a threshold, and these spikes include enough photons for every recipient synapse to receive at least one photon with low timing jitter. In this case noise is reduced. The spike case improves over the analog Poisson case because the uncertainty faced by the synapse regarding the state of the soma is more rapidly reduced when the soma state changes significantly, while the Poisson source must be integrated over a longer time to reduce noise from timing jitter and gain confidence (in a Bayesian sense) that the varying photon flux has indeed changed, representing change in the soma's state. The Poisson source could be made less variable, but only by increasing the photon flux, which costs energy. Thus, in the specific case of SOENs, where faint-light signals are received by single-photon detectors, several arguments can be made in favor of the energy and temporal benefits of spikes relative to an alternative analog approach. Comparing to biology, we find similar arguments can be made. If each soma had to maintain its entire axonal arbor at a specified analog voltage that varied in time to track the state of the soma's membrane, significantly more energy would be required, or the signal would be subject to appreciably more noise.
These arguments recapitulate what has already been presented in the literature regarding the reasons neurons spike. But the model presented here does offer a new lens through which to view the situation. In Figs. <ref>(d), (e) and <ref> we showed examples where the phenomenological model propagated analog information about the state of the soma to the downstream dendrites, which allows the phenomenological model to provide a better representation of the state of the soma than was achieved through discrete, binary spikes. In that context we lamented the inability of the phenomenological model to quantitatively match the saw-tooth response of the full spiking neuron, but we can turn the picture around. Instead, one could conjecture that a neuron wishes to, or ideally would, cast a time-continuous, analog signal to all downstream synapses so they could have accurate information about the state of the soma at all times. But no neuron, biological or technological, can accomplish this due to physical constraints. Such signaling is practically difficult and energetically expensive, so neurons resort to spikes as efficient and quick approximations to the full analog, time-continuous state of the soma. We can then see the message of Figs. <ref>(d), (e) and <ref> inversely: spiking neurons do a pretty good job of approximating the signal the soma would like to send, but their physical limitations force them to approximate the desired smooth signals with jagged responses. Low-pass filtering will smooth the time series, and during realistic network operation, the square corners are rounded by various noise mechanisms, so expending additional energy to capture the analog curve is not a beneficial endeavor.
An additional result of the model is conceptual simplification. We can consider systems entirely in terms of the state vector 𝐬 at all levels of the network, and each s_i is defined at all times purely in terms of other values of s_j and the fixed 𝐉 matrix. The pulsatile body of a spiking neuron and the recipient synapse have both been eliminated from the picture, and what remains is a network of interconnected dendrites, performing summation, temporal integration, and nonlinear transfer functions on their inputs. Everything is continuous in time and constructed from the same dendritic building blocks, which are simple yet versatile. Even the learning circuits which establish plasticity signals are governed by the same picture and the same differential equations. Yet eliminating the spiking somas and recipient synapses in the model does not require forfeiting their benefits in hardware. Superconducting optoelectronic neurons still spike, which means their transmitters are usually off, saving appreciable energy. These spikes can travel far and fast with low noise, updating downstream knowledge at light speed, in contrast to what would be achieved if an analog electrical signal were required to charge a large axonal arbor. And dendrites can still make use of timing information for coincidence and sequence detection, phase and burst coding, and spike-timing-dependent learning protocols.
It may be the case that the first phenomenological model of Ref. shainline2023phenomenological, which included spikes, produces results that are sensitive to the precise timing of spikes even in low-pass-filtered cases, for example when the peaks of the saw-tooth response of Fig. <ref> affect downstream behavior, with discrepancies arising from few-nanosecond timing jitter of synapse events, a temporal resolution at which we should not expect the present model to be accurate. Dendritic and neuronal nonlinearities make it likely that the response across trials could be chaotic with respect to minor variations in such timing. One must then consider the timing jitter of physically instantiated components. The dominant contribution to the jitter of synapse events is likely to result from stochasticity in photon production times due to the light source. A one-nanosecond, direct-gap emitter would be ideal, but there is no guarantee such a source will become viable at scale with low cost when integrated with the rest of the SOENs fabrication process. A silicon light source is another possibility, in which case the emission time constant is likely to be between 30 ns and 100 ns, with commensurate uncertainty in the timing of synapse events. It is necessary in such cases that any computation or algorithm implemented by the network be resilient to such timing jitter. One approach to achieve this in modeling is to run Monte Carlo trials, inject the expected jitter into the simulation, and average over network outcomes. The average of this procedure will lead to a result comparable to the output of the framework presented here.
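The following sketch outlines this Monte Carlo procedure; run_network is only a placeholder for whatever simulation (spiking or phenomenological) is being averaged, and the 50 ns emitter time constant is simply one value in the range quoted above.

import numpy as np

rng = np.random.default_rng(0)

def run_network(event_times_ns):
    # placeholder outcome; in practice this would be a full network simulation
    return np.mean(event_times_ns)

nominal_times_ns = np.array([100.0, 150.0, 400.0])   # nominal synapse-event times
tau_emitter_ns = 50.0                                # assumed emitter time constant
n_trials = 1000

outcomes = [run_network(nominal_times_ns + rng.exponential(tau_emitter_ns, nominal_times_ns.shape))
            for _ in range(n_trials)]
print("trial-averaged outcome:", np.mean(outcomes),
      "+/-", np.std(outcomes) / np.sqrt(n_trials))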
The phenomenological model offers a transparent framework for formal analysis and renders various learning modalities and training algorithms applicable to SOENs. We showed above that the analogy between SOENs viewed through this lens and classical neurodynamics is tight. The body of knowledge generated in the context of classical neurodynamics therefore applies in bulk to SOENs as well. Several useful machine-learning algorithms applied to neural networks are motivated by the basic picture of neurons gleaned from classical neurodynamics. For example, backpropagation <cit.> has been the most impactful technique for credit assignment in artificial intelligence, and the transfer functions used in that formalism are typically interpreted as representing average neuron or population spike rates. Yet it has been difficult to implement backpropagation in spiking neural networks due to the discrete, non-differentiable nature of spike signals. In the case of the SOENs phenomenological model, all signals are continuous in time, and the model can be used to calculate steady-state dendritic transfer functions and their derivatives. The steady state of the model can be obtained from Eq. <ref>,
s_ss(ϕ_ss) = 1/α g(ϕ_ss,s_ss),
an equation that can be straightforwardly solved self-consistently, providing the continuous, differentiable transfer functions shown in Fig. <ref>.
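A minimal sketch of such a self-consistent solution by damped fixed-point iteration is given below; the source term g is the same hypothetical stand-in used in the earlier toy example, not the engineered dendritic response measured in hardware.

import numpy as np

def g(phi, s, phi_th=0.17, s_max=1.0):
    return max(abs(phi) - phi_th, 0.0) * (1.0 - s / s_max)

def s_steady(phi, alpha, n_iter=200):
    s = 0.0
    for _ in range(n_iter):               # damped fixed-point iteration of s = g(phi, s)/alpha
        s = 0.5 * s + 0.5 * g(phi, s) / alpha
    return s

alpha = 1.2
for phi in np.linspace(0.0, 0.5, 6):      # tabulate the continuous, differentiable transfer function
    print(f"phi = {phi:4.2f}  ->  s_ss = {s_steady(phi, alpha):6.4f}")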
The phenomenological model for dendrites was derived directly from a Lagrangian and the Euler-Lagrange equations that result from minimizing action. An approximation was made to treat the JJs as a voltage source of a particular form, and the quantitative agreement between the phenomenological and circuit models indicates the purest Lagrangian of the system is well represented by the approximate form. We can proceed as if the course of dendrites through configuration space follows exactly the path of least action. The phenomenological model for neurons was conjectured on conceptual grounds by analogy to the dendritic case on slower time scales. A first-principles derivation from a Lagrangian is more cumbersome for a neuron due to the complexities of the superconducting thin-film amplifiers and semiconductor transmitter circuits involved. But we have shown that the full circuit equations and the much simpler Lagrangian representation—again a simple circuit with a peculiar but tractable voltage source—provides a decent representation of the dynamics. At the scale of neurons, too, we can proceed as if the network trajectory through configuration space closely follows the path of least action of the much simpler phenomenological circuit of Fig. <ref>(c). Complete evolution of the network follows from foundational physical principles. The tools of statistical mechanics can be applied.
The framework developed here may form a bridge to energy models <cit.> such as Hopfield networks <cit.> and associated training algorithms such as equilibrium propagation <cit.>, wherein an energy is associated with any state of the network, and the task of training is to find update rules that take an error or cost function and use this information to change the parameters of the network in such a way that states of low energy are also states of low error. The equation of motion for classical neurodynamics, Eq. <ref>, was used in Ref. yan2013nonequilibrium as a starting point to formulate a nonequilibrium statistical mechanical model of neural networks. Because the dynamical equation of SOENs under the approximations considered here matches classical neurodynamics, the formalism of Ref. yan2013nonequilibrium applies to SOENs as well. The Lagrangian model presented here can be straightforwardly extended to an energy function. The energy in a given dendrite is
E_i = (1/2) β s_i^2.
Given the threshold-linear transfer function of a dendrite, Fig. <ref>(a), which also happens to be symmetric about ϕ = 0, a dendrite with two inputs of equal magnitude and opposite sign has energy E ∼ (s_1-s_2)^2, which is a commonly used cost function for energy models. Therefore, in SOENs, the physical energy stored in a dendrite is also the ubiquitous energy used as an error function, a fact with immediate ramifications for predictive coding <cit.> and the free-energy principle <cit.>. With SOENs, it is trivial to engineer dendrites or neurons whose activity ceases only when top-down and bottom-up signals match.
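A trivial numerical illustration of this mismatch energy for a single dendrite with a top-down and a bottom-up input is given below; the threshold-linear transfer is idealized as an identity so that the dendritic signal is simply proportional to the input difference, and all numbers are arbitrary.

beta = 300.0                               # illustrative dimensionless inductance

def dendrite_energy(s_topdown, s_bottomup, J=1.0):
    s = J * (s_topdown - s_bottomup)       # idealized net signal driven by the mismatch
    return 0.5 * beta * s ** 2             # physical energy stored in the integration loop

for s_td, s_bu in [(0.3, 0.3), (0.3, 0.2), (0.3, 0.0)]:
    print(f"top-down {s_td}, bottom-up {s_bu} -> energy {dendrite_energy(s_td, s_bu):.2f}")

The stored energy vanishes only when the two signals match, which is the behavior exploited for predictive-coding-style error units.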
In the energy picture, the parameters J_ij affect the energy landscape and so can be chosen to sculpt a given network to the statistical signatures of a certain data set. Additional plasticity dendrites can also be added to dynamically sculpt the landscape, enabling phenomena such as multistability <cit.>, winnerless competition <cit.> and episodic memories <cit.>. To implement biologically inspired learning techniques, such as Hebbian update rules <cit.>, spike-timing-dependent plasticity <cit.>, and three-factor update <cit.> (relevant for reinforcement learning), the fact that the model captures coincidence and sequence events is essential, and the values of s present at all dendrites can be used for credit assignment and update of the entire dendritic arbor in the presence of a top-down reward or error signal. These examples point to a plethora of possible future investigations. The value of the model presented here will be gauged by its numerical speed and its ability to bridge SOENs to these vast bodies of other work so this promising hardware can be made practically useful.
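As one schematic example, and not a prescription from this work, a three-factor update can be written directly in terms of the continuously available dendritic signals, with a top-down reward or error signal R gating a Hebbian product; the learning rate and signal values below are arbitrary.

import numpy as np

eta = 0.01                                  # illustrative learning rate

def three_factor_update(J, s_pre, s_post, R):
    # delta J_ij proportional to reward x post-signal x pre-signal
    return J + eta * R * np.outer(s_post, s_pre)

J = np.zeros((3, 3))
s_pre = np.array([0.2, 0.0, 0.7])           # stored signals of presynaptic dendrites
s_post = np.array([0.5, 0.1, 0.0])          # stored signals of postsynaptic dendrites
J = three_factor_update(J, s_pre, s_post, R=1.0)
print(J)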
§ ACKNOWLEDGEMENTS
This is a contribution of NIST, an agency of the US government, not subject to copyright.
http://arxiv.org/abs/2409.17211v1 | 20240925171955 | X-ray multiple-beam (n-beam) dynamical diffraction theories, numerical methods to solve them and experimental verification by using the synchrotron X-rays | ["Kouhei Okitsu"] | q-bio.BM | ["q-bio.BM", "math-ph", "math.MP", "physics.comp-ph", "physics.med-ph", "34L20 (Primary) 65L15, 42A38, 65T50 (Secondary)", "I.6.1"]
§ ABSTRACT
The behavior of X-rays diffracted in a perfect or quasi-perfect crystal can be described by the dynamical theory of X-ray diffraction. The study of two-beam cases, in which only the transmitted and one reflected X-ray beam are strong, has a history of one hundred years. However, the number of researchers who study multiple-beam cases (n-beam cases), in which more than two beams are simultaneously strong, is small. The present author has derived the Takagi-Taupin (T-T) dynamical theory applicable to the n-beam cases, coded computer programs to solve it, and verified them experimentally by using synchrotron X-rays. The equivalence between the Ewald-Laue (E-L) and T-T dynamical theories, which are related to each other by the Fourier transform, is explicitly verified in the present paper also for the n-beam cases. Further, the methods of the computer simulations and of the experiments are described. Furthermore, a hypothesis concerning the excessively large R-factor values in protein crystallography is presented; this might become extremely important in protein crystallography in the future.
Keywords:
X-ray diffraction,
dynamical diffraction theory,
multiple-beam diffraction,
n-beam diffraction,
Takagi-Taupin equation,
silicon crystal,
diamond crystal,
X-ray phase retarder,
computer simulation,
synchrotron radiation,
protein crystallography,
phase problem
§ INTRODUCTION
The theory that describes the behavior of X-rays diffracted in a perfect or quasi-perfect crystal is called the dynamical theory. Just after the discovery of X-ray diffraction by von Laue, basic theories were given by Darwin <cit.> and by Ewald <cit.>. The most widely known dynamical theory is the Ewald-Laue (E-L) theory, which was derived by applying the two-beam approximation to the fundamental equation given by von Laue <cit.>. There are several textbooks that describe the dynamical theory <cit.>. On the other hand, the Takagi-Taupin (T-T) equation derived by Takagi <cit.> was accepted as another form of the dynamical theory. It can deal with the X-ray wave field in a distorted crystal, and various images of crystal defects were computer-simulated based on the T-T equation <cit.>.
Incidentally, it can easily be understood that n reciprocal lattice nodes (n ≥ 3) can exist on the surface of the Ewald sphere when the crystal is rotated around the axis H_0 H_1. Here, H_0 is the origin of reciprocal space and H_1 is the reciprocal lattice node that causes the H_0 H_1 reflection. An X-ray intensity measurement taken while rotating the crystal around the H_0 H_1 axis is called the Renninger scan <cit.>. While silicon, diamond and/or germanium crystals are usually used as the monochromator in the energy scan of X-ray spectroscopy, discontinuities of the X-ray intensity, referred to as glitches, are frequently found. When the photon energy of the X-rays is scanned by rotating the monochromator crystal, the radius of the Ewald sphere changes. A glitch then occurs when a third reciprocal lattice node, other than the origin of reciprocal space H_0 and the reciprocal lattice node H_1 giving the primary reflection, lies on the surface of the Ewald sphere. Let us refer to the case in which n reciprocal lattice nodes H_0, H_1, H_2, ⋯, H_n-1 exist on the surface of the Ewald sphere as the n-beam cases.
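To make this geometry concrete, the following sketch scans the azimuthal angle of a Renninger scan for a hypothetical silicon 220 primary reflection at the Cu Kα wavelength and reports how closely a chosen third node (here 111) approaches the Ewald sphere; the reflection indices, lattice constant and wavelength are illustrative assumptions, not values used later in this paper.

import numpy as np

a = 0.5431                                   # silicon lattice constant (nm)
lam = 0.15406                                # Cu K-alpha wavelength (nm)
K = 1.0 / lam                                # wavenumber in vacuum (1/nm)

def g_vec(hkl):
    return np.array(hkl, dtype=float) / a    # reciprocal lattice vector of a cubic crystal

h1 = g_vec((2, 2, 0))                        # primary reflection H_0 H_1
u = h1 / np.linalg.norm(h1)                  # rotation axis of the Renninger scan
v = np.array([0.0, 0.0, 1.0])                # unit vector normal to u (valid for this h1)
w = np.cross(u, v)

# Bragg condition for h1: |k0| = K and |k0 + h1| = K, i.e. k0 . h1 = -|h1|^2 / 2
k_par = -np.linalg.norm(h1) / 2.0
k_perp = np.sqrt(K**2 - k_par**2)

def k0(psi):                                 # incident wavevector at azimuth psi about h1
    return k_par * u + k_perp * (np.cos(psi) * v + np.sin(psi) * w)

h2 = g_vec((1, 1, 1))                        # candidate third reciprocal lattice node
psis = np.radians(np.linspace(0.0, 360.0, 36001))
miss = np.array([abs(np.linalg.norm(k0(p) + h2) - K) for p in psis])
i = int(np.argmin(miss))
print(f"closest approach of the 111 node to the Ewald sphere: {miss[i]:.3e} 1/nm "
      f"at psi = {np.degrees(psis[i]):.3f} deg")

Azimuths at which this miss distance falls to zero are the n-beam (or glitch) positions for that node.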
The E-L two-beam dynamical diffraction theory was extended to deal with the n-beam cases in 1965-1968 <cit.>. The numerical method to solve that theory was given by Colella in 1974 <cit.>. However, the extension of the T-T equation was delayed for many years owing to the complexity of dealing with the polarization effect of X-rays. The three-beam T-T equation that neglected the polarization effect was given by Thorkildsen in 1987 <cit.>. The T-T equation that takes the polarization factor into account was given for the first time by Larsen and Thorkildsen in 1998 <cit.>. The present author reported the T-T equation extended to the n-beam cases for n ∈{3, 4, 6, 8, 12} in 2003 <cit.>. The numerical method to solve it and its experimental verification by using synchrotron radiation were given by the present author and coauthors for a six-beam case <cit.>. Computer-simulated and experimentally obtained results to be compared with each other for the n-beam cases were reported for n ∈{3, 4, 5, 6, 8, 12} in 2012 by Okitsu, Imai and Yoda <cit.>. Excellent agreement was found between the computer-simulated and the experimentally obtained pinhole topographs. Between the E-L and the T-T dynamical theories there is a simple relation described by the Fourier transform; it had been recognized implicitly, but it was described explicitly for the first time in 2012 <cit.>. It can be recognized that this delayed the extension of the T-T equation to the n-beam cases in comparison with the E-L dynamical theory.
In the present paper, the n-beam E-L theory is first derived from Laue's fundamental equation of the dynamical theory <cit.>. Then, the n-beam T-T equation is derived by Fourier-transforming the n-beam E-L theory. The equivalence between the n-beam E-L and T-T dynamical theories is explicitly described, also for an arbitrary number n. The n-beam dynamical diffraction phenomena of X-rays can be described by both the T-T and the E-L theories and solved numerically. The numerical methods for solving these theories have advantages and disadvantages when compared with each other; the present author considers that, with this recognition, they should be used depending on the purpose. Authier's book describing the dynamical theory of X-ray diffraction <cit.> is recognized as the most widely read textbook on the subject, with over 500 pages; however, only 24 of those pages are devoted to n-beam diffraction. Descriptions concerning the n-beam E-L theory are also found in Pinsker's book <cit.>. These have been reviewed by Weckert and Hümmer <cit.> and by Colella <cit.>.
§ DERIVATION OF THE EWALD-LAUE (E-L) N-BEAM DYNAMICAL THEORY
The following equation is the fundamental equation of the dynamical theory
<cit.>:
(k_i^2 - K^2) / k_i^2 𝒟_i = ∑_j χ_h_i - h_j [𝒟_j]_⊥𝐤_i.
Here, k_i is the wavenumber of the ith numbered Bloch wave, whose wavevector is 𝐤_i (= 𝐤_0 + 𝐡_i). 𝐤_0 is the wavevector of the forward-diffracted X-ray beam in the crystal and 𝐡_i is the scattering vector. K (= 1 / λ) is the wavenumber of the incident X-rays in vacuum, where λ is their wavelength. 𝒟_i and 𝒟_j are the amplitude vectors of the ith and jth numbered Bloch waves. ∑_j is the infinite summation over j. χ_h_i - h_j is the Fourier coefficient of the electric susceptibility. [𝒟_j]_⊥𝐤_i is the vector component of 𝒟_j perpendicular to 𝐤_i.
By applying the approximation of k_i + K ≈ 2 k_i
to (<ref>),
the following equation can be obtained:
ξ_i 𝒟_i = (K / 2) ∑_j χ_h_i - h_j [ 𝒟_j ]_⊥𝐤_i,
where ξ_i = k_i - K.
The electric displacement vectors 𝒟_i and 𝒟_j can be represented as linear combinations of the scalar electric displacements as follows:
𝒟_i = 𝒟_i^(0)𝐞_i^(0)
+ 𝒟_i^(1)𝐞_i^(1),
𝒟_j = 𝒟_j^(0)𝐞_j^(0)
+ 𝒟_j^(1)𝐞_j^(1).
Here, 𝐬_i is a unit vector in the direction of 𝐤_i, and 𝐞_i^(0) and 𝐞_i^(1) are defined such that 𝐬_i, 𝐞_i^(0) and 𝐞_i^(1) construct a right-handed orthogonal system in this order. 𝐬_j, 𝐞_j^(0) and 𝐞_j^(1) are defined in the same way.
Fig. <ref> should be referred to in the following description.
For cubic crystals with the highest symmetry, the number n of reciprocal lattice nodes that can simultaneously exist on the surface of the Ewald sphere is restricted to 3, 4, 5, 6, 8 and 12, under the approximation that all reciprocal lattice nodes other than the n nodes taken into account are sufficiently far from the surface of the Ewald sphere. The Laue point La_0 is the point whose distance from the reciprocal lattice nodes H_i (i ∈{ 0, 1, ⋯, n-1 }) is K (= 1 / λ), where λ is the wavelength of the X-rays in vacuum. Pl_i is a plane surface that approximates the sphere of radius K centered at H_i; only Pl_0 and Pl_3 are drawn in Fig. <ref>. The Laue point is deliberately symbolized as La_0 so that the theories can be extended to cases in which | La_0 H_i| is not exactly equal to K. P_1 is the point on Pl_0 that is the start point of the wavevector of the incident X-rays whose end point is H_0.
The polarization factors C and S are defined as follows,
𝐞_j^(m) = S_i, j^(m) 𝐬_i
+ C_i, j^(0, m) 𝐞_i^(0)
+ C_i, j^(1, m) 𝐞_i^(1).
Therefore,
S_i, j^(m) = 𝐬_i ·𝐞_j^(m),
C_i, j^(0, m) = 𝐞_i^(0) ·𝐞_j^(m),
C_i, j^(1, m) = 𝐞_i^(1) ·𝐞_j^(m).
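The geometric factors S and C can be evaluated numerically in a straightforward way. The sketch below builds a right-handed polarization basis (𝐬_i, 𝐞_i^(0), 𝐞_i^(1)) for each beam from its propagation direction and takes the dot products defined above; the reference direction used to fix 𝐞_i^(0) is an arbitrary choice made here only for illustration.

import numpy as np

def polarization_basis(s, ref=np.array([0.0, 0.0, 1.0])):
    s = s / np.linalg.norm(s)
    e0 = np.cross(ref, s)
    if np.linalg.norm(e0) < 1e-8:          # s parallel to ref: fall back to another reference
        e0 = np.cross(np.array([1.0, 0.0, 0.0]), s)
    e0 = e0 / np.linalg.norm(e0)
    e1 = np.cross(s, e0)                   # (s, e0, e1) is right-handed by construction
    return s, e0, e1

def geometric_factors(s_list):
    n = len(s_list)
    bases = [polarization_basis(np.asarray(s, dtype=float)) for s in s_list]
    S = np.zeros((n, n, 2))                # S[i, j, m]    = s_i . e_j^(m)
    C = np.zeros((n, n, 2, 2))             # C[i, j, l, m] = e_i^(l) . e_j^(m)
    for i, (si, ei0, ei1) in enumerate(bases):
        for j, (sj, ej0, ej1) in enumerate(bases):
            for m, ejm in enumerate((ej0, ej1)):
                S[i, j, m] = si @ ejm
                C[i, j, 0, m] = ei0 @ ejm
                C[i, j, 1, m] = ei1 @ ejm
    return S, C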
By substituting (<ref>) and (<ref>)
into the left and right sides of (<ref>), respectively,
the following equations can be obtained:
ξ_i (
𝒟_i^(0) 𝐞_i^(0)
+ 𝒟_i^(1) 𝐞_i^(1)
)
= K /2
∑_j = 0^n - 1 χ_h_i - h_j
[
𝒟_j^(0) 𝐞_j^(0)
+ 𝒟_j^(1) 𝐞_j^(1)
]_⊥𝐤_i.
By substituting (<ref>) into
the right side of (<ref>) and comparing
the terms of 𝐞_i^(l),
ξ_i 𝒟_i^(l)
= K /2
∑_j = 0^n - 1 χ_h_i - h_j
∑_m = 0^1
C_i, j^(l, m)
𝒟_j^(m).
P_1^' P_1 is parallel to the downward surface normal 𝐧_z of the crystal and is described as follows:
P_1^' P_1 = ξ𝐧_z.
β^(0) and β^(1) are the angular deviations of the start point of the wavevector of the incident X-rays. With reference to Fig. <ref>, they are described as follows:
P_1 La_0
= K β^(0)𝐞_0^(0)
+ K β^(1)𝐞_0^(1).
ξ_i (= k_i - K) is obtained from the scalar product of 𝐬_i and 𝐤_i - 𝐊_i. Here, 𝐤_i = P_1^' H_i and 𝐊_i = La_0 H_i. Substituting (<ref>) into the scalar product 𝐬_i · P_1^' La_0, the following equation is obtained:
ξ_i
= 𝐬_i ·(
P_1^'P_1
+ P_1La_0
)
= ξ𝐬_i ·𝐧_z
+ K β^(0) 𝐬_i ·𝐞_0^(0)
+ K β^(1) 𝐬_i ·𝐞_0^(1)
= ξcosΘ_i + K β^(0) S_i, 0^(0)
+ K β^(1) S_i, 0^(1).
(<ref>) can be substituted into
the left side of (<ref>)
to obtain the following equation:
ξcosΘ_i 𝒟_i^(l)
+ K (
S_i, 0^(0) β^(0)
+ S_i, 0^(1) β^(1)
)
𝒟_i^(l)
= K /2 ∑_j = 0^n - 1 χ_h_i - h_j
∑_m = 0^1
C_i, j^(l, m)
𝒟_j^(m).
Here, i, j ∈{ 0,1, ⋯, n-1 },
n ∈{ 3,4,5,6,8,12 } and
l, m ∈{ 0, 1 }.
Θ_i is the angle spanned by 𝐬_i and 𝐧_z.
By dividing both sides of (<ref>) by cosΘ_i, the following equation can be obtained:
ξ𝒟_i^(l)
= - K/ cosΘ_i (
S_i, 0^(0) β^(0)
+ S_i, 0^(1) β^(1)
)
𝒟_i^(l)
+ K/ 2 cosΘ_i
∑_j = 0^n - 1 χ_h_i - h_j
∑_m = 0^1
C_i, j^(l, m)
𝒟_j^(m).
(<ref>) can also be described by using
matrices and a vector as follows:
ξ𝐄𝒟
= 𝐀𝒟.
Here, 𝐄 is a 2 n × 2 n unit matrix.
𝒟 is a 2n-dimensional column vector whose
qth element is 𝒟_j^(m)
where q = 2 j + m + 1.
Hereafter, an eigenvector, or a matrix whose column vectors are eigenvectors, is denoted by a calligraphic character. The element in the pth row and qth column of the 2n × 2n matrix 𝐀, a_p, q (p = 2 i + l + 1), is given as follows:
a_p, q = K / 2 cosΘ_i
χ_h_i - h_j C_i, j^(l, m)
- δ_p, q K/ cosΘ_i
( S_i, 0^(0) β^(0)
+ S_i, 0^(1) β^(1)
).
Here, δ_p, q is the Kronecker delta. (<ref>) describes an eigenvalue problem with 2n eigenvalues ξ and 2n eigenvectors 𝒟. Each eigenvalue ξ determines the wavevector of the corresponding Bloch wave, and 𝒟 gives the ratios of its amplitudes.
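The eigenvalue problem can be set up numerically as sketched below, assuming that the Fourier coefficients chi[i][j] = χ_h_i - h_j, the direction cosines cosTheta[i] = cosΘ_i, and the factors S and C (for example from the helper above) have been prepared elsewhere; all inputs here are placeholders rather than values from this paper.

import numpy as np

def build_A(K, chi, C, S, cosTheta, beta0, beta1):
    # Builds the 2n x 2n matrix whose elements a_pq are defined above
    # (0-based indexing is used here: p = 2 i + l, q = 2 j + m).
    n = len(cosTheta)
    A = np.zeros((2 * n, 2 * n), dtype=complex)
    for i in range(n):
        for l in range(2):
            p = 2 * i + l
            for j in range(n):
                for m in range(2):
                    q = 2 * j + m
                    A[p, q] = K / (2.0 * cosTheta[i]) * chi[i][j] * C[i, j, l, m]
            # diagonal term arising from the angular deviation of the incident wave
            A[p, p] -= K / cosTheta[i] * (S[i, 0, 0] * beta0 + S[i, 0, 1] * beta1)
    return A

# The 2n eigenvalues xi and eigenvectors (columns of Dvec) then follow from
# A = build_A(K, chi, C, S, cosTheta, beta0, beta1)
# xi, Dvec = np.linalg.eig(A)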
The two-beam E-L theory can also be described as an eigenvalue problem; however, to the present author's knowledge, no textbook describes the two-beam theory in this way. The conventional description of the two-beam dynamical theory proceeds in two steps: the dispersion surfaces are described first, and then (<ref>) is rearranged as follows:
(𝐀-ξ𝐄)
𝒟 = 𝐎.
Here, 𝐎 is a 2n-dimensional column vector all of whose elements are zero.
det (𝐀-ξ𝐄) = 0.
(<ref>) gives the condition under which (<ref>) has a solution other than the zero vector.
In the two-beam case, σ- and π-polarized X-rays can be dealt with independently since they do not interfere with each other. Further, the term with j = i in (<ref>) is, in general, eliminated by defining the Lorentz point. The dispersion surfaces described by (<ref>) can then be approximated by hyperbolic curves whose asymptotes cross at the Lorentz point, and the two-beam E-L theory can be solved analytically in this approximation. In the description of the present paper, the Lorentz point is not defined. The T-T equation, which explicitly retains the term with j = i, has the advantage that it can deal with the wave fields in an arbitrarily shaped crystal <cit.>.
The dispersion surfaces for the n-beam cases
are described by a complex 2nth order equation
whose analytical solution cannot be obtained.
This is one of the reasons for the late development
of the n-beam dynamical theory.
The numerical solution of the n-beam E-L theory
has been given by Colella in 1974
<cit.>
for the first time.
Colella's method takes into account the curvature
of the sphere whose radius is K and
center is H_i.
This method gives the numerical solution with a higher precision
when the start point of the wavevector of the incident X-rays is far distant
from the Laue point,
in comparison with the method of solving the eigenvalue problem
described by (<ref>).
In Fig. <ref>,
La_i whose distance from
H_i (i ∈{ 6, 7,⋯, 17 }) is K,
is also described
in addition to the Laue point La_0
whose distance from
H_0, H_1, H_2, H_3, H_4 and H_5 is K.
For later description in <ref>
regarding the 18-beam case
as shown in Fig. <ref>,
the definition of La_i is necessary.
In Fig. <ref>,
La_0 La_i is parallel to
P_1^' P_1.
Then, ξ𝒟_i^(l)
in the left side of (<ref>)
should be replaced with
(ξ + ξ_i^') 𝒟_i^(l).
Here, ξ_i^' = 0 for i < 6.
(<ref>) that gives elements
of 2n × 2n matrix in (<ref>)
should be rewritten as follows:
a_p, q = K /2 cosΘ_i
χ_h_i - h_j C_i, j^(l, m)
- δ_p, q K/ cosΘ_i
( S_i, 0^(0) β^(0)
+ S_i, 0^(1) β^(1)
) - δ_p, q ξ_i^',
where, ξ_i^' = 0 for i < 6.
Fig. <ref> (b) has been
obtained in the following sequence i.e.
i) define the 2n × 2n (n = 18) matrix
whose elements are as described in
(<ref>),
ii) solve the eigenvalue problem of (<ref>) and
iii) fast Fourier transform the diffraction curves
obtained based on (<ref>).
This has an important meaning.
In the T-T equation that has been described
in 2012
<cit.>,
the n reciprocal lattice nodes should be
on a circle in the reciprocal space.
However,
both for the E-L and T-T dynamical theories
described in the present paper,
the above restriction has been removed.
Further, La_0^' is separately defined
in the vicinity of La_0.
Then, La_i^'
(i ∈{ 0, 1, 2, ⋯, n - 1 })
are defined on Pl_i such that
La_0^' La_i^'
= ξ_i^''𝐧_z.
The n-beam E-L theory corresponding to (<ref>)
is described as follows:
ξcosΘ_i 𝒟_i^'(l)
+ K (
S_i, 0^(0) β^'(0)
+ S_i, 0^(1) β^'(1)
)
𝒟_i^'(l)
= - ξ_i^''
cosΘ_i 𝒟_i^'(l)
+ K /2 ∑_j = 0^n - 1 χ_h_i - h_j
∑_m = 0^1
C_i, j^(l, m)
𝒟_j^'(m),
where, i, j ∈{ 0,1, ⋯, n-1 },
n is the number of reciprocal lattice nodes,
l, m ∈{ 0, 1 }.
Here, P_1 La_0^'=
K β^'(0)𝐞_0^(0)+
K β^'(1)𝐞_0^(1).
The reason for the first term
on the left side of the above equation
will be described after deriving
(<ref>) and (<ref>).
The equation corresponding to (<ref>)
is given as follows:
ξ𝒟_i^'(l)
= - K/ cosΘ_i (
S_i, 0^(0) β^'(0)
+ S_i, 0^(1) β^'(1)
)
𝒟_i^'(l)
- ξ_i^''
𝒟_i^'(l)
+ K/ 2 cosΘ_i
∑_j = 0^n - 1 χ_h_i - h_j
∑_m = 0^1
C_i, j^(l, m)
𝒟_j^'(m).
Further, (<ref>)
can be replaced with the following equation:
a_p, q = K / 2 cosΘ_i
χ_h_i - h_j C_i, j^(l, m)
- δ_p, q K/ cosΘ_i
( S_i, 0^(0) β^'(0)
+ S_i, 0^(1) β^'(1)
) - δ_p, q ξ_i^''.
(<ref>) can be solved
by substituting the above equation into (<ref>)
even when
n (n is an arbitrary number)
reciprocal lattice nodes to be taken into account
that exist in the vicinity of
the surface of the Ewald sphere.
§.§ Derivation of the Takagi equation (T-T theory) from the Ewald-Laue (E-L) theory
In this section,
the n-beam T-T equation
is derived from the n-beam E-L theory
described by
(<ref>) and/or (<ref>).
The exchange of the order of integration in the reciprocal space
and differentiation in the real space,
and that of integration and summation,
are essential to the discussion.
Let the whole wave field 𝐃̃ (𝐫) i.e.
the solution of the dynamical theory
be described as follows:
𝐃̃(𝐫) = ∑_i = 0^n-1∑_l = 0^1𝐞_i^(l) D_i^(l)(𝐫) exp(
- i 2 π La_0 H_i·𝐫).
Here, 𝐫 is the location vector.
For the later description,
let 𝐫 be described as a linear combination of
𝐬_i, 𝐞_i^(0) and 𝐞_i^(1)
as follows:
𝐫 = s_i 𝐬_i
+ e_i^(0) 𝐞_i^(0)
+ e_i^(1) 𝐞_i^(1).
The amplitude of the
ith numbered wave with polarization state of l can be described as follows:
𝒟_i^(l) (Δ𝐤)
exp(
- i 2 πP_1^'H_i
·𝐫
)
= 𝒟_i^(l) (Δ𝐤) exp(
- i 2 πΔ𝐤 ·𝐫
)
exp(
- i 2 πLa_0H_i ·𝐫
),
where Δ𝐤
= P_1^' La_0.
Let us describe the amplitudes of the Bloch waves 𝒟_i^(l)
as 𝒟_i^(l)(Δ𝐤)
to clarify that these amplitudes
are functions of Δ𝐤.
Further, for later description,
let us confirm that Δ𝐤 are described from
(<ref>) and (<ref>) as follows:
Δ𝐤 = ξ𝐧_z
+ K β^(0)𝐞_0^(0)
+ K β^(1)𝐞_0^(1).
𝒟_i^(l)(Δ𝐤) can be
an arbitrary function of Δ𝐤.
For example,
𝒟_i^(l)(Δ𝐤) is
the Dirac delta function of Δ𝐤
for the condition of plane wave incidence.
However, it is a constant function whose amplitude and phase do not depend
on Δ𝐤 under
the condition of spherical-wave incidence.
Since D_i^(l)(𝐫) and D_j^(m)(𝐫),
the amplitudes of the ith and jth numbered waves
whose polarization states are l and m,
are considered to be coherent superpositions of Bloch waves,
they are described as follows:
D_i^(l) (𝐫)
= ∫_Δ𝐤^D.S.
𝒟_i^(l) (Δ𝐤) exp(
- i 2 πΔ𝐤 ·𝐫
)
dS,
D_j^(m) (𝐫)
= ∫_Δ𝐤^D.S.
𝒟_j^(m) (Δ𝐤) exp(
- i 2 πΔ𝐤 ·𝐫
)
dS.
Here, ∫_Δ k^D.S. dS means
the integration over the dispersion surfaces.
Since there are 2n couples of dispersion surfaces
and eigenvectors,
they can be described as
∑_k=1^2n∫_Δ k^D.S. dS.
However, (<ref>) has been described under the assumption that
∫_Δ k^D.S. dS means an integration for
2n dispersion surfaces in
(<ref>) and (<ref>)
for simplicity in the later deformation of the equations.
D_i^(l) (𝐫) and D_j^(m) (𝐫) are the amplitudes
that modulate the waves of
exp(- i 2 π La_0 H_i·𝐫)
𝐞_i^(l)
and
exp(- i 2 π La_0 H_j·𝐫)
𝐞_j^(m),
respectively.
By substituting (<ref>) and (<ref>)
and considering the polarization factors
defined as in (<ref>),
the following equation can be obtained:
D_i^(l) (𝐫)
= ∫_Δ𝐤^D.S.
𝒟_i^(l) (Δ𝐤)
×exp{
- i 2 π[
(
ξcosΘ_i
+ K β^(0) S_i, 0^(0)
+ K β^(1) S_i, 0^(1)
) s_i
+ f_i(
e_i^(0), e_i^(1)
)
]
}
dS.
Here, f_i(e_i^(0), e_i^(1))
are functions of e_i^(0), e_i^(1)
that do not depend on s_i.
Therefore, ∂ D_i^(l) (𝐫) / ∂ s_i
can be calculated as follows:
∂/ ∂s_i D_i^(l) (𝐫)
= ∂/ ∂s_i
∫_Δ𝐤^D.S.
𝒟_i^(l) (Δ𝐤)
exp(
- i 2 πΔ𝐤 ·𝐫
) d S
=
∫_Δ𝐤^D.S.
∂/ ∂s_i
[
𝒟_i^(l) (Δ𝐤)
exp(
- i 2 πΔ𝐤 ·𝐫
)
] d S
=
- i 2 π∫_Δ𝐤^D.S.
[
ξcosΘ_i + K
(
S_i, 0^(0) β^(0)
+ S_i, 0^(1) β^(1)
)
]
×𝒟_i^(l) (Δ𝐤)exp(
- i 2 πΔ𝐤 ·𝐫
)
d S.
On the other hand,
in place of Δ𝐤 in (<ref>),
let us define Δ𝐤^' as follows:
Δ𝐤^'
= P_1^'La_0^'
= ξ𝐧_z
+ K β^'(0) 𝐞_0^(0)
+ K β^'(1) 𝐞_0^(1).
In (<ref>) and (<ref>),
Δ𝐤, D_i^(l)(𝐫), 𝒟_i^(l)(Δ𝐤),
D_j^(m)(𝐫),
𝒟_j^(m)(Δ𝐤),
β^(0) and β^(1)
can be replaced with Δ𝐤^',
D_i^'(l)(𝐫),
𝒟_i^'(l)(Δ𝐤^'),
D_j^'(m)(𝐫),
𝒟_j^'(m)(Δ𝐤^'),
β^'(0) and β^'(1)
to obtain the following equation:
∂/ ∂s_i D_i^'(l) (𝐫)
=
- i 2 π∫_Δ𝐤^'^D.S.
[
ξcosΘ_i + K
(
S_i, 0^(0) β^'(0)
+ S_i, 0^(1) β^'(1)
)
]
×𝒟_i^'(l) (Δ𝐤^')
exp(
- i 2 πΔ𝐤^' ·𝐫
)
d S.
When n reciprocal lattice nodes exist on a circle
in the reciprocal space,
D_i^(l)(𝐫) in (<ref>)
is the amplitude
that modulates the oscillation of
exp( - i 2 π
La_0 H_i·𝐫)
𝐞_i^(l).
However, D_i^'(l)(𝐫)
in (<ref>)
is the amplitude
that modulates the oscillation of
exp( - i 2 π
La_0^' H_i·𝐫)
𝐞_i^(l).
(<ref>)
has been derived to generalize the T-T equation
in the later description
such as to take into account all reciprocal lattice nodes
in the vicinity of the surface of the Ewald sphere.
By substituting (<ref>) into (<ref>),
the following equations can be obtained:
∂/∂s_i D_i^(l) (𝐫)
= - i πK ∫_Δ𝐤^D.S.
∑_j = 0^n-1 χ_h_i - h_j
∑_m = 0^1 C_i, j^(l, m)
𝒟_j^(m)(Δ𝐤)
exp(
- i 2 πΔ𝐤 ·𝐫
)
d S
= - i πK
∑_j = 0^n-1 χ_h_i - h_j
∑_m = 0^1 C_i, j^(l, m)
∫_Δ𝐤^D.S.𝒟_j^(m) (Δ𝐤)
exp(
- i 2 πΔ𝐤 ·𝐫
)
d S.
(<ref>) can be substituted into (<ref>)
to obtain the following equation:
∂/∂s_i D_i^(l)(𝐫)
= - i πK ∑_j = 0^n - 1 χ_h_i - h_j
∑_m = 0^1 C_i, j^(l, m)
D_j^(m) (𝐫),
where, i, j ∈{ 0,1, ⋯, n-1 },
n ∈{ 3,4,5,6,8,12 },
l, m ∈{ 0, 1 }.
The above equation (<ref>)
is the n-beam T-T equation applicable to the cases where
n reciprocal lattice nodes exist on an identical circle
in the reciprocal space.
Incidentally, in the case of a perfect crystal,
the electric susceptibility in the crystal χ(𝐫)
can be Fourier-expanded to be
χ(𝐫)=
∑_iχ_h_i
exp(- i 2 π𝐡_i ·𝐫).
However,
in the cases that the crystal has the lattice displacement field
𝐮 (𝐫),
the electric susceptibility is approximately Fourier-expanded
as follows:
χ[
𝐫 - 𝐮(𝐫)
]
= ∑_i χ_h_i
exp{
-i 2 π𝐡_i ·[
𝐫 - 𝐮(𝐫)
]
}
= ∑_i χ_h_i
exp[i 2 π𝐡_i ·𝐮(𝐫)]
exp(-i 2 π𝐡_i ·𝐫).
Then, when
the crystal has the lattice displacement field of
𝐮(𝐫),
the Fourier coefficient of the electric susceptibility
is a function of the position in the crystal and
can be described as
χ_h_i - h_j exp[ i
2 π (𝐡_i - 𝐡_j)
·𝐮 (𝐫)].
Therefore, (<ref>) can be deformed
for a crystal having the lattice displacement field
as follows:
∂/∂s_i D_i^(l) (𝐫)
= - i πK
∑_j = 0^n - 1
χ_h_i - h_j exp[i 2 π(𝐡_i - 𝐡_j) ·𝐮 (𝐫)
]
∑_m = 0^1
C_i, j^(l, m) D_j^(m) (𝐫).
The above equation (<ref>)
is nothing but the n-beam T-T equation
<cit.>
that can deal with the X-ray wave fields
in a deformed crystal
in the case that
n reciprocal lattice nodes exist on
an identical circle in the reciprocal space
<cit.>.
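As a small illustrative sketch (not part of the original text), the only change needed in a numerical implementation for a deformed crystal is to multiply each Fourier coefficient by the position-dependent phase factor; the function below assumes a user-supplied displacement field u(r).

```python
import numpy as np

def chi_deformed(chi, h, u_of_r, r):
    # Position-dependent Fourier coefficients of the electric susceptibility:
    #   chi_{h_i - h_j} * exp[ i 2*pi (h_i - h_j) . u(r) ]
    # chi[i, j] : chi_{h_i - h_j} of the perfect crystal (n x n, complex)
    # h         : reciprocal lattice vectors h_i, shape (n, 3)
    # u_of_r    : callable returning the lattice displacement u(r), shape (3,)
    u = np.asarray(u_of_r(r), dtype=float)
    diff = h[:, None, :] - h[None, :, :]          # (h_i - h_j), shape (n, n, 3)
    phase = np.exp(1j * 2.0 * np.pi * diff.dot(u))
    return chi * phase
```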
Next,
let us derive the n-beam T-T equation
to take into account all the reciprocal lattice nodes
in the vicinity of the surface of the Ewald sphere.
As described just before deriving (<ref>),
La_0^' is the 0th-numbered
`generalized Laue point'
that exists on Pl_0
in Fig. <ref>.
As described before,
La_0^' La_i^' is
ξ_i^''𝐧_z
(see Fig. <ref>).
For the case that
n reciprocal lattice nodes exist on an identical circle
in the reciprocal space,
(<ref>) has been substituted into
(<ref>) to obtain (<ref>).
To generalize the n-beam T-T equation such as to take into account
all the reciprocal lattice nodes in the vicinity of
the surface of the Ewald sphere,
(<ref>)
can be substituted into (<ref>)
to obtain the following equation:
∂/∂s_i D_i^'(l) (𝐫)
= i 2 πξ_i^'' cosΘ_i
∫_Δ𝐤^'^D.S.
𝒟_i^'(l) (Δ𝐤^')
exp[
- i 2 πΔ𝐤^' ·𝐫
] d S
- i πK ∫_Δ𝐤^'^D.S.
∑_j = 0^n-1 χ_h_i - h_j
∑_m = 0^1 C_i, j^(l, m)
𝒟_j^'(m)
(Δ𝐤^')
exp(
- i 2 πΔ𝐤^' ·𝐫
)
d S
= i 2 πξ_i^'' cosΘ_i
D_i^'(l) (𝐫)
- i πK ∑_j = 0^n - 1
χ_h_i - h_j
∑_m = 0^1 C_i, j^(l, m)
D_j^'(m) (𝐫),
where, i, j ∈{ 0,1, ⋯, n-1 },
n is the number of reciprocal lattice nodes,
l, m ∈{ 0, 1 }.
At first glance, it might seem that
the calculation can be simplified by
putting
Δ𝐤^'_i =
P_1^' La_i^'
so as to change the integrand on the right side
of (<ref>)
to be
𝒟_i^'(l)(Δ𝐤_i^')
exp(- i2πΔ𝐤_i^'·𝐫).
However, in this case,
the second term of (<ref>)
cannot be deformed to be
the second term of (<ref>)
since the contents of the integration in the right side
of (<ref>)
should be
𝒟_j^(m)(Δ𝐤_j^')
exp(- i2πΔ𝐤_j^'·𝐫)
to keep the symmetry of the equation.
For this reason,
the first term of the right side of (<ref>)
has not been transferred to the left side.
Then, based on (<ref>),
the following equation is obtained:
∂/∂s_i D_i^'(l) (𝐫)
= i 2 πξ_i^'' cosΘ_i
D_i^'(l) (𝐫)
- i πK ∑_j = 0^n - 1
χ_h_i - h_j exp[i 2 π(𝐡_i - 𝐡_j) ·𝐮 (𝐫)
]
∑_m = 0^1 C_i, j^(l, m)
D_j^'(m) (𝐫).
The above equation (<ref>)
is the n-beam T-T equation
that describes the X-ray wave fields
in a crystal that has lattice displacement field
by taking into account all reciprocal lattice nodes
in the vicinity of the surface of the Ewald sphere.
Now let us derive the equation applicable for the case that
plane wave X-rays are incident on the crystal.
D_i^'(l)(𝐫) and D_j^'(m)(𝐫)
in (<ref>)
are X-ray amplitudes that modulate the oscillations of
exp(- i2 π
La_0^' H_i
· 𝐫)
𝐞_i^(l)
and
exp(- i2 π
La_0^' H_j·
𝐫)
𝐞_j^(m), respectively.
D_i^''(l)(𝐫)
and
D_j^''(m)(𝐫)
are X-ray amplitudes that modulate the oscillations of
exp(- i2 π
P_1 H_i
· 𝐫)
𝐞_i^(l)
and
exp(- i2 π
P_1 H_j
· 𝐫)
𝐞_j^(m).
In reference to
Fig. <ref>,
we can understand the following relations
D_i^'(l)(𝐫)
= D_i^''(l)(𝐫)
exp(
- i2πP_1La_0^'
·𝐫
)
= D_i^''(l)(𝐫)
exp[
- i2πK (
β^'(0) 𝐞_0^(0)
+ β^'(1) 𝐞_0^(1)
)
·𝐫
],
D_j^'(m)(𝐫)
= D_j^''(m)(𝐫)
exp(
- i2πP_1La_0^'
·𝐫
).
Both sides of (<ref>)
can be partially differentiated as follows:
∂/ ∂s_i D_i^'(l) (𝐫)
= [
∂/ ∂s_i D_i^''(l) (𝐫)
]
exp(
-i 2 πP_1La_0^'
·𝐫
)
- i 2 πK (
β^'(0) S_i, 0^(0)
+ β^'(1) S_i, 0^(1)
)
D_i^''(l) (𝐫) exp(
- i 2 πP_1La_0^'
·𝐫
).
(<ref>),
(<ref>)
and (<ref>)
can be substituted into (<ref>)
to obtain the following equation:
∂/ ∂s_i D_i^''(l) (𝐫)
= i 2 πξ_i^'' cosΘ_i
D_i^''(l) (𝐫)
+ i 2 πK (
β^'(0) S_i, 0^(0)
+ β^'(1) S_i, 0^(1)
) D_i^''(l) (𝐫)
- i πK ∑_j = 0^n - 1 χ_h_i - h_j
∑_m = 0^1 C_i, j^(l, m)
D_j^''(m) (𝐫).
D_i^''(l)(𝐫) and
D_j^''(m)(𝐫)
in the above equation (<ref>)
are X-ray amplitudes that modulate the oscillations of
exp(- i2π
P_1 H_i
·𝐫) 𝐞_i^(l)
and
exp(- i2π
P_1 H_j
·𝐫) 𝐞_j^(m).
Under the condition that plane wave X-rays
whose wavevector is P_1 H_0
are incident on the crystal,
a constant value of D_0^''(0)(𝐫)
and/or D_0^''(1)(𝐫)
should be given as the boundary condition
depending on the polarization state of the incident X-rays.
When the downward surface normal of the crystal is 𝐧_z
and unit vectors
𝐞_x and 𝐞_y are
defined such that
𝐞_x, 𝐞_y and 𝐧_z
construct a right-handed orthogonal system in this order
and the location vector
𝐫=
x 𝐞_x
+y 𝐞_y+z𝐧_z,
the X-ray amplitudes obtained by solving (<ref>)
are the function of only z
not depending on the values of x and y.
Therefore, they can be described as
D_i^''(l) (z) and D_j^''(m) (z).
Then,
∂ D_i^''(l) (𝐫) / ∂ s_i
in the left side of (<ref>)
can be deformed as follows:
∂D_i^''(l) (𝐫) / ∂s_i
= 𝐬_i ·(
d D_i^''(l) (z)/d z
) 𝐧_z
= cosΘ_i d D_i^''(l) (z)/d z.
(<ref>)
can be substituted into (<ref>)
to obtain the ordinary differential equation as follows:
d/ d z D_i^''(l) (z)
= i 2 πξ_i^''
D_i^''(l) (z)
+ i 2 πK/ cosΘ_i
(
β^'(0) S_i, 0^(0)
+ β^'(1) S_i, 0^(1)
) D_i^''(l) (z)
- i πK/ cosΘ_i
∑_j = 0^n - 1 χ_h_i - h_j
∑_m = 0^1 C_i, j^(l, m)
D_j^''(m) (z).
(<ref>) can be described also as
an eigenvalue problem
whose numerical solution can be obtained
by using numerical subroutine libraries, e.g. LAPACK.
Also from this,
the equivalence between the E-L and T-T theories
can be verified.
However, the details of this are not described here.
§.§ Derivation of the Ewald-Laue (E-L) theory
from the Takagi-Taupin (T-T) equation
In this section,
the n-beam E-L theory
described in (<ref>) and/or (<ref>)
is derived from the n-beam T-T equation
described in (<ref>).
When plane wave X-rays are incident on the crystal
to excite 2n tie points on the dispersion surfaces,
total wave field 𝒟̃
is described as the summation of Bloch waves as follows:
𝒟̃
= ∑_i = 0^n - 1 ∑_l = 0^1
𝐞_i^(l) 𝒟_i^(l)(Δ𝐤)
exp(
- i 2 πΔ𝐤 ·𝐫
)
exp(
- i 2 πLa_0H_i
·𝐫
).
Here,
D_i^(l)(𝐫)=
𝒟_i^(l)(Δ𝐤)
exp ( - i 2 π P_1 H_i·𝐫)
and
D_j^(m)(𝐫) =
𝒟_j^(m)(Δ𝐤)
exp ( - i2πP_1 H_j
·𝐫).
Therefore,
∂ /∂s_i [
𝒟_i^(l)(Δ𝐤)
exp(
- i 2 πΔ𝐤 ·𝐫
)
]
= - i πK
∑_j = 0^n - 1 χ_h_i - h_j
∑_m = 0^1
C_i, j^(l, m)
[
𝒟_j^(m)(Δ𝐤) exp(
- i 2 πΔ𝐤 ·𝐫
)
].
However, the left side of (<ref>)
can be deformed in the same procedure used
when deriving (<ref>)
as follows:
∂ /∂s_i[
𝒟_i^(l)(Δ𝐤)
exp(
- i 2 πΔ𝐤 ·𝐫
) ]
= 𝒟_i^(l)(Δ𝐤)
∂/∂s_i exp{
- i 2 π[
(
ξcosΘ_i
+ K β^(0) S_i, 0^(0)
+ K β^(1) S_i, 0^(1)
) s_i
+ f_i(
e_i^(0), e_i^(1)
)
]
}
= - i 2 π(
ξcosΘ_i
+ K β^(0) S_i, 0^(0)
+ K β^(1) S_i, 0^(1)
)
𝒟_i^(l)(Δ𝐤)
exp(
- i 2 πΔ𝐤 ·𝐫
).
The right-hand sides of (<ref>) and (<ref>)
can be compared to obtain the same equation as (<ref>).
The equivalence between the E-L and T-T theories
that can be described by the Fourier transform,
has been verified explicitly
from the descriptions of the previous and present subsections.
As far as the present author knows,
just a short statement concerning this relation
between the E-L and T-T theories for the two-beam case
can be found only in 11.3 of Authier's book
<cit.>.
§ NUMERICAL METHOD TO SOLVE THE N-BEAM DYNAMICAL THEORIES
§.§ Numerical method to solve the n-beam T-T equation
In reference to Fig. <ref> (a) and
<ref> (b),
the algorithm to solve the n-beam T-T equation
as described in (<ref>) for
n = 6
is explained.
The following method was used to obtain the computer-simulated images
shown in Fig. <ref>.
R_i^(0)R^(1)
in Fig. <ref> (a)
is parallel to 𝐬_i.
When the length of R_i^(0)R^(1)
is sufficiently small compared with the value
of |-1/(χ_0 K)|,
the n-beam T-T equation can be approximated by
the following equation:
D_i^(l)(R^(1))-D_i^(l)(R_i^(0))/| R_i^(0)R^(1) |
= - i πK ∑_j=0^n-1χ_h_i-h_j
∑_m=0^1 C_i, j^(l, m)
D_j^(m)(R_i^(0)) + D_j^(m)(R^(1)) /2.
(<ref>) describes
2 n-dimensional simultaneous linear equations
for (i ∈{ 0, 1, ⋯, n - 1 }, l ∈{ 0, 1 })
and can be numerically solved by using
subroutines,
e.g.
ZGETRF and ZGETRS
in LAPACK
(Linear Algebra PACKage).
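A minimal sketch of this step is given below (an illustration only, with assumed array layouts); scipy.linalg.lu_factor and lu_solve call the same LAPACK LU factorization and back-substitution routines (ZGETRF/ZGETRS) when given complex matrices.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def step_TT(D_old, M, ds):
    # One step of the difference form of the n-beam T-T equation.
    # D_old[i, :] : the 2n known amplitudes at the upstream point R_i^(0) of beam i
    #               (component index p = 2*i + l), shape (n, 2n)
    # M[p, q]     : -i*pi*K * chi_{h_i - h_j} * C_{i,j}^{(l,m)}, shape (2n, 2n)
    # ds[i]       : step length |R_i^(0) R^(1)| along s_i
    # Returns the 2n amplitudes at the new point R^(1).
    two_n = M.shape[0]
    n = two_n // 2
    A = np.eye(two_n, dtype=complex)
    b = np.zeros(two_n, dtype=complex)
    for i in range(n):
        for l in range(2):
            p = 2 * i + l
            A[p, :] -= 0.5 * ds[i] * M[p, :]
            b[p] = D_old[i, p] + 0.5 * ds[i] * M[p, :].dot(D_old[i, :])
    lu, piv = lu_factor(A)           # corresponds to ZGETRF
    return lu_solve((lu, piv), b)    # corresponds to ZGETRS
```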
Fig. <ref> is the top view
of Fig. <ref> (b).
In this case,
0 0 0-forward diffracted,
4 0 4,
4 2 6,
0 6 6,
2 6 4
and
2 2 0-reflected X-ray beams
are simultaneously strong
(see Fig. <ref>).
The angle spanned by the directions of n_x and n_y
is 120^∘.
Vectors
R_incR_i^(1)
(i ∈{ 0,1,2,3,4,5 })
in Fig. <ref> (b)
are parallel to the directions of
propagations of
0 0 0-forward diffracted,
4 0 4,
4 2 6,
0 6 6,
2 6 4 and
2 2 0-reflected
X-ray beams.
A four-dimensional array D_even(i,l,n_x,n_y)
[i ∈{ 0, 1, ⋯, n-1 },
l ∈{ 0, 1 },
n_x ∈{⋯, -2, -1, 0, 1, 2, ⋯},
n_y ∈{⋯, -2, -1, 0, 1, 2, ⋯}]
should be prepared such that
the calculated X-ray amplitudes are saved.
Here, i is the ordinal number of the X-ray wave,
l (l ∈{ 0, 1 }) is the polarization state,
n_x and n_y are two-dimensional positions
on the layer in the crystal.
The crystal is divided into a sufficiently large number of layers.
The calculations are repeated layer by layer
toward the depth direction.
As shown in Fig. <ref> (a),
X-ray amplitudes D_odd(j, m, n_x, n_y) just below
the crystal surface can be obtained from
D_even(i, l, n_x-0, n_y-0),
D_even(i, l, n_x-0, n_y-2),
D_even(i, l, n_x-1, n_y-3),
D_even(i, l, n_x-3, n_y-3),
D_even(i, l, n_x-3, n_y-2), and
D_even(i, l, n_x-1, n_y-0)
by solving (<ref>).
Outside the Borrmann pyramid as shown in
Fig. <ref> (b),
the wave fields do not exist.
The calculation is performed by scanning inside the Borrmann pyramid.
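The layer-by-layer scan can be organized, for example, as in the following schematic sketch (an assumption-laden illustration, not the actual program): two buffers hold the amplitudes on successive layers, and for each in-plane grid point inside the Borrmann pyramid the local 2n × 2n system is solved from the six upstream values, using the lateral offsets listed above for this particular six-beam geometry.

```python
import numpy as np

# Lateral offsets (dx, dy) of the six upstream points, as listed in the text for
# this six-beam geometry (the correspondence offset <-> beam i is assumed here).
OFFSETS = [(0, 0), (0, 2), (1, 3), (3, 3), (3, 2), (1, 0)]

def advance_layer(D_prev, local_solve, inside_pyramid):
    # D_prev[i, l, n_x, n_y] : amplitudes on the previous layer
    # local_solve(upstream)  : wraps a solver such as step_TT above (M, ds fixed)
    # inside_pyramid(nx, ny) : True inside the Borrmann pyramid on this layer
    n, _, NX, NY = D_prev.shape
    D_next = np.zeros_like(D_prev)
    for nx in range(NX):
        for ny in range(NY):
            if not inside_pyramid(nx, ny):
                continue                           # no wave field outside the pyramid
            pts = [(nx - dx, ny - dy) for dx, dy in OFFSETS]
            if any(px < 0 or py < 0 or px >= NX or py >= NY for px, py in pts):
                continue
            upstream = np.stack([D_prev[:, :, px, py].reshape(-1) for px, py in pts])
            D_next[:, :, nx, ny] = local_solve(upstream).reshape(n, 2)
    return D_next
```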
The values of χ_h_i - h_j were calculated
by using XOP version 2.3
<cit.>.
The difference equation that approximates
the standard differential equation (<ref>)
is given as follows:
D_i^''(l)(z+Δz)-D_i^''(l)(z)/Δz
= i 2 π [
ξ_i^''
+ K/ cosΘ_i (
β^'(0) S_i, 0^(0)
+ β^'(1) S_i, 0^(1)
)
]
D_i^''(l)(z) + D_i^''(l)
(z + Δz) /2
- i πK/ cosΘ_i
∑_j=0^n-1χ_h_i-h_j
∑_m=0^1 C_i, j^(l, m)
D_j^''(m)(z)
+ D_j^''(m)(z + Δz) /2.
(<ref>) can be solved in a short time
only when incident plane-wave X-rays excite n
diffracted X-ray beams all in Laue geometries.
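As a hedged illustration (interfaces and array conventions assumed), the plane-wave difference equation above has coefficients that are constant in z, so a single Crank-Nicolson-type step matrix can be built once and reused for all layers:

```python
import numpy as np

def propagate_depth(D0, G, T_z, n_steps):
    # Integrate dD''/dz = G D'' from z = 0 to z = T_z with the implicit
    # midpoint (Crank-Nicolson-type) rule used by the difference equation above.
    # D0 : boundary values at the entrance surface, length 2n
    # G  : 2n x 2n matrix collecting the xi'', beta' and chi*C terms
    dz = T_z / n_steps
    I = np.eye(G.shape[0], dtype=complex)
    step = np.linalg.solve(I - 0.5 * dz * G, I + 0.5 * dz * G)
    D = np.asarray(D0, dtype=complex)
    for _ in range(n_steps):
        D = step.dot(D)
    return D
```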
§.§ Numerical method to solve the n-beam E-L theory
After substituting (<ref>)
into matrix 𝐀 of (<ref>),
the kth (k ∈{ 1, 2, ⋯, 2n })
eigenvalue ξ_k and eigenvector 𝒟_k
can be obtained by using subroutine libraries,
e.g. LAPACK.
In this way,
the wavevector and the amplitude ratios
of the kth Bloch wave are obtained;
the qth (q = 2 j + m + 1) element of the eigenvector refers to the jth wave with polarization state m.
On the other hand, the mixing ratios of the 2n Bloch waves
should be calculated such as to
satisfy the boundary condition.
After making 2n × 2n matrix 𝒟
whose element in the qth row
and kth column is 𝒟_q, k
(=𝒟_j, k^(m)),
the following equation is obtained:
𝒟 α^(0)
= ( 1, 0, 0, ⋯, 0, 0 )^T,
𝒟 α^(1)
= ( 0, 1, 0, ⋯, 0, 0 )^T.
The above equations
(<ref>) and (<ref>)
are the boundary conditions that should be given
for the entrance surface
when an X-ray beam whose polarization state
is l (l ∈{ 0, 1 }) is incident
on the crystal.
These equations can be solved
to obtain the kth element α_k^(l)
of the column vector α^(l).
These are mixing ratios of kth Bloch waves.
Then,
the amplitudes 𝒟_j^(l, m)(exit)
[ =𝒟_q^(l)(exit) ]
of the jth numbered X-ray beam whose polarization state is m,
can be obtained as follows:
𝒟_j^(l, m)(exit)
= 𝒟_q^(l)(exit)
= ∑_k=1^2n α_k^(l) 𝒟_j, k^(m)
exp[ -i 2 πξ_k T_z].
Here, T_z is the thickness of the crystal.
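For illustration only (array conventions assumed, not prescribed by the text), the boundary-condition solve and the propagation of the Bloch waves to the exit surface can be written compactly as follows; the cross-section correction used for the rocking-curve intensities below is indicated in the trailing comment.

```python
import numpy as np

def exit_amplitudes(A, T_z, l=0):
    # Eigenvalue problem (<ref>), entrance boundary condition for incident
    # polarization l, and propagation of each Bloch wave to the exit surface.
    two_n = A.shape[0]
    xi, Dmat = np.linalg.eig(A)              # xi_k and eigenvectors as columns
    e_in = np.zeros(two_n, dtype=complex)
    e_in[l] = 1.0                            # (1,0,0,...)^T or (0,1,0,...)^T
    alpha = np.linalg.solve(Dmat, e_in)      # mixing ratios alpha_k^(l)
    phases = np.exp(-1j * 2.0 * np.pi * xi * T_z)
    return Dmat.dot(alpha * phases)          # D_q^(l)(exit), q = 2j + m + 1

# Rocking-curve intensity of the jth beam, including the cross-section factor:
#   I_j = |D_q^(l)(exit)|**2 * (n_z . s_j) / (n_z . s_0)
```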
The second terms β^(0) and β^(1) in the left side
of (<ref>)
are angular deviations
from the exact n-beam condition.
Two-dimensional rocking curves are obtained as
shown in Fig. <ref>
by calculating
| 𝒟_j^(l, m)(exit) |^2
(𝐧_z ·𝐬_j )
/
(𝐧_z ·𝐬_0)
as the intensities of the jth numbered X-ray beam.
(𝐧_z ·𝐬_j)
/
(𝐧_z ·𝐬_0)
is the correction term for taking into account the difference in
the area of the cross section for the X-ray beams.
j = 0 in the case of
Fig. <ref>.
When the jth numbered X-rays are reflected
in the Bragg geometry,
the boundary condition should be given to be
∑_k = 1^2nα_k^(l)𝒟_j,k^(m)
exp( - i 2 πξ_k T_z ) = 0
such that
the summation of the amplitudes is zero.
The papers by the present author and his coauthors
published in 2019
<cit.>
report pinhole topograph images
obtained by fast Fourier-transforming the solution
of the E-L theory.
This method was developed by
Kohn & Khikhlukha
<cit.>
and by Kohn
<cit.>.
They reported computer-simulated pinhole topographs
for a symmetric six-beam case.
The present author and his coauthors extended this method
such as to deal with a 18-beam case in which
18 reciprocal lattice nodes do not exist on
an identical circle in the reciprocal space.
Here, let the location vector on the exit surface
𝐫_exit be described as follows:
𝐫_exit = x_exit 𝐞_x
+ y_exit 𝐞_y
+ T_z 𝐧_z.
Further, in reference to Fig. <ref>,
the following equation is obtained:
P_1^''La_0
= Δk_x 𝐞_x + Δk_y 𝐞_y.
The X-ray amplitude D_j^(m)(x_exit, y_exit)
to be calculated by using the fast Fourier transform (FFT)
modulates the wave of
exp( - i 2 π
La_0 H_j·𝐫)
𝐞_j^(m).
Considering that
this amplitude is the coherent superposition of
𝒟_j^(m) (Δ𝐤),
the next equation can be obtained:
D_j^(m)(x_exit, y_exit) exp(
-i 2 πLa_0H_j ·𝐫_exit
)
= ∫_Δ𝐤^D.S.
𝒟_j^(m) (Δ𝐤)
exp[
-i 2 π(
P_1^'P_1^''
+ P_1^''La_0
) ·𝐫_exit
]
exp(
-i 2 πLa_0H_j ·𝐫_exit
) dS.
Here, let 𝒟_j^'(m)(Δ k_x, Δ k_y)
be as follows:
𝒟_j^'(m)(Δk_x, Δk_y)
= ∑_k=1^2n α_k^(l)
𝒟_j, k^(m)(Δ𝐤)
exp(
-i 2 πξ_k^''' T_z
),
where, P_1, k^'P_1^''
= ξ_k^''' 𝐧_z.
The summation ∑_k=1^2n in (<ref>)
has been extracted from
the integration on the right side of (<ref>).
By substituting (<ref>)
into (<ref>)
and considering (<ref>) and (<ref>),
the following equations can be obtained:
D_j^(m)(x_exit, y_exit)
= ∫_Δ𝐤^D.S.
∑_k = 1^2n α_k^(l)
𝒟_j,k^(m)(Δ𝐤)
exp(
-i 2 πξ_k^''' T_z
)
exp(
-i 2 πP_1^''La_0
·𝐫_exit
) dS
= ∫_Δk_x ∫_Δk_y
𝒟_j^'(m)(Δk_x, Δk_y)
exp[
-i 2 π(
Δk_x x_exit + Δk_y y_exit
)
] d Δk_y
d Δk_x.
The above equation (<ref>)
is a standard two-dimensional Fourier transform.
The X-ray amplitude D_j^(m)(x_exit, y_exit)
can be obtained by fast Fourier-transforming
𝒟_j^'(m)(Δ k_x, Δ k_y)
defined by (<ref>).
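A schematic sketch of this last step is given below (illustrative only; the sign and normalization conventions of the discrete FFT, and any frequency shifts, must be matched to the actual sampling of the dispersion surfaces).

```python
import numpy as np

def topograph_from_amplitudes(D_prime):
    # D_prime[j, m, kx, ky] : D'_j^(m)(dk_x, dk_y) of (<ref>) on a uniform grid.
    # The two-dimensional Fourier transform of (<ref>) gives the real-space
    # amplitudes D_j^(m)(x_exit, y_exit) on the exit surface.
    return np.fft.fftshift(
        np.fft.fft2(np.fft.ifftshift(D_prime, axes=(-2, -1)), axes=(-2, -1)),
        axes=(-2, -1))

# Topograph intensity of the jth image with polarization m:
#   I = np.abs(topograph_from_amplitudes(D_prime)[j, m]) ** 2
```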
§ EXPERIMENTAL
§.§ Phase retarder system
The experiments to obtain
4-,
5-,
6- and
8-beam X-ray pinhole topographs
were performed at BL09XU of SPring-8
by using the synchrotron X-rays.
They were monochromatized to 18.245 keV
by using the water-cooled diamond monochromator.
The polarization state of the incident X-rays
was controlled by using
`the rotating four-quadrant phase-retarder system'
<cit.>.
There were earlier stages before
developing this polarization control system,
i.e.
`the two-quadrant X-ray phase retarder system'
to compensate for the off-axis aberration
<cit.>
and `the four-quadrant X-ray phase retarder system'
to compensate for both
the off-axis and chromatic aberrations
<cit.>.
These were invented, designed and manufactured
by the present author.
The experimental evaluations were performed
by the present author and Ueji, obtaining
excellent results at the Photon Factory of KEK.
These systems were further improved
as shown in Figs. <ref>
and <ref>
such as to rotate around the optical axis to
generate arbitrary polarized X-rays.
The transmission-type phase retarder
<cit.>
was an innovative polarization-control system.
This can generate a more uniform phase shift between
σ- and π-polarized X-rays
than the reflection-type phase retarders
<cit.>.
Nevertheless, there was a problem
of inhomogeneity (aberrations) of the phase shift
owing to the angular divergence and energy spread of
the incident X-rays.
However, a still more homogeneous phase shift
can be obtained by overlapping the phase retarder crystals
such that their planes of incidence
are inclined by 45^∘ and 225^∘
from the horizontal plane
(two-quadrant system)
<cit.>
and by 45^∘, 135^∘, 225^∘
and 315^∘
(four-quadrant system)
<cit.>.
The two- and four-quadrant phase retarder systems
are particularly effective in the high-energy region
since the large total thickness of the diamond crystals
can decrease the residual non-uniformity of the phase shift between
σ- and π-polarized X-rays.
Fig. <ref> is
a schematic drawing of the phase retarder system.
Fig. <ref>
is its photograph.
This system consists of PR_n (n ∈{ 1,2,3,4 }).
These are [1 0 0]-oriented
diamond crystals whose thicknesses are
1.545, 2.198, 1.565 and 2.633 mm,
used in the vicinity of the angles
that give the 1 1 1 reflection
in an asymmetric Laue geometry.
How to control this system
has been described in detail
in the paper published in 2006
by the present author and his coauthors
<cit.>.
In the three-, 12- and 18-beam pinhole topograph experiments,
the horizontally polarized monochromatized synchrotron X-rays
were incident on the crystal
without using the phase retarder system.
The photon energy for the three-beam case
was 18.245 keV.
That for 12- and 18-beam cases was 22.0 keV.
§.§ Silicon crystals used as sample crystals
and their position and angle adjustment
Fig. <ref> is a reproduction
of Fig. 7
of the paper published by the present author
and his coauthors in 2006
<cit.>.
The sample crystals used in the n-beam pinhole topograph experiment
for n ∈{ 3,4,5,6,8,12,18 }
were [1 1 1]-oriented floating zone silicon crystals
with high purity and high resistivity.
The thicknesses of the sample crystals
were 10.0 mm in 12- and 18-beam cases and
9.6 mm in the other cases.
The sample crystals were mounted on a goniometer
that has four axes of
χ, ϕ, ω and θ.
Their angles were controlled as shown
in Fig. <ref>.
0 0 0-forward diffracted and two reflected beam intensities
were monitored with PIN photodiodes.
The angles of the goniometer were controlled
such that these intensities were maximized.
The positions of the diodes were adjusted such that
the laser beam was incident on the detector.
Before that, the rotation angles of the axes of
the goniometer had been calculated such as to
reflect the laser beam in the direction of
the reflected X-rays.
Without this procedure, it would be impossible to adjust the positions of the diodes,
whose detection area was 15 × 15 mm,
such that the reflected X-rays were incident on them.
The dimension of the incident X-ray beam was set to be
25 × 25 μm with a four-quadrant slit system
placed upstream of the phase retarder system.
The n pinhole topograph images of forward-diffracted
and reflected X-rays
were simultaneously taken on the imaging plate
set behind the sample crystal.
§ RESULTS OF THE EXPERIMENT AND COMPUTER-SIMULATION
§.§ Three-beam case
Figs. <ref> [E(a)] and
<ref> [S(a)] are the
experimentally obtained and computer-simulated images
of 0 0 0-forward-diffracted,
0 4 4-reflected and
4 0 4-reflected X-ray topographs
<cit.>.
Figs. <ref> [E(b)] and
<ref> [S(b)]
are enlargements of 0 4 4-reflected X-ray images of
Figs. <ref> [E(a)] and
<ref> [S(a)].
Fine fringe regions (FFR(1)), (FFR(2)) and
Y-shaped bright region (YBR) indicated by arrows
in Fig. <ref> [S(b)]
are found also in
Fig. <ref> [E(b)],
which shows the excellent agreement between
the computer-simulated and experimentally obtained results.
§.§ Four-beam case
Figs. <ref> [E(x)], <ref> [S(x)]
(x ∈{ a,b,c })
are experimentally obtained and computer-simulated images
of 0 0 0-forward diffracted,
6 2 4-,
6 2 8- and
0 6 6-reflected X-rays.
(a), (b) and (c)
are different from each other
in polarization state of the incident X-rays.
These were obtained
for (a): +45^∘-inclined linear polarization,
(b): -45^∘-inclined linear polarization and
(c): right-screwed circular polarization
when viewed from the downstream direction.
Figs. <ref> [E(x)] and
<ref> [S(x)]
(x ∈{ a,b,c }) are
enlargement of 6 2 8-reflected X-ray images
in Figs. <ref> [E(x)]
and <ref> [S(x)]
<cit.>.
Fine Fringe Region (FFR(1)) can be found both in
Figs. <ref> [E(a)] and
<ref> [S(a)].
Fine Fringe Region (FFR(2))
can be found both in
Figs. <ref> [E(x)] and
<ref> [S(x)]
(x ∈{ a, b, c }).
Sharp lines [Knife Edge Line (KEL)]
are found in all figures.
These lines found in
Figs. <ref> [E(a)] and
<ref> [S(a)]
are dark
in comparison with
Figs. <ref> [E(b)] and
<ref> [S(b)].
In the cases of
Figs. <ref> [E(c)] and
<ref> [S(c)]
intensities of these lines are intermediate between
the cases of (a) and (b).
[Pattern like Fish Bone (PFB)],
[Arched Line (AL)] and
[Bright Region (BR)]
are not found
in Figs. <ref> [E(a)] and
<ref> [S(a)].
However, they are found in the cases of
[E(b)], [S(b)], [E(c)] and [S(c)].
It has been clarified that
the computer-simulated and experimentally obtained
pinhole topograph images coincide with each other
when the polarization state used in the experiment
and that assumed in the computer simulation
agree with each other.
The intensity ratio between the horizontally and vertically polarized
X-rays is the same for
(a), (b) and (c).
However, there are differences in phase
between the amplitudes of the horizontally and vertically polarized
X-rays.
This difference in phase causes
the distinct difference in the topograph images.
Figs. <ref> (a),
<ref> (b) and
<ref> (c)
are rocking curves of the forward-diffracted X-ray intensity
calculated based on the E-L theory
under the assumptions of different polarization states
of the incident X-rays:
(a) +45^∘- and
(b) -45^∘-inclined linear polarization and
(c) right-screwed circular polarization.
Δω and Δϕ
are angular deviations
around axes of [2 1 1] and
[0 1 1] directions, respectively,
from the exact four-beam condition.
Enhancements of the X-ray intensities are found at the regions
indicated as `6 2 4',
`6 2 8' and `0 6 6'.
These were caused by the Borrmann effect
(anomalous transmission)
<cit.>.
Where these enhanced regions cross each other to satisfy
the four-beam condition,
further enhancement of the forward-diffracted intensities
is found owing to the super-Borrmann effect
<cit.>.
The enhancement of `6 2 8' found in
Fig. <ref> (a)
is relatively small in comparison with
that found in
Fig. <ref> (b).
In Fig. <ref> (c),
the situation is intermediate between them.
The Bragg angle of 6 2 8 reflection
for X-rays of 18.245 keV is 39.64^∘.
Then, the polarization factor for π polarization
[cos(2 × 39.64^∘)]
is calculated to be 0.186, which is a relatively small value
in comparison with that for σ polarization.
Then, -45^∘-inclined linear polarization for
6 2 8 reflection
is almost π polarization, from which
the relatively small enhancement of 6 2 8
can be explained.
The intensity of KEL depends on the polarization state
of the incident X-rays for the same reason.
§.§ Five-beam case
There are cases in which
five reciprocal lattice nodes simultaneously exist on a circle
in the reciprocal space
as shown in Fig. 1 of the paper published by the
present author and his coauthors
<cit.>.
Fig. <ref> [E(a)] and
<ref> [S(a)] are
experimentally obtained and computer-simulated
five-beam pinhole topographs.
Vertically linearly polarized X-rays,
converted from the horizontally linearly polarized
synchrotron X-rays, were incident on the sample crystal.
Figs. <ref> [E(b)] and
<ref> [S(b)] are enlargements of
5 5 5-reflected images
<cit.>.
[Knife Edge Line (KEL(1), KEL(2))] and
[Harp-Shaped Pattern (HpSP)] can be found
both in the experimentally obtained and computer-simulated images.
The directions of KEL(1) and KEL(2) found in
Figs. <ref> [E(b)] and <ref> [S(b)]
are parallel to the direction connecting
the topograph images of the 5 5 5- and 3 3 3-reflected X-rays.
This suggests the energy exchange between
5 5 5-reflected and
0 0 0-forward diffracted X-rays and between
5 5 5- and
3 3 3-reflected X-rays.
Similar [Knife Edge Line (KEL)] can be found
also in three-, four-, six- and eight-beam topographs.
§.§ Six-beam case
In the six-beam case reported by
the present author and his coauthors
<cit.>,
the topograph images were regular hexagons.
However, in the six-beam case
described in the present section,
the topograph images are not regular hexagons.
Fig. <ref> shows
experimentally obtained and computer-simulated topographs
with the incidence of horizontally polarized X-rays
used in the experiment and assumed in the computer simulation
<cit.>.
Figs. <ref> [E(b)] and
<ref> [S(b)] are
enlargements of topographs of 0 6 6- and 2 6 4-reflected X-rays
in Figs. <ref> [E(a)] and
<ref> [S(a)].
[Knife Edge Line (KEL(1)) and (KEL(2))] and
[Heart-Shaped Pattern (HSP)]
can be found both in images
experimentally obtained and computer-simulated.
In the cases of six-beam pinhole topographs
whose shapes are regular hexagons,
circular patterns are found that suggest the existence of
a cone-shaped path of energy flow.
However, such circular patterns cannot be found
in Fig. <ref>.
§.§ Eight-beam case
Fig. <ref> shows
eight-beam pinhole topographs whose reflection indices
are as shown in
Fig. <ref> [S_h(T-T)]
<cit.>.
Fig. <ref> [E_x]
(x ∈{ h, v })
show the experimentally obtained pinhole topographs
obtained with the incidence of horizontally polarized (x=h) and
vertically polarized (x=v) X-rays
<cit.>.
The horizontally polarized X-rays were obtained
not by removing the phase-retarder crystals from the X-ray path
but by reversing the sign of the phase shift given by
the diamond crystals that reflect the incident X-rays
in the odd-numbered-quadrant directions relative to
that given in the even-numbered-quadrant directions.
Figs. <ref> [S_x(T-T)] are
computer-simulated pinhole topographs for the incidence of
horizontally polarized (x=h) and vertically polarized X-rays
obtained by solving
the n-beam T-T equation (T-T simulation).
However, Fig. <ref> [S_x(E-L)]
are obtained by fast Fourier-transforming the calculated X-ray amplitudes
based on the E-L theory (E-L&FFT simulation).
Fig. <ref> (a) shows
the geometrical relation of the crystal shape and
the X-ray path.
The E-L&FFT simulation was performed under the assumption
of geometry as shown in
Figs. <ref> (b) and
<ref> (c) at first.
Then, after removing the parts of
α_2 and β_2,
the parts of (α_1) and (β_1)
were calculated separately.
Figs. <ref> (α_1) and
<ref> (β_1)
were linked to obtain
Fig. <ref>
[S_v(E-L)].
The downward surface normal of the crystal
in the cases of
Figs. <ref> (b) and
<ref> (c),
are mutually perpendicular to each other.
The important factors when considering the E-L&FFT simulation
for such a complex geometry as shown in
Fig. <ref> (a)
are that
the plane waves composing the incident X-rays should be in phase
at the incidence point on the crystal.
Further, the distances of the incidence point of X-rays
from the edges of the crystal
should be strictly measured, i.e.
16.5 mm horizontally and 9.6 mm vertically
[see Figs. <ref> (a), (b) and (c)].
Detail of the E-L&FFT simulation
has been described in a paper published in 2019
<cit.>.
Figs. <ref> [S_x(T-T)],
<ref> [E_x] and
<ref> [S_x(E-L)]
(x ∈{ h, v })
are enlargement of topograph images of 0 0 0-forward diffracted
X-rays in Figs. <ref> [S_x(T-T)],
<ref> [E_x] and
<ref> [S_x(E-L)], respectively.
In Fig. <ref> [E_h],
which was experimentally obtained,
a [Harp-Shaped Pattern (HpSP)] and
a [Nail-Shaped Pattern (NSP)] are found.
These patterns are found also in
Figs. <ref> [S_h(E-L)],
and <ref> [S_h(T-T)].
[Knife Edge Line (KEL)] as found in
Fig. <ref> [S_h(T-T)]
is not found in
Figs. <ref> [E_h] and
[S_h(E-L)].
When calculating [S_h(T-T)],
a non-zero amplitude of the incident X-rays only at
the incidence point was given as the boundary condition,
i.e. a Dirac delta function.
This boundary condition means the assumption that
X-rays with infinite angular divergence
are incident on the crystal.
KEL is a sharp line that needs plane-wave components
whose directions of propagation deviate far from the
n-beam condition.
This boundary condition was given in the computer simulation
to obtain [S_h(T-T)].
However, the angular divergence of the incident X-rays
actually used in the experiment
to obtain the image of [E_h] is finite.
When calculating [S_h(E-L)],
non-zero amplitudes were given only to the incident plane-wave components
whose angular deviations from the exact n-beam condition lie within a finite range.
This is considered to be the reason why
KEL is not observed in [E_h] and [S_h(E-L)].
Also in Figs. <ref> [E_v],
<ref> [S_v(T-T)]
and <ref> [S_v(E-L)],
for which vertically polarized incident X-rays were used in the experiment
or assumed in the simulation, HpSP can be found.
However, its intensity is weak
in comparison with
[S_h(T-T)], [E_h] and [S_h(E-L)].
In this eight-beam case,
the time for the E-L&FFT simulation with
parallel calculation using 24 cores
was ∼8 minutes
(about 100 times as fast as the T-T simulation).
However, it cannot be concluded that
the E-L&FFT simulation is unconditionally superior
to the T-T simulation.
The calculation time of the E-L&FFT simulation
is constant not depending on the thickness of the crystal.
On the other hand,
that of the T-T simulation
is proportional to the third power of the crystal thickness
since the three-dimensional scanning
in the Borrmann pyramid is need for the T-T simulation.
Then, for the crystal whose thickness is 1.0 mm,
the T-T simulation is 10 times as fast as
compared with the E-L&FFT simulation.
§.§ Twelve-beam case
Before publishing the present paper,
the present author had recognized that
the largest number of reciprocal lattice nodes
that exist on a circle in the reciprocal space
is twelve for a silicon crystal.
He later noticed, however, that
sixteen reciprocal lattice nodes can exist on a circle
in the reciprocal space.
The sixteen-beam case is not described
in the present paper.
Figs. <ref> [E(a)] and
<ref> [S(a)]
show twelve-beam pinhole topographs
experimentally obtained and computer-simulated
based on the n-beam T-T equation
<cit.>.
The horizontally polarized synchrotron X-rays monochromatized
to be 22.0 keV
were directly incident on the silicon crystal.
The indices of reflections
are as shown in figures.
Figs. <ref> [E(b)] and
<ref> [S(b)]
are enlargements of 2 4 2-reflected images in
Figs. <ref> [E(a)] and
<ref> [S(a)].
[Very Bright Region (VBR)],
[`V-Shaped' Pattern (VSP)],
[Central Circle (CC)] and
[`U-Shaped' Pattern (USP)]
in Fig. <ref> [S(b)]
are also found in
the experimentally obtained image in
Fig. <ref> [E(b)].
§.§ 18-beam case
Figs. <ref> (a) and
<ref> (b)
are 18-beam pinhole topographs
experimentally obtained by using the synchrotron X-rays and
E-L&FFT-computer-simulated
<cit.>.
Fig. <ref> (a) was
experimentally obtained with the aim of taking
six-beam pinhole topograph images
with the synchrotron X-rays at 22.0 keV.
However, around the six topograph images
that were aimed at,
twelve additional images were found.
After careful consideration concerning the geometry
in the reciprocal space,
further twelve reciprocal lattice nodes
were found in the vicinity of the surface of the Ewald sphere.
The arrangement of 18 (= 6 + 12) reciprocal lattice nodes
can be drawn as shown in
Fig. <ref>.
In reference to this figure,
the distances of H_i
(i ∈{ 0, 1, 2, 3, 4, 5 }) and
those of H_j
(j ∈{ 6, 7,⋯, 17 })
from the Laue point La_0
are not the same.
Therefore,
dynamical changes of the 18-beam topograph images
were found by slightly changing the photon energy.
When the photon energy was assumed to be 21.98415 keV,
good agreement between the experimentally obtained
and the E-L&FFT-simulated pinhole topograph images
was found as shown in
Fig. <ref>.
To obtain the E-L&FFT-simulated pinhole topograph images,
(<ref>) can be solved
with the procedure described in
<ref>.
X-ray amplitude profiles when rotating the crystal
were calculated directly based on (<ref>).
Similarly to the eight-beam case,
it should be considered that
the plane-wave X-rays composing the incident X-rays
with the wave front of the delta function
are in phase at the incidence point of the X-rays.
It is clear from the reference to
Fig. <ref>
that the Borrmann pyramid as shown in
Fig. <ref> (b)
cannot be defined for this 18-beam case.
However, the T-T simulation
is not completely ineffective.
(<ref>) can be applied to
the n-beam case for plane-wave incidence.
The difference equation (<ref>)
derived from (<ref>) can be solved, and
the amplitude profile calculated by solving (<ref>)
can be fast Fourier-transformed
to obtain the n-beam topograph images;
this can be used to simulate the n-beam case
even when n = 18 as here
(T-T&FFT simulation).
Regarding this method,
a separate paper is in preparation.
§ CONCLUDING REMARKS
Fig. <ref>
is a glitch map (simultaneous reflection map)
calculated for silicon 2 2 0 reflection.
The abscissa is the photon energy (eV) of the X-rays.
The ordinate (ψ) is the rotation angle of the crystal
around [1 1 0] axis.
ψ = 0 when
𝐊_000×𝐊_220
is parallel to the [0 0 1] direction.
𝐊_000 and 𝐊_220
are the wavevectors of 0 0 0-forward diffracted and
2 2 0-reflected X-rays.
All the red curves found in the figure are glitches
owing to simultaneous reflections.
These are caused by reciprocal lattice nodes
other than 2 2 0
existing simultaneously on the surface of the Ewald sphere,
which breaks the two-beam condition.
Consideration of the glitches is important
when designing such X-ray optical devices as
monochromators, polarizers, analyzers and/or
phase retarders.
When scanning the photon energy, e.g.
by using the silicon 2 2 0 reflection,
the two-beam approximation is broken at photon energies
where reciprocal lattice nodes whose indices are other
than 2 2 0 exist on the surface of the Ewald sphere,
causing the glitches (defects).
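The glitch condition itself is easy to scan numerically. The following Python sketch is an illustration only (the lattice constant, index cutoff, tolerance and the ψ origin are assumptions, and absorption and reflection-width effects are ignored): it flags (E, ψ) points at which a reciprocal lattice node other than 0 0 0 and 2 2 0 lies on the surface of the Ewald sphere.

```python
import numpy as np
from itertools import product

A_SI = 5.4311    # silicon lattice constant in Angstrom (assumed value)

def si_allowed(h, k, l):
    # Diamond-structure extinction rule: all indices odd, or all even with h+k+l = 4n.
    if all(abs(i) % 2 == 1 for i in (h, k, l)):
        return True
    if all(i % 2 == 0 for i in (h, k, l)):
        return (h + k + l) % 4 == 0 and (h, k, l) != (0, 0, 0)
    return False

def has_glitch(E_keV, psi, hmax=8, tol=1e-4):
    K = E_keV / 12.3984                               # |K| = 1/lambda in 1/Angstrom
    g220 = np.array([2.0, 2.0, 0.0]) / A_SI
    ghat = g220 / np.linalg.norm(g220)
    u1 = np.array([1.0, -1.0, 0.0]) / np.sqrt(2.0)    # psi origin chosen arbitrarily here
    u2 = np.array([0.0, 0.0, 1.0])
    K_par = -0.5 * np.linalg.norm(g220)               # Bragg condition for 2 2 0
    K_perp = np.sqrt(K**2 - K_par**2)
    K0 = K_par * ghat + K_perp * (np.cos(psi) * u1 + np.sin(psi) * u2)
    for h, k, l in product(range(-hmax, hmax + 1), repeat=3):
        if (h, k, l) in [(0, 0, 0), (2, 2, 0)] or not si_allowed(h, k, l):
            continue
        g = np.array([h, k, l], dtype=float) / A_SI
        if abs(np.linalg.norm(K0 + g) - K) < tol * K:
            return True                               # a third node lies on the Ewald sphere
    return False
```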
In reference to Fig. <ref>,
it can be found that
the density of glitches is higher in the high-energy region
compared with the low-energy region.
When an energy-scanning experiment is done
by using such an X-ray optical device,
the value of ψ can be adjusted such that
there is no reciprocal lattice node breaking the
two-beam approximation in the vicinity
of the surface of the Ewald sphere.
In the energy range below 10 keV or so,
such energy-scan experiments are usually done
by eliminating the glitches in this way.
However, it becomes difficult to eliminate the glitches
whose density is proportional to the third power of the photon energy
in the energy range higher than 20 keV.
X-rays whose intensity is extremely high
are available at experimental stations of
third-generation synchrotron radiation sources.
However, the X-ray optical devices cannot be designed
only based on the two-beam dynamical diffraction theories
in such a high energy range
where the two-beam approximation is always broken.
It is considered to be necessary
to design the X-ray optical devices working in the high energy ranges
based on the n-beam X-ray dynamical theories
described by (<ref>)
and/or (<ref>).
Simultaneously, advanced techniques become necessary to control
two or three axes, e.g. θ and/or ψ,
of the goniometer on which
the crystal devices are mounted.
On the other hand,
the n-beam effect cannot be ignored also when
the two-beam approximation is always broken due to
the large size of the crystal lattice, e.g.
in protein crystal structure analysis.
Since the late 1980s,
the use of two-dimensional detectors has become general
and is becoming more sophisticated
in crystal structure analysis.
In general, many diffraction spots are found simultaneously:
several dozen even in the cases of small-molecule crystals and
several hundred in the cases of protein crystals.
In view of such situations,
it is difficult to consider that
the two-beam approximation is not broken.
Such cases, where the two-beam approximation is broken
even in small-molecule crystals,
are well known as the Renninger effect
<cit.>.
In the case of protein crystals,
it is rare that the reliability factor (R-factor) evaluated
after the determination of the molecular structure
is less than 10%.
Even if the R-factor is evaluated to be 10%,
it means a weighted discrepancy of up to 20%
between the X-ray diffraction intensities
experimentally observed and those calculated, based on the kinematical theory,
from the determined molecular structure.
The present author has a hypothesis
concerning the excessively large values of the R-factor for protein crystals:
this problem
is caused by the breakdown of the two-beam approximation
due to the large density of reciprocal lattice nodes
compared with the cases of small-molecule crystals.
If the crystal structure factor F_c(𝐡) for 𝐡 reflection
were estimated by using the n-beam theory taking into account
the reciprocal lattice nodes in the vicinity of the surface of the Ewald sphere
and compared with the experimentally measured values F_o(𝐡),
the R-factor might be decreased dramatically.
If that is the case for protein crystals,
the crystal structures (and the phase problem) for protein crystals
could come to be solved by using the n-beam dynamical theory
in place of the two-beam (kinematical) theory.
In Kato's book published in 1995,
there is a description [in Japanese] as follows:
when overviewing the history of X-ray diffraction in crystals,
the backbone of the dynamical theory was established
by Darwin (1914)
and by Ewald (1917)
just after the discovery of the phenomenon of X-ray diffraction by von Laue.
The kinematical diffraction theory could be felt safe to use
since its foundation
had been given by their dynamical theories.
The present author had a lot of respect for Kato,
who passed away in 2002.
However, Kato left the n-beam diffraction cases almost entirely untouched.
In 1997, the present author asked him about the reason.
He answered that he thought the solution of the dynamical theory
could not be obtained when Laue and Bragg geometries are mixed.
However, the present author had already obtained the numerical solution
of the T-T dynamical theory
for a three-beam case in which Laue and Bragg geometries were mixed.
When told about this,
he seemed to be confused.
Can it be said that we cannot feel safe to use
the kinematical theory based on the two-beam approximation when
that approximation is clarified to be broken?
In 1949, Lipscomb suggested
<cit.>
that the phase information of the crystal structure factor
can be extracted from the X-ray diffraction profile
in principle.
In the introduction of the famous article published by Colella in 1974
<cit.>,
the purpose of the study was to determine the phases
of the crystal structure factors
with reference to Lipscomb's article.
However, this has not been realized even today.
The phase problem in protein crystallography has been overcome
mainly by using heavy-atom replacement and/or
the method of replacing methionine (one of the 20 amino acids included in
protein molecules) with selenomethionine,
which has a selenium atom in place of sulfur
<cit.>,
based on the two-beam approximation.
The molecular replacement methods are usually used
when similar or partial structures of the molecule
have been determined
by using the above-mentioned phasing methods.
However, the phase determination of native protein crystals
using the anomalous dispersion of sulfur
is being investigated owing to its advantage of requiring no replacement.
When the molecular structures of protein crystals
are obtained based on the n-beam theory in the future,
the phases of the crystal structure factors might be determined only by
indexing the diffraction spots simultaneously recorded
on the two-dimensional detector.
It is impossible to realize the three-beam case
for protein crystals whose density of reciprocal lattice nodes
is extremely high.
However, the n-beam Ewald-Laue (E-L) dynamical theory as
described in (<ref>)
can deal with the cases where
n reciprocal lattice nodes that are not on a circle exist
in the vicinity of the surface of the Ewald sphere.
The 18-beam pinhole topography is such a case.
The 18-beam case, in which the reciprocal lattice nodes are not on a circle
in the reciprocal space but exist in the vicinity
of the surface of the Ewald sphere as shown in
Fig. <ref>,
has been computer-simulated, and the result agreed well
with the experimental one
as shown in Fig. <ref> (b).
The n-beam T-T equation described as (<ref>)
and/or (<ref>) has been derived by
Fourier-transforming the E-L theory (<ref>)
and/or (<ref>).
These can numerically be solved by taking into account
the existence of all reciprocal lattice nodes
in the vicinity of the surface of the Ewald sphere.
The present author is now developing the computer program
to solve the n-beam E-L theory described as (<ref>)
and n-beam T-T theory as (<ref>) and/or
(<ref>) for n ∼ 100.
It will be time-consuming.
The abilities of computers are rapidly being improved
in all respects: calculation speed, memory capacity and hard disk capacity.
The quantum computer may be realized in the future.
These situations are important when considering
the perspective of the n-beam theory.
The present author will continue to study on the computer simulation
of the n-beam X-ray diffraction
with the equivalence in mind between the E-L and T-T dynamical theories.
§ ACKNOWLEDGEMENT
The supercomputer systems `sumire', `kashiwa' and `sekirei'
of the Institute for Solid
State Physics, the University of Tokyo
and `TSUBAME 3.0' of Tokyo Institute of Technology
were used for the computer simulations.
The authors are indebted to Dr X.-W. Zhang of KEK Photon Factory,
and Dr T. Oguchi of SPring-8 Service Corporation
for their technical support in the
present experiments
and also to Professor Emeritus S. Kikuta
of The University of Tokyo
and Professor Emeritus H. Hashizume
of Tokyo Institute of Technology
for their encouragement and fruitful discussions regarding
the present work.
The experimental part of the present work
was conducted in cooperation with
Dr. Y. Yoda and Dr. Y. Imai of SPring-8 JASRI
and Dr. Y. Ueji of Rigaku Corporation.
§ FUNDING INFORMATION
The theoretical component and the computer simulations of the
present work were supported by the Nanotechnology Platform
Project (No. 12024046) of the Ministry of Education,
Culture, Sports, Science and Technology (MEXT), Japan.
The preliminary experiments were performed at
BL4A and BL15C of Photon Factory
and AR-NE3A of
Photon Factory AR under the approval of the Photon Factory
Program Advisory Committee
(Proposal Nos. 97G-179, 97G-180, 99S2-003,
2003G202 and
2003G203).
The main experiments were performed at
BL09XU of SPring-8 under the approval of the Japan
Synchrotron Radiation Research Institute (JASRI)
(Proposal No.
2002A 0499-NMD3-np,
2003B 0594-NM-np,
2004A 0330-ND3c-np,
2004B 0575-ND3c-np,
2005B 0714 and
2009B 1384)
| The theory that describes the behavior of X-rays
diffracted in a perfect or quasi-perfect crystal
is called the dynamical theory.
Just after the discovery of the X-ray diffraction
by von Laue,
basic theories were given by
Darwin
<cit.>
and by Ewald
<cit.>.
The most widely known dynamical theory
is the Ewald-Laue (E-L) theory that has been derived by
applying the two-beam approximation to
the fundamental equation given by von Laue
<cit.>.
There are several textbooks that describe the dynamical theory
<cit.>.
However, the Takagi-Taupin (T-T) equation
that has been derived by Takagi
<cit.>
was accepted as another form of the dynamical theory.
It can deal with the X-ray wave field
in a distorted crystal.
Various images of the crystal defects
were computer-simulated based on the T-T equation
<cit.>.
Incidentally, it can easily be understood
that n reciprocal lattice nodes (n ≥ 3)
can exist on the surface of the Ewald sphere
by rotating the crystal
around the axis of H_0 H_1.
Here, H_0 is the origin of the reciprocal space
and H_1 is the reciprocal lattice node that causes
the H_0 H_1 reflection.
X-ray intensity measurement taken by rotating the crystal around
the H_0 H_1 axis
is called the Renninger scan
<cit.>.
While silicon, diamond and/or germanium crystals
are usually used as the monochromator
in the energy scan of X-ray spectroscopy,
discontinuities of the X-ray intensity
are frequently found and referred to as glitches.
When scanning the photon energy of X-rays by
rotating the monochromator crystal,
the radius of the Ewald sphere changes.
Then, a third reciprocal lattice node
other than the origin of the reciprocal space H_0
and reciprocal lattice node H_1
giving the primary reflection
causes the glitch when it exists on the surface
of the Ewald sphere.
Let us refer to the cases in which
n reciprocal lattice nodes H_0, H_1, H_2,
⋯, H_n-1 exist on the surface of the Ewald sphere
as the n-beam cases.
The E-L two-beam dynamical diffraction theory
was extended such as to deal with the n-beam cases
in 1965-1968
<cit.>.
The numerical method to solve the theory was given by Colella in 1974
<cit.>.
However,
the extension of the T-T equation
was delayed for many years
due to the complexity
when dealing with the polarization effect of X-rays.
The three-beam T-T equation that neglected the polarization effect
was given by Thorkildsen in 1987
<cit.>.
The T-T equation that takes into account the polarization factor
was for the first time given by Larsen and Thorkildsen in 1998
<cit.>.
The present author reported
the T-T equation extended to the n-beam cases
for n ∈{3, 4, 6, 8, 12} in 2003
<cit.>.
The numerical method to solve it
and experimental verification by using the synchrotron radiation
were given by the present author and coauthors
for a six-beam case
<cit.>.
The computer-simulated and experimentally obtained results
to be compared with each other
for the n-beam cases
were reported
for n ∈{3, 4, 5, 6, 8, 12} in 2012 by Okitsu, Imai and Yoda
<cit.>.
Excellent agreement was found between the computer-simulated
and the experimentally obtained pinhole topographs.
Between the E-L and the T-T dynamical theories,
there is a simple relation described by the Fourier transform
that has been implicitly recognized but
has been explicitly described for the first time in 2012
<cit.>.
It can be recognized that
this situation delayed the extension of the T-T equation to the n-beam cases
in comparison with the E-L dynamical theory.
In the present paper,
the n-beam E-L theory is derived
from Laue's fundamental equation of the dynamical theory
<cit.>
at first.
Then, the n-beam T-T equation is derived by
Fourier-transforming the n-beam E-L theory.
The equivalence between the n-beam E-L and T-T dynamical theories
is explicitly described
also for an arbitrary number n.
The n-beam dynamical diffraction phenomena of X-rays
can be described by both the T-T and E-L theories
and be numerically solved.
The numerical methods to solve these theories have
advantages and disadvantages when compared with each other.
The present author considers that
they should be used depending on the purpose,
with this recognition in mind.
Authier's book describing the dynamical theory
of X-ray diffraction
<cit.>
is recognized to be the most widely read textbook on the subject,
with over 500 pages.
However, it devotes only 24 pages to the description
of n-beam diffraction.
In Pinsker's book
<cit.>,
descriptions concerning the n-beam E-L theory
are found.
These have been reviewed by
Weckert and Hümmer
<cit.>
and Colella
<cit.>. | null | null | null | null | null |
http://arxiv.org/abs/2409.17787v1 | 20240926122958 | Weak coupling asymptotics for the Pauli operator in two dimensions | [
"Matthias Baur"
] | math.SP | [
"math.SP",
"math-ph",
"math.MP",
"81Q15, 81Q10"
] |
Institute of Analysis, Dynamics and Modeling, Department of Mathematics, University of Stuttgart, Pfaffenwaldring 57, 70569 Stuttgart, Germany
[email protected]
§ ABSTRACT
We compute asymptotic expansions for the negative eigenvalues of the Pauli operator in two dimensions perturbed by a weakly coupled potential with definite sign. Whereas previous results were limited to the case of radial magnetic fields and potentials, we are able to drop the assumption of radial symmetry entirely.
Weak coupling asymptotics for the Pauli operator in two dimensions
Matthias Baur
==================================================================
§ INTRODUCTION AND MAIN RESULTS
§.§ Introduction
Given a magnetic vector potential A∈ L^2_loc(^2;^2) and its associated magnetic field B=curl A, we consider the two-dimensional Pauli operator
P(A) = [ P_+(A) 0; 0 P_-(A) ], P_±(A) = (i∇ +A)^2 ± B,
acting on L^2(^2 ; ^2). It models a spin-1/2 fermion interacting with a magnetic field perpendicular to the plane and is obtained as the non-relativistic limit of the Dirac operator, see Thaller <cit.> for more details. The operators P_±(A) on the diagonal are the spin-up and spin-down components of the Pauli operator and defined via closure of the quadratic forms
∫_^2 |(i∇ +A)u|^2 ± B|u|^2 x, u∈ C_0^∞(^2).
Under suitable decay and regularity conditions on B, the Pauli operator is essentially self-adjoint on C_0^∞(^2; ℂ^2) and σ(P(A)) = σ_ess(P(A)) = [0,∞), see for example Cycon et al. <cit.> and Avramska-Lukarska et al. <cit.>. The spectrum for each of the spin-components P_±(A) is also [0,∞).
For B∈ L^1(^2), let
α := 1/2π∫_^2 B(x) x < ∞
be the normalized magnetic flux. The Aharonov-Casher theorem <cit.> states that P(A) has a zero eigenvalue if |α| > 1. Its multiplicity is
n = #{ k ∈ℕ_0 : k< |α|- 1 } = ⌊ |α| ⌋, α∈∖ℤ,
|α|- 1, α∈ℤ∖{0 }.
Here, ⌊ . ⌋ denotes the floor function. If α > 1, then the zero eigenvalues originate purely from the spin-down component P_-(A) while the spin-up component P_+(A) does not exhibit a zero eigenvalue. The opposite holds if α < -1. In this case, P_+(A) has a zero eigenvalue of multiplicity n while P_-(A) has none. The zero eigenstates of P(A) are commonly called Aharonov-Casher states.
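As a quick illustration of the multiplicity formula (the numbers serve only as an example): for the non-integer flux α = 5/2 one counts n = #{ k ∈ℕ_0 : k< 3/2 } = 2 = ⌊ 5/2 ⌋, whereas for the integer flux α = 3 one obtains n = 3-1 = 2. Thus at integer flux the multiplicity agrees with the value for flux slightly below the integer and is one less than for flux slightly above it.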
In the following, we consider perturbations of the Pauli operator of the form
H(ε) = P(A) - ε𝒱, ε>0,
where
𝒱 = [ V_1 0; 0 V_2 ]
and V_1, V_2 denote multiplication operators with real-valued potentials that are suitably regular and fast decaying. As usual in the literature, we denote the potentials also with V_1, V_2. For easier presentation, we will restrict our discussion to the case V_1 =V_2 =V. Physically, this case corresponds to a perturbation with a small electric field. Note however that since the analysis that follows treats both diagonal components of H(ε) separately, it is straightforward to state our results also for two potentials V_1 and V_2 that do not coincide.
For a wide class of potentials, the essential spectrum of the perturbed Pauli operator H(ε) remains that of P(A), see <cit.>. If the perturbation is attractive, then one expects that negative eigenvalues emerge from the bottom of the essential spectrum of the unperturbed Pauli operator, since for sufficiently small coupling parameter ε, the interaction between potential and the zero eigenstates pushes the zero eigenvalues down. In the case where ε is small, the potential V is usually called ”weakly coupled”.
Weakly coupled potentials are indeed physically relevant. The interaction between spin of an electron and a magnetic field is characterized by the gyromagnetic ratio of the magnetic moment g. While g=2 for particles described by the unperturbed Pauli operator, the experimentally measured gyromagnetic ratio of electrons exhibits an anomaly which slightly shifts the g-factor to g ≈ 2.0023. This shift can be explained by QED corrections. In the 1990s, various authors observed that the presence of a magnetic field together with any anomalous magnetic moment g>2 allows binding of electrons. The anomalous magnetic moment of electrons was taken into account by adding small perturbations ε V_1,2 = ±1/2(g-2) B to the Pauli operator. We briefly review selected results.
In 1993, Bordag and Voropaev <cit.> established the existence of n+1 bound states for α∈ℝ∖ℤ in three explicitly solvable models. Also relying on an explicit model, Cavalcanti, Fraga and de Carvalho <cit.> later discussed the case of a magnetic field that is constant on a disc and zero outside. The first step towards general magnetic field profiles was done by Bentosela, Exner and Zagrebnov <cit.>, who showed that magnetic fields with a rotational symmetry admit at least one bound state if the field is strong enough. Then, Cavalcanti and de Carvalho <cit.> were able to construct a suitable set of test functions to show the existence of at least
n' =
n+1, α∈∖ℤ,
n+2, α∈ℤ,
bound states with negative energy, assuming also rotational symmetry of the magnetic field and non-negative sign. Symmetry assumptions and assumptions on the sign of the magnetic field could be dropped in the following, see <cit.>.
Observe that the number of negative eigenvalues n' under weak perturbations is strictly larger than n, the number of zero eigenvalues of the unperturbed Pauli operator. The difference between n' and n is caused by so-called virtual bound states at zero or zero resonant states of P(A). If α≥ 0, P_-(A) exhibits one resp. two virtual bound states at zero. These are zero modes of P_-(A) that are in L^∞(^2) ∖ L^2(^2). The number of virtual bound states at zero of P_-(A) depends on whether the magnetic flux α is integer or non-integer. Similarly, if α≤ 0, then P_+(A) shows one or two virtual bound states at zero. Each virtual bound state at zero leads to one additional negative eigenvalue for the weakly coupled Pauli operator.
While the previously mentioned papers predominantly discussed the existence of bound states for certain weakly perturbed Pauli operators, it is natural to ask for approximate expressions for the bound state energies for a wider class of weakly coupled potentials. In this paper, we derive asymptotic expansions of the negative eigenvalues of H(ε) in the limit ε↘ 0, the weak coupling limit.
For Schrödinger operators -Δ - ε V, asymptotics for eigenvalues in the weak coupling limit were already discussed by several authors in the 1970s, see e.g. <cit.>. The free Laplacian does not have a zero eigenvalue but it has one virtual bound state at zero, the constant function, which leads to one negative eigenvalue in the weak coupling limit. For classical, non-relativistic Schrödinger operators in two dimensions, Simon <cit.> showed in 1976 that if ∫ V x >0, then for small enough ε, there exists one negative eigenvalue λ(ε) with
λ(ε) ∼ - exp(-4π( ∫_^2 V(x) x )^-1ε^-1), ε↘ 0.
His calculations are based on the Birman-Schwinger principle and the representation of the Birman-Schwinger operator as an integral operator using the integral kernel of (-Δ - λ)^-1. Following Simon's approach, Arazy and Zelenko <cit.> proved asymptotic expansions for negative eigenvalues of generalized Schrödinger operators (-Δ)^l - ε V in ^d for 2l ≥ d. Fractional Schrödinger operators have also been discussed <cit.>.
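To get a sense of the scale in Simon's formula, consider the purely illustrative choice ∫_^2 V(x) x = 1 and ε = 0.1: the predicted eigenvalue is of size exp(-40π) ≈ 10^-55, so the bound state produced by the two-dimensional virtual level is exponentially shallow in the coupling.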
Extraction of weak coupling asymptotics via Birman-Schwinger operators and explicit resolvent expressions could not be applied in the same way for the Pauli operator, since its resolvent is not as easily expressed by an integral kernel as the resolvent of the free Laplacian. This can been seen as a reason why methods applied in the setting of anomalous magnetic moments, see again <cit.>, to proof existence of negative eigenvalues instead relied on rather explicit models and variational principles.
Using the abstract Birman-Schwinger principle, Weidl <cit.> could compute the number of negative eigenvalues of H(ε) for small enough ε when B is bounded, compactly supported and the potential V is sufficiently regular. For non-negative potential V, his result guarantees the existence of exactly n' bound states in the weak coupling limit. However, his approach did not yield asymptotic expansions for the eigenvalues.
Frank, Morozov and Vugalter <cit.> were able to compute weak coupling asymptotics for the Pauli operator by further pushing the variational approach. They worked however under the rather restrictive assumption of radial, compactly supported B and radial, non-negative V. Here, the assumption of radial fields allowed decomposing the perturbed Pauli operator into several half-line operators which were subsequently treated variationally.
We will generalize the weak coupling asymptotics computed by Frank, Morozov and Vugalter to B and V that do not necessarily exhibit a radial symmetry. In contrast to Frank, Morozov and Vugalter, we return to the Birman-Schwinger principle approach and make use of asymptotic expansions of resolvents of the Pauli operator found in a recent paper by Kovařík <cit.>. We extract the eigenvalue asymptotics by iterated use of the Schur-Livsic-Feshbach-Grushin (SLFG) formula, see Lemma <ref>.
The paper is organised as follows: In the following, we will recall the notion of Aharonov-Casher states and state our main results. Asymptotic expansions for negative eigenvalues of H(ε) are given for three mutually exclusive cases: the zero-flux case, the non-integer flux case and the non-zero, integer flux case. In Section <ref>, we will set up more notation and list the resolvent expansions on which our calculations are based. Section <ref> will be concerned with the proofs of the main results. We give a proof for each of the three cases mentioned above.
§.§ Gauge and Aharonov-Casher states
The zero modes of the Pauli operator, the Aharonov-Casher states, play a central role in the following. To define them, we first need to set a gauge for the magnetic field B. Although the results presented here do not depend on the specific choice of gauge, we choose the gauge as in <cit.>, so that the results therein hold. Let
h(x) = - 1/2π∫_^2 B(y) log|x-y| y.
We set the canonical vector potential
A_h(x) = (∂_2h(x), -∂_1 h(x)).
Then, curl A_h = -Δ h = B, so A_h induces B. Furthermore, note that
h(x) = -αlog |x| + O(|x|^-1), |x| →∞.
This implies e^∓ h has polynomial growth (decay) as |x| →∞. This property of h will become important shortly. Finally, we make a gauge transformation and replace A_h by A = A_h + ∇χ, where χ is the transformation function χ: ^2 → constructed in <cit.>.
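For later use we spell out the growth property of e^∓ h (a one-line consequence of the expansion of h above):
e^∓ h(x) = e^±αlog |x| + O(|x|^-1) = |x|^±α(1+ O(|x|^-1)), |x| →∞,
so for α > 0 the factor e^- h grows like |x|^α while e^ h decays like |x|^-α.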
We now construct the Aharonov-Casher states, the zero eigenstates of P(A). For this, let N_± be the space of zero eigenstates of P_±(A) (in other words, the kernel of P_±(A)). It is easily verified that for v∈ C_0^∞(^2)
∫_^2 |(i∇ +A_h)(e^∓ h v)|^2 ± B|e^∓ h v|^2 x = ∫_^2 e^∓ 2 h |(∂_1 ∓ i ∂_2 )v|^2 x,
and it follows that any zero mode of P_±(A)=P_±(A_h + ∇χ), i.e. a solution of the equation P_±(A)u=0, must have the form u=e^∓ h + i χ v with v analytic in x_1± i x_2.
Suppose α > 0. A zero mode u is a zero eigenstate if u ∈ L^2(^2). Requiring u ∈ L^2(^2) forces v to be a polynomial of finite degree: since e^∓ h behaves like a fixed power of |x| at infinity, the entire function v can grow at most polynomially (in an averaged sense), and such a function is a polynomial by a Liouville-type argument. To determine the admissible degrees, note that if v is a polynomial of degree m, then
e^∓ h v = |x|^m ±α (1+o(1)), |x | →∞,
due to (<ref>), so the degree must satisfy m< ∓α -1. The spin-up component P_+(A) has therefore a trivial zero eigenspace, i.e.
N_+ = { 0 },
while the spin-down component P_-(A) has the zero eigenspace
N_- = span{e^ h(x) + iχ(x) ,(x_1 - i x_2)e^ h(x) + iχ(x) , ..., (x_1 - i x_2)^n-1 e^ h(x) + iχ(x) },
with n from (<ref>). We note that under enough regularity of B, the function e^h is non-zero almost everywhere. Hence, the states (x_1 - i x_2)^k e^ h(x) + iχ(x) are linearly independent and hence indeed dim(N_-)=n. Finally, the zero eigenstates of the full Pauli operator P(A) are given by
[ ψ^+; ψ^- ], ψ^±∈ N_±.
This is the statement of the Aharonov-Casher Theorem.
Virtual bound states at zero are zero modes of P(A) that are bounded but not in L^2(^2;ℂ^2). To define them, we need to find all bounded zero modes of P_±(A) first. Hence, let N_±^∞ denote the space of bounded zero modes of P_±(A). By the same arguments as above, we see that for u to be bounded, the function v must be a polynomial of degree m≤∓α. For α >0, this implies that the spin-up component P_+(A) does not have any bounded zero modes, while the spin-down component P_-(A) admits, in addition to its zero eigenstates, the bounded zero modes
(x_1 - i x_2)^n e^ h(x) + iχ(x)
if α∈ℝ∖ℤ or
(x_1 - i x_2)^n e^ h(x) + iχ(x) , (x_1 - i x_2)^n+1 e^ h(x) + iχ(x)
if α∈ℤ. Therefore,
N_+^∞ = { 0 } ,
N_-^∞ =span{e^ h(x) + iχ(x) ,(x_1 - i x_2)e^ h(x) + iχ(x) , ..., (x_1 - i x_2)^n e^ h(x) + iχ(x) }, α∈ℝ∖ℤ,
span{e^ h(x) + iχ(x) ,(x_1 - i x_2)e^ h(x) + iχ(x) , ..., (x_1 - i x_2)^n+1 e^ h(x) + iχ(x) }, α∈ℤ.
The situation is analogous in the case α <0. Here, P_-(A) has trivial zero eigenspace and no bounded zero modes, i.e. N_-=N_-^∞ = { 0 }, while P_+(A) has the n-dimensional zero eigenspace
N_+ = span{e^ - h(x) + iχ(x) ,(x_1 + i x_2)e^ - h(x) + iχ(x) , ..., (x_1 + i x_2)^n-1 e^ - h(x) + iχ(x) }
and the space of bounded zero modes
N_+^∞ =span{e^ - h(x) + iχ(x) ,(x_1 + i x_2)e^ -h(x) + iχ(x) , ..., (x_1 + i x_2)^n e^ -h(x) + iχ(x) }, α∈ℝ∖ℤ,
span{e^ -h(x) + iχ(x) ,(x_1 + i x_2)e^ -h(x) + iχ(x) , ..., (x_1 + i x_2)^n+1 e^ -h(x) + iχ(x) }, α∈ℤ.
A special case is the case α = 0. In this case, the operators P_±(A) both have no zero eigenstates, but both exhibit a bounded zero mode, given by e^∓ h(x) + iχ. Hence,
N_± = { 0 }, N_±^∞ = span{e^∓ h(x) + iχ(x) }.
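As a sanity check: if B ≡ 0, then h ≡ 0 and, up to the gauge phase e^iχ, the bounded zero mode of either spin component is just the constant function; this is precisely the virtual level of the free Laplacian in two dimensions that produces the weakly coupled bound state in Simon's result recalled earlier.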
Having determined the spaces of L^2-integrable and bounded zero modes of P_±(A), virtual bound states at zero of P_±(A) are simply all states in N_+^∞∖ N_+ resp. N_-^∞∖ N_-. As before, the bounded zero modes and virtual bound states at zero of the full Pauli operator are attained by composing the respective states ψ^±∈ N_±^∞ as in (<ref>).
In the following, we will often work either with the spin-up component P_+(A) or the spin-down component P_-(A). We will often use the term “Aharonov-Casher states“ synonymously for the above constructed zero eigenstates of the spin components P_±(A), i.e. the functions
ψ_k^±(x) = (x_1± ix_2)^k e^∓ h(x)+iχ(x),
where k=0,...,n_±-1, n_± = dim(N_±). Additionally, we will use the term “generalized Aharonov-Casher state“ for the states ψ_k^± with k=n and k=n+1 when they are virtual bound states.
§.§ Main results
The weak coupling asymptotics presented in the following require assumptions on the regularity and decay behaviour of the magnetic field B and the potential V.
For the magnetic field, we assume the following.
The magnetic field B:^2 → is continuous and satisfies
|B(x)| ≲ (1+|x|^2)^-ρ
for some ρ > 7/2.
This assumption is a sufficient condition for the validity of the resolvent expansions of the Pauli operator in Section <ref> that we apply in our proofs.
Additionally, for the potential, we make the following assumption.
The potential V:^2 → satisfies V ≥ 0, V> 0 on a set of positive measure and
V(x) ≲ (1+|x|^2)^-σ
for some σ > 3.
Note that this assumption implies V ∈ L^1(^2) and ∫_^2 V(x) x > 0.
We can assume that the magnetic flux satisfies α≥ 0. This is because P_±(- A) is unitarily equivalent to P_∓(A), so flipping the sign of B and hence of α is equivalent to exchanging the roles of P_+(A) and P_-(A). The corresponding results for α<0 thus follow similar as in the case α>0.
We present the asymptotics in three mutually exclusive cases: α = 0, α∈∖ℤ and α∈ℤ∖{ 0}. We begin with α = 0.
Let B:^2 → satisfy Assumption <ref> and V:^2 → satisfy Assumption <ref>. Assume α=0. Then for all sufficiently small ε > 0, the operator H(ε) has precisely two negative eigenvalues λ_0^±(ε) which satisfy
λ_0^±(ε) = -exp( - μ_± ^-1 ε^-1 (1+O(ε)) )
as ε↘ 0 with
μ_± = 1/4π∫_^2 V|ψ_0^± |^2 x.
Observe that if B≡ 0, then the free Pauli operator acts as two copies of the free Laplacian -Δ. In that case, |ψ_0^±|≡ 1 holds and (<ref>) becomes just two copies of Simon's weak coupling asymptotic (<ref>).
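Indeed, as a consistency check of the constants: inserting |ψ_0^±| ≡ 1 into the definition of μ_± gives μ_± = 1/4π∫_^2 V(x) x, so that
λ_0^±(ε) = -exp( -4π( ∫_^2 V(x) x )^-1ε^-1 (1+O(ε)) ),
which reduces to Simon's expansion recalled above.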
When the magnetic flux is positive, i.e. α>0, there exists at least one virtual bound state at zero, possibly a collection of zero eigenstates and, if α is an integer, a second linearly independent virtual bound state at zero.
We denote by P_0^- the projector onto the zero eigenspace of P_-(A). If the zero eigenspace is non-trivial, then V^1/2 P_0^- V^1/2 is a non-trivial, bounded, self-adjoint and non-negative linear operator on L^2(^2) of finite rank. It will be shown in Section <ref> that Assumption <ref> implies that the rank of V^1/2 P_0^- V^1/2 is equal to n, the dimension of the zero eigenspace of P_-(A), see (<ref>), and V^1/2 P_0^- V^1/2 has n positive eigenvalues (counted with multiplicities). Let Q denote the orthogonal projection onto (ran(V^1/2 P_0^- V^1/2))^⊥, the orthogonal complement of the range of V^1/2 P_0^- V^1/2. Also, let φ^- ∈ N_-^∞∖ N_- be a particular virtual bound state at zero that appears in Theorem <ref>. With these definitions, we can state the weak coupling asymptotics for α >0.
For positive, non-integer flux, we have the following theorem.
Let B:^2 → satisfy Assumption <ref> and V:^2 → satisfy Assumption <ref>. Assume α >0, α∈ℝ∖ℤ, set n= ⌊α⌋ and α'= α- ⌊α⌋. Then for all sufficiently small ε > 0, the operator H(ε) has precisely n+1 negative eigenvalues λ_0(ε), ..., λ_n(ε) which satisfy
λ_k(ε) = - μ_k ε (1 + O(ε^min{α',1-α'} )), k=0, ..., n-1,
λ_n(ε) = - μ_n ε^1/α' (1+ O(ε^min{1,1/α'-1} ))
as ε↘ 0. Here, {μ_k }_k=0^n-1 are the positive eigenvalues of V^1/2 P_0^- V^1/2 and μ_n is given by
μ_n = ( 4^α'-1Γ(α') /πΓ(1-α') ‖ Q (V^1/2φ^-) ‖_L^2(^2)^2)^1/α'.
Note that if 0< α < 1, then α'=α, n=0 and P_-(A) has a trivial zero eigenspace, i.e. P_0^- = 0 and φ^- = ψ_0^-. In that case, the second order term of λ_n(ε)= λ_0(ε) can be improved. The asymptotic expansion of λ_0(ε) is then given by
λ_0(ε) = - ( 4^α'-1Γ(α') /πΓ(1-α') ∫_^2 V |ψ_0^-|^2 x )^1/α'ε^1/α' (1+ O(ε ))
as ε↘ 0.
If the flux is positive and integer, let φ_1^-, φ_2^- ∈ N_-^∞∖ N_- be particular virtual bound states at zero that appear in Theorem <ref> and let Q be again the orthogonal projection onto (ran(V^1/2 P_0^- V^1/2))^⊥. Moreover, let Q̃ be the orthogonal projection onto (ran(V^1/2 P_0^- V^1/2)+ span{ V^1/2φ_2^- })^⊥ and let K be the linear operator also defined in Theorem <ref>. Then, we have the next theorem.
Let B:^2 → satisfy Assumption <ref> and V:^2 → satisfy Assumption <ref>. Assume α>0, α∈ℤ and set n = α -1. Then for all sufficiently small ε > 0, the operator H(ε) has precisely n+2 negative eigenvalues λ_0(ε), ..., λ_n+1(ε) which satisfy
λ_k(ε) = - μ_k ε(1+ O(|logε|^-1) ), k=0, ...,n-1,
λ_n(ε) = - μ_n ε/|logε|(1 + O( log|logε|/|logε|) )
λ_n+1(ε) = - exp( -μ_n+1^-1ε^-1(1 + O(ε) ) ),
as ε↘ 0. Here, {μ_k }_k=0^n-1 are given by the non-zero eigenvalues of V^1/2 P_0^- V^1/2 and μ_n, μ_n+1 are given by
μ_n = ‖ϕ_2 ‖_L^2(^2)^2/π, ϕ_2 = Q (V^1/2φ_2^-),
μ_n+1 = ⟨ϕ_1 , V^1/2 K V^1/2 ϕ_1 ⟩/‖ϕ_1 ‖_L^2(^2)^2, ϕ_1 = Q̃ (V^1/2φ_1^-).
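It may be worth pointing out that the three groups of eigenvalues live on well-separated scales: for small ε one has ε ≫ε/|logε| ≫exp(-μ_n+1^-1ε^-1), so the n eigenvalues generated by the zero eigenspace are the deepest, the eigenvalue generated by the virtual bound state φ_2^- is smaller in magnitude by a logarithmic factor, and the one generated by φ_1^- is exponentially small.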
The proofs of Theorems <ref>, <ref> and <ref> are given in Section <ref>. We conclude this section with several remarks.
In the case of radial B and V, our asymptotic expansions reproduce those found by Frank, Morozov and Vugalter in <cit.>, where the coefficients {μ_k } are given more explicitly in terms of the (generalized) Aharonov-Casher states {ψ_k^±} and the second order error terms are slightly improved. This can be explained by observing that the (generalized) Aharonov-Casher states are pairwise orthogonal in case of radial fields. The missing interaction between the (generalized) Aharonov-Casher states eventually leads to simpler expressions for the coefficients {μ_k } and less pollution of the eigenvalue asymptotics in second order. We refer the reader to the appendix for a detailed discussion of the case of radial fields.
The asymptotic expansion (<ref>) of λ_0(ε) for 0<α<1 coincides with that found by Frank, Morozov and Vugalter for radial fields, but (<ref>) holds for general, potentially non-radial B and V. The improvement of the second order error term from O(ε^min{1,1/α'-1} ) in (<ref>) to O(ε) in (<ref>) is again due to absence of interaction of (generalized) Aharonov-Casher states. In this case however, the absence of interaction between (generalized) Aharonov-Casher states has the trivial reason that there is only a single generalized Aharonov-Casher state.
We expect that the Assumptions <ref> and <ref> can be weakened. The stated regularity and decay behaviour of the magnetic field B are sufficient conditions for the resolvent expansions of the Pauli operator we cite. It is expected that the resolvent expansions hold under weaker assumptions on B (although B should at least be in L^1(^2)). Such weaker assumptions would then also be applicable here.
Moreover, we have assumed that the potential V is point-wise non-negative and fulfills ∫ V x > 0. For a sign-indefinite potential, the problem of computing weak coupling eigenvalue asymptotics remains open. One major difficulty comes from the fact that the number of negative eigenvalues that appear under weak coupling is then not necessarily equal to the number of Aharonov-Casher plus generalized Aharonov-Casher states, but instead given by the number of non-negative eigenvalues of the matrix
V̂ = (∫ V ψ_kψ_l x )_k,l=0^n or n+1 where {ψ_k }_k=0^n or n+1 are the (generalized) Aharonov-Casher states, see Weidl <cit.>. We think that if V̂ has only positive eigenvalues, the assumption that V is non-negative is not necessary and very similar asymptotic expansions should continue to hold for such sign-indefinite potentials V.
Clearly, our results must cease to hold as soon as V̂ exhibits a negative eigenvalue. When the sign of V is negative (i.e. a repulsive potential is coupled), recent results by Breuer and Kovařík <cit.> show the appearance of resonances. In the generic sign-indefinite case, we expect that H(ε) exhibits a mix of bound states as well as resonances with asymptotic expansions that not necessarily exhibit the same leading order behaviour as derived here or in <cit.>.
§ PRELIMINARIES AND NOTATION
§.§ Notation
Let X be a set and let f_1, f_2: X → be two functions. We write f_1(x) ≲ f_2(x) if there exists a constant c>0 such that f_1(x) ≤ c f_2(x) for all x∈ X. We write f_1(x) ≍ f_2(x) if f_1(x) ≲ f_2(x) and f_2(x) ≲ f_1(x).
For two Hilbert spaces H, H' let ℒ(H, H') denote the Banach space of bounded linear operators from H to H'. For brevity we also write ℒ(H) for ℒ(H, H).
For a subspace U⊂ H, let U^⊥ = H ⊖ U denote its orthogonal complement and let Q_U be the orthogonal projector onto U.
For a linear operator A∈ℒ(H) and two subspaces U,V ⊂ H we define A|_U→ V∈ℒ(U,V) by A|_U→ V : U → V, x ↦ Q_V Ax. This operator is the component of A that acts on U and maps into V.
In Section <ref>, we will recall resolvent expansions of the operators P_±(A) given in the topology of weighted L^2-spaces. For this purpose, let L^2,s(^2), s∈ℝ, denote the weighted L^2-space equipped with the norm
‖ u ‖_2,s = ‖ (1+|· |^2)^s/2 u ‖_L^2(^2).
We denote by ℬ(s,s') the space of bounded linear operators from L^2,s(^2) to L^2,s'(^2). For s∈ℝ and u∈ L^2,s(^2), v∈ L^2,-s(^2), let u ⟨ v, · ⟩ denote the linear operator in ℬ(s,-s) that acts as
u ⟨ v, f⟩ := u ∫_^2v f x.
§.§ Resolvent expansions
At the core of the following proofs are resolvent expansions for P_±(A) derived in <cit.>. There are three cases that need to be distinguished. The first case is α = 0, where the resolvents of P_±(A) look similar for the + and - case.
Let α=0. Then
(P_±(A)-λ)^-1= - logλ/4 πψ_0^±⟨ψ_0^±, . ⟩ +O(1)
holds in ℬ(s,-s), s>3, as λ→ 0, Im(λ)≥ 0.
The branch of the complex logarithm is chosen such that logλ = log |λ| + i π for λ< 0. Therefore, one can also write
(P_±(A)-λ)^-1= - log |λ |/4 πψ_0^±⟨ψ_0^±, . ⟩ +O(1)
as λ→ 0, instead.
If the magnetic flux is positive, the situation is different for P_+(A) and P_-(A). For the resolvent of the spin-up component we have the following simple expansion.
Let α>0. Then
(P_+(A)-λ)^-1= O(1)
holds in ℬ(s,-s), s>3, as λ→ 0, Im(λ)≥ 0.
For the other component, P_-(A), the situation is more complicated. It requires us to adopt a bit of notation of <cit.> to state the resolvent expansions.
For 0<t∈∖ℤ let us define
ζ(t) = - 4^t-1Γ(t) e^iπ t/πΓ(1-t), ω(t) = |d_n^-|^2/ζ(t) ,
with d_n^- ∈ℂ. The precise definition of d_n^- is found in <cit.>, but it is relevant neither for the results nor for the proofs presented in this paper. Furthermore, let P_0^- denote the orthogonal projector onto the zero eigenspace of P_-(A), let ψ^-∈ N_- denote a particular zero eigenfunction of P_-(A) and φ^- ∈ N_-^∞∖ N_- a particular virtual bound state at zero. For the precise definition of the states ψ^- and φ^- we refer to <cit.>. We emphasize that they are certain linear combinations of the (generalized) Aharonov-Casher states {ψ_k^-}_k=0^n.
Then, the following asymptotic expansion for the resolvent is valid for non-integer magnetic flux.
Let 0<α∈∖ℤ and α'=α- ⌊α⌋. Then there are constants τ, ρ∈ such that
(P_-(A)-λ)^-1= -λ^-1 P_0^- + ω(1+α') λ^α'-1/1+τω(1+α') λ^α'ψ^- ⟨ψ^-, . ⟩ - ζ(α') λ^-α'/1+ρζ(α') λ^1-α'φ^- ⟨φ^-, . ⟩ + O(1),
holds in ℬ(s,-s), s>3, λ→ 0, Im(λ)≥ 0. If α<1, then the zero eigenspace of P_-(A) is trivial and the asymptotic expansion holds with P_0^- = 0 and ψ^-=0.
Definitions of the constants τ, ρ can also be found in <cit.>; however, their precise values are not relevant for our proof.
At integer magnetic flux, one constructs two virtual bound states φ_1^- and φ_2^- by linear combination of (generalized) Aharonov-Casher states {ψ_k^-}_k=0^n+1 such that φ_1^- ∈ L^∞(^2) ∖ L^p(^2) for any 2 ≤ p < ∞ and φ_2^- ∈ L^p(^2) ∖ L^2(^2) for any 2 < p ≤∞. For the precise construction, we refer again to <cit.>. Given φ_1^- and φ_2^-, let
Π_jk = φ_k^- ⟨φ_j^-, . ⟩ , j,k=1,2.
The resolvent expansion in the integer flux case then takes the following form.
Let α∈ℤ, α>0. Then there are constants m∈, κ∈ such that
(P_-(A)-λ)^-1= -λ^-1 P_0^- + Π_22/πλ (logλ +m- iπ) - K (logλ -i π) + O(1),
holds in ℬ(s,-s), s>3, as λ→ 0, Im(λ)≥ 0. Here,
K = 1/4π( Π_11 + κΠ_12 + κΠ_21 + |κ|^2 Π_22) + π |d_n^-|^2/4 ψ^- ⟨ψ^- , . ⟩ .
If α = 1, then the above expansion holds with P_0^-=0 and ψ^- = 0.
We remark that there appears a typographical error in <cit.> concerning the case α = 1 where Theorem 6.5 reads ”P_0^- = Π_22 = 0” instead of ”P_0^-=ψ^- = 0”. Again, the precise definitions of the constants d_n^-, m and κ can be found in <cit.>.
§.§ Schur complement
Another important ingredient to our calculations is the Schur-Livsic-Feshbach-Grushin (SLFG) formula and in particular the Schur complement of a bounded, self-adjoint operator given in block form. We will apply the following version of the SLFG formula.
Let H be a Hilbert space and suppose P is an orthogonal projector. Let H_1 = P H and H_2 =(1 - P) H. Then H_1, H_2 are closed subspaces of H and H=H_1 ⊕ H_2. Let A be a bounded, self-adjoint operator on H. We write A in block form
A = [ A_11 A_12; A_21 A_22 ]
where A_11= A|_H_1→ H_1, A_12= A|_H_2→ H_1, A_21= A|_H_1→ H_2 and A_22= A|_H_2→ H_2 (recall the notation introduced in the beginning of this section).
Assume A_22 is invertible on H_2. Then A is invertible on H if and only if its Schur complement S=A_11 - A_12 A_22^-1 A_21 is invertible on H_1. In particular,
dimker A = dimker S.
We provide a short proof of (<ref>). Let
ψ = [ ψ_1; ψ_2 ]∈ H_1 ⊕ H_2.
Suppose ψ∈ker A. Then, A ψ = 0 is equivalent to
A_11ψ_1 + A_12ψ_2 = 0,
A_21ψ_1 + A_22ψ_2 = 0.
Since A_22^-1 is invertible, this implies ψ_2 = - A_22^-1 A_21ψ_1 and thus S ψ_1 = A_11ψ_1 - A_12 A_22^-1 A_21ψ_1 = 0.
Conversely, if ϕ∈ker S, then
ψ = [ ϕ; -A_22^-1 A_21ϕ ]∈ker A.
If A^-1 is bounded,
A^-1 = [ S^-1 -S^-1 A_12 A_22^-1; - A_22^-1 A_21S^-1 A_22^-1 - A_22^-1 A_21 S^-1 A_12 A_22^-1 ].
While we will not use equation (<ref>), we will make extensive use of the invertibility condition that A is invertible if and only if its Schur complement S is invertible (having shown that A_22 is invertible) and that the dimensions of the kernels coincide.
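A minimal finite-dimensional illustration of how the SLFG formula will be used: for a real symmetric matrix
A = [ a b; b d ], d ≠ 0,
on ^2 = H_1 ⊕ H_2 with one-dimensional blocks, the Schur complement is S = a - b^2/d and det A = d · S, so A is singular precisely when S = 0, and in that case both kernels are one-dimensional. The block decompositions below play the role of this elementary computation in infinite dimensions.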
§ PROOFS OF THE MAIN RESULTS
We will now give the proofs of Theorems <ref>, <ref> and <ref>. First, to shorten notation, we define v:=V^1/2, which is well-defined, since V≥ 0.
We begin by applying the Birman-Schwinger principle, see the recent monograph <cit.> and references therein. It reveals that λ < 0 is an eigenvalue of H(ε) if and only if 1 is an eigenvalue of the Birman-Schwinger operator ε v(P(A)-λ)^-1v. Since
ε v(P(A)-λ)^-1 v = [ ε v(P_+(A)-λ)^-1v 0; 0 ε v(P_-(A)-λ)^-1v ],
this is the case if and only if 1 is an eigenvalue of ε v(P_+(A)-λ)^-1v or 1 is an eigenvalue of ε v(P_-(A)-λ)^-1v. Here, the linear operators
L^2(^2) ∋ψ ↦ v(P_±(A)-λ)^-1 v ψ∈ L^2(^2) , λ <0,
are bounded, since v is bounded by Assumption <ref>. They are in fact compact. This follows from compactness of the Birman-Schwinger operator v(-Δ-λ)^-1v to the corresponding classical Schrödinger operator (which is actually Hilbert-Schmidt, since V satisfies the conditions of Proposition 3.2 of <cit.> by Assumption <ref>). Together with the diamagnetic inequality, see <cit.>, and a theorem by Pitt <cit.> one concludes that the Birman-Schwinger operator v((-i∇ +A)^2-λ)^-1v to the corresponding magnetic Schrödinger operator is also compact. The Birman-Schwinger operators v(P_±(A)-λ)^-1v=v((-i∇ +A)^2± B-λ)^-1v for the spin-components of the Pauli operator are then seen to be compact as well when one applies a resolvent identity (note that B is bounded due to Assumption <ref>).
Now, recall that the asymptotic expansions of the Theorems <ref> to <ref> hold in ℬ(s,-s), s>3. Under Assumption <ref> on the potential V, the linear operators
L^2(^2) ∋ψ ↦ v ψ∈ L^2,σ(^2),
L^2,-σ(^2) ∋ψ ↦ v ψ∈ L^2(^2),
are bounded. If v(P_±(A)-λ)^-1 v is understood as a map
L^2(^2) v⟶ L^2,σ (^2) (P_±(A)-λ)^-1⟶ L^2,-σ (^2) v⟶ L^2 (^2),
one sees that it is valid to take the asymptotic expansions of Theorem <ref> to <ref> in ℬ(σ,-σ) and simply multiply them by v from the left and right to gain asymptotic expansions of v(P_±(A)-λ)^-1 v ∈ℒ(L^2(^2)) as λ→ 0.
§.§ Zero flux - Proof of Theorem <ref>
Let us start with the case α = 0. We treat P_+(A) and P_-(A) simultaneously and divide the proof into several steps.
Step 1: Preliminaries
Note that if α =0, then ψ_0^± = e^∓ h+iχ = O(1) as |x| →∞. Hence, under Assumption <ref> on the potential V, we have vψ_0^±∈ L^2(^2). Consider H_± = span{φ^±}⊂ L^2(^2) where φ^± = vψ_0^± / ‖ vψ_0^±‖_L^2(^2). Note that v(P_±(A)-λ)^-1v ∈ℒ(L^2(^2)) is continuous in operator norm with respect to λ < 0 and
v(P_±(A)-λ)^-1v = o(1)
in ℒ(L^2(^2)) as λ→ -∞. Furthermore, by Theorem <ref>,
v(P_±(A)-λ)^-1v= - log |λ|/4 π vψ_0^±⟨ vψ_0^±, . ⟩ +O(1)
in ℒ(L^2(^2)) as λ↗ 0. This means that
v(P_±(A)-λ)^-1v|_H_±→ H_±^⊥,
v(P_±(A)-λ)^-1v|_H_±^⊥→ H_±,
v(P_±(A)-λ)^-1v|_H_±^⊥→ H_±^⊥
are uniformly bounded over λ < 0.
Step 2: Bounding the number of negative eigenvalues of H(ε)
We can already show with (<ref>) that H(ε) has at most two eigenvalues for small enough ε. For that, we argue that both spin components P_±(A) - ε V have at most one negative eigenvalue. Let λ_k^±(ε), k=0,1,..., denote the negative eigenvalues of P_±(A) - ε V and μ_k^±(λ), k=0,1,..., the non-negative eigenvalues of v(P_±(A)-λ)^-1v. Let also μ̃_k^±(λ), k=0,1,..., denote the eigenvalues of the leading order operator
S(λ) =- log |λ|/4 π vψ_0^±⟨ vψ_0^±, . ⟩
on the right hand side of (<ref>). Note that all eigenvalues of S(λ) except one are zero since S(λ) is a rank one operator. We can assume that |λ| is small enough such that μ̃_k^±(λ)≥ 0 for any k. We also assume that the eigenvalues μ_k^±(λ), μ̃_k^±(λ) are indexed in decreasing order. Then μ̃_k^±(λ)=0 for k≠ 0 and μ̃_0^±(λ) is the only eigenvalue that is possibly non-zero. The asymptotic equation (<ref>) implies that there exist Λ>0 and C>0 such that for any Λ < λ < 0
|μ_k^±(λ) - μ̃_k^±(λ) | ≤ C.
and therefore |μ_k^±(λ) | ≤ C for any Λ < λ < 0 and k≠ 0. It follows that |εμ_k^±(λ) | ≤ 1 for k≠ 0 and |λ|, ε small enough. By the Birman-Schwinger principle, we conclude
#{λ_k^±(ε) } = lim_λ↗ 0#{λ_k^±(ε) : λ_k^±(ε)< λ} = lim_λ↗ 0#{εμ_k^±(λ) : εμ_k^±(λ)>1 }≤#{εμ_0^±(λ) } = 1.
This shows that P_±(A) - ε V each have at most one negative eigenvalue.
Now we show existence of the negative eigenvalues of H(ε) and derive their asymptotic expansions. For that, we consider the operator
K_ε^±(λ) := 1 - ε v(P_±(A)-λ)^-1v.
Because the Birman-Schwinger operator ε v(P_±(A)-λ)^-1v is compact and self-adjoint, the condition of 1 being an eigenvalue of ε v(P_±(A)-λ)^-1v is equivalent to the operator K_ε^±(λ) being not invertible on L^2(^2). The invertibility of K_ε^±(λ) will therefore be what we examine in the following.
Step 3: Reduction to a rank-one operator
We decompose K_ε^±(λ) into
K_ε^±(λ)= [ K_ε,11^±(λ) K_ε,12^±(λ); K_ε,21^±(λ) K_ε,22^±(λ) ] = [ K_ε^±(λ)|_H_±→ H_± K_ε^±(λ)|_H_±^⊥→ H_±; K_ε^±(λ)|_H_±→ H_±^⊥ K_ε^±(λ)|_H_±^⊥→ H_±^⊥ ].
Using the uniform boundedness of (<ref>), (<ref>) and (<ref>) and 1|_H_±^⊥→ H_± = 1|_H_±→ H_±^⊥ = 0, we get the estimates
‖ K_ε,12^±(λ) ‖ = ‖ ( - ε v(P_±(A)-λ)^-1v )|_H_±^⊥→ H_±‖≲ε,
‖ K_ε,21^±(λ) ‖ =‖ ( - ε v(P_±(A)-λ)^-1v )|_H_±→ H_±^⊥‖≲ε ,
‖ K_ε,22^±(λ) -1|_H_±^⊥→ H_±^⊥‖ = ‖ ( - ε v(P_±(A)-λ)^-1v )|_H_±^⊥→ H_±^⊥‖≲ε.
for any λ <0. Thus, if ε is sufficiently small, K_ε,22^±(λ) becomes invertible on H_±^⊥ for any λ <0 and then
‖ (K_ε,22^±(λ) )^-1 - 1 |_H_±^⊥→ H_±^⊥‖≲ε .
The SLFG Lemma (Lemma <ref>) now implies that K_ε^±(λ) is invertible if and only if its Schur complement
S_ε^±(λ) := K_ε,11^±(λ) - K_ε,12^±(λ) (K_ε,22^±(λ))^-1K_ε,21^±(λ)
is invertible in H_±. Here,
‖ K_ε,12^±(λ) (K_ε,22^±(λ))^-1K_ε,21^±(λ) ‖≲ε^2
by (<ref>), (<ref>) and (<ref>). Observe that S_ε^±(λ): H^±→ H^± is a self-adjoint rank one operator, so
S_ε^±(λ) = s_ε^±(λ) φ^±⟨φ^±, . ⟩
where
s_ε^±(λ) = ⟨φ^±,S_ε^±(λ) φ^±⟩∈.
Hence, the Schur complement S_ε^±(λ) is not invertible in H^± if and only if
s_ε^±(λ) = 0.
The negative eigenvalues of H(ε) are given by the solutions λ to the scalar equation (<ref>).
Step 4: Showing existence of negative eigenvalues of H(ε)
Let us argue that s_ε^±(λ) = 0 has at least one solution λ<0 when ε is small enough (one each for the plus and the minus case). Let λ^*<0 be given. Because of (<ref>), v(P_±(A)-λ)^-1v is uniformly bounded in ℒ(L^2(^2)) for λ < λ^*<0. With this fact and equations (<ref>), (<ref>) and (<ref>), we conclude that
s_ε^±(λ) ≥ 1 - c_1 ε -c_2 ε^2, λ < λ^*,
for some constants c_1,c_2>0 and in particular
s_ε^±(λ) > 0, λ < λ^*,
if ε is small enough. On the other hand, because of (<ref>), (<ref>) and (<ref>), there exists some Λ^* <0 such that
s_ε^±(λ) = 1 + ε/4π‖ vψ_0^±‖_L^2(^2)^2 log|λ| + r_ε(λ), Λ^* < λ < 0,
with a remainder term r_ε(λ) that satisfies |r_ε(λ)| ≲ε for Λ^* < λ < 0. Therefore, s_ε^±(λ) → - ∞ as λ↗ 0. Since s_ε^±(λ) is continuous in λ<0, the intermediate value theorem implies that there exists at least one solution λ_0^±(ε)∈ [λ^*,0) of s_ε^±(λ) = 0.
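It is instructive to record the leading-order solution of s_ε^±(λ) = 0 (dropping the remainder r_ε for the moment, so this is only a heuristic step): 1 + ε/4π‖ vψ_0^±‖_L^2(^2)^2 log|λ| = 0 gives log|λ| = -4π (ε‖ vψ_0^±‖_L^2(^2)^2)^-1, i.e. λ≈ -exp( -4π( ∫_^2 V|ψ_0^±|^2 x )^-1ε^-1 ), which already exhibits the exponential form derived rigorously in Step 5.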
Since we have shown in Step 2 that P_±(A)-ε V each have at most one eigenvalue for small enough ε, we have proven that P_±(A)-ε V each have precisely one negative eigenvalue for small enough ε and hence H(ε) has precisely two negative eigenvalues for small enough ε.
Step 5: Extracting eigenvalue asymptotics
Note that λ_0^±(ε) → 0 as ε↘ 0, since λ^* was arbitrary. It follows that Λ^*<λ_0^±(ε)<0 for any ε small enough and according to (<ref>), the solution λ_0^±(ε) must satisfy
log|λ_0^±(ε)| = - 4π/‖ vψ_0^±‖_L^2(^2)^2ε^-1 (1+r_ε(λ_0^±(ε))).
Because |r_ε(λ)| ≲ε for Λ^*<λ_0^±(ε)<0, we finally obtain
λ_0^±(ε) = - exp( - 4π( ∫_^2 V|ψ_0^±|^2 x )^-1ε^-1 (1+O(ε)) ) = - exp( -μ_±^-1ε^-1 (1+O(ε)) ), ε↘ 0,
with
μ_± = 1/4π∫_^2 V|ψ_0^±|^2 x,
which is precisely the statement of Theorem <ref>.
□
§.§ Non-integer flux - Proof of Theorem <ref>
For non-zero flux, the calculations become more complicated. We first recall that the asymptotic expansion of P_+(A) has no singularities at λ = 0, while that of P_-(A) has, so we need to treat the spin-up component and the spin-down component separately.
Let us first consider the component P_+(A) of the Pauli operator. We consider K_ε^+(λ) as defined by (<ref>). Theorem <ref> gives for any α>0
v (P_+(A)-λ)^-1 v = O(1)
in ℒ(L^2(^2)) as λ↗ 0. Since also v (P_+(A)-λ)^-1 v = o(1) as λ→ -∞ and v (P_+(A)-λ)^-1 v is continuous with respect to λ<0, we find that v (P_+(A)-λ)^-1 v is uniformly bounded for λ <0. This implies that K_ε^+(λ)=1 + O(ε) in ℒ(L^2(^2)) as ε↘ 0 uniform in λ <0 and hence K_ε^+(λ) is invertible for any λ <0 as soon as ε is small enough. This means that P_+(A) - ε V exhibits no eigenvalue λ<0 if ε is small enough.
For small enough ε, any negative eigenvalues of H(ε) must therefore be negative eigenvalues of P_-(A) - ε V. Let us discuss P_-(A) - ε V and its associated operator K_ε^-(λ) in the following.
Step 1: Preliminaries
Let α >0, α∈∖ℤ. Let n=⌊α⌋ and α' = α - n. Finally, recall the (generalized) Aharonov-Casher states ψ_k^-(x) = (x_1-ix_2)^k-1 e^h(x)+iχ(x) for k=0,...,n. The states {ψ_k^- }_k=0^n-1 span the zero eigenspace of P_-(A), i.e. ran P_0^-, while ψ_n^- is a virtual bound state at zero. Recall that the (generalized) Aharonov-Casher states are linearly independent under Assumption <ref> on the magnetic field. Under Assumption <ref> on the potential V, we have v ψ_k^- ∈ L^2(^2), k=0,...,n, since all {ψ_k^-}_k=0^n are bounded functions. Furthermore, the functions { vψ_k^-}_k=0^n are linearly independent, since V>0 on some set of non-zero measure.
Let H_n = span{ vψ_k^- }_k=0^n-1 and H_n+1 = span{ vψ_k^- }_k=0^n. Because of linear independence of the states vψ_k^-, the spaces H_n and H_n+1 are n- resp. (n+1)-dimensional subspaces of L^2(^2). To shorten notation, for λ<0, |λ| small enough, let also
f_0(λ) = -λ^-1,
f_1(λ) =ω(1+α')λ^α'-1/1+τω(1+α') λ^α',
f_2(λ) = - ζ(α') λ^-α'/1+ρζ(α') λ^1-α'
with ζ(t) and ω(t) given by (<ref>).
These are the prefactors appearing in front the of the linear operators in the resolvent expansion of Theorem <ref>. Observe that for λ<0 the prefactors f_0(λ), f_1(λ) and f_2(λ) can be rephrased as
f_0(λ) = |λ|^-1,
f_1(λ) = |d_n^-|^2e^-iπ(1-α')|λ|^-(1-α')/ζ(1+α')+τ |d_n^-|^2 e^iπα'|λ|^α' = -|d_n^-|^2|λ|^-(1-α')/4^α'Γ(1+α')/πΓ(-α')+τ |d_n^-|^2 |λ|^α',
f_2(λ) = - ζ(α') e^-iπα'|λ|^-α'/1+ρζ(α') e^iπ (1-α')|λ|^1-α'= 4^α'-1Γ(α') /πΓ(1-α') |λ|^-α'/1+ρ4^α'-1Γ(α') /πΓ(1-α') |λ|^1-α',
and one sees that all three prefactors are real-valued.
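As a quick check of the constant appearing in f_2: for α' = 1/2 one computes 4^α'-1Γ(α')/(πΓ(1-α')) = 4^-1/2Γ(1/2)/(πΓ(1/2)) = 1/(2π), so f_2(λ) = (2π)^-1|λ|^-1/2(1+O(|λ|^1/2)) as λ↗ 0; in particular f_2(λ) > 0 for small |λ|, consistent with the sign considerations used below.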
By Theorem <ref> and by boundedness of the operators (<ref>), we find
v(P_-(A)-λ)^-1v = f_0(λ) vP_0^-v +f_1(λ) vψ^- ⟨ vψ^-, . ⟩ + f_2(λ) vφ^- ⟨ vφ^-, . ⟩ + O(1),
in ℒ(L^2(^2)) as λ↗ 0, where vψ^- ∈ H_n and vφ^- ∈ H_n+1∖ H_n. Since f_0(λ), f_1(λ), f_2(λ) are real for λ<0, all terms on the right-hand side are self-adjoint. Also, note that ran vP_0^-v = H_n and therefore since vP_0^-v is self-adjoint, ker vP_0^-v = H_n^⊥. In addition, H_n^⊥⊂ker vψ^- ⟨ vψ^-, . ⟩, since vψ^- ∈ H_n.
As in the case of zero flux, we note that v(P_±(A)-λ)^-1v is continuous in operator norm with respect to λ < 0 and
v(P_-(A)-λ)^-1v = o(1)
in ℒ(L^2(^2)) as λ→ -∞. Hence, we conclude that
v(P_-(A)-λ)^-1v|_H_n+1→ H_n+1^⊥,
v(P_-(A)-λ)^-1v|_H_n+1^⊥→ H_n+1,
v(P_-(A)-λ)^-1v|_H_n+1^⊥→ H_n+1^⊥
are uniformly bounded over λ < 0.
Step 2: Bounding the number of negative eigenvalues of H(ε)
Similarly as in the case α=0, we can already show that H(ε) has no more than n+1 negative eigenvalues. We have already seen that P_+(A)-ε V contributes no negative eigenvalues for small enough ε. We now argue that P_-(A)-ε V has at most n+1 negative eigenvalues for small enough ε. Hence, let as before λ_k^-(ε), k=0,1,..., denote the negative eigenvalues of P_-(A) - ε V and μ_k^-(λ), k=0,1,..., the non-negative eigenvalues of v(P_-(A)-λ)^-1v. Let μ̃_k^-(λ), k=0,1,..., now denote the eigenvalues of the operator
S(λ)=f_0(λ) vP_0^-v +f_1(λ) vψ^- ⟨ vψ^-, . ⟩ + f_2(λ) vφ^- ⟨ vφ^-, . ⟩.
It collects all terms on the right hand side of (<ref>) that are singular as λ→ 0. Since S(λ) is at most rank n+1, it has at most n+1 non-zero eigenvalues. We can again assume that |λ| is small enough such that μ̃_k^-(λ)≥ 0 for any k. This is because f_0(λ), f_2(λ) are non-negative and f_1(λ) vψ^- ⟨ vψ^-, . ⟩ can be seen as a weak perturbation to f_0(λ) vP_0^-v since vψ^-∈ H_n and f_1(λ) has a weaker singularity as λ→ 0 than f_0(λ). If we assume that the eigenvalues μ_k^-(λ), μ̃_k^-(λ) are indexed in decreasing order, then μ̃_k^-(λ)=0 for k > n. Equation (<ref>) implies that there exist Λ>0 and C>0 such that for any Λ < λ < 0
|μ_k^-(λ) - μ̃_k^-(λ) | ≤ C.
and therefore |μ_k^-(λ) | ≤ C for any Λ < λ < 0 and k> n. It follows that |εμ_k^-(λ) | ≤ 1 for k> n and |λ|, ε small enough. We finally conclude
#{λ_k^±(ε) } = lim_λ↗ 0#{λ_k(ε) : λ_k(ε)< λ} = lim_λ↗ 0#{εμ_k^±(λ) : εμ_k^±(λ)>1 }
≤#{εμ_0^±(λ),...,εμ_n^±(λ) } = n+1
by the Birman-Schwinger principle. This shows that P_-(A) - ε V and hence H(ε) has at most n+1 negative eigenvalues.
Step 3: Reduction to a finite rank operator
We now consider the operator
K_ε^-(λ) = 1 - ε v(P_-(A)-λ)^-1v.
As argued in the case α = 0, λ is an eigenvalue of H(ε) if and only if the operator K_ε^-(λ) is not invertible.
We decompose K_ε^-(λ) into
K_ε^-(λ)= [ K_ε,11^-(λ) K_ε,12^-(λ); K_ε,21^-(λ) K_ε,22^-(λ) ] = [ K_ε^-(λ)|_H_n+1→ H_n+1 K_ε^-(λ)|_H_n+1^⊥→ H_n+1; K_ε^-(λ)|_H_n+1→ H_n+1^⊥ K_ε^-(λ)|_H_n+1^⊥→ H_n+1^⊥ ].
Using 1|_H_n+1^⊥→ H_n+1 = 1|_H_n+1→ H_n+1^⊥ = 0, we find
‖ K_ε,12^-(λ) ‖ = ‖ ( - ε v(P_-(A)-λ)^-1v )|_H_n+1^⊥→ H_n+1‖≲ε,
‖ K_ε,21^-(λ) ‖ =‖ ( - ε v(P_-(A)-λ)^-1v )|_H_n+1→ H_n+1^⊥‖≲ε,
‖ K_ε,22^-(λ) -1|_H_n+1^⊥→ H_n+1^⊥‖ = ‖ ( - ε v(P_-(A)-λ)^-1v )|_H_n+1^⊥→ H_n+1^⊥‖≲ε
for any λ <0. If ε is sufficiently small, then K_ε,22^-(λ) is invertible for any λ <0 and then
‖ (K_ε,22^-(λ) )^-1 - 1 |_H_n+1^⊥→ H_n+1^⊥‖≲ε
The SLFG formula (Lemma <ref>) now implies that K_ε^-(λ) is invertible if and only its Schur complement
S_ε^-(λ) = K_ε,11^-(λ) - K_ε,12^-(λ) (K_ε,22^-(λ))^-1K_ε,21^-(λ)
is invertible in H_n+1. Here,
‖ K_ε,12^-(λ) (K_ε,22^-(λ))^-1K_ε,21^-(λ) ‖≲ε^2
by (<ref>), (<ref>) and (<ref>). The Schur complement S_ε^-(λ) is finite rank and self-adjoint and therefore not invertible if and only if one of its eigenvalues is zero. Let us denote the eigenvalues of the Schur complement by {μ_k (S_ε^-(λ)) }_k=0^n. Any solution λ to one of the n+1 equations
μ_k (S_ε^-(λ)) = 0, k=0,...,n,
yields therefore a negative eigenvalue of H(ε).
Step 4: Showing existence of negative eigenvalues of H(ε)
Each parametric eigenvalue function λ↦μ_k (S_ε^-(λ)), k=0,..., n, is continuous in λ< 0 and it is not difficult to show by an intermediate value theorem argument similar to that in the case α=0, that each equation μ_k (S_ε^-(λ)) =0 must have at least one solution λ_k(ε), if ε is sufficiently small. Since we have already shown that H(ε) cannot have more than n+1 negative eigenvalues λ_k(ε), it is clear that if the solutions λ_k(ε) for each k=0,...,n are distinct, then H(ε) must have precisely n+1 negative eigenvalues, namely said solutions λ_k(ε), k=0,..., n. However, it may happen that the solutions λ_k(ε) of (<ref>) are not distinct. In this case, suppose m equations μ_k (S_ε^-(λ)) = 0 have the same solution, i.e. λ_ℓ(ε)=...=λ_ℓ+m-1(ε) for some ℓ and hence S_ε^-(λ)|_λ=λ_ℓ(ε) has a zero eigenvalue of multiplicity m. Using (<ref>) from the SLFG Lemma, we are allowed to conclude that if S_ε^-(λ)|_λ=λ_ℓ(ε) has a zero eigenvalue of multiplicity m, then also K_ε^-(λ)|_λ=λ_ℓ(ε) has a zero eigenvalue of multiplicity m, which in turn means that P_-(A)-ε V has the eigenvalue λ=λ_ℓ(ε) of multiplicity m by the Birman-Schwinger principle. Alternatively, we can say that P_-(A)-ε V has the eigenvalues λ_ℓ(ε)=...=λ_ℓ+m-1(ε) counted with multiplicity. The number of negative eigenvalues of H(ε) counted with multiplicities is therefore always precisely n+1. Each of them comes as one solution of one of the equations (<ref>).
Although we have now argued the existence of precisely n+1 negative eigenvalues of H(ε), we still have difficulties to directly extract the asymptotic behaviour, since the asymptotic expansion of the resolvent from (<ref>) projected onto H_n+1 has still multiple singular terms of varying degree.
To get hold of asymptotics, we consider different ”reference windows” for λ that scale with ε so that ε f_i(λ) is bounded above and below for a particular i. This allows us to separate the singular terms and the eigenvalues of the Birman-Schwinger operator attributed to each singular term. The different degrees of the singular terms lead to different asymptotics for the eigenvalues of the Birman-Schwinger operator and hence, after resolving the implicit equation (<ref>), for the eigenvalues of H(ε).
Step 5: Extracting eigenvalue asymptotics - first reference window
Let us first consider λ<0 with
C_1 ≤ε f_0(λ) = ε |λ|^-1≤ C_2
for some C_1,C_2 >0 that we specify later. This means we have C_2^-1ε≤ |λ| ≤ C_1^-1ε and therefore
ε f_1(λ) ≍ε^α',
ε f_2(λ) ≍ε^1-α',
for all λ in the reference window. Let G be the orthogonal complement of H_n in H_n+1, i.e. G = H_n+1⊖ H_n. The space G is one-dimensional. Recall that S_ε^-(λ) acts only on H_n+1 and not the full space L^2(^2). We view the Schur complement S_ε^-(λ) now as a block operator on H_n ⊕ G, i.e.
S_ε^-(λ) = [ S_ε,11^-(λ) S_ε,12^-(λ); S_ε,21^-(λ) S_ε,22^-(λ) ] = [ S_ε^-(λ)|_H_n→ H_n S_ε^-(λ)|_G → H_n; S_ε^-(λ)|_H_n→ G S_ε^-(λ)|_G → G ].
Our idea is to apply the SLFG Lemma again to the operator S_ε^-(λ) in the above block form. We have for ε small enough
S_ε^-(λ) = K_ε,11^-(λ) - K_ε,12^-(λ) (K_ε,22^-(λ))^-1K_ε,21^-(λ)
= (1 - ε v (P_-(A) - λ)^-1 v)|_H_n+1→ H_n+1 - K_ε,12^-(λ) (K_ε,22^-(λ))^-1K_ε,21^-(λ)
= (1 - ε f_0(λ) vP_0^-v - ε f_1(λ) vψ^- ⟨ vψ^-, . ⟩ - ε f_2(λ) vφ^- ⟨ vφ^-, . ⟩ - R̃_ε(λ) )|_H_n+1→ H_n+1
- K_ε,12^-(λ) (K_ε,22^-(λ))^-1K_ε,21^-(λ).
Here, R̃_ε(λ)= ε· ( v (P_-(A) - λ)^-1 v - S(λ))|_H_n+1→ H_n+1 with the singular terms S(λ) from (<ref>). Since the ε-independent part of R̃_ε(λ) is uniformly bounded for bounded |λ|, it holds ‖R̃_ε(λ) ‖≲ε. By our choice of H_n,
1|_G → H_n = 1|_H_n → G = 0
vP_0^-v|_G → H_n = vP_0^-v|_H_n → G= vP_0^-v|_G → G =0
vψ^- ⟨ v ψ^- , . ⟩ |_G → H_n = vψ^- ⟨ v ψ^- , . ⟩ |_H_n → G = vψ^- ⟨ v ψ^- , . ⟩ |_G → G= 0 ,
so using (<ref>), we find for example
S_ε,12^-(λ) = S_ε^-(λ) |_G → H_n = - ε f_2(λ) (vφ^- ⟨ vφ^-, . ⟩) |_G → H_n + R_ε,12(λ)
where
R_ε,12(λ) =( - R̃_ε (λ) - K_ε,12^-(λ) (K_ε,22^-(λ))^-1K_ε,21^-(λ) ) |_G → H_n
is a linear operator with ‖ R_ε,12(λ)‖≲ε + ε^2 ≲ε for all λ <0 satisfying (<ref>). In a similar way, we find
S_ε,21^-(λ) = - ε f_2(λ) (vφ^- ⟨ vφ^-, . ⟩) |_H_n → G + R_ε,21(λ)
and
S_ε,22^-(λ) = 1|_G → G - ε f_2(λ) (vφ^- ⟨ vφ^-, . ⟩) |_G → G + R_ε,22(λ)
where R_ε,21(λ), R_ε,22(λ) are linear operators with ‖ R_ε,21(λ)‖, ‖ R_ε,22(λ)‖≲ε for all λ <0 satisfying (<ref>). Now, ε f_2(λ) ≲ε^1-α' implies
‖ S_ε,12^-(λ) ‖ ≲ε^1-α' + ε≲ε^1-α' ,
‖ S_ε,21^-(λ) ‖ ≲ε^1-α' + ε≲ε^1-α' ,
‖ S_ε,22^-(λ) -1|_G → G‖ ≲ε^1-α' + ε≲ε^1-α'.
From (<ref>) it follows that once ε is small enough, S_ε,22^-(λ) is invertible for all λ<0 in the reference window (<ref>) and then
‖ (S_ε,22^-(λ))^-1 -1|_G → G‖ ≲ε^1-α'.
Applying the SLFG Lemma now reveals that S_ε^-(λ) is invertible if and only if the Schur complement
T_ε^-(λ) = S_ε,11^-(λ) - S_ε,12^-(λ) (S_ε,22^-(λ))^-1S_ε,21^-(λ)
is invertible in H_n. Here,
‖ S_ε,12^-(λ) (S_ε,22^-(λ))^-1S_ε,21^-(λ) ‖≲ε^2(1-α')
due to (<ref>), (<ref>) and (<ref>).
What we have achieved now is that we reduced the Schur complement S_ε^-(λ) by one dimension to T_ε^-(λ) by taking another Schur complement. The dimension we got rid of by restricting our view to the reference window is the dimension spanned by vψ_n which appears only in the lower-order singular term ε f_2(λ) (vφ^- ⟨ vφ^-, . ⟩) of the Birman-Schwinger operator. When taking this second Schur complement, we pay with another term added of order ε^2(1-α').
We are now ready to find asymptotic expansions of negative eigenvalues of H(ε) within our chosen reference window. The Schur complement T_ε^-(λ) is not invertible if and only if one of its eigenvalues is zero. We denote the eigenvalues of T_ε^-(λ) by μ_k (T_ε^-(λ)), k=0, ..., n-1. We show that each equation μ_k (T_ε^-(λ))=0 has at least one solution, if we choose the frame of the reference window, given by C_1, C_2, appropriately.
The Schur complement T_ε^-(λ) can be written as
T_ε^-(λ) =( 1 - ε f_0(λ) vP_0^-v -ε f_1(λ) vψ^- ⟨ vψ^-, . ⟩ - ε f_2(λ) vφ^- ⟨ vφ^-, . ⟩ ) |_H_n → H_n
+ R_ε,11(λ) - S_ε,12^-(λ) (S_ε,22^-(λ))^-1S_ε,21^-(λ)
where R_ε,11(λ) is some linear operator with ‖ R_ε,11(λ)‖≲ε for all λ <0 satisfying (<ref>). Because of (<ref>), (<ref>) and (<ref>) we see that for λ in the given reference window
‖ T_ε^-(λ) - (1 - ε f_0(λ) vP_0^-v)|_H_n → H_n‖≲ε^α' +ε^1-α' +ε+ ε^2(1-α')≲ε^min{α',1-α'}
Let us briefly focus on the operator (vP_0^- v) |_H_n → H_n. It is clearly finite rank and self-adjoint. But it is also positive. It is easily seen to be non-negative, because for any ϕ∈ H_n
⟨ϕ , (vP_0^- v) ϕ⟩ = ‖ P_0^- v ϕ‖^2_L^2(^2)≥ 0.
Now suppose there is some ϕ∈ H_n with ⟨ϕ , (vP_0^- v) ϕ⟩ = ‖ P_0^- v ϕ‖^2_L^2(^2) = 0. Then vϕ must be orthogonal to all zero eigenstates {ψ_k^-}_k=0^n-1. But this implies
⟨ϕ, vψ_k^-⟩ = ⟨ vϕ, ψ_k^-⟩=0
for any k=0,...,n-1, hence ϕ∈ H_n^⊥ and so ϕ=0. This shows that (vP_0^- v) |_H_n → H_n is positive and thus it has only positive eigenvalues. Let μ_k, k=0,...,n-1, denote these eigenvalues.
We turn back to the eigenvalues of the Schur complement T_ε^-(λ). Suppose C_1,C_2 are chosen such that {μ_k^-1}_k=0^n-1⊂ (C_1+δ,C_2-δ) for some small δ>0. By (<ref>),
| μ_k(T_ε^-(λ)) - (1 - ε f_0(λ) μ_k) | ≲ε^min{α',1-α'}
for all λ in the reference window, if ε is small enough. It follows that μ_k(T_ε^-(λ)) > 0 for small enough ε if λ is chosen such that ε f_0(λ) ≤μ_k^-1-δ and μ_k(T_ε^-(λ')) < 0 for small enough ε if λ' is chosen such that ε f_0(λ') ≥μ_k^-1 + δ. Since μ_k(T_ε^-(λ)) is continuous in λ, there exists λ_k(ε) such that μ_k(T_ε^-(λ_k(ε))) =0 by the intermediate value theorem. Finally, for this λ_k(ε) holds according to (<ref>)
|1- ε f_0(λ_k(ε)) μ_k| ≲ε^min{α',1-α'},
which yields
λ_k(ε) = - μ_k ε (1+ O(ε^min{α',1-α'} ))
as ε↘ 0.
Step 6: Extracting eigenvalue asymptotics - second reference window
Let us now consider λ<0 with
C_1 ≤ε |λ|^-α'≤ C_2
for some yet-to-be-specified C_1,C_2 >0. This means we have (C_2^-1ε)^1/α'≤ |λ| ≤ (C_1^-1ε)^1/α' and therefore
ε f_0(λ) ≍ε^1-1/α',
ε f_1(λ) ≍ε^2-1/α'.
ε f_2(λ) ≍ 1.
We decompose the Schur complement S_ε^-(λ) again into
S_ε^-(λ) = [ S_ε,11^-(λ) S_ε,12^-(λ); S_ε,21^-(λ) S_ε,22^-(λ) ] = [ S_ε^-(λ)|_H_n→ H_n S_ε^-(λ)|_G → H_n; S_ε^-(λ)|_H_n→ G S_ε^-(λ)|_G → G ].
Again, G=H_n+1⊖ H_n. We apply the SLFG Lemma again to this block operator, but this time we exchange the roles of S_11^-(λ) and S_22^-(λ). As in the case of the first reference window, we have for ε small enough
S_ε,11^-(λ) = (1 - ε f_0(λ) vP_0^-v -ε f_1(λ) vψ^- ⟨ vψ^-, . ⟩ - ε f_2(λ) (vφ^- ⟨ vφ^-, . ⟩) |_H_n → H_n + R_ε,11(λ)
S_ε,12^-(λ) = -ε f_2(λ) (vφ^- ⟨ vφ^-, . ⟩) |_G → H_n + R_ε,12(λ)
S_ε,21^-(λ) = - ε f_2(λ) (vφ^- ⟨ vφ^-, . ⟩) |_H_n → G + R_ε,21(λ)
S_ε,22^-(λ) = 1|_G → G - ε f_2(λ) (vφ^- ⟨ vφ^-, . ⟩) |_G → G + R_ε,22(λ)
where R_ε,ij(λ) are some linear operators with ‖ R_ε,ij(λ)‖≲ε for all λ <0 satisfying (<ref>). The choice of the reference window implies this time
‖ (ε f_0(λ ))^-1 S_ε,11^-(λ) +(vP_0^-v)|_H_n → H_n‖ ≲ε^1/α'-1
‖ S_ε,12^-(λ) ‖ ≲ 1,
‖ S_ε,21^-(λ) ‖ ≲ 1.
Since (vP_0^-v)|_H_n → H_n is positive and hence invertible, it follows that once ε is small enough, S_ε,11^-(λ) is invertible for all λ<0 in the reference window (<ref>) and
‖ (S_ε,11^-(λ))^-1‖ ≍ε^1/α'-1.
Applying the SLFG Lemma now shows that S_ε^- is invertible if and only if the Schur complement
W_ε^-(λ) = S_ε,22^-(λ) - S_ε,21^-(λ) (S_ε,11^-(λ))^-1S_ε,12^-(λ)
is invertible in G. Here,
‖ S_ε,21^-(λ) (S_ε,11^-(λ))^-1S_ε,12^-(λ) ‖≲ε^1/α'-1
due to (<ref>), (<ref>) and (<ref>). This time, W_ε^-(λ) is a rank one operator and can be written as
W_ε^-(λ) = w_ε^-(λ) φ_n ⟨φ_n , . ⟩
where φ_n is an L^2-normalized state that spans G. The scalar function w_ε^-(λ) is given by
w_ε^-(λ) = ⟨φ_n , W_ε^-(λ)φ_n ⟩.
The Schur complement W_ε^-(λ) is not invertible if and only if w_ε^-(λ) = 0.
We argue that w_ε^-(λ) = 0 has at least one solution for appropriately chosen reference window frame. Let
ν_n =⟨ v φ^-, Q(v φ^-)⟩ = ‖ Q(v φ^-) ‖_L^2(^2)^2
where Q denotes the orthogonal projection onto G (we may now understand G as a subspace of L^2(^2)). We remark that ν_n>0 since vφ^- ∉ H_n. Let C_1,C_2>0 be such that ν_n^-1∈ (C_1+δ,C_2-δ) for some small δ >0. Then, from (<ref>), (<ref>) and (<ref>) follows
|w_ε^-(λ) - (1-ε f_2(λ) ν_n )| ≲ε + ε^1/α'-1≲ε^min{1,1/α'-1}.
We now always find λ in our reference window, such that ε f_2(λ)≤ν_n^-1-δ. For this λ we conclude w_ε^-(λ)>0, if ε is sufficiently small. We also always find λ' in our reference window, such that ε f_2(λ')≥ν_n^-1+δ which then yields w_ε^-(λ')<0. Since w_ε^-(λ) is continuous in λ, we now find λ_n(ε) such that w_ε^-(λ_n(ε)) =0. This solution satisfies
|1- ε f_2(λ_n(ε)) ν_n| ≲ε^min{1,1/α'-1},
due to (<ref>), which implies with the definition of f_2(λ) that
λ_n(ε) = - ( 4^α'-1Γ(α') /πΓ(1-α') ν_n )^1/α'ε^1/α' (1+ O(ε^min{1,1/α'-1} ))
as ε↘ 0.
Note that if 0< α < 1, then n=0 and α' = α and H_n = { 0 }. The first Schur complement S_ε^-(λ) is already a rank one operator and does not need to be decomposed further. We can then skip the first reference window and work with W_ε^-(λ) = S_ε^-(λ) in the second reference window. Eventually, one finds that there is one eigenvalue λ_0(ε) satisfying
λ_0(ε) = - ( 4^α'-1Γ(α') /πΓ(1-α') ν_n )^1/α'ε^1/α' (1+ O(ε ))
as ε↘ 0.
□
§.§ Integer flux - Proof of Theorem <ref>
For the case of integer flux, we basically repeat the procedure we applied in the non-integer case, i.e. choosing suitable reference windows and taking iterated Schur complements. Of course, due to the resolvent expansion that we now draw from Theorem <ref>, the explicit first order terms and the bounds on the second order terms will change accordingly.
Step 1: Preliminaries
Let α > 0, α∈ℤ and n=α-1. Again, the Aharonov-Casher states {ψ_k^- }_k=0^n-1 span the zero eigenspace of P_-(A). Additionally, there are two virtual bound states at zero, namely the generalized Aharonov-Casher states ψ_n^- and ψ_n+1^-. All (generalized) Aharonov-Casher states are bounded and linearly independent, so vψ_k^- ∈ L^2(^2) for all k=0,...,n+1 and all vψ_k^- are linearly independent, given V satisfies Assumption <ref>.
Set now
H_n = span{ vψ_k^- }_k=0^n-1,
H_n+1 = span({ vψ_k^- }_k=0^n-1∪{ vφ_2^-}) ,
H_n+2 = span({ vψ_k^- }_k=0^n-1∪{ vφ_2^-, vφ_1^-})
where φ_1^-, φ_2^- are the virtual bound states from Theorem <ref>.
The spaces H_n, H_n+1 and H_n+2 are n-, (n+1)- resp. (n+2)-dimensional subspaces of L^2(^2). Further define for λ <0, |λ| small enough,
f_0(λ) = -λ^-1 = |λ|^-1,
f_1(λ) =1/πλ (log |λ| +m) = - 1/π |λ| (log |λ| +m),
f_2(λ) = - log|λ|.
with m as it appears in Theorem <ref>.
Theorem <ref> then implies
v(P_-(A)-λ)^-1v= f_0(λ) vP_0^- v+ f_1(λ)vΠ_22v +f_2(λ) vKv + O(1),
in ℒ(L^2(^2)) as λ↗ 0, with Π_22 and K as defined in (<ref>) and (<ref>). Notice that all terms on the right-hand side of (<ref>) are self-adjoint.
Step 2: Bounding the number of negative eigenvalues of H(ε)
Similarly as in the previous cases α = 0 and α∈ℝ∖ℤ, the asymptotic equation (<ref>) allows one to argue that H(ε) has no more than n+2 negative eigenvalues. Again, P_+(A)-ε V contributes no negative eigenvalues for small enough ε, so one needs to argue that P_-(A)-ε V has at most n+2 negative eigenvalues for small enough ε. The reason for that is that the operator
S(λ)=f_0(λ) vP_0^-v +f_1(λ) vΠ_22v + f_2(λ) vKv,
which collects all terms on the right hand side of (<ref>) that are singular as λ→ 0, has rank at most n+2. Denote by λ_k^-(ε), k=0,1,..., the negative eigenvalues of P_-(A) - ε V and μ_k^-(λ), k=0,1,..., the non-negative eigenvalues of v(P_-(A)-λ)^-1v. Observing that f_0(λ), f_1(λ) and f_2(λ) are positive for |λ| small enough, one can eventually show as in the previous cases that |εμ_k^-(λ) | ≤ 1 for k> n+1 and |λ|, ε small enough. Again, one concludes
#{λ_k^±(ε) } = lim_λ↗ 0#{λ_k(ε) : λ_k(ε)< λ} = lim_λ↗ 0#{εμ_k^±(λ) : εμ_k^±(λ)>1 }
≤#{εμ_0^±(λ),...,εμ_n+1^±(λ) } = n+2
by the Birman-Schwinger principle. This shows that P_-(A) - ε V and hence H(ε) has at most n+2 negative eigenvalues.
Step 3: Reduction to a finite rank operator
To extract eigenvalue asymptotics, we consider
K_ε^-(λ) = 1 - ε v(P_-(A)-λ)^-1v
as before and check its invertibility. We decompose this operator into
K_ε^-(λ)= [ K_ε,11^-(λ) K_ε,12^-(λ); K_ε,21^-(λ) K_ε,22^-(λ) ] = [ K_ε^-(λ)|_H_n+2→ H_n+2 K_ε^-(λ)|_H_n+2^⊥→ H_n+2; K_ε^-(λ)|_H_n+2→ H_n+2^⊥ K_ε^-(λ)|_H_n+2^⊥→ H_n+2^⊥ ].
Following the arguments in the previous section, one finds that for ε small enough, K_ε^-(λ) is invertible if and only if its Schur complement
S_ε^-(λ) = K_ε,11^-(λ) - K_ε,12^-(λ) (K_ε,22^-(λ))^-1K_ε,21^-(λ)
is invertible in H_n+2. Again, one finds
‖ K_ε,12^-(λ) (K_ε,22^-(λ))^-1K_ε,21^-(λ) ‖≲ε^2
and any solution λ to one of the n+2 equations
μ_k (S_ε^-(λ)) = 0, k=0,...,n+1,
where μ_k (S_ε^-(λ)) denote the eigenvalues of S_ε^-(λ), yields a negative eigenvalue of H(ε).
Step 4: Showing existence of negative eigenvalues of H(ε)
This step is completely analogous to the same step in the non-integer flux case. Each solution of one of the equations μ_k (S_ε^-(λ)), k=0,..., n+1, yields precisely one eigenvalue λ_k(ε) of H(ε). We find that if ε is small enough, H(ε) has precisely n+2 eigenvalues, counting multiplicities.
Next, we compute the eigenvalue asymptotics by investigating the finite dimensional operator S_ε^-(λ) and asking when it is not invertible in H_n+2. This time we need to apply three different reference windows.
Step 5: Extracting eigenvalue asymptotics - first reference window
As in the case of non-integer flux, we first consider λ<0 with
C_1 ≤ε f_0(λ) = ε |λ|^-1≤ C_2
for some C_1,C_2 >0 that we specify later. It follows
ε f_1(λ) ≍ |logε|^-1,
ε f_2(λ) ≍ |εlogε |.
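Both relations are elementary consequences of |λ| ≍ε in this window: from the definitions of f_1 and f_2 above, f_1(λ) = -1/(π|λ|(log|λ|+m)) ≍ (ε|logε|)^-1 and f_2(λ) = -log|λ| ≍ |logε|, hence ε f_1(λ) ≍ |logε|^-1 and ε f_2(λ) ≍ε|logε|.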
Let G_2 denote the orthogonal complement of H_n in H_n+2, i.e. G_2 = H_n+2⊖ H_n. The space G_2 is two-dimensional. Recall that S_ε^-(λ) acts on H_n+2. We decompose the Schur complement S_ε^-(λ) into
S_ε^-(λ) = [ S_ε,11^-(λ) S_ε,12^-(λ); S_ε,21^-(λ) S_ε,22^-(λ) ] = [ S_ε^-(λ)|_H_n→ H_n S_ε^-(λ)|_G_2→ H_n; S_ε^-(λ)|_H_n→ G_2 S_ε^-(λ)|_G_2→ G_2 ].
Using (<ref>) and (<ref>), we find for ε small enough,
S_ε,12^-(λ) = ( K_ε,11^-(λ) - K_ε,12^-(λ) (K_ε,22^-(λ))^-1K_ε,21^-(λ) )|_G_2→ H_n
= (-ε f_1(λ) vΠ_22v - ε f_2(λ) vKv) |_G_2 → H_n + R_ε,12(λ),
S_ε,21^-(λ) = (-ε f_1(λ) vΠ_22v - ε f_2(λ) vKv) |_H_n → G_2 + R_ε,21(λ),
S_ε,22^-(λ) = (1- ε f_1(λ) vΠ_22v - ε f_2(λ) vKv) |_G_2→ G_2 + R_ε,22(λ) ,
where R_ε,ij(λ) are some linear operators with ‖ R_ε,ij(λ)‖≲ε for all λ <0 satisfying (<ref>). Now, (<ref>) and (<ref>) imply
‖ S_ε,12^-(λ) ‖ ≲ |logε|^-1 + |εlogε | + ε≲ |logε|^-1,
‖ S_ε,21^-(λ) ‖ ≲ |logε|^-1 +|εlogε | + ε≲ |logε|^-1,
‖ S_ε,22^-(λ) -1|_G_2→ G_2‖ ≲ |logε|^-1 + |εlogε | + ε≲ |logε|^-1.
From (<ref>) it follows that once ε is small enough, S_ε,22^-(λ) is invertible for all λ<0 in the reference window (<ref>) and then
‖ (S_ε,22^-(λ))^-1 -1|_G_2→ G_2‖ ≲ |logε|^-1.
Applying the SLFG Lemma, we see that S_ε^-(λ) is invertible if and only if the Schur complement
T_ε^-(λ) = S_ε,11^-(λ) - S_ε,12^-(λ) (S_ε,22^-(λ))^-1S_ε,21^-(λ)
is invertible in H_n. Here, the estimate
‖ S_ε,12^-(λ) (S_ε,22^-(λ))^-1S_ε,21^-(λ) ‖≲ (logε)^-2
holds, due to (<ref>), (<ref>) and (<ref>).
As before, let us denote eigenvalues of T_ε^-(λ) by μ_k (T_ε^-(λ)), k=0, ..., n-1, and assume them to be sorted in non-decreasing order. The Schur complement T_ε^-(λ) is not invertible if and only if μ_k (T_ε^-(λ))=0 for some k. As in the case of non-integer flux, it may be argued that each equation μ_k (T_ε^-(λ))=0 has at least one solution λ_k(ε), if the frame of the reference window is chosen appropriately. If {μ_k }_k=0^n-1 denote again the (positive) eigenvalues of (vP_0^- v) |_H_n → H_n, we eventually find
| μ_k(T_ε^-(λ)) - (1 - ε f_0(λ) μ_k) | ≲ |logε |^-1 +|εlogε| + ε + (logε)^-2≲ |logε |^-1
for all λ in the reference window, if ε is small enough. It follows that
|1- ε f_0(λ_k(ε)) μ_k| ≲ |logε |^-1,
which yields
λ_k(ε) = - μ_k ε(1+ O(|logε|^-1) )
as ε↘ 0.
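For the reader's convenience, we spell out the elementary inversion behind the last estimate. Since f_0(λ) = |λ|^-1, the bound |1- ε f_0(λ_k(ε)) μ_k| ≲ |logε |^-1 states that
ε μ_k/|λ_k(ε)| = 1 + O(|logε|^-1) ,
so that |λ_k(ε)| = μ_k ε (1+ O(|logε|^-1)); together with λ_k(ε)<0 this is precisely the expansion displayed above.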
Step 6: Extracting eigenvalue asymptotics - second reference window
Now consider λ<0 with
C_1 ≤ε f_1(λ) ≤ C_2
for some C_1,C_2 >0. It follows
ε f_0(λ) ≍ |logε|,
ε f_2(λ) ≍ |εlogε |.
We decompose the Schur complement S_ε^-(λ) again into
S_ε^-(λ) = [ S_ε,11^-(λ) S_ε,12^-(λ); S_ε,21^-(λ) S_ε,22^-(λ) ] = [ S_ε^-(λ)|_H_n→ H_n S_ε^-(λ)|_G_2→ H_n; S_ε^-(λ)|_H_n→ G_2 S_ε^-(λ)|_G_2→ G_2 ].
with G_2 being the orthogonal complement of H_n in H_n+2. With (<ref>) and (<ref>), we find for ε small enough,
S_ε,11^-(λ) = (1 - ε f_0(λ) vP_0^-v - ε f_2(λ) vKv) |_H_n → H_n + R_ε,11(λ)
S_ε,12^-(λ) = (-ε f_1(λ) vΠ_22v - ε f_2(λ) vKv) |_G_2→ H_n + R_ε,12(λ),
S_ε,21^-(λ) = (-ε f_1(λ) vΠ_22v- ε f_2(λ) vKv) |_H_n → G_2 + R_ε,21(λ),
where R_ε,ij(λ) are some linear operators with ‖ R_ε,ij(λ)‖≲ε for all λ <0 satisfying (<ref>). This time, (<ref>) and (<ref>) imply
‖ (ε f_0(λ))^-1 S_ε,11^-(λ) - vP_0^- v|_H_n → H_n‖ ≲ |logε|^-1 (1 + ε| logε| + ε )≲ |logε|^-1.
‖ S_ε,12^-(λ) ‖ ≲ 1,
‖ S_ε,21^-(λ) ‖ ≲ 1.
Because (vP_0^- v)|_H_n → H_n is positive and thus invertible, it follows from (<ref>) that once ε is small enough, S_ε,11^-(λ) is invertible for all λ<0 in the reference window (<ref>) and then
‖ (S_ε,11^-(λ))^-1‖ ≲ |logε|^-1.
Using the SLFG Lemma, we infer that S_ε^-(λ) is invertible if and only if the Schur complement
W_ε^-(λ) = S_ε,22^-(λ) - S_ε,21^-(λ) (S_ε,11^-(λ))^-1S_ε,12^-(λ)
is invertible in G_2. Here,
‖ S_ε,21^-(λ) (S_ε,11^-(λ))^-1S_ε,12^-(λ) ‖≲ |logε|^-1
due to (<ref>), (<ref>) and (<ref>).
The operator W_ε^-(λ) acts on the two-dimensional space G_2. We can write W_ε^-(λ) as
W_ε^-(λ) = (1 - ε f_1(λ) vΠ_22v - ε f_2(λ) vKv) |_G_2→ G_2 + R_ε,22(λ) - S_ε,21^-(λ) (S_ε,11^-(λ))^-1S_ε,12^-(λ)
where R_ε,22(λ) is some linear operator with ‖ R_ε,22(λ)‖≲ε for all λ <0 satisfying (<ref>). We summarize the last three terms in a linear operator
R_ε,22'(λ) = (-ε f_2(λ) vKv) |_G_2→ G_2 + R_ε,22(λ) - S_ε,21^-(λ) (S_ε,11^-(λ))^-1S_ε,12^-(λ)
which satisfies
‖ R_ε,22'(λ) ‖≲ |εlogε| + ε + |logε|^-1≲ |logε|^-1
because of (<ref>), the estimate ‖ R_ε,22(λ) ‖≲ε and (<ref>). Then,
W_ε^-(λ) = (1 - ε f_1(λ) vΠ_22v ) |_G_2→ G_2 + R_ε,22'(λ).
Let Q be the orthogonal projection onto G_2. The space G_2 is spanned by Q(v φ_1^-) and Q(v φ_2^-). If we set
u_2 = Q(v φ_2^-)/‖ Q(v φ_2^-) ‖, u_1' = Q(v φ_1^-) - ⟨ u_2, Q(v φ_1^-) ⟩ u_2, u_1 = u_1'/‖ u_1' ‖,
then { u_1, u_2 } is an orthonormal basis of G_2. Then,
det W_ε^-(λ) = ⟨ u_1 , W_ε^-(λ) u_1 ⟩·⟨ u_2 , W_ε^-(λ) u_2 ⟩ - ⟨ u_1 , W_ε^-(λ) u_2 ⟩·⟨ u_2 , W_ε^-(λ) u_1 ⟩
and
W_ε^-(λ) is not invertible if and only if det W_ε^-(λ) = 0. Now,
|⟨ u_1 , W_ε^-(λ) u_1 ⟩ - 1 | ≲ |logε|^-1,
|⟨ u_1 , W_ε^-(λ) u_2 ⟩ | ≲ |logε|^-1,
|⟨ u_2 , W_ε^-(λ) u_1 ⟩ | ≲ |logε|^-1,
|⟨ u_2 , W_ε^-(λ) u_2 ⟩ - (1 - ε f_1(λ) ‖ Q(v φ_2^-) ‖_L^2(^2)^2 ) | ≲ |logε|^-1,
because of (<ref>) and (<ref>). This shows ⟨ u_1 , W_ε^-(λ) u_1 ⟩≠ 0 for small enough ε and W_ε^-(λ) is not invertible if and only if
⟨ u_2 , W_ε^-(λ) u_2 ⟩ - ⟨ u_1 , W_ε^-(λ) u_2 ⟩· (⟨ u_1 , W_ε^-(λ) u_1 ⟩)^-1·⟨ u_2 , W_ε^-(λ) u_1 ⟩ = 0.
Let us denote the left hand side by u_ε^-(λ) and let
ν_n = ‖ Q(v φ_2^-) ‖_L^2(^2)^2 .
We now choose C_1, C_2 from the reference window small resp. large enough such that ν_n^-1∈ (C_1 + δ, C_2 -δ) for some small δ. We then find for small enough ε that u_ε^-(λ)>0 for λ such that ε f_1(λ) ≤ν_n^-1-δ, while u_ε^-(λ')<0 for λ' such that ε f_1(λ') ≥ν_n^-1+δ. As earlier, we conclude by the intermediate value theorem that there must exist λ_n(ε) with u_ε^-(λ_n(ε))=0. Finally, one deduces from the estimates (<ref>) to (<ref>) that this λ_n(ε) must satisfy
|1 - ε f_1(λ_n(ε)) ν_n | ≲ |logε|^-1
which implies with Lemma 2.8 in <cit.>
λ_n(ε) = - ν_n/πε/|logε|(1 + O( log|logε|/|logε|) )
as ε↘ 0.
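To make the application of Lemma 2.8 in <cit.> transparent, note that for λ<0 small one has f_1(λ) = 1/(π |λ| (|log|λ|| -m)), so the bound |1- ε f_1(λ_n(ε)) ν_n| ≲ |logε|^-1 amounts to
|λ_n(ε)| (|log|λ_n(ε)|| -m) = ν_n ε/π (1+ O(|logε|^-1)) .
Inserting the ansatz |λ_n(ε)| = ν_n/πε/|logε| (1+δ(ε)) gives |log|λ_n(ε)|| = |logε| (1+ O(log|logε|/|logε|)), and hence δ(ε) = O(log|logε|/|logε|), which is the expansion displayed above.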
Step 7: Extracting eigenvalue asymptotics - third reference window
In the last step, we consider λ<0 with
C_1 ≤ε f_2(λ) ≤ C_2
for some C_1,C_2 >0. It follows
ε f_0(λ) ≍ε e^1/ε,
ε f_1(λ) ≍ε^2 e^1/ε.
Let G_1 denote the orthogonal complement of H_n+1 in H_n+2, i.e. G_1 = H_n+2⊖ H_n+1, which is one-dimensional. We decompose the Schur complement S_ε^-(λ) from (<ref>), that acts on H_n+2, now into
S_ε^-(λ) = [ S_ε,11^-(λ) S_ε,12^-(λ); S_ε,21^-(λ) S_ε,22^-(λ) ] = [ S_ε^-(λ)|_H_n+1→ H_n+1 S_ε^-(λ)|_G_1 → H_n+1; S_ε^-(λ)|_H_n+1→ G_1 S_ε^-(λ)|_G_1 → G_1 ].
As before, we find with (<ref>) and (<ref>) for ε small enough,
S_ε,11^-(λ) = (1 - ε f_0(λ) vP_0^-v - ε f_1(λ) vΠ_22v - ε f_2(λ) vKv) |_H_n+1→ H_n+1 + R_ε,11(λ)
S_ε,12^-(λ) = ( -ε f_2(λ) vKv) |_G_1 → H_n+1 + R_ε,12(λ),
S_ε,21^-(λ) = (-ε f_2(λ) vKv) |_H_n+1→ G_1 + R_ε,21(λ),
where R_ε,ij(λ) are some linear operators with ‖ R_ε,ij(λ)‖≲ε for all λ <0 satisfying (<ref>). Now, (<ref>) and (<ref>) imply
‖ S_ε,12^-(λ) ‖ ≲ 1 + ε≲ 1,
‖ S_ε,21^-(λ) ‖ ≲ 1 + ε≲ 1.
Furthermore, for small enough ε, the operator S_ε,11^-(λ) is invertible for all λ in the reference window (<ref>). This can be seen by applying the SLFG Lemma again to S_ε,11^-(λ). One considers S_ε,11^-(λ) on H_n ⊕ G_1' where G_1'= H_n+1⊖ H_n is the orthogonal complement of H_n in H_n+1. Then, S_ε,11^-(λ)|_H_n→ H_n becomes invertible for small enough ε because (vP_0^- v) |_H_n→ H_n is positive. Upon application of the SLFG Lemma, one then sees that S_ε,11^-(λ) is invertible for small ε if and only if ν_n from (<ref>) satisfies ν_n = ‖ Q(vφ_2^-) ‖^2 ≠ 0. But this is always true, because Q(vφ_2^-)≠ 0 and therefore ‖ Q(vφ_2^-) ‖^2> 0. Thus, S_ε,11^-(λ) is indeed invertible for small enough ε. It is also not hard to find that then
‖ (S_ε,11^-(λ))^-1‖ ≲ε^-2 e^-1/ε .
By the SLFG Lemma, S_ε^-(λ) is invertible if and only if the Schur complement
Z_ε^-(λ) = S_ε,22^-(λ) - S_ε,21^-(λ) (S_ε,11^-(λ))^-1S_ε,12^-(λ)
is invertible in G_1. Here,
‖ S_ε,21^-(λ) (S_ε,11^-(λ))^-1S_ε,12^-(λ) ‖≲ε^-2 e^-1/ε
due to (<ref>), (<ref>) and (<ref>).
The operator Z_ε^-(λ) acts on the one-dimensional space G_1. We can write Z_ε^-(λ) as
Z_ε^-(λ) = (1 - ε f_2(λ) vKv) |_G_1 → G_1 + R_ε,22(λ) - S_ε,21^-(λ) (S_ε,11^-(λ))^-1S_ε,12^-(λ)
where R_ε,22(λ) is some linear operator with ‖ R_ε,22(λ)‖≲ε for all λ <0 satisfying (<ref>). Let Q̃ be the orthogonal projector onto G_1 and let φ̃ = Q̃(vφ_1^-)/ ‖Q̃(vφ_1^-) ‖_L^2(^2), which is a unit vector that spans G_1. Then Z_ε^-(λ) can be written as
Z_ε^-(λ) = z_ε^-(λ) φ̃⟨φ̃, . ⟩
with
z_ε^-(λ) = ⟨φ̃, Z_ε^-(λ) φ̃⟩∈
and one sees that Z_ε^-(λ) is not invertible on G_1 if and only if z_ε^-(λ) = 0. Similar to the zero flux case, one might then argue that z_ε^-(λ) = 0 has at least one solution for small enough ε. We denote this solution by λ_n+1(ε). Finally, because of z_ε^-(λ_n+1(ε)) = 0 and (<ref>), λ_n+1(ε) satisfies
|1 - ε f_2(λ_n+1(ε)) ⟨φ̃ , v K v φ̃⟩ | ≲ε
which implies
λ_n+1(ε) = - exp( - ⟨φ̃ , v K v φ̃⟩^-1ε^-1(1 + O(ε) ) )
as ε↘ 0.
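Again we record the elementary inversion behind the last step: since f_2(λ) = -log|λ| = |log|λ|| for λ<0 small, the bound |1 - ε f_2(λ_n+1(ε)) ⟨φ̃ , v K v φ̃⟩ | ≲ε gives
|log|λ_n+1(ε)|| = ⟨φ̃ , v K v φ̃⟩^-1ε^-1(1 + O(ε) ) ,
and exponentiating (recall λ_n+1(ε)<0) yields exactly the expansion displayed above.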
□
§ RADIAL FIELDS
We explain how the asymptotic expansions of Theorems <ref>, <ref> and <ref> reduce to those found by Frank, Morozov and Vugalter in <cit.> in the special case of radial magnetic field B and radial potential V. As before, we denote v = V^1/2.
First, in the case of α = 0, equation (<ref>) in Theorem <ref> gives exactly the same asymptotic expression as is found in Theorem 1.3 of <cit.>. There is nothing to be done here.
Next, in the case of α > 0, α∈ℝ∖ℤ, Theorem 1.2 of Frank, Morozov and Vugalter states the eigenvalue expansions
λ_k(ε) = - μ_k ε (1 + O(ε )), k=0, ..., n-2,
λ_n-1(ε) = - μ_n-1 ε (1 + O(ε^α' )),
λ_n(ε) = - μ_n ε^1/α' (1+ O(ε^min{1,1/α'-1} ))
as ε↘ 0, where
μ_k = ∫_^2 V |ψ_k^-|^2 x/∫_^2 |ψ_k^-|^2 x , k=0, ..., n-1,
μ_n = ( 4^α'-1Γ(α') /πΓ(1-α') ∫_^2 V |ψ_n^-|^2 x )^1/α'.
There are two notable differences from the expressions in Theorem <ref> of this paper. First, the coefficients {μ_k }_k=0^n are given explicitly in terms of the (generalized) Aharonov-Casher states {ψ_k^- }_k=0^n and second, the second order error terms of the eigenvalues λ_k(ε) with linear first order term appear to be smaller in the expansions by Frank, Morozov and Vugalter. Let us explain how the above asymptotic expressions can be derived from Theorem <ref> and its proof under the assumption of radial fields.
We first note, by writing the corresponding integrals in polar coordinates, that the zero eigenstates {ψ_k^-}_k=0^n-1 are L^2-orthogonal to each other when the magnetic field is radial. The projector P_0^- onto the zero eigenspace of P_-(A) can then be represented as
P_0^- = ∑_k=0^n-11/‖ψ_k^- ‖_L^2(^2)^2ψ_k^- ⟨ψ_k^-, . ⟩,
which implies
vP_0^-v = ∑_k=0^n-11/‖ψ_k^- ‖_L^2(^2)^2 vψ_k^- ⟨ vψ_k^-, . ⟩.
Theorem <ref> asserts that the coefficients {μ_k }_k=0^n-1 to the asymptotically linear eigenvalues {λ_k(ε) }_k=0^n-1 are given by the non-zero eigenvalues of vP_0^- v. Assuming now that V is radial, one finds similarly by writing the corresponding integrals in polar coordinates that the states {v ψ_k^- }_k=0^n are also orthogonal in L^2(^2). Given (<ref>), we conclude that the non-zero eigenvalues of vP_0^- v are
μ_k = ‖ v ψ_k^- ‖_L^2(^2)^2/‖ψ_k^- ‖_L^2(^2)^2=∫_^2 V |ψ_k^-|^2 x/∫_^2 |ψ_k^-|^2 x , k=0, ..., n-1,
which is exactly the expression in (<ref>).
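For the reader's convenience, we sketch why polar coordinates yield the orthogonality used above. Assume the standard Aharonov-Casher representation of the zero modes, ψ_k^-(r,θ) = r^k e^i kθ e^-h(r) with a radial function h determined by B (up to the sign conventions of <cit.>; only the fact that the angular dependence is a pure phase enters). For j ≠ k and radial V one then has
⟨ vψ_j^-, vψ_k^-⟩ = ∫_0^∞ V(r) r^j+k+1 e^-2h(r) dr ∫_0^2π e^i(k-j)θ dθ = 0 ,
and the same computation with V replaced by 1 gives the L^2-orthogonality of the zero eigenstates themselves.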
Let us now discuss the coefficient μ_n. It is pointed out in Corollary 5.10 of <cit.> that in the case of radial magnetic fields, the states φ^- and ψ^- mentioned in Theorem <ref> and thus Theorem <ref> are given by φ^-= ψ_n^- and ψ^- = d_n^- ψ_n-1^- where d_n^- is some complex number defined in <cit.>. This means that ran(v P_0^- v)=span{ vψ_k^- }_k=0^n-1 is orthogonal to vψ_n^- = v φ^- and the projector Q from Theorem <ref> acts as an identity on v φ^-. Therefore, by Theorem <ref>,
μ_n = ( 4^α'-1Γ(α') /πΓ(1-α') ‖ Q (v φ^-) ‖_L^2(^2)^2)^1/α' = ( 4^α'-1Γ(α') /πΓ(1-α') ∫_^2 V |ψ_n^-|^2 x )^1/α' .
This shows that Theorem <ref> indeed yields (<ref>) and (<ref>) for the coefficients {μ_k}_k=0^n in case of radial B and V.
The pairwise orthogonality of the (generalized) Aharonov-Casher states is also the reason why the second order error terms in (<ref>) and (<ref>) are improved. To see this, recall from (<ref>) the asymptotic expansion
v(P_-(A)-λ)^-1v = f_0(λ) vP_0^-v +f_1(λ) vψ^- ⟨ vψ^-, . ⟩ + f_2(λ) vφ^- ⟨ vφ^-, . ⟩ + O(1),
as λ→ 0, which was the basis of our calculations in the case of non-integer flux. The reason why the second order error term of {λ_k(ε)}_k=0^n-1 is of order ε^min{α',1-α'} in Theorem <ref> is that under the first reference window the terms f_1(λ) vψ^- ⟨ vψ^-, . ⟩ and f_2(λ) vφ^- ⟨ vφ^-, . ⟩ above were both treated as general perturbations to f_0(λ) vP_0^-v on H_n = span{ vψ_k^- }_k=0^n-1. But with φ^-= ψ_n^- and ψ^- = d_n^- ψ_n-1^-, above expansion becomes
v(P_-(A)-λ)^-1v = f_0(λ) vP_0^-v +f_1(λ)|d_n|^2 vψ_n-1^- ⟨ vψ_n-1^-, . ⟩ + f_2(λ) vψ_n^- ⟨ vψ_n^-, . ⟩ + O(1),
as λ→ 0. The orthogonality of the states {v ψ_k^- }_k=0^n implies that f_2(λ) vψ_n^- ⟨ vψ_n^-, . ⟩ does not perturb f_0(λ) vP_0^-v on H_n and f_1(λ)|d_n|^2 vψ_n-1^- ⟨ vψ_n-1^-, . ⟩ only perturbs f_0(λ) vP_0^-v along the one-dimensional subspace span{ vψ_n-1^- }⊂ H_n. Following the rest of the proof and estimating terms more carefully reveals that the second order error terms of λ_k(ε) can indeed be improved to order ε for k=0,..., n-2 and order ε^α' for k=n-1 in this case.
Finally, let us discuss the case of α > 0, α∈ℤ. Theorem 1.3 of Frank, Morozov and Vugalter gives in this case the eigenvalue expansions
λ_k(ε) = - μ_k ε(1+ O(ε |logε| ) ), k=0, ...,n-1,
λ_n(ε) = - μ_n ε/|logε|(1 + O( log|logε|/|logε|) )
λ_n+1(ε) = - exp( - μ_n+1^-1ε^-1(1 + O(ε) ) ),
as ε↘ 0, where
μ_k = ∫_^2 V |ψ_k^-|^2 x/∫_^2 |ψ_k^-|^2 x , k=0, ..., n-1,
μ_n = 1 /π∫_^2 V |ψ_n^-|^2 x,
μ_n+1 = 1 /4π∫_^2 V |ψ_n+1^-|^2 x.
Again the expressions for the coefficients {μ_k}_k=0^n+1 look different from those in Theorem <ref>, and also the second order error term of the eigenvalues {λ_k(ε)}_k=0^n-1 with linear first order term appears to differ.
The expression (<ref>) is explained as before by pairwise orthogonality of the zero eigenstates {ψ_k^- }_k=0^n-1 and pairwise orthogonality of the states {v ψ_k^- }_k=0^n-1. Let us explain the remaining coefficients μ_n and μ_n+1. We find in Corollary 6.7 of <cit.> that in the case of radial magnetic fields, the states φ_1^-, φ_2^- and ψ^- mentioned in Theorem <ref> are given by φ_1^-=ψ_n+1^-, φ_2^-=ψ_n^- and ψ^- = d_n^- ψ_n-1^-. It then follows by pairwise orthogonality of the states {v ψ_k^- }_k=0^n+1 that the projections Q resp. Q̃ of Theorem <ref> act as identities on vφ_2^- resp. vφ_1^-. By (<ref>), we find that indeed
μ_n = 1/π‖ Q(v φ_2^-)‖^2_L^2(^2) = 1/π‖ v φ_2^-‖^2_L^2(^2) = 1 /π∫_^2 V |ψ_n^-|^2 x.
For the coefficient μ_n+1, we recall that
μ_n+1 = ⟨ϕ_1 , v K v ϕ_1 ⟩/‖ϕ_1 ‖_L^2(^2)^2,
where ϕ_1 = Q̃ (v φ_1^-) = v φ_1^- and the operator K is defined in (<ref>). Since κ = 0 in case of radial B, see again Kovařík <cit.>, the operator K simplifies to
K = 1/4πΠ_11 + π |d_n^-|^2/4 ψ^- ⟨ψ^- , . ⟩
and hence vKv becomes
vKv = 1/4π vφ_1^- ⟨ vφ_1^- , . ⟩ + π |d_n^-|^2/4 vψ^- ⟨ vψ^- , . ⟩ .
Finally, because ϕ_1= vφ_1^- = vψ_n+1^- is orthogonal to vψ^- = vψ_n-1^-, we conclude that
μ_n+1 = ⟨ϕ_1 , v K v ϕ_1 ⟩/‖ϕ_1 ‖_L^2(^2)^2 = 1/4π‖ vψ_n+1^- ‖_L^2(^2)^4/‖ vψ_n+1^- ‖_L^2(^2)^2 = 1 /4π∫_^2 V |ψ_n+1^-|^2 x.
This shows that all coefficients {μ_k}_k=0^n+1 in Theorem <ref> indeed simplify to those of (<ref>), (<ref>) and (<ref>) in the case of radial B and V.
Similarly to the non-integer flux case, the improved second order error term can be explained by discussing the integer-flux resolvent expansion (<ref>) which states that
v(P_-(A)-λ)^-1v= f_0(λ) vP_0^-v + f_1(λ)vΠ_22v +f_2(λ) vKv + O(1),
as λ→ 0. In the proof of Theorem <ref> we arrived at a second order error term of the eigenvalues {λ_k(ε)}_k=0^n-1 of order |logε|^-1. The reason why a term of this order appeared was that under the first reference window f_1(λ)vΠ_22v was treated as a general perturbation to f_0(λ) vP_0^-v on H_n. Now, when B and V are radial, pairwise orthogonality of the states { v ψ_k^-}_k=0^n+1 implies that f_1(λ)vΠ_22v vanishes on H_n and hence does not perturb f_0(λ) vP_0^-v on H_n. The next largest perturbation to f_0(λ) vP_0^-v on H_n that remains comes from the term f_2(λ) vKv (to be specific, the summand of K where ψ^-=d_n^- ψ_n-1^- appears) which only perturbs f_0(λ) vP_0^-v along the one-dimensional subspace span{ vψ_n-1^- }⊂ H_n. Carefully finishing the proof with this additional information shows that the eigenvalue expansions of {λ_k(ε)}_k=0^n-1 can be improved to
λ_k(ε) = - μ_k ε(1+ O(ε ) ), k=0, ...,n-2,
λ_n-1(ε) = - μ_n-1 ε(1+ O(ε |logε| ) ),
as ε↘ 0. We see that the order of the error term of the (n-1)-th eigenvalue coincides with that given by (<ref>). For the eigenvalues λ_0(ε), ..., λ_n-2(ε), we gain a slightly improved error term.
§ ACKNOWLEDGEMENTS
The author is immensely grateful to Hynek Kovařík for the introduction to and helpful discussions on this problem. He also gratefully acknowledges the hospitality at Università degli studi di Brescia during his research stay in February-March 2024. The author expresses his gratitude toward Timo Weidl for suggestions on improvements and many helpful comments on exposition. He is also grateful to Tobias Ehring for remarks on a first draft.
| null | null | null | null | null | null |
http://arxiv.org/abs/2409.17645v1 | 20240926085016 | High- and low-energy many-body effects of graphene in a unified approach | [
"Alberto Guandalini",
"Giovanni Caldarelli",
"Francesco Macheda",
"Francesco Mauri"
] | physics.atm-clus | [
"physics.atm-clus"
] |
[email protected]
Dipartimento di Fisica, Università di Roma La Sapienza, Piazzale Aldo Moro 5, I-00185 Roma, Italy
Dipartimento di Fisica, Università di Roma La Sapienza, Piazzale Aldo Moro 5, I-00185 Roma, Italy
Dipartimento di Fisica, Università di Roma La Sapienza, Piazzale Aldo Moro 5, I-00185 Roma, Italy
Dipartimento di Fisica, Università di Roma La Sapienza, Piazzale Aldo Moro 5, I-00185 Roma, Italy
§ ABSTRACT
We show that the many-body features of graphene band structure and electronic response can be accurately evaluated by applying many-body perturbation theory to a tight-binding (TB) model.
In particular, we compare TB results for the optical conductivity with previous ab-initio calculations, showing a nearly perfect agreement both in the low energy region near the Dirac cone (∼ 100 meV), and at the higher energies of the π plasmon (∼ 5 eV).
A reasonable agreement is reached also for the density-density response at the Brillouin zone corner.
With the help of the reduced computational cost of the TB model, we study the effect of self-consistency on the screened interaction (W) and on the quasi-particle corrections, a task that is not yet achievable in ab-initio frameworks.
We find that self-consistency is important to reproduce the experimental results on the divergence of the Fermi velocity, while it marginally affects the optical conductivity.
Finally, we study the robustness of our results against doping or the introduction of a uniform dielectric environment.
High- and low-energy many-body effects of graphene in a unified approach
Francesco Mauri
Received XXX; accepted XXX
§ INTRODUCTION
Since its discovery in 2004 <cit.>, the excitations of graphene have been thoroughly studied in the literature, owing to its remarkable transport, plasmonic and opto-electronic applications <cit.>.
Initially, theoretical studies within the framework of the random-phase approximation (RPA) have been used to describe some electronic graphene properties, such as the Dirac <cit.> and π plasmon dispersions <cit.>.
Nevertheless, the inclusion of the Coulomb interaction between electrons, beyond the RPA, is relevant to describe several spectroscopic features, such as the logarithmic divergence of the Fermi velocity near the neutrality point <cit.> and the shape and position of the π plasmon peak both in optics <cit.> and low momentum-transfer electron-energy loss (EEL) spectroscopy <cit.>.
Further, as suggested in previous studies, excitonic effects may be relevant in determining the optical phonon dispersion near the K point.
In particular, theoretical arguments based on the renormalization group approach<cit.> and experimental Raman spectra <cit.> suggest a steepening of the Khon-anomaly spectrum guided by an enhancement of the electron-phonon interactions.
The same observations seem to be consistent with Raman spectra of bilayer graphene <cit.>.
These effects cannot be reproduced by a single particle framework, e.g. a TB model parametrized with experimental data or density-functional theory (DFT) in a local or semi-local approximation.
Thus, we refer to them as many-body effects.
Many-body effects in graphene have been studied with ab-initio methods in the framework of many-body perturbation theory.
Within the G^0W^0 <cit.> plus Bethe-Salpeter <cit.> (BSE) formalism, the high-energy π plasmon peak at ∼ 5 eV have been successfully reproduced. <cit.>
G^0W^0 studies <cit.> have also been able to reproduce the low-energy (∼ 100 meV) logarithmic behaviour of the Fermi velocity, although the renormalization is underestimated with respect to experimental data <cit.>.
This discrepancy may be attributed to the need to include more classes of Feynman diagrams, e.g. by calculating the screened interaction and the quasi-particle properties self-consistently. Moreover, ab-initio frameworks require a computational cost that is too high to converge graphene calculations at such a low energy scale, both in the freestanding and in the low-doping cases, probably affecting the robustness of the results.
Dirac cone models <cit.> with an Hartree-Fock self-energy <cit.> or with renormalization group approaches <cit.> have shown to be able to reproduce the experimental Fermi velocity increase near the neutrality point.
However, the parameter choice is not unique between different studies, and there is no possibility to describe with the same framework higher-energies π-plasmon features.
A TB model plus many-body perturbation theory approach, such as the one studied in Ref. Stauber_17, successfully describes the Fermi velocity renormalization, and it also contains the physics needed to describe many-body effects above the range of validity of the Dirac cone model.
However, in Ref. Stauber_17 the effect of doping on the velocity renormalization has not been studied, and graphene is treated as a 2D sheet without thickness through the definition of the Coulomb potential.
In addition, up to now a BSE approach has never been used on top of a graphene TB model to describe both low- and high-energy many-body effects in a unified framework.
In this work, we develop a computational scheme based on a TB model plus many-body perturbation theory corrections that is able to describe both low- and high- energy many-body effects in graphene with high accuracy, thus avoiding the need to resort to costly ab-initio methods.
The quasi-particle corrections are calculated with the static screened exchange (SX) approximation, while the electron-hole interaction is included by solving the BSE with the same static screened interaction used for the quasi-particle properties.
We focus on the effects of self-consistency between the quasi-particle corrections and the screened interaction, which we find to be important to compare with experiments.
Finally, we discuss the effect of both doping and of a static dielectric environment on both the band structure and the electronic response.
The paper is organized as follows: in Sec. <ref> we summarize the main equations of many-body perturbation theory concerning the quasi-particle properties (Sec. <ref>) and the evaluation of density response functions (Sec. <ref>).
In Sec. <ref>, we focus on two-dimensional (2D) materials and describe how we approximate the electronic orbitals in the non-periodic direction.
The application of many-body perturbation theory to the graphene TB model is described in Sec. <ref> for what concerns quasi-particle corrections (Sec. <ref>), screened interaction (Sec. <ref>), the BSE (Sec. <ref>) and how to evaluate doping and environmental effects (Sec. <ref>).
Numerical details of the calculations are provided in Sec. <ref>.
In Sec. <ref>, we show the results. We firstly study the band structure, Fermi velocity renormalization and optical conductivity and compare with experiments in Sec. <ref>.
Next, in Sec. <ref> we analyze doping and environmental effects.
The conclusions are drawn in Sec. <ref>.
Details about the TB model are included in Appx. <ref>.
Next, in Appx. <ref> we compare the TB results with ab-initio results from the literature.
Finally, in Appx. <ref> we provide additional data about doping and environment effects in the group velocity renormalization and dielectric response functions.
§ SUMMARY OF MANY-BODY PERTURBATION THEORY
In this section, we review the main concepts of many body perturbation theory. In particular, we summarize the key ingredients for the calculation of quasi-particle and dielectric properties in Sec. <ref> and <ref> respectively <cit.>.
The set of Feynman diagrams included in our calculations is sketched in Fig. <ref>.
§.§ Quasi-particle excitations
We consider a periodic crystal with N electrons per unit cell interacting via the Coulomb interaction. In the following, we discretize reciprocal space momenta by considering a Born-Von Karman supercell made of N_k unit cells. The Hamiltonian of this system may be written as
Ĥ = Ĥ^0+Ĥ^1 ,
where Ĥ^0 = T̂+V̂ is a single-particle Hamiltonian, T̂ being the kinetic energy and V̂ the potential one, and Ĥ^1 is the Coulomb repulsion between electrons.
V̂ includes the interaction between electrons and nuclei, and usually includes a mean field self-consistent potential, e.g. the Hartree or Kohn-Sham one, so to improve the starting point before applying perturbation theory.
If this is the case, the mean field potential must be subtracted from Ĥ^1 so to avoid double counting.
In this work, Ĥ^̂0̂ is described by a TB Hamiltonian which parameters are determined via ab-initio calculations, and therefore implicitly contain the effect of a self-consistent mean field potential.
The double counting is avoided by considering only the irreducible vertex in the expression for the electronic self-energy, as described later.
The single particle problem for the periodic system is solved by determining the eigenvalues ^0_m (band energies) and eigenvectors ^0_m (Bloch functions) of Ĥ^0 as
Ĥ^0^0_m = ^0_m^0_m .
From Eq. (<ref>), we define the non-interacting Green's function as
G^0(,̊'̊,) =
lim_η→ 0^+∑_m^0_m()̊^0*_m('̊)
×[f^0_m/-^0_m-iη
+(1-f^0_m)/-^0_m+iη] ,
where f^0_m are the Fermi-Dirac occupation numbers corresponding to ^0_m.
In the framework of many body perturbation theory, it can be shown that interaction effects are effectively included in a similar fashion as Eq. (<ref>) by including a dynamical self-energy operator Σ
[Ĥ^0 + Σ(_m)]_m = _m_m .
_m and _m are the quasi-particle energies and orbitals that enter in the determination of the interacting single-particle Green's function
G(,̊'̊,) = lim_η→ 0^+∑_m_m()̊^*_m('̊)
×[f_m/-_m-iη
+(1-f_m)/-_m+iη] ,
where f_m are the quasi-particle occupation numbers corresponding to _m.
Eq. (<ref>) may be reformulated in terms of a Dyson equation that relate G and G^0 via Σ as
G() = G^0() +G^0()Σ() G() ,
where space coordinates and integrals have been omitted to ease of notation.
In practice, Eq. (<ref>) is often simplified via the quasi-particle orbital approximation, which consists in taking ≈^0.
With this approximation and assuming non-degeneracy for simplicity, quasi-particle energies may be written as
_n = ^0_n+ nΣ(_n)n,
where we used the Dirac notation to identify the Bloch functions. By Taylor-expanding the self-energy Σ around the bare energy , the above equation may also be rewritten as
_n = ^0_n+ Z_nnΣ(^0_n)n,
where
Z_n𝐤 = [. 1-⟨ n k|∂Σ(ω)/∂ω| n k⟩|_ ω=_n k]^-1
is the renormalization factor.
§.§.§ Approximations for the self-energy
The exact expression of the self-energy as a function of the screened interaction W is given by Hedin's equations.
A detailed explanation can be found in Ref. Reining_16.
In this paragraph we recall the expressions for the self-energy that are relevant for this work.
Within the GW approximation<cit.>, the self-energy is given by first-order perturbation theory with respect to the screened interaction W as
Σ^(,̊'̊,t-t')
= iG(,̊'̊,t-t')W(,̊'̊,t-t'+0^+) .
In the screened-exchange (SX) approximation, the dynamical screened interaction W in Eq. (<ref>) is approximated by its static/instantaneous counterpart, thus
Σ^(,̊'̊,t-t')
= iG(,̊'̊,t-t')W(,̊'̊)δ(t-t') ,
where W(,̊'̊)=W(,̊'̊, = 0) is the static screened interaction.
The Fourier transform of Σ^ over time Σ^(,̊'̊,) = Σ^(,̊'̊) is frequency independent and reads
Σ^(,̊'̊)
= -n(,̊'̊)W(,̊'̊) ,
n(,̊'̊) = ∑_m f_m_m()̊^*_m('̊) ,
where n(,̊'̊) is the quasi-particle density matrix.
The quasi-particle Dyson equation in Eq. (<ref>) with a SX self-energy is sketched in terms of Feynman diagrams in Fig. <ref> (a).
Finally, in the G^0W^0 approximation, both the one-particle Green function G and screened interaction W are approximated by their bare counterparts
Σ^(,̊'̊,t-t')
= iG^0(,̊'̊,t-t')W^0(,̊'̊,t-t'+0^+) ,
where W^0 is the screened interaction of the bare system.
The static approximation of the screened interaction may be applied also to the G^0W^0 self-energy, obtaining the non self-consistent screened-exchange (SX^0) approximation
Σ^(,̊'̊) = -n^0(,̊'̊)W^0(,̊'̊) .
§.§ Density response functions and dielectric properties
The electron-hole (e-h) propagator L is the main quantity of interest in the calculation of density-density response and dielectric functions with many-body perturbation theory <cit.>. L is determined via the BSE, which expresses the linear response of the single-particle Green's function G to an external non-local source<cit.> and reads
L(,) = L^0(,) + L^0(,)K()L(,) .
is the center-of-mass momentum of the e-h pair,
K is a kernel including the effect of interaction and L^0 is the bare e-h propagator, describing the non-interacting evolution of two quasi-particles, with the following real space representation:
L^0(_̊1,_̊2,_̊3,_̊4,) = 2i∑_nm∑_̨̨' (f_n-f_m'̨)
×lim_η→ 0^+_n(_̊1)_m'̨(_̊2)^*_m'̨(_̊3)^*_n(_̊4)/-(_n-_m'̨)+iη .
If the single-particle Green's function G is obtained from a SX self-energy, the kernel is composed by two terms
K = K^ + K^,
where K^ is the response due to the Hartree potential and K^ contains the attractive e-h interaction.
Their matrix elements are conveniently expressed in the Hilbert space of e-h single particle transitions {|t⟩} = { |n→̨m+̨⟩}, where n, m are band indexes and $̨ is a wave-vector of the Brillouin zone (BZ).
In particular we consider⟨t_1| = ⟨n_1_̨1 →m_1_̨1+|and|t_2⟩ = |n_2_̨2 →m_2_̨2+⟩, wherecan assume values outside the first Brillouin zone, and obtain the matrix elements
t_1K^()t_2 =
2∫ d∫̊d'̊^*_m_1_̨1+()̊^*_n_2_̨2('̊)
×
v(-̊'̊)
_n_1_̨1()̊_m_2_̨2+('̊),
t_1K^()t_2 =
-∫ d∫̊d'̊^*_m_1_̨1+()̊^*_n_2_̨2('̊)
×
W(,̊'̊)
_n_1_̨1('̊)
_m_2_̨2+()̊ .vis the Coulomb interaction
v(-̊'̊) = 1/ε_r1/|-̊'̊|,
whereε_ris an effective dielectric constant taking into account the possible dielectric screening of an embedding isotropic medium. Notice that⟨t_1|and|t_2⟩contain electron-hole transitions mediated by the same wave-vector, due to momentum conservation as made explicit in the following by Eqs. (<ref>) and (<ref>).
The BSE given by Eq. (<ref>) along with the kernel in Eq. (<ref>) are represented in terms of Feynman diagrams in Fig. <ref> (c) and (d).
In the SX approximation, the BSE may be recast as an eigenvalue problem of the two-body Hamiltonian
t_1Ĥ^t_2 =
E_t_1() δ_t_1t_2 +
f_t_1t_1K()t_2 ,
whereE_t_1() = (_m+̨-_n)are the e-h excitation energies andf_t_1 = f_n-f_m+̨the occupation factors.
We note in Eq. (<ref>) we are including both resonant and anti-resonant transitions, thus we are not employing the Tamm–Dancoff approximation <cit.>.
The eigenvalues and eigenvectors of the e-h Hamiltonian in Eq. (<ref>) are excitonic energies and wavefunctions<cit.>.
The diagonal component of the density-density response function may be related to the inverse of the e-h Hamiltonian as
χ(,)
= ∑_t_1t_2ρ_t_1^*()[t_11-Ĥ^()t_2]^-1ρ_t_2(),
ρ_t() = ne^-i· rm+̨.ρ_t()are the oscillator strengths; if one is interested in the non-diagonal components of Eq. (<ref>), then only the oscillator strengths have to be changed in order to contain the dependence on local fields.
The inverse dielectric function is expressed, in terms ofχ, as
(,) = 1 + v()χ(,) ,
wherev()is the Fourier transform of the Coulomb interaction of Eq. (<ref>).
For the remainder of this section, we suppose that the momentum transfer is small, i.e. close to Γ.
As mentioned in the introduction, in this work we are interested in the optical absorption, i.e. the current response computed under the condition of null macroscopic electric field.
The Coulomb kernel without the macroscopic component,vis obtained by settingv_() = 0, wherev_()is the Fourier transform of the Coulomb potential. The corresponding kernel, given by Eq. (<ref>) by substitutingvwithv̅, is labelled asK^.
The correspondent density-density response function is
χ(,)
=∑_t_1t_2ρ_t_1^*()[t_11-Ĥ̅̂^()t_2]^-1ρ_t_2() .
In the above equation,Ĥ̅̂^is the same of Eq. (<ref>) but withK^replaced byK^.
The response functionχis related to the macroscopic dielectric matrix via
(,) =
1 - v()χ(,) .
The formulation presented in this section is valid for three-dimensional crystals. For systems that are periodic in two directions but finite sized the third, e.g. graphene, more care as to be taken, as shown in the next section.
§ DIELECTRIC PROPERTIES OF A 2D CRYSTAL WITH FINITE THICKNESS
Graphene is a low-dimensional system.
In recent years, several studies have shown the importance of taking into account the 2D nature of monolayer materials in the description of electronic screening <cit.>.
In ab-initio calculations based on a plane-wave representation of the orbitals, one possibility is to cutoff the Coulomb interaction so to remove spurious interaction between replicas <cit.>.
In our model, we instead take into account the nanosized nature of graphene by hypothesizing an analytic form of the orbitals in the non-periodic directions, as discussed in Ref. PhysRevB.107.094308 and summarized in the following.
We consider a 2D crystal, thus a system periodic in two dimensions(x,y)and nanosized in thezdirection with a typical thicknessd.
In the long wavelength limit (||d ≪1) and within the RPA, the screening properties of the layer are the same of a dielectric sheet located atz=0<cit.>.
Outside this limit, finite thickness effects may become relevant, and they may depend on the nature of the electronic orbitals that determine the out-of-plane thickness of the material.
In this work, we are mainly interested in long-rage effects due to the Coulomb interaction.
For this reason, the accurate description of thezdependence of the electronic orbitals is not critical, and we therefore approximate it by a rectangular profile <cit.>.
Thus, we write the electronic orbitals as
_i()̊≈_i(_̊∥)Θ̃(z)/√(d) ,
where_̊∥= (x,y)andΘ̃(z)≡Θ(|z|-d/2)is the Heaviside theta.
We emphasize that, since the Bloch theorem is valid only in the in-plane coordinates, the crystallographic momentum$̨ is oriented along the 2D periodic directions.
Thanks to Eq. (<ref>), z integrals in the electronic self energy Σ and kernels K may be performed analytically to find effective equations of 2D sheets with a modified Coulomb interaction.
In fact, by substituting Eq. (<ref>) into Eq. (<ref>), the matrix elements of Σ^ are
t_1Σ^t_2
= ∑_m∑_' f_m'̨t_1e^i(+)·_̊∥_m
× W^_'()
_me^-i(+')·_̊∥t_2
where
W^_'() =
1/d^2∫_-d/2^d/2 dz
∫_-d/2^d/2 dz'
W_'(,z,z').
Eq. <ref> is the Fourier transform of the screened interaction along the periodic directions and averaged along the non-periodic direction, where 𝐪 is reduced to the first Brillouin zone and {} are 2D reciprocal lattice vectors.
In the same way, by substituting Eq. (<ref>) into Eqs. (<ref>)-(<ref>), and Fourier transforming v and W along the periodic directions, we find
t_1K^()t_2 =
2∑_m_1_̨1+e^i(+)·_̊∥n_1_̨1
v^_()
n_2_̨2e^-i(+)·'̊_∥m_2_̨2+ ,
t_1K^()t_2 =
∑_'m_1_̨1+e^i(_̨1-_̨2+)·_̊∥m_2_̨2+
W_'^(_̨1-_̨2)
n_2_̨2e^-i(_̨1-_̨2+')·'̊_∥n_1_̨1 ,
where
v^_() =
1/d^2∫_-d/2^d/2 dz
∫_-d/2^d/2 dz' v_(,z,z') =
2π/ε_r|+|F(|+|d)
is an effective 2D Coulomb interaction that includes finite (orbital-independent) thickness effects through the form factor
F(x) = 1/d^2∫_-d/2^d/2 dz ∫_-d/2^d/2 dz' e^-|x/d||z-z'|
= 2/x(1+e^-x-1/x) .
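As a practical illustration (not part of the production code), the following minimal Python sketch evaluates the form factor F and the effective 2D Coulomb interaction of Eq. (<ref>); the thickness d used below is an arbitrary illustrative value, and atomic units are assumed.

import numpy as np

def form_factor(x):
    # F(x) = 2/x * (1 + (exp(-x) - 1)/x); it tends to 1 for x -> 0 (strict-2D limit)
    x = np.atleast_1d(np.asarray(x, dtype=float))
    small = x < 1e-6
    xs = np.where(small, 1.0, x)                     # guard against division by zero
    full = 2.0 / xs * (1.0 + np.expm1(-xs) / xs)
    return np.where(small, 1.0 - x / 3.0, full)      # small-x series: 1 - x/3 + ...

def v2d(q, d, eps_r=1.0):
    # Effective 2D Coulomb interaction 2*pi/(eps_r*q) * F(q*d), Eq. (<ref>); q > 0
    return 2.0 * np.pi / (eps_r * q) * form_factor(q * d)

q = np.array([1e-3, 1e-2, 1e-1, 1.0])                # in-plane momenta (bohr^-1)
print(form_factor(q * 6.0), v2d(q, d=6.0))           # d = 6 bohr, illustrative only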
The Fourier transform of the density-density response function, assuming the z profile of Eq. (<ref>), shows the same rectangular z profile as the electronic orbitals
χ(,z,z',) = χ^(,)Θ̃(z)Θ̃(z')/d^2 .
This is inherited by the out-of-plane form of χ^0, which is directly expressed as a function of orbitals, via the Dyson equation. The 2D effective density-density response functions χ^ and χ^ are evaluated from Eqs. (<ref>) and (<ref>) with the kernels defined in Eqs. (<ref>)-(<ref>).
The 3D Fourier transforms of the inverse and macroscopic dielectric functions are not defined for a 2D crystal, as described in more detail in Ref. Nazarov_15.
Still, we can define effective 2D dielectric functions averaged along the z direction as
ε^-1,(,) = 1/d^2∫_-d/2^d/2 dz
∫_-d/2^d/2 dz'
(, z, z', )
=1+v^()χ^(,),
^(,) = 1/d^2∫_d/2^d/2 dz
∫_d/2^d/2 dz'
_M(, z, z', )
= 1-v^()χ^(,)
Finally, the long wavelength limit of the 2D effective conductivity is related to the 2D dielectric function via the following relation <cit.>:
^(→ 0,ω)-1 = lim_|| → 0 2π iσ^(ω)Q/ω .
By combining Eq. (<ref>) with Eq. (<ref>), we find the 2D optical conductivity as
σ^(ω) = lim_|| → 0 iωχ^(,ω)/||^2 .
We note that χ^ vanishes quadratically as || → 0, thus the optical conductivity remains finite.
§ MANY-BODY PERTURBATION THEORY APPLIED TO GRAPHENE TB MODEL
In this section, we describe how we evaluate the quasi-particle corrections and response functions of graphene starting from the five nearest neighbour TB model for graphene described in Appx. <ref>.
We in particular show how we model the screened interaction W for graphene.
§.§ Screened-exchange self-energy
Our aim is to evaluate the SX self-energy corrections within the TB model for graphene.
In this case, the quasi-particle Hamiltonian, given by Eq. (<ref>), may be written in the basis of the Block functions as
H^_ = H^0_+ (
0 Σ_^
Σ_^* 0
) ,
H^0_ =
(
ℊ_ 𝒻_
𝒻^*_ ℊ_
) .
H^0_ is the TB Hamiltonian, obtained as described in App. <ref>; its matrix elements are defined in Eq. (<ref>) and (<ref>).
For freestanding graphene (i.e. at zero doping), the diagonal terms of the self-energy are $̨- and atom-independent, thus they can be neglected.
For simplicity, we neglect the diagonal terms of the self-energy even in the doped case, as for low doping these terms are not expected to change the physical conclusions.
The eigenvalues of the Hamiltonian in Eq. (<ref>) give the quasi-particle energies
_π^*/π = ℊ_±|𝒻_+Σ_^| .
Self-energy corrections do not change the phase of the spin components ϕ in a first nearest neighbour tight-binding model, as demonstrated in the supporting information of Ref. Stauber_17.
We numerically verify that also in our five-nearest-neighbour tight-binding model the phase ϕ [defined in Eq. (<ref>)] does not appreciably change.
Thus, we can safely use the quasi-particle orbital approximation_n ≈^0_nalready described in Sec. <ref>.
We now consider Eq. (<ref>), and disregard non-diagonal local field effects. Then,Σ_^assumes the form
Σ^_ = -1/N_kA∑_'̨'Δ f_'̨
W(-̨'̨-') ϕ^*_'̨+' ,
whereΔf_'̨ = f_π^*'̨-f_π'̨.
We note that W depends only on a single reciprocal lattice vector, as we neglect some local field effects in the screening.
This approximation is described in Sec. <ref>.
The screened Coulomb of Eq. (<ref>) implicitly depends on the set of energies{_π^*/π }determined via Eq. <ref>. This means that a self-consistent procedure is needed in order to evaluateW, in the SX approximation, which we implement in our framework. Beside this, we also study the case where the screened interaction is evaluated once and for all, in a non self-consistent way, using the TB energies; we refer to this screened interaction asW^0, and to the approximation as SX^0. In this approximation, Eqs. (<ref>) and (<ref>) are still valid, but with a self-energy containingW^0as
Σ^_ = -1/N_kA∑_'̨'Δ f^0_'̨
W^0(-̨'̨-') ϕ^*_'̨+' ,
whereΔf^0_'̨ = f^0_π^*'̨-f^0_π'̨.
As regards the electronic populations, we find that for the electronic temperatures studied in this work (T = 0 or T = 4 K), we can safely set Δ f_'̨ ≈ Δ f^0_'̨.
In the following subsection, we model the screened interaction in graphene for both SX and SX^0approximations.
§.§ Screened interaction
In this section we describe our model for the screened interaction, within the RPA, that is used both in the evaluation of the quasi-particle self-energy Eq. (<ref>) and correlation kernel of the BSE equation Eq. (<ref>).
The effective screened interaction of a 2D crystal within the orbital approximation given by Eq. (<ref>) may be written as
W^_'(,) = ε̃^-1,_'(,)^_'() ,
whereε̃^-1/is the inverse dielectric function, obtained from Eq. (<ref>) with a density-density responseχ̃obtained within the RPA
χ̃^_'(,)
= [1-χ^0,_”(,)v^_”()]^-1χ^0,_”'(,) .
A sketch of the Feyman diagrams included in the screened interactionWcan be found in Fig. <ref> (b).χ^0,is the Fourier transform of the irreducible polarizability
χ^0,_'(,) = 2
lim_η→ 0^+∑_m,n=π^π^*∑_(f_m-f_n+̨)n+̨e^i(+)·mme^-i(+')·n+̨/-(_n+̨-_m)+iη .
For simplicity, we neglect off-diagonal matrix elements ofχ^0,_', which has been shown to be a good approximation for graphene <cit.>.
Within the proposed approximations, the screened interaction can be written as
W^_() = v^_()/1-v^_()χ^0,_() .
The electronic screening is encoded in the irreducible polarizability.
In the self-consistent SX, Eqs. (<ref>), (<ref>), (<ref>) and (<ref>) must be solved self-consistently, as the irreducible polarizability depends on the quasi-particle energies.
In Fig. <ref> (a) and (b) we sketch the Feynman diagrams included in the self-consistent SX.
This is analogous to self-consistent quasi-particle GW theory <cit.> with the static approximation of the screened interaction.
In the non-self consistent case (SX^0) we simply useW^0.
Practically, we approximate the static screened interaction W or W^0 of graphene with the screening obtained from low-energy models.
In fact, it is known from the literature that low-energy excitations are the most important ones in the description of long-wavelength screening <cit.>, which is responsible for the many-body features of graphene.
This procedure speeds up the calculations, as the evaluation of the screened interaction is usually the most time-consuming part of a many-body perturbation theory simulation.
We now list the explicit forms ofWandW^0 used in this work for different doping levels. The effect of the embedding in the screened interaction is treated in Sec. <ref>.
§.§.§ W^0 freestanding graphene
For this zero doping case, we approximate the static irreducible polarizability, given by Eq. (<ref>), with the one of the Dirac cone model atT=0K
χ^0,() = -q/4v_ ,
wherev_is the Fermi velocity of the TB model.
By inserting Eqs. (<ref>) and (<ref>) into Eq. (<ref>), the screened interaction can be evaluated analytically.
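A minimal, self-contained Python sketch of the resulting analytic W^0 for freestanding graphene (in atomic units, with an illustrative Fermi velocity v_F ≈ 0.4 a.u., roughly 0.9·10^6 m/s; these numbers are not the fitted values used in the actual calculations) reads:

import numpy as np

def w0_freestanding(q, d, vF=0.4, eps_r=1.0):
    # RPA static screened interaction W0(q) = v(q)/(1 - v(q)*chi0(q)), with
    # chi0(q) = -q/(4*vF) (undoped Dirac cone, T = 0) and the finite-thickness
    # 2D Coulomb interaction of Eq. (<ref>).  Requires q > 0; atomic units.
    x = np.asarray(q, dtype=float) * d
    xs = np.where(x < 1e-6, 1.0, x)                  # guard the small-x branch
    F = np.where(x < 1e-6, 1.0 - x / 3.0,
                 2.0 / xs * (1.0 + np.expm1(-xs) / xs))
    v = 2.0 * np.pi / (eps_r * q) * F                # effective 2D Coulomb
    chi0 = -q / (4.0 * vF)                           # static Dirac-cone polarizability
    return v / (1.0 - v * chi0)

q = np.logspace(-4, 0, 5)                            # bohr^-1
print(w0_freestanding(q, d=6.0))

In the strict-2D limit (F → 1) the denominator reduces to the familiar constant 1 + π/(2 ε_r v_F) of undoped graphene within the RPA.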
§.§.§ W^0 doped graphene
Also in this case we approximate the static irreducible polarizability of the TB model with the one of the Dirac cone model atT = 0K.
In case of finite doping, the irreducible polarizability has been already derived in Ref. Hwang_07, and it reads
χ^0,() =
1-π q/8 , q ≤ 2
1-1/2√(1-4/q^2)-
q/4sin^-12/q , q > 2 ,
wherek_ = √(πn)is the Fermi wavevector,nbeing the carrier (electrons or holes) density.
As in the previous case, by inserting Eqs. (<ref>) and (<ref>) into Eq. (<ref>), the screened interaction can be evaluated analytically.
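For reference, a sketch of the static Dirac-cone polarizability of doped graphene in the closed form of Ref. Hwang_07 is given below; it is written so that it reduces to the undoped result -q/(4 v_F) when k_F → 0, and the numerical values are purely illustrative.

import numpy as np

def chi0_doped(q, kF, vF=0.4):
    # Static polarizability of the doped Dirac cone at T = 0 (Hwang & Das Sarma).
    # Constant, -2*kF/(pi*vF), for q <= 2*kF; approaches -q/(4*vF) for q >> 2*kF.
    # Requires kF > 0; atomic units.
    q = np.asarray(q, dtype=float)
    D0 = 2.0 * kF / (np.pi * vF)                     # density of states at E_F
    x = q / (2.0 * kF)
    # the q > 2*kF branch (clips keep sqrt/arcsin well defined on the other branch)
    outer = -D0 * (1.0 + 0.25 * np.pi * x
                   - 0.5 * np.sqrt(np.clip(1.0 - 1.0 / x**2, 0.0, None))
                   - 0.5 * x * np.arcsin(np.clip(1.0 / x, -1.0, 1.0)))
    return np.where(x <= 1.0, -D0, outer)

q = np.linspace(0.01, 0.5, 5)
print(chi0_doped(q, kF=0.05))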
§.§.§ W freestanding graphene
In this case, we want to calculate the irreducible polarizability obtained from the quasi-particle band structure in Eq. (<ref>).
The Dirac cone approximation can no longer be applied, due to the renormalization of the Fermi velocity, which diverges logarithmically at the neutrality point, as shown later in detail.
However, in Ref. Guandalini_2024 a proper low-energy model of the quasi-particle band structure is derived.
We suppose the band structure of the quasi-particle cone has the following functional form
_π^*/π(k) =± v_ k ± f v_k/2[cosh^-1(2 k_c/k) +1/2] ,
where f and k_c are numerical parameters fitted to numerical data obtained by sampling the surroundings of the Dirac point.
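To make the content of Eq. (<ref>) concrete, the short sketch below evaluates the fitted quasi-particle dispersion and its numerical derivative, exposing the logarithmic increase of the group velocity as k → 0. The values of v_F, f and k_c are placeholders: in the actual calculation they come from the TB model and from the fit to the quasi-particle energies.

import numpy as np

vF, f, kc = 0.4, 0.5, 0.5                    # illustrative parameters, atomic units

def eps_qp(k):
    # pi* branch of Eq. (<ref>): vF*k + (f*vF*k/2) * (arccosh(2*kc/k) + 1/2)
    return vF * k + 0.5 * f * vF * k * (np.arccosh(2.0 * kc / k) + 0.5)

k = np.logspace(-6, -2, 5) * kc              # momenta approaching the Dirac point
dk = 1e-3 * k
v_group = (eps_qp(k + dk) - eps_qp(k - dk)) / (2.0 * dk)
print(v_group / vF)                          # grows like 1 + (f/2)*log(4*kc/k) - f/4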
Next, the k sum of the irreducible polarizability in Eq. (<ref>) is evaluated numerically in elliptic coordinates to exploit the symmetries of the sum.
The resulting screened interaction is tabulated on a logarithmic grid around q = 0, and then evaluated when needed by the code with a linear interpolation scheme.
§.§.§ W doped graphene
The procedure we used to calculate the screened interaction is the same as in Sec. <ref>, but the low-energy quasi-particle band structure is interpolated with Chebyshev polynomials instead of being fitted with Eq. (<ref>). In fact, to the best of our knowledge, an analytical low-energy model that can be exploited to speed up the calculation is not available in the literature for this case.
§.§ Bethe-Salpeter equation
The kernels in Eqs. (<ref>) and (<ref>) are evaluated by using the matrix elements in Eq. (<ref>).
We model screened interactionW^as explained in Sec. <ref> and use it in the kernel of Eq. (<ref>).
The non-interacting excitation energies E_t_1() are evaluated either from Eq. (<ref>) with SX quasi-particle energies given by Eq. (<ref>), or from G^0W^0 TB parameters (see Appx. <ref>).
From a diagrammatic perspective, the solution of the BSE requires the self-consistent solution of all the diagrammatic equations sketched in Fig. <ref>.
§.§ Doping and environmental effects
We model doping effects by changing the chemical potentialμin the occupation numbersf_mand using a proper screened interaction as explained in Sec. <ref>.
In principle, as doping changes the Kohn-Sham potential, the DFT TB parametrization is doping dependent.
However, we neglect the TB fit dependence on doping, as this effect is negligible <cit.>.
Environmental effects are taken into account with a static uniform dielectric constantε_r ≠1in the Coulomb interactionv^given by Eq. (<ref>).
Concerning the screened interaction, the environment reduces its intensity through a reduction of v^ in Eq. (<ref>).
However, there is also a reduction of the electronic screening, which enhances W and shows up as an increase of the inverse dielectric function, as can be seen from Eq. (<ref>).
The net effect is a reduction of W with respect to the freestanding case, but by a factor smaller than ε_r, due to the reduced electronic screening caused by the environment.
§ NUMERICAL DETAILS
Ab-initio—DFT calculations have been done with the QUANTUM ESPRESSO package <cit.> within the local-density approximation exchange-correlation functional. <cit.>
We adopted norm-conserving pseudopotentials to model the electron-ion interaction and the kinetic energy cutoff for the wavefunctions is set to90Ry.
G^0W^0quasi-particle band structure and the BSE excitation spectra have been calculated with the YAMBO package <cit.>.
The bare Coulomb interaction has been truncated with a slab cutoff in order to remove spurious interactions between replicas. <cit.>
The screened interactionWhas been considered within the plasmon pole approximation <cit.> in G^0W^0calculations and with the static approximation in BSE calculations. A cutoff energy of10Ry has been applied to its Fourier transform.
We used100states both in the sum-over-states of the irreducible polarizability and of the self-energy.
The optical conductivity and density-density response function have been calculated by selecting the 3 valence and 12 conduction bands nearest to the Fermi level and a finite damping η = 0.2 eV.
We used a Mohnkhorst-Pack grid <cit.> of60×60in G^0W^0and of90×90in the BSE calculations.
TB—All TB calculations have been done with a python3 code designed by the authors. The parameters of the five nearest neighbour TB are fitted to the DFT or G^0W^0calculations. When we perform SX^0or SX calculations, the TB parameters are fitted to DFT. The electronic temperature is set to4K.
In the Hartree kernel, given by Eq. (<ref>), the sum overwas performed up to the first three shells ofvectors.
Instead, in the SX kernel [see Eq. (<ref>)] we cutoff the screened interactionWforq > 4π/3a, whereais the graphene lattice constant.
We stress that in our 2D formulation the reciprocal lattice vectors correspond to those oriented along the periodic directions.
The local fields oriented along the non-periodic direction are implicitly taken into account via the real space formulation provided in Sec. <ref>.
We used a Monkhorst-Pack grid of181×181withη= 0.2eV in the calculations of Fig. <ref> while a grid of361×361in the response functions in Figs. <ref> and <ref> with a dampingη= 0.1eV.
All other calculations have been performed with telescopic grids <cit.> withp=3,𝒩=25and differentlandL.
In particular, we usedl=6,L=10in Figs. <ref> and <ref>,l=8,L=13on the left side of Fig. <ref>,l=8,L=12in Fig. <ref> andl=8,L=10in the doped response functions of doped graphene.
We checked numerically that all the calculations are converged.
Bethe-Salpeter—The BSE, both in ab-initio and TB calculations, is solved with a pseudo-Hermitian Lanczos algorithm, that is particularly efficient to calculate the propagators [see Eqs. (<ref>) and (<ref>)] at differentωin just one calculation, due to an iterative procedure able to find a basis representation where the BSE matrix is tridiagonal. <cit.>
§ RESULTS
In this section, we summarize the results obtained with the computational scheme proposed in the theoretical section.
Firstly, we discuss the quasi-particle band structure, and compare the Fermi velocity renormalization with experiments.
Then, we study the optical conductivity.
In both cases, we focus in the role of self-consistency in the quasi-particle correction.
Finally, we study the effects of doping and dielectric embedding on both the group velocity near the K point and the optical conductivity.
A validation between our TB and ab-initio calculations can be found in Appx. <ref>. In Appx. <ref>, we show a wider range of data set for completeness, along with the density-density response function atq=K.
In all the figures of the results section, we highlight two region intervals in the low-energy sector of the graphene spectrum.
The first region (|_n| ≲0.1eV) is typically relevant for low-doping electron-phonon mediated transport properties or for the hydrodynamic transport regime dominated by e-e interaction <cit.>. In such region, the band structure is strongly affected by the many-body renormalization of Fermi velocities, with an expected large impact on the evaluation of transport properties.
The second region is determined by the phonon energy range (|_n| ≲0.2eV), where electron excitations are in resonance with optical phonons, originating non-adiabatic effects <cit.>.
§.§ Band structure, Fermi velocity renormalization and optical conductivity in the SX approximation
In Fig. <ref>, we compare the quasi-particle band structure and self-energy of freestanding graphene in the vicinity of the Fermi level.
The ab-initio and G^0W^0 results are taken from Ref. Guandalini_2024. The SX^0 bands are slightly steeper than the G^0W^0 band structure, mainly due to the absence of the renormalization factor Z_n𝐤, which is ≈ 0.75 in the G^0W^0 calculation.
In fact, the SX^0 TB and G^0W^0 ab-initio self-energies are very similar in the vicinity of the K point, in particular in the low-energy range relevant for transport and phonon properties.
Near the M point, the SX^0 self-energy slightly deviates from the G^0W^0 one, probably due to the neglect of the dynamical nature of the screened interaction W. SX calculations are qualitatively similar to the SX^0 ones, with strong quantitative differences near the K point that we highlight in the following.
In Fig. <ref>, we compare the Fermi velocity renormalization calculated with SX^0and SX starting from a TB band structure, G^0W^0obtained ab-initio as in Ref. Guandalini_2024, and the experimental results.
While the DFT results provide a constant and underestimated Fermi velocity, the SX^0 Fermi velocity shows the qualitatively correct behavior, even if the renormalization effect is underestimated in the same fashion as in previous G^0W^0 calculations <cit.>.
Only in the SX case do we reach a renormalization comparable with the experiments, in agreement with what was found in Ref. Stauber_17.
The inclusion of a proper band structure and screened interaction with doping effects included is important to describe the decay of the renormalization at higher dopings.
We note that these effects are difficult to include in an ab-initio calculation, due to the large number of screened interactions (one for each doping level) that would need to be computed.
In Fig. <ref>, we compare instead the optical conductivity, calculated with the SX^0or SX band structure solving the BSE or within the RPA, with experiments.
As already shown in the literature <cit.>, the inclusion of excitonic effects through the BSE is mandatory in order to have an optical conductivity in agreement with experiments.
Contrary to the band structure, the optical conductivity is not sensitive to the self-consistent calculation of the screened interaction.
In fact, there is a cancellation between the quasi-particle corrections, that enhance the gap thus blue shifting theπplasmon position, and the excitonic effects, which red shift the spectrum <cit.>.
Overall, SX results are in excellent agreement with experimental data.
In summary, we have shown in this section that the use of a self-consistent screened interactionWis relevant to understand low-energy many-body effects, while at higher energies it seems less crucial, as evidenced by previous studied with a non self-consistentW<cit.>.
§.§ Doping and environmental effects
Having validated our model for freestanding graphene in the previous section, we now deal with the effects of doping and environment on the group velocity renormalization and optical conductivity, which are easily treated within our TB approach.
In Fig. <ref>, we show the group velocity of theπband in the neighborhood of the Dirac point with the SX approximation for the case of freestanding, doped (n = 2.075 ×10^12cm^-2) and embedded (ε_r = 4) graphene, and optical conductivity obtained with the same band structure by solving the BSE.
The group velocity of doped graphene does not diverge at the Dirac point, owing to the quenching of the long-wavelength component of the screened interaction by metallic screening (see Appx. <ref> for a discussion of doping effects on the inverse dielectric function).
Instead, the dielectric environment reduces the renormalization of the group velocity without removing it. This is because the screened interaction is reduced by the embedding but, contrary to the doped case, it is not quenched in the long-wavelength limit.
Doping and environment affect the dielectric response less than they affect the band structure.
This is due to the partial cancellation between quasi-particle blue-shift and excitonic red shift.
This cancellation is nearly perfect in the doped case, where theπplasmon is indistinguishable from the freestanding one.
The embedding instead causes a small red-shift of theπplasmon with an enhancement of the peak height.
§ CONCLUSIONS
We proposed a TB plus many-body perturbation theory model to describe low- and high-energy many-body effects in graphene with a unified framework.
The renormalization of the Fermi velocity is in accordance with experiments, provided that the quasi-particle corrections are calculated self-consistently with the screened interaction.
The optical conductivity is in accordance both with ab-initio calculations and experiments. In this case, the self-consistent solution of the quasi-particle corrections and the screened interaction does not play a critical role due to the partial cancellation between quasi-particle corrections and excitonic effects.
Also the low-energy density-density response function at q = K (shown in Appx. <ref>) is in accordance with ab-initio calculations.
Instead, at higher energies some deviations occur, probably due to the approximate description of local field effects in the TB model, which does not accurately capture the microscopic structure of the electronic orbitals.
We further studied the effect of doping and environment on the quasi-particle band structure and electronic response.
We found that doping quenches the logarithmic divergence of the group velocity, while the environment reduces the group velocity due to the screening of electron-electron interaction, but without changing qualitatively the results.
In the optical conductivity, the environmental effects slightly red shift theπplasmon peak.
Instead, the presence of doping does not change the plasmon position.
These counter-intuitive behaviours, given that in both cases the electron-electron interaction is more screened than in the freestanding case, are due to the partial cancellation between quasi-particle corrections and excitonic effects.
The same results also hold for the density-density response function at q=K.
In principle, this framework can be extended to take into account also the lattice response and the electron-phonon coupling matrix elements. The proposed unified framework will also allow studying whether the many-body effects described in this paper influence the lattice response, as we will do in a subsequent work.
§ ACKNOWLEGEMENTS
This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programe (MORE-TEM ERC-SYN project, grant agreement No 951215).
§ TB MODEL
We summarize in this section the graphene electronic structure obtained from a 5-th nearest-neighbour TB model as derived in the appendix of Ref. Venezuela_11.
Here, we summarize the model for the sake of completeness.
The TB Bloch wavefunctions are defined as
|𝐤,s⟩ = ∑_l e^i𝐤·(𝐑_l+𝛕_s) |l,s⟩ ,
where {𝐑_l} are the lattice vectors and {𝛕_s} the carbon positions within the unit cell. |l,s⟩ is the p_z orbital of the s atom in the l-th unit cell.
The TB Hamiltonian H^0_𝐤,ss' = ⟨𝐤,s|Ĥ^0|𝐤,s'⟩/N, where N is the number of unit cells, can be written as
H^0_𝐤 = ( ℊ_𝐤 𝒻_𝐤 ; 𝒻^*_𝐤 ℊ_𝐤 ) ,
where
𝒻_𝐤 = -t_1 ∑_i=1^3 e^i𝐤·𝐂^1_i - t_3 ∑_i=1^3 e^i𝐤·𝐂^3_i - t_4 ∑_i=1^6 e^i𝐤·𝐂^4_i ,
ℊ_𝐤 = -t_2 ∑_i=1^6 e^i𝐤·𝐂^2_i - t_5 ∑_i=1^6 e^i𝐤·𝐂^5_i = ℊ^*_𝐤 ,
and t_n are the hopping parameters. 𝐂^n_i is the i-th vector connecting the reference atom to a n-th nearest neighbor.
By diagonalizing the Hamiltonian in Eq. (<ref>), we obtain the π and π^* bands
ε^π^*/π_𝐤 = ℊ_𝐤 ± |𝒻_𝐤|
and the corresponding eigenvectors
a^π^*/π_𝐤 = 1/√(2) ( 1 ; ±ϕ_𝐤 ) ,
where
ϕ_𝐤 = 𝒻^*_𝐤/|𝒻_𝐤|
are the spinorial phases.
The TB parameters 𝐭, used in Eqs. (<ref>) and (<ref>), have been fitted over the DFT band structure.
After the fit, the optimal parameters found are 𝐭^DFT = (-2.8810, 0.2797, -0.2034, 0.1017, 0.0763) eV, in accordance with Ref. Venezuela_11.
To mimic the G^0W^0 band structure, again in accordance with Ref. Venezuela_11, we use the rescaled TB parameters 𝐭^G^0W^0 = α 𝐭^DFT, with α = 1.18.
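To make the band model concrete, the following minimal Python sketch diagonalizes the 2×2 Hamiltonian of Eq. (<ref>) keeping only the nearest-neighbour term of 𝒻_𝐤; the carbon-carbon distance and the neighbour-vector convention are standard textbook assumptions rather than quantities quoted above, while t_1 is the fitted DFT hopping.

import numpy as np

a_cc = 1.42                        # carbon-carbon distance in Angstrom (assumed)
t1 = -2.8810                       # first-neighbour hopping from the DFT fit (eV)

# Vectors C^1_i connecting a reference atom to its three nearest neighbours
# (standard graphene geometry, an assumption of this sketch).
C1 = a_cc * np.array([[0.5,  np.sqrt(3) / 2],
                      [0.5, -np.sqrt(3) / 2],
                      [-1.0, 0.0]])

def pi_bands(k):
    """Return (eps_pi, eps_pi*) in eV at a 2D k point, nearest neighbours only."""
    f_k = -t1 * np.exp(1j * (C1 @ k)).sum()    # f_k truncated to the n = 1 shell
    g_k = 0.0                                  # g_k needs 2nd/5th-neighbour hoppings
    return g_k - abs(f_k), g_k + abs(f_k)      # eigenvalues g_k -/+ |f_k|

# At the Dirac point K the two bands touch: both energies are ~0 eV.
K = np.array([2 * np.pi / (3 * a_cc), 2 * np.pi / (3 * np.sqrt(3) * a_cc)])
print(pi_bands(K))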
Both in the calculation of the SX self-energy and of the BSE kernels [see Eqs. (<ref>), (<ref>) and (<ref>)], we have matrix elements that can be written within this model as
⟨m𝐤| e^i𝐪·𝐫_∥ |n𝐤'⟩ = ℱ_𝐤-𝐤' (1/2)[1 - n m ϕ^*_𝐤 ϕ_𝐤'] δ_𝐪,𝐤-𝐤' ,
where ℱ_𝐤-𝐤' = ∫ d𝐫_∥ |χ(𝐫_∥)|^2 e^i(𝐤-𝐤')·𝐫_∥ is a 2D atomic form factor.
We already approximated, via Eq. (<ref>), the z profile of the p_z orbitals with a rectangular shape.
For simplicity, we neglect the in-plane orbital shape by setting |χ(𝐫_∥)|^2 = δ(𝐫_∥), which corresponds to ℱ_𝐤-𝐤' = 1.
§ VALIDATION OF THE DIELECTRIC RESPONSE WITH AB-INITIO CALCULATIONS
As described in Sec. <ref>, the TB parameters for the DFT and G^0W^0 band structures are fitted on the respective ab-initio calculations, while for the SX^0 and SX calculations we use the parameters fitted on DFT as the bare system.
The TB and ab-initio results for both DFT and G^0W^0 show an excellent agreement, as can be seen from Fig. <ref>.
In Fig. <ref>, we plot the optical conductivity and the density-density response function at q=K of freestanding graphene obtained with the G^0W^0+BSE method, and compare ab-initio and TB calculations.
We stress that by TB G^0W^0 we mean a TB model with parameters fitted over the ab-initio G^0W^0 band structure.
For what concerns the optical conductivity, there is a perfect agreement between TB and ab-initio calculations regarding the position and shape of the π plasmon peak.
The TB calculation is better converged at low energies than the ab-initio one, due to the denser TB k-grid allowed by the lower computational effort required by our calculations.
The TB calculation of the density-density response function at q=K is in excellent agreement with the ab-initio calculation only in the low-energy range ħω < 2 eV. At high momentum transfer, the accuracy of the band structure in a higher energy range is more important, as is an accurate description of the microscopic shape of the electron wavefunctions. Further, in this regime an accurate inclusion of local field effects is required, which is not achieved by a TB model. Still, we find it remarkable that the qualitative behavior of the ab-initio spectrum is well reproduced by the TB model at a computational cost which is two orders of magnitude lower.
§ ADDITIONAL STUDIES ON DOPING AND ENVIRONMENTAL EFFECTS
In this appendix, we report a more complete set of data complementing the results section.
The color code of all the figures, along with detailed information about the system considered, are listed in Tab. <ref>.
In Fig. <ref>, we show the inverse static dielectric function of freestanding, doped and embedded (ε_r = 4) graphene for both SX^0 and SX.
For freestanding SX^0 calculations, the inverse dielectric function is nearly constant at low momentum transfers, as already shown in Ref. Sohier2015.
Doping instead produces a metallic screening that is reflected in the drop of the inverse dielectric function in the long-wavelength limit. The higher the doping, the faster the decay.
The effect of the static environment is to reduce the inverse dielectric function by ≈ 1.7 with respect to the freestanding case.
The inverse dielectric constant is not reduced by simply ε_r = 4 because the embedding also reduces the Coulomb interaction entering the RPA resummation [see Eq. (<ref>)], thus reducing the intensity of the RPA correction.
The SX inverse dielectric function is larger than the SX^0 one, meaning that screening effects are weaker as a result of the self-consistent calculation.
This is due to the quantitative difference of band structures between the two calculations near the K point.
As a consequence, in the SX case the quasi-particle corrections reduce screening effects.
In general, the freestanding and doped SX inverse dielectric functions are ≈ 1.5 times larger than their SX^0 counterparts, thus enhancing the electron interaction and the corresponding many-body effects.
The embedded inverse dielectric function is instead nearly independent of the self-consistent procedure, as the quasi-particle corrections are quenched by the environmental screening.
Most importantly, both the freestanding and embedded inverse dielectric functions tend logarithmically to 1 in the long-wavelength limit, due to the logarithmic divergence of the group velocity in the quasi-particle bands, which is quenched in the doped case.
In Fig. <ref>, we show the electronic band structure and the group velocity of the π band in the neighborhood of the Dirac point for freestanding and doped graphene with three different levels of doping (see Tab. <ref> for details) within the SX approximation.
The doped quasi-particle corrections are lower than the freestanding case due to metallic screening, as previously discussed.
Thus, the doped π band is closer to the DFT band than in the freestanding case.
While the band structures of the three different dopings are indistinguishable, it is clear from the group velocity that increasing the doping increases the quenching of the group velocity renormalization, due to the enhanced metallic screening.
In Fig. <ref>, we show the optical conductivity and the density-density response function at q=K as a function of energy for freestanding, doped and embedded graphene.
As shown in the main text for freestanding graphene, we note that the electronic response is less affected by self-consistency also in the doped and embedded cases, due to the partial cancellation between quasi-particle corrections and excitonic effects.
This partial cancellation results in a non-trivial relation between the amount of screening and the peak shift.
In fact, the self-consistent procedure, which enhances the excitonic effects, causes instead a blue-shift, due to the increase of the quasi-particle gap. For embedded graphene, instead, the self-consistent solution of the screened interaction does not affect the spectra, as already noted for the inverse dielectric function.
The density-density response function at q=K at low energy is nearly insensitive to doping, embedding and self-consistency.
Instead, the high-energy π plasmon peak is red-shifted by the embedding, and nearly invariant with respect to doping and self-consistency.
Initially, theoretical studies within the framework of the random-phase approximation (RPA) have been used to describe some electronic graphene properties, such as the Dirac <cit.> and π plasmon dispersions <cit.>.
Nevertheless, the inclusion of the Coulomb interaction between electrons, beyond the RPA, is relevant to describe several spectroscopic features, such as the logarithmic divergence of the Fermi velocity near the neutrality point <cit.> and the shape and position of the π plasmon peak both in optics <cit.> and low momentum-transfer electron-energy loss (EEL) spectroscopy <cit.>.
Further, as suggested in previous studies, excitonic effects may be relevant in determining the optical phonon dispersion near the K point.
In particular, theoretical arguments based on the renormalization group approach<cit.> and experimental Raman spectra <cit.> suggest a steepening of the Khon-anomaly spectrum guided by an enhancement of the electron-phonon interactions.
The same observations seem to be consistent with Raman spectra of bilayer graphene <cit.>.
These effects cannot be reproduced by a single particle framework, e.g. a TB model parametrized with experimental data or density-functional theory (DFT) in a local or semi-local approximation.
Thus, we refer to them as many-body effects.
Many-body effects in graphene have been studied with ab-initio methods in the framework of many-body perturbation theory.
Within the G^0W^0 <cit.> plus Bethe-Salpeter <cit.> (BSE) formalism, the high-energy π plasmon peak at ∼ 5 eV have been successfully reproduced. <cit.>
G^0W^0 studies <cit.> have been also able to reproduce the low-energy (∼ 100 meV) Fermi velocity logarithmic behaviour, despite the renormalization is underestimated with respect to experimental data <cit.>.
The reason of this discrepancy may be attributed to the necessity to include more classes of Feynman diagrams, e.g. by calculating self-consistently the screened interaction and quasi-particle properties. Also, ab-initio frameworks require a too high computational cost to converge graphene calculations at such low-energy scale, both for the freestanding and low-doping cases, probably affecting the robustness of the results.
Dirac cone models <cit.> with an Hartree-Fock self-energy <cit.> or with renormalization group approaches <cit.> have shown to be able to reproduce the experimental Fermi velocity increase near the neutrality point.
However, the parameter choice is not unique between different studies, and there is no possibility to describe with the same framework higher-energies π-plasmon features.
A TB model plus many-body perturbation theory approach, as the one studied in Ref. Stauber_17, successfully describes the Fermi velocity renormalization, containing also the physics to describe many-body effects above the range of validity of the Dirac cone model.
However, in Ref. Stauber_17, the effect on doping on the velocity renormalization has not been studied and graphene is considered as a 2D sheet without thickness through the definition of the Coulomb potential.
In addition, up to now a BSE approach has never been used on top of a graphene TB model to describe both low- and high-energy many body effects in a unique framework.
In this work, we develop a computational scheme based on a TB model plus many-body perturbation theory corrections that is able to describe both low- and high- energy many-body effects in graphene with high accuracy, thus avoiding the need to resort to costly ab-initio methods.
The quasi-particle corrections are calculated with the static screened exchange (SX) approximation, while the electron-hole interaction is included by solving the BSE with the same static screened interaction used for the quasi-particle properties.
We focus on the effects of self-consistency between the quasi-particle corrections and the screened interaction, which we find to be important to compare with experiments.
Finally, we discuss the effect of both doping and of a static dielectric environment on both the band structure and the electronic response.
The paper is organized as follows: in Sec. <ref> we summarize the main equations of many-body perturbation theory concerning the quasi-particle properties (Sec. <ref>) and the evaluation of density response functions (Sec. <ref>).
In Sec. <ref>, we focus on two-dimensional (2D) materials and describe how we approximate the electronic orbitals in the non-periodic direction.
The application of many-body perturbation theory to the graphene TB model is described in Sec. <ref> for what concerns quasi-particle corrections (Sec. <ref>), screened interaction (Sec. <ref>), the BSE (Sec. <ref>) and how to evaluate doping and environmental effects (Sec. <ref>).
Numerical details of the calculations are provided in Sec. <ref>.
In Sec. <ref>, we show the results. We firstly study the band structure, Fermi velocity renormalization and optical conductivity and compare with experiments in Sec. <ref>.
Next, in Sec. <ref> we analyze doping and environmental effects.
The conclusions are drawn in Sec. <ref>.
Details about the TB model are included in Appx. <ref>.
Next, in Appx. <ref> we compare the TB results with ab-initio results from the literature.
Finally, in Appx. <ref> we provide additional data about doping and environment effects in the group velocity renormalization and dielectric response functions. | null | null | In this section, we summarize the results obtained with the computational scheme proposed in the theoretical section.
Firstly, we discuss the quasi-particle band structure, and compare the Fermi velocity renormalization with experiments.
Then, we study the optical conductivity.
In both cases, we focus in the role of self-consistency in the quasi-particle correction.
Finally, we study the effects of doping and dielectric embedding on both the group velocity near the K point and the optical conductivity.
A validation between our TB and ab-initio calculations can be found in Appx. <ref>. In Appx. <ref>, we show a wider range of data set for completeness, along with the density-density response function atq=K.
In all the figures of the results section, we highlight two region intervals in the low-energy sector of the graphene spectrum.
The first region (|_n| ≲0.1eV) is typically relevant for low-doping electron-phonon mediated transport properties or for the hydrodynamic transport regime dominated by e-e interaction <cit.>. In such region, the band structure is strongly affected by the many-body renormalization of Fermi velocities, with an expected large impact on the evaluation of transport properties.
The second region is determined by the phonon energy range (|_n| ≲0.2eV), where electron excitations are in resonance with optical phonons, originating non-adiabatic effects <cit.>.
§.§ Band structure, Fermi velocity renormalization and optical conductivity in the SX approximation
In Fig. <ref>, we compare the quasi-particle band structure and self-energy of freestanding graphene in the vicinity of the Fermi level.
The ab-initio and G^0W^0results are taken from Ref. Guandalini_2024. SX^0bands are slightly more ripid than the G^0W^0band structure, mainly due to the missing of the renormalization factorZ_n𝐤, that is≈0.75in the G^0W^0calculation.
In fact, the SX^0TB and G^0W^0ab-initio self-energies are very similar in the vicinity of the K point, in particular in the low-energy range relevant for transport and phonon properties.
Near the M point, the SX^0self-energy slightly deviate from the G^0W^0probably due to the missing of the dynamical nature of the screened interactionW. SX calculations are qualitatively similar to the SX^0ones, with strongly quantitative differences near the K point that we will highlight in the following.
In Fig. <ref>, we compare the Fermi velocity renormalization calculated with SX^0and SX starting from a TB band structure, G^0W^0obtained ab-initio as in Ref. Guandalini_2024, and the experimental results.
While the DFT results provide a constant and underestimated Fermi velocity, the SX^0Fermi velocity show the qualitative correct behavior, even if the renormalization effect is underestimated in the same fashion of previous G^0W^0calculations <cit.>.
Only in the SX case we reach a renormalization comparable with the experiments, in agreement with what found in Ref. Stauber_17.
The inclusion of a proper band structure and screened interaction with doping effects included is important to describe the decay of the renormalization at higher dopings.
We note these effects are difficult to include with an ab-initio calculation, due to the large amounts of screened interactions (one for each doping level) are necessary to compute.
In Fig. <ref>, we compare instead the optical conductivity, calculated with the SX^0or SX band structure solving the BSE or within the RPA, with experiments.
As already shown in the literature <cit.>, the inclusion of excitonic effects through the BSE is mandatory in order to have an optical conductivity in agreement with experiments.
Contrary to the band structure, the optical conductivity is not sensible to the self-consistent calculation of the screened interaction.
In fact, there is a cancellation between the quasi-particle corrections, that enhance the gap thus blue shifting theπplasmon position, and the excitonic effects, which red shift the spectrum <cit.>.
Overall, SX results are in excellent agreement with experimental data.
In summary, we have shown in this section that the use of a self-consistent screened interactionWis relevant to understand low-energy many-body effects, while at higher energies it seems less crucial, as evidenced by previous studied with a non self-consistentW<cit.>.
§.§ Doping and environmental effects
Having validated our model for freestanding graphene in the previous section, we now deal with the effects of doping and environment on the group velocity renormalization and optical conductivity, which are easily treated within our TB approach.
In Fig. <ref>, we show the group velocity of theπband in the neighborhood of the Dirac point with the SX approximation for the case of freestanding, doped (n = 2.075 ×10^12cm^-2) and embedded (ε_r = 4) graphene, and optical conductivity obtained with the same band structure by solving the BSE.
The group velocity of doped graphene does not diverge at the Dirac point, due to the quenching of the long wavelength component of screened interaction due to metallic screening (see Appx. <ref> for a discussion about doping effects in the inverse dielectric function).
Instead, the dielectric environment reduces the renormalization of the group velocity, but without removing it. This is because the screened interaction is reduced by the embedding, but still not quenched in the long-wavelength limit contrary to the doped case.
Doping and environment affect less the dielectric response with respect than the band structure.
This is due to the partial cancellation between quasi-particle blue-shift and excitonic red shift.
This cancellation is nearly perfect in the doped case, where theπplasmon is indistinguishable from the freestanding one.
The embedding instead causes a small red-shift of theπplasmon with an enhancement of the peak height. | null | null |
http://arxiv.org/abs/2409.17804v1 | 20240926125747 | Enriched Functional Tree-Based Classifiers: A Novel Approach Leveraging Derivatives and Geometric Features | ["Fabrizio Maturo", "Annamaria Porreca"] | stat.ML | ["stat.ML", "cs.LG", "stat.ME", "62H30, 62M10, 68T05, 62G08, 62P10", "G.3; I.2.6; H.2.8; J.3; J.2"] |
Enriched Functional Tree-Based Classifiers: A Novel Approach Leveraging Derivatives and Geometric Features
Fabrizio Maturo, Annamaria Porreca
=============================================================================
§ ABSTRACT
The positioning of this research falls within the scalar-on-function classification literature, a field of significant interest across various domains, particularly in statistics, mathematics, and computer science. This study introduces an advanced methodology for supervised classification by integrating Functional Data Analysis (FDA) with tree-based ensemble techniques for classifying high-dimensional time series. The proposed framework, Enriched Functional Tree-Based Classifiers (EFTCs), leverages derivative and geometric features, benefiting from the diversity inherent in ensemble methods to further enhance predictive performance and reduce variance. While our approach has been tested on the enrichment of Functional Classification Trees (FCTs), Functional K-NN (FKNN), Functional Random Forest (FRF), Functional XGBoost (FXGB), and Functional LightGBM (FLGBM), it could be extended to other tree-based and non-tree-based classifiers, with appropriate considerations emerging from this investigation. Through extensive experimental evaluations on seven real-world datasets and six simulated scenarios, this proposal demonstrates fascinating improvements over traditional approaches, providing new insights into the application of FDA in complex, high-dimensional learning problems.
Keywords: Functional data analysis, derivatives, geometric features, enriched functional tree-based classifiers, supervised classification, enriched functional random forest.
§ INTRODUCTION
In today's world, data is collected from diverse sources such as biomedical devices, smartphones, and environmental sensors, and used across applications in healthcare, environmental monitoring, and more. Technological advancements have improved our capacity to store and process this data, but managing high-dimensional datasets remains a challenge. Dimensionality reduction and classification techniques are essential for effectively handling such data in fields like medicine, environmental monitoring, security, and robotics. Key issues include irregularly spaced time points, computational complexity, the bias-variance trade-off, and the need for interpretable models with strong performance metrics such as accuracy, precision, and recall.
In the supervised classification literature, one of the most well-known challenges is the curse of dimensionality, which arises whenever dealing with a large number of variables or, in the context of time series, when there are many time points. This issue impacts numerous statistical aspects, such as distance measures, the identification of causal relationships, or finding the best-performing model when many models with similar performance exist but rely on different variables. It also introduces problems like data sparsity and multicollinearity. For these reasons, the challenge of both supervised and unsupervised classification in high-dimensional data, whether it involves numerous different variables or many time points for the same variable, remains a complex and relevant area of research in mathematics, statistics, and computer science.
Functional data analysis (FDA) is a research area that has actively tackled many of these challenges over the past decades. In FDA, dimensionality reduction is inherent, as it is achievable simply through the representation of the data itself.
More generally, FDA represents a statistical domain focused on the theory and application of statistical methods in scenarios where data can be expressed as functions, contrasting with the traditional representation using real numbers or vectors. FDA introduces a paradigm shift in statistical concepts, representations, modeling, and predictive techniques by treating functions as single entities. The benefits of employing FDA have been extensively discussed in contemporary literature, including the utilization of derivatives when they provide more insight than the original functions due to the nature of the phenomena, the adoption of non-parametric strategies without restrictive assumptions, data dimensionality reduction, and the exploitation of critical sources of pattern and variation <cit.>.
The literature on FDA is currently dynamic and highly relevant, especially in regression, ANOVA, unsupervised classification, supervised classification, and outlier detection. Within this broad framework, we focus on supervised classification with functional predictors and a scalar response variable.
Recent research has explored the development of classification methods that combine the strengths of FDA and tree-based techniques. <cit.> advocated using spline trees for functional data, applying them to analyze time-of-day patterns for international call customers. Assessing variable importance in the fusion of FDA and tree-based methods was the focus of the work by <cit.>. <cit.> proposed a classification approach based on random forests for functional covariates. Investigating the construction of a classifier for dose-response predictions involving curve outcomes was the aim of <cit.>. <cit.> proposed using functional principal components to train a classification tree. <cit.> suggested combining clustering and supervised tree-based classification to enhance prediction model accuracy. Finally, <cit.> proposed an innovative evaluation of leaves' quality for functional classification trees applied to biomedical signals with binary outcomes.
<cit.> explore methods for classifying multivariate functional data adapting and extending PLS techniques to handle the complexity of functional data across varying domains.
<cit.> propose a mixture-based segmentation approach for heterogeneous functional data, aimed at identifying hidden structures and subgroups within complex functional datasets by combining multiple segmentations.
Recently, <cit.> proposed to exploit functional representation to increase diversity in ensemble methods and improve the accuracy of classifiers. Finally, <cit.> suggested a new algorithm to exploit the previous idea but further improving the accuracy and variance of estimates.
Building on the established foundation of combining FDA and statistical learning techniques, significant exploration is still needed to handle large datasets and interpret results from both statistical and causal perspectives. Research in this area is rapidly evolving and holds great potential. We expect a growing focus on improving functional classifiers' precision, interpretability, and explainability in the coming years.
Leveraging this landscape and its vast research opportunities, this paper introduces a novel functional supervised classification framework, namely the Enriched Functional Tree-Based Classifiers (EFTCs). To address the challenge of learning from high-dimensional data and enhancing functional classification performance by leveraging additional characteristics of the original data,
derivatives, curvature, radius of curvature, and elasticity are used to enrich the information provided to functional classification tree ensembles. In other words, we refer to EFTCs to denote the joint utilisation of sequential transformations for extracting unexplored features from the original signals. In essence, this approach involves viewing functions from diverse perspectives to capture additional aspects that can contribute to enhancing classification performance. Practically, it is like using a magnifying glass to reveal attributes that the original functions may miss. Moreover, the motivation behind this proposal is also driven by the well-known fact that ensemble methods, such as tree-based classifiers, benefit significantly from introducing diversity, as it tends to improve generalisation and performance. By enriching the feature space with diverse functional characteristics, ETBCs can leverage this diversity to further enhance classification accuracy, exploiting the strengths of each transformation to capture complementary information from the data.
The paper conducts extensive experimental evaluations on seven real-world datasets and six simulated signals to measure the proposed methodology's efficacy. Comparative analyses with existing methods reveal promising results in terms of classification performance.
The study yields promising results, indicating that the enrichment approach significantly improves performance with certain methods. Our approach has been tested on functional classification trees, KNN, random forest, XGBoost, and LightGBM; however, it can be extended to other tree-based or non-tree-based classifiers, with appropriate adjustments based on our findings. This framework demonstrates notable improvements over traditional methods, offering valuable insights into applying FDA in complex, high-dimensional learning problems.
The paper's structure is as follows: Section 2 introduces the core concepts of FDA, Enriched Functional Data, and the Enriched Functional Classification frameworks, including trees, random forests, XGBoost, and LightGBM. Section 3 covers applying the proposed methods to real and simulated data.
Section 4 discusses key issues related to model explainability.
Finally, Section 5 concludes the paper by discussing the main findings and highlighting directions for future research.
§ MATERIAL AND METHODS
§.§ Data Representation in the Functional Data Analysis (FDA) framework
In FDA, the fundamental concept revolves around treating data functions as distinct entities. However, in practical scenarios, functional data is frequently encountered as discrete data points. This means that the original function, denoted by z = f(x), is transformed into a collection of discrete observations represented by T pairs (x_j, z_j), where x_j denotes the points at which the function is assessed, and z_j represents the corresponding function values at those points. We define a functional variable X as a random variable with values in a functional space Ξ. Accordingly, a functional data set is a sample x_1, x_2, ..., x_N drawn from the functional variable X <cit.>.
Focusing specifically on the case of a Hilbert space with a metric d(·,·) associated with a norm, such that d(x_1(t), x_2 (t)) = |x_1(t) - x_2(t)|, and where the norm |· | is associated with an inner product ⟨·,·⟩, such that |x(t)|=⟨ x(t),x(t) ⟩^1/2, we can derive the space ℒ^2 of real square-integrable functions defined on τ by ⟨ x_1(t),x_2(t) ⟩=∫_τ x_1(t)x_2(t)dt, where τ is a Lebesgue measurable set on T. Therefore, considering the specific case of ℒ_2, a basis function system comprises a set of linearly independent functions ϕ_j(t) that span the space ℒ_2 <cit.>.
The initial step in FDA involves transforming the observed values z_i1, z_i2, ..., z_iT for each unit i=1,2,...,N into a functional form. The prevalent method for estimating functional data is basis approximation. Various basis systems can be employed depending on the characteristics of the curves.
A common approach is to represent functions using a finite set of basis functions in a fixed basis system. This can be mathematically expressed as:
x_i(t) ≈∑_s=1^S c_isϕ_s(t),
where c_i = (c_i1, ..., c_iS)^T represents the vector of coefficients defining the linear combination, ϕ_s(t) is the s-th basis function, and S is the finite number of basis functions used to truncate the complete basis expansion. Another trending methodology involves leveraging a data-driven basis, with Functional Principal Components (FPCs) decomposition. This approach effectively reduces dimensionality while preserving essential information from the original dataset <cit.>. In this context, the approximation of functional data can be expressed as follows:
x_i(t) ≈∑_k=1^Kν_ikξ_k(t),
where K is the number of FPCs, ν_ik represents the score of the generic FPC ξ_k for the generic function x_i (i=1,2,...,N). By reducing this representation to the initial p FPCs, we obtain an estimate of the sample curves, and the explained variance is given by ∑_k=1^p λ_k, where λ_k denotes the variance associated with the k-th functional principal component. The construction of the FPCs approximation is designed such that the variance explained by the k-th FPC decreases with increasing values of k.
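Since the enriched classifiers below rely on the fixed-basis expansion of Eq. (<ref>) rather than on a data-driven basis, a minimal Python sketch of that representation may help: each discretely observed curve is projected on a common cubic B-spline basis by least squares and its coefficient vector c_i is kept as features. The observation grid, the number of interior knots and the toy signals are illustrative assumptions.

import numpy as np
from scipy.interpolate import make_lsq_spline

t_grid = np.linspace(0.0, 1.0, 200)                      # common observation grid (assumed)
k = 3                                                    # cubic B-splines (order 4)
knots = np.concatenate(([0.0] * (k + 1),
                        np.linspace(0.0, 1.0, 22)[1:-1], # 20 interior knots (assumed)
                        [1.0] * (k + 1)))

def basis_coefficients(z):
    """Least-squares B-spline coefficients c_i for one observed curve z_j."""
    return make_lsq_spline(t_grid, z, knots, k=k).c

# Toy example: two noisy curves projected on the same fixed basis.
rng = np.random.default_rng(0)
z1 = np.sin(2 * np.pi * t_grid) + 0.05 * rng.standard_normal(t_grid.size)
z2 = np.sin(4 * np.pi * t_grid) + 0.05 * rng.standard_normal(t_grid.size)
C = np.vstack([basis_coefficients(z1), basis_coefficients(z2)])
print(C.shape)    # (2, S): one row of S coefficients per curve

Because the knots are fixed in advance, the same call applied to test curves yields coefficients that are directly comparable to those of the training curves.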
Various domains, such as time, space, or other parameters, can represent the variable T. The response can be categorical or numerical, leading to classification or regression challenges. However, this study is specifically concerned with a particular scenario: a scalar-on-function classification problem.
In functional classification, the objective is to forecast the class or label Y for an observation X within a separable metric space (Ξ, d). Consequently, our methodology is tailored for functional data represented as y_i, x_i(t), where x_i(t) is a predictor curve defined for t ∈ T, and y_i denotes the scalar response observed at sample i = 1, ..., N. The classification of a novel observation x from X involves the creation of a mapping f:Ξ⟶{ 0, 1, ..., C }, referred to as a “classifier”, which assigns x to its predicted label. The error probability is quantified by P { f(X) ≠ Y }.
§.§ Enriched Functional Features
§.§.§ Functional Derivatives
Let the functional derivative of order r for the i-th curve be represented by a fixed basis system (e.g. b-splines) as:
x_i^(r)(t)=∑_j=1^S c_i j ^(r)ϕ_j^(r)(t) j=1, …, S
where c_i j ^(r) is the coefficient of the i-th curve, j-th b-spline, and r-th derivative order;
ϕ_j^(r)(t) is the r-th derivative of the j-th basis function.
<cit.> stressed that the selection of the basis system plays a crucial role in estimating derivatives. It is essential to ensure that the chosen basis for representing the object can accommodate the order of the derivative to be calculated. In the case of b-spline bases, this implies that the spline's order must be at least one higher than the order of the derivative under consideration. In this research, we concentrate on a b-spline basis of fourth order.
In the following sections, we will limit our attention to the first two derivatives.
The first derivative of the function x_i(t) in the B-spline representation is given by:
x_i^(1)(t) = ∑_j=1^S c_ij^(1)ϕ_j^(1)(t)
where c_ij^(1) are the coefficients corresponding to the first derivative of the function, and ϕ_j^(1)(t) is the first derivative of the j-th B-spline basis function.
Similarly, the second derivative of the function x_i(t) can be expressed as:
x_i^(2)(t) = ∑_j=1^S c_ij^(2)ϕ_j^(2)(t)
where c_ij^(2) are the coefficients for the second derivative of the function, and ϕ_j^(2)(t) is the second derivative of the j-th B-spline basis function.
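A minimal sketch of Eqs. (<ref>) and (<ref>): once a curve has been fitted on the B-spline basis, its first and second derivatives are obtained analytically from the spline object, and their coefficient vectors become additional features. The grid, knots and toy curve are the same illustrative assumptions used above.

import numpy as np
from scipy.interpolate import make_lsq_spline

t_grid = np.linspace(0.0, 1.0, 200)
k = 3
knots = np.concatenate(([0.0] * (k + 1), np.linspace(0.0, 1.0, 22)[1:-1], [1.0] * (k + 1)))
z = np.sin(2 * np.pi * t_grid)                 # toy curve

spl = make_lsq_spline(t_grid, z, knots, k=k)   # x_i(t) in the B-spline basis
d1 = spl.derivative(1)                         # x_i^(1)(t): local speed of the signal
d2 = spl.derivative(2)                         # x_i^(2)(t): local acceleration

c0, c1, c2 = spl.c, d1.c, d2.c                 # coefficient blocks for the classifier
print(c0.size, c1.size, c2.size)               # derivative splines carry slightly fewer coefficients

Note that scipy represents the r-th derivative on a basis of reduced degree, so its coefficient vector is slightly shorter; in the feature-matrix sketch further below, all transformations are instead refit on the original basis so that every block has the same length S.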
In the context of functional supervised classification, B-spline versions of derivatives enhance the representation of functional features in the data by providing additional information on local variations of the curves, such as local speed and acceleration, which can be crucial for distinguishing between different classes. In supervised classification, the speed at which a functional signal changes over time can be a key factor for class separation. For example, knowing how the heart rate varies over time in a medical dataset becomes more illuminating when considering the speed of these changes at different time intervals.
On the other hand, acceleration can indicate specific events or sharp changes that help differentiate one class from another, thus further improving the accuracy of the model.
Additionally, B-spline derivatives allow for smoothed and stable derivative calculations that are less noise-sensitive than directly computed derivatives. This enriches the feature set used by classification models, improving predictive performance and class recognition.
§.§.§ Functional Curvature and Radius of Curvature
The curvature κ(t) of the function x_i(t) is a measure of how rapidly the function changes direction at each point t. The curvature is defined as:
κ(t) = |x_i^(2)(t)|/(1 + (x_i^(1)(t))^2)^3/2
where x_i^(1)(t) is the first derivative and x_i^(2)(t) is the second derivative of the function x_i(t).
The numerator |x_i^(2)(t)| represents the magnitude of the acceleration, while the denominator adjusts for the influence of the slope to ensure that the curvature is independent of the scale of t. This expression provides a comprehensive measure of the function's tendency to bend at each point t, capturing both the speed of change and the rate at which this speed itself changes.
The curvature κ(t) of the function x_i(t) can also be defined in terms of B-spline basis as follows:
κ(t) = |∑_j=1^S c_ij^(2)ϕ_j^(2)(t)|/(1 + (∑_j=1^S c_ij^(1)ϕ_j^(1)(t))^2)^3/2
where c_ij^(1) and c_ij^(2) are the coefficients corresponding to the first and second derivatives of the function x_i(t), and ϕ_j^(1)(t) and ϕ_j^(2)(t) are the first and second derivatives of the j-th B-spline basis function, respectively.
To use the curvature κ(t) in a classifier, we need to extract the curvature coefficients associated with the B-spline basis functions. However, directly extracting coefficients from the above nonlinear expression for curvature is challenging because it involves a nonlinear combination of the B-spline basis functions. To overcome this, for practical use in classification, we can use the following steps.
First, we compute the curvature κ(t) at a set of sampled points t_1, t_2, …, t_M over the domain τ. This results in a vector of curvature values κ(t_1), κ(t_2), …, κ(t_M).
Next, we fit these discrete curvature values to a B-spline basis:
κ(t) ≈∑_k=1^S d_ikϕ_k(t)
where ϕ_k(t) are the B-spline basis functions, and d_ik are the coefficients representing the curvature in the B-spline basis.
The coefficients d_ik extracted from the B-spline fit are then used as features in the classifier.
The radius of curvature R(t) is defined as the reciprocal of the curvature κ(t):
R(t) = 1/κ(t)≈1/∑_k=1^S d_ikϕ_k(t)
In this form, the radius of curvature is represented as the reciprocal of the B-spline expansion of curvature. However, this form is not linear, which complicates direct coefficient extraction.
To facilitate coefficient extraction, we can compute the radius of curvature at sampled points and then refit these values using a B-spline basis:
R(t) ≈∑_k=1^S e_ikϕ_k(t)
where ϕ_k(t) are the B-spline basis functions, and e_ik are the coefficients representing the radius of curvature in the B-spline basis.
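The sample-and-refit strategy just described can be sketched in a few lines: curvature and radius of curvature are evaluated on a grid from the fitted derivatives and then projected back onto the same fixed B-spline basis, yielding the coefficients d_ik and e_ik. The clipping of near-zero curvature and all toy settings are illustrative assumptions.

import numpy as np
from scipy.interpolate import make_lsq_spline

t_grid = np.linspace(0.0, 1.0, 200)
k = 3
knots = np.concatenate(([0.0] * (k + 1), np.linspace(0.0, 1.0, 22)[1:-1], [1.0] * (k + 1)))
z = np.sin(2 * np.pi * t_grid)

spl = make_lsq_spline(t_grid, z, knots, k=k)
x1 = spl.derivative(1)(t_grid)                          # x^(1)(t_m) on the sampling grid
x2 = spl.derivative(2)(t_grid)                          # x^(2)(t_m) on the sampling grid

kappa = np.abs(x2) / (1.0 + x1 ** 2) ** 1.5             # curvature kappa(t_m)
radius = 1.0 / np.clip(kappa, 1e-8, None)               # radius of curvature, guarded where kappa ~ 0

d_coef = make_lsq_spline(t_grid, kappa, knots, k=k).c   # d_ik: curvature refit on the basis
e_coef = make_lsq_spline(t_grid, radius, knots, k=k).c  # e_ik: radius refit on the basis
print(d_coef.size, e_coef.size)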
Figure <ref> illustrates two examples of the geometric interpretation of curvature and radius of curvature for a smooth curve. The blue curve represents the original function, while the purple circle is the osculating circle at a point of interest. The red dot marks the point on the curve where the curvature is being evaluated. The closeness of the osculating circle to the curve at this location visualizes the curvature.
Curvature measures how sharply the curve bends at a given point, while the radius of curvature, being the reciprocal of the curvature, provides insight into the tightness or gentleness of this bend. A high curvature results in a small radius of curvature, indicating a sharp turn, whereas a low curvature corresponds to a larger radius, reflecting a gentler bend. The radius of curvature is depicted as the distance from the red dot to the center of the osculating circle.
Curvature and radius of curvature describe the local geometric properties of a curve and are valuable features for supervised classification. They provide complementary insights: curvature highlights sharp local changes in direction, making it crucial for detecting sudden transitions in the signal, while the radius of curvature offers additional context on how the curve behaves over a wider range. These features are particularly useful when distinguishing between different classes in time series or functional data.
By incorporating both curvature and radius of curvature, classification models gain a richer understanding of the signal's local and global geometry. These features capture local shape details, such as turning points or abrupt changes, which may indicate a specific class. For instance, in a medical dataset, significant variations in curvature could signal pathological conditions. Moreover, curvature and radius of curvature are robust to translations and scaling, which enhances their ability to generalize across different datasets. This robustness makes them valuable for improving classification models, as they help preserve important geometric patterns regardless of how the data is presented.
§.§.§ Functional Elasticity
The elasticity E(t) of a function x_i(t) is a measure of how responsive the function is to changes in its input, often expressed as the product of the first derivative of the function and the ratio of the input t to the function value x_i(t):
E(t) = x_i^(1)(t) ·t/x_i(t)
Given that the first derivative x_i^(1)(t) and the function x_i(t) can be expressed using B-spline basis functions, the elasticity can be represented as:
E(t) = (∑_j=1^S c_ij^(1)ϕ_j^(1)(t)) ·t/∑_j=1^S c_ijϕ_j(t)
Here, c_ij^(1) are the coefficients for the first derivative, and c_ij are the coefficients for the original function, represented using the B-spline basis ϕ_j(t) and its derivative ϕ_j^(1)(t) as in Eqs. (<ref>) and (<ref>). This expression, however, is not linear due to the ratio t/x_i(t), making direct coefficient extraction complex.
To facilitate the extraction of coefficients for elasticity, we can compute the elasticity at sampled points and then refit these values using a B-spline basis:
E(t) ≈∑_k=1^S f_ikϕ_k(t)
where ϕ_k(t) are the B-spline basis functions, and f_ik are the coefficients representing the elasticity in the B-spline basis.
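Elasticity follows the same sample-and-refit pattern used for curvature; the only extra care in this minimal sketch is keeping the toy curve away from zero, an assumption needed because E(t) divides by the function value.

import numpy as np
from scipy.interpolate import make_lsq_spline

t_grid = np.linspace(0.0, 1.0, 200)
k = 3
knots = np.concatenate(([0.0] * (k + 1), np.linspace(0.0, 1.0, 22)[1:-1], [1.0] * (k + 1)))
z = 2.0 + np.sin(2 * np.pi * t_grid)                            # strictly positive toy curve (assumed)

spl = make_lsq_spline(t_grid, z, knots, k=k)
elasticity = spl.derivative(1)(t_grid) * t_grid / spl(t_grid)   # E(t_m) on the grid
f_coef = make_lsq_spline(t_grid, elasticity, knots, k=k).c      # f_ik: elasticity refit on the basis
print(f_coef.size)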
Elasticity offers a different perspective from other geometric features, such as curvature or radius of curvature, focusing specifically on the function's rate of change relative to the input itself. While curvature deals with how sharply a function bends, elasticity quantifies the proportional change of the function in response to changes in the independent variable. This additional information can be crucial in cases where the magnitude of the input plays a role in interpreting the dynamics of the signal.
One key aspect of elasticity is its ability to capture scale-invariant properties of the function. Unlike curvature, which focuses on the geometry of the curve, elasticity reflects how the function reacts to the growth or decline of its input, making it highly relevant in scenarios where relative change matters more than absolute values. This is especially useful in fields like economics, where the proportional responsiveness of variables is more significant than their absolute variations, or in biological systems where response to stimuli may scale with intensity.
In supervised classification, elasticity highlights the signal's sensitivity to changes in the independent variable over time. For instance, in time series classification, elasticity could identify periods of rapid growth or decay that differentiate one class from another, such as distinguishing between stable versus volatile behaviour periods in financial data. Another vital aspect is elasticity's ability to reveal non-linear relationships between the input and output. Unlike derivative-based measures that are linear, elasticity incorporates both the function’s value and its rate of change, capturing a richer, non-linear interaction. Finally, elasticity complements features like curvature by focusing on input-output responsiveness, making it useful for applications requiring a holistic understanding of local behaviours and global trends. When used together in classification tasks, these features provide a more nuanced understanding of the signal’s behaviour, enriching the feature space and improving the model's ability to capture diverse patterns.
§.§.§ The Enriched Functional Features Matrix
Using fixed systems, the matrix of features for the original functional representation is determined by:
𝐂=
[ c_11 … c_1S; ⋮ ⋱ ⋮; c_N1 … c_NS ],
where the generic element c_is is the coefficient of the i-th curve (i = 1,…,N) relative to the s-th (s = 1,…,S) basis function ϕ_s(t). As a natural consequence, 𝐜_i is the vector containing the i-th statistical unit's characteristics.
Incorporating coefficients derived from derivatives, curvature, radius of curvature, and elasticity, we have:
* First Derivative Coefficients:
𝐂^(1)=
[ c_11^(1) … c_1S^(1); ⋮ ⋱ ⋮; c_N1^(1) … c_NS^(1) ],
where c_is^(1) are the coefficients associated with the first derivative of the function.
* Second Derivative Coefficients:
𝐂^(2)=
[ c_11^(2) … c_1S^(2); ⋮ ⋱ ⋮; c_N1^(2) … c_NS^(2) ],
where c_is^(2) are the coefficients associated with the second derivative of the function.
* Curvature Coefficients:
𝐃=
[ d_11 … d_1S; ⋮ ⋱ ⋮; d_N1 … d_NS ],
where d_is are the coefficients representing the curvature in the B-spline basis.
* Radius of Curvature Coefficients:
𝐄=
[ e_11 … e_1S; ⋮ ⋱ ⋮; e_N1 … e_NS ],
where e_is are the coefficients representing the radius of curvature in the B-spline basis.
* Elasticity Coefficients:
𝐅=
[ f_11 … f_1S; ⋮ ⋱ ⋮; f_N1 … f_NS ],
where f_is are the coefficients representing the elasticity in the B-spline basis.
By incorporating these additional coefficients into the feature matrix, we can significantly enhance the power of the functional classifiers. The complete feature matrix, combining all these elements, can be written as:
𝐒 = [ c_11 ⋯ c_1S c^(1)_11 ⋯ c^(1)_1S c^(2)_11 ⋯ c^(2)_1S d_11 ⋯ d_1S e_11 ⋯ e_1S f_11 ⋯ f_1S; ⋮ ⋱ ⋮; c_N1 ⋯ c_NS c^(1)_N1 ⋯ c^(1)_NS c^(2)_N1 ⋯ c^(2)_NS d_N1 ⋯ d_NS e_N1 ⋯ e_NS f_N1 ⋯ f_NS ]
With the theoretical framework presented above, in the subsequent application and simulations, we focus on the case presented in Equation <ref> to demonstrate the potential of the proposed functional information enhancement via b-spline decomposition.
Equation <ref> represents the Enriched features used to train the different functional classifiers. It is important to emphasize that the curves in the test set are also represented using the same fixed B-spline basis system as the training set. Since the basis functions are fixed and predefined, the representation of test set curves is consistent with that of the training set. This approach ensures that the coefficients derived from the test curves are directly comparable to those obtained from the training curves.
In contrast, with a data-driven basis system, where the basis functions are derived from the data itself (e.g., using functional principal component analysis), we would face the challenge of having to project the test set curves onto a potentially different set of basis functions than those used for the training set. This could lead to inconsistencies and complications, as the basis functions might differ depending on the specific characteristics of the training data. By using a fixed basis system, we avoid these issues, allowing for a more straightforward and robust model application to new, unseen data.
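Putting the pieces together, a minimal sketch of how a single row of the enriched feature matrix 𝐒 of Eq. (<ref>) could be assembled is given below: every transformation is sampled on the grid and refit on the same fixed basis, so that each of the six blocks contributes the same number of coefficients. The helper names, numerical guards and toy data are illustrative assumptions.

import numpy as np
from scipy.interpolate import make_lsq_spline

t_grid = np.linspace(0.0, 1.0, 200)
k = 3
knots = np.concatenate(([0.0] * (k + 1), np.linspace(0.0, 1.0, 22)[1:-1], [1.0] * (k + 1)))

def refit(values):
    """Project grid values back on the fixed basis and return the coefficients."""
    return make_lsq_spline(t_grid, values, knots, k=k).c

def enriched_row(z):
    """Enriched feature vector S_i for one discretely observed curve z."""
    spl = make_lsq_spline(t_grid, z, knots, k=k)
    x0 = spl(t_grid)
    x1 = spl.derivative(1)(t_grid)
    x2 = spl.derivative(2)(t_grid)
    kappa = np.abs(x2) / (1.0 + x1 ** 2) ** 1.5
    radius = 1.0 / np.clip(kappa, 1e-8, None)
    elasticity = x1 * t_grid / np.where(np.abs(x0) < 1e-8, 1e-8, x0)
    blocks = [spl.c] + [refit(v) for v in (x1, x2, kappa, radius, elasticity)]
    return np.concatenate(blocks)

# Toy training matrix: N curves -> N rows of enriched coefficients.
rng = np.random.default_rng(1)
curves = [np.sin(2 * np.pi * f * t_grid) + 0.05 * rng.standard_normal(t_grid.size)
          for f in (1, 1, 2, 2)]
S = np.vstack([enriched_row(z) for z in curves])
print(S.shape)     # (N, 6 * S_basis)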
§.§ Enriched Functional Classification Trees
In the context of functional data analysis (FDA), classifying functional observations into distinct categories based on their intrinsic properties is a central problem. Suppose we have a set of functional observations { x_i(t) }_i=1^N, where each x_i(t) is a function defined over a domain t ∈τ, and y_i ∈{1, 2, …, C} represents the categorical outcome associated with each function.
The task of functional classification then involves finding a mapping f: ℝ^S →{1, 2, …, C} such that:
ŷ_i = f(𝐒_i)
where ŷ_i is the predicted class label for the i-th observation and 𝐒_i is the vector of scores associated with curve i.
By using the entire set of B-spline coefficients 𝐒 as input features, we ensure that the classifier can comprehensively represent the functional data. This allows the classifier to effectively distinguish between different classes based on the shape and other characteristics of the functions. Furthermore, because the B-spline basis is fixed across the training and test sets, the representation remains consistent, avoiding the complexities arising from data-driven basis systems, where the basis functions could differ between datasets.
The core idea behind Enriched Functional Classification Trees (EFCTs) is to extend the classical decision tree methodology by incorporating features derived from functional data representations. This is done explicitly using B-spline coefficients of the original function and its various functional transformations, including the first and second derivatives, curvature, radius of curvature, and elasticity.
Let 𝐒_i denote the vector of B-spline coefficients associated with the i-th functional observation. This vector is composed of coefficients derived from the original function x_i(t), as well as its first and second derivatives, curvature, radius of curvature, and elasticity:
𝐒_i = (c_i1, …, c_iS, c_i1^(1), …, c_iS^(1), c_i1^(2), …, c_iS^(2), d_i1, …, d_iS, e_i1, …, e_iS, f_i1, …, f_iS)^⊤
where:
* c_ij are the B-spline coefficients for the original function x_i(t),
* c_ij^(1) and c_ij^(2) are the coefficients for the first and second derivatives, respectively,
* d_ij, e_ij, and f_ij are the coefficients corresponding to the curvature, radius of curvature, and elasticity, respectively.
The feature vector 𝐒_i provides a comprehensive representation of the i-th functional data, capturing its various transformations and ensuring that the functional nature of the data is effectively utilized within the decision tree framework. In EFCTs, each split in the tree is based on one or more of these coefficients, allowing the tree to make decisions that are sensitive to specific parts of the functional domain τ and different orders of derivatives.
The task of classification can then be expressed as finding a mapping f: ℝ^pS→{1, 2, …, C}, where p is the total number of functional transformations considered (including the original function, derivatives, curvature, etc.):
ŷ_i = f(𝐒_i)
where ŷ_i is the predicted class label for the i-th observation.
The feature matrix used to train the functional classifiers is constructed from the B-spline coefficients of all functional transformations considered. This matrix, as illustrated in Equation <ref>, serves as the input to the EFCT model.
𝐗 =
[ 𝐒_1; 𝐒_2; ⋮; 𝐒_N ]
The EFCT algorithm, obtained as a natural extension of the functional classification tree proposed by <cit.>, which however focuses only on the functional principal component scores of the original functions, is illustrated in Algorithm <ref>.
When working with a single tree, it can be pruned using classical methods such as cost-complexity pruning to avoid overfitting. The increase in available features caused by the enrichment with functional transformations makes pruning necessary to prevent poor generalization ability. As in the classic case, a single EFCT, although quite accurate and easy to interpret, suffers from high variance, as will also be illustrated in the application section. For this reason, resorting to ensemble methods is advantageous, at least in terms of estimation performance.
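As an illustration, a single EFCT can be obtained by training a standard CART classifier on the enriched coefficient matrix and selecting the cost-complexity pruning parameter by cross-validation; the use of scikit-learn, the random stand-in data and the grid of penalties are assumptions of this sketch, not part of the original algorithm.

import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
S = rng.standard_normal((100, 144))      # stand-in for the enriched matrix (N x 6S)
y = rng.integers(0, 2, size=100)         # toy binary labels

efct = GridSearchCV(DecisionTreeClassifier(random_state=0),
                    param_grid={"ccp_alpha": [0.0, 1e-3, 1e-2, 5e-2]},
                    cv=5).fit(S, y)
print(efct.best_params_)                 # pruning level chosen by cross-validation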
§.§ Enriched Ensembles Methods for Functional on Scalar Classification Problems
While EFCTs provide a robust framework for functional data classification using B-spline coefficients, they can be further enhanced through ensemble methods. Ensemble methods, such as Random Forests, XGBoost, and LightGBM, leverage the strengths of multiple models to improve predictive performance, reduce variance, and increase robustness.
§.§.§ Enriched Functional Random Forests (EFRF)
The Random Forest algorithm is a natural extension of decision trees, where multiple trees are constructed, and their predictions aggregated to produce a final classification. In the context of functional data, the Enriched Functional Random Forest (EFRF) algorithm operates by constructing a collection of EFCTs, each trained on a bootstrap sample of the functional data represented by B-spline coefficients.
Each EFCT within the EFRF is constructed by recursively splitting the feature space, where the features are the B-spline coefficients. At each node in the tree, a split is made based on one of these coefficients, which corresponds to a specific aspect of the functional data, such as the value of the original function, its first derivative, second derivative, curvature, radius of curvature, or elasticity. The threshold used for the split at each node represents a critical value of a particular B-spline coefficient that best separates the data into distinct categories. For example, a split might be based on a coefficient associated with the first derivative, indicating that the decision is influenced by the rate of change in the function at a specific point in time.
In other words, each EFCT in the EFRF is built using the same process as the EFCT but with the added randomness of selecting a subset of the B-spline coefficients at each split. This process ensures that each EFCT in the forest is slightly different, reducing the correlation between EFCTs and thereby improving the overall predictive accuracy of the ensemble. The final prediction is made by aggregating the predictions of all trees in the forest, typically through majority voting.
The structure of each EFCT can be viewed as a hierarchical sequence of decisions, starting from the root node representing the entire functional dataset and progressing down to the leaf nodes where final class decisions are made. Each path from the root to a leaf node reflects a series of rules that successively refine the classification based on different features of the functional data. For instance, a path might start with a split on a B-spline coefficient related to the original function x_i(t) and then proceed with a split on a coefficient associated with the first derivative x_i^(1)(t), suggesting that both the function’s value and its rate of change are crucial for classifying the data.
An essential aspect of interpreting EFRF models is evaluating variable importance, which measures how often each B-spline coefficient is used in the splits and how much those splits contribute to the model’s accuracy. This analysis helps identify which functional features are most critical in distinguishing between classes. For example, if coefficients related to the second derivative are frequently selected for splits, it indicates that the acceleration of the function plays a significant role in the classification process.
Overall, while the individual trees in the forest may be complex, the EFRF model provides a coherent framework for classification by leveraging the rich information in the functional data. The consistency of using the same number of B-spline coefficients across all trees enhances the model's explainability, allowing for a meaningful understanding of how the functional data’s various transformations contribute to the classification decisions.
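The following sketch shows how an EFRF could be fit on the enriched coefficients and how the per-coefficient importances can be aggregated by functional transformation to see which block drives the splits; the stand-in data, hyper-parameters and block labels are illustrative assumptions.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_basis = 24                                    # coefficients per block (assumed)
S = rng.standard_normal((200, 6 * n_basis))     # stand-in for the enriched matrix
y = rng.integers(0, 2, size=200)

efrf = RandomForestClassifier(n_estimators=500, max_features="sqrt",
                              oob_score=True, random_state=0).fit(S, y)

# Aggregate variable importance per functional transformation.
blocks = ["x", "x'", "x''", "curvature", "radius", "elasticity"]
block_importance = efrf.feature_importances_.reshape(6, n_basis).sum(axis=1)
print(dict(zip(blocks, np.round(block_importance, 3))), "OOB:", efrf.oob_score_)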
§.§.§ Enriched Functional XGBoost (EFXGB)
XGBoost (Extreme Gradient Boosting) is an advanced and scalable implementation of gradient boosting algorithms that excels in both predictive accuracy and computational efficiency. In the context of functional data analysis, we extend XGBoost by using B-spline coefficients as features derived from functional data. This approach allows the model to capture and utilise the intricate structure inherent in functional data. The model we propose is termed Enriched Functional XGBoost (EFXGB).
Let 𝐒_i represent the vector of B-spline coefficients for the i-th functional observation, which includes coefficients from the original function x_i(t), its first and second derivatives, curvature, radius of curvature, and elasticity.
The predicted class label for the i-th observation is given by ŷ_i = f(𝐒_i).
In EFXGB, the goal is to minimise a loss function ℒ(𝐒, 𝐲) over the predictions 𝐲̂, where 𝐲 = (y_1, …, y_N)^⊤ are the true labels. The model builds an ensemble of EFCTs sequentially, where each EFCT f_m(𝐒) is trained to correct the errors made by the previous trees. The prediction for the i-th observation after m EFCTs is:
ŷ_i^(m) = ∑_k=1^mα_k f_k(𝐒_i)
where α_k are weights assigned to each tree, typically learned during training. The model iteratively updates these EFCTs by minimising the following objective function:
ℒ^(m) = ∑_i=1^N l(y_i, ŷ_i^(m-1) + α_m f_m(𝐒_i)) + Ω(f_m)
where l(·) is a differentiable convex loss function, such as logistic loss or squared error, and Ω(f_m) is a regularization term that penalizes the complexity of the EFCT f_m(𝐒). The regularization term Ω(f_m) typically includes the number of leaves T in the EFCT and the L_2-norm of the leaf weights:
Ω(f_m) = γ T + 1/2λ∑_j=1^T w_j^2
where w_j represents the weight assigned to leaf j, γ controls the complexity of the model, and λ is the regularization parameter.
During each iteration, the model calculates the first and second-order gradients of the loss function concerning the predictions, known as the gradient g_i^(m) and Hessian h_i^(m):
g_i^(m) = ∂ l(y_i, ŷ_i^(m-1))/∂ŷ_i^(m-1)
h_i^(m) = ∂^2 l(y_i, ŷ_i^(m-1))/∂ŷ_i^(m-1)^2
These gradients and Hessians are used to fit the new EFCT f_m(𝐒) by minimising a second-order approximation of the loss function. The decision rules within each EFCT are based on the B-spline coefficients, allowing the model to leverage the functional characteristics of the data throughout the boosting process.
EFXGB's flexibility in handling various loss functions and incorporating regularisation makes it particularly powerful for complex functional classification tasks. Using B-spline coefficients ensures that the functional nature of the data is preserved and leveraged at each step, resulting in a model that is both accurate and low in variance. Each EFCT adds information about the functional data, gradually refining the model's predictions by focusing on correcting the errors from previous iterations.
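For illustration, the following Python sketch shows the quantities involved: the gradient and Hessian of the logistic loss used to fit each new EFCT, and a boosted-tree model trained directly on the enriched B-spline coefficient matrix. The use of the xgboost library, the hyperparameter values, and the variable names (S_train, y_train) are illustrative assumptions, not the configuration adopted in the experiments.

```python
import numpy as np
from xgboost import XGBClassifier   # assumed available; any gradient-boosting library would do

def logistic_grad_hess(y, y_hat_raw):
    """First/second-order derivatives of the logistic loss w.r.t. the raw prediction,
    i.e. the g_i and h_i used to fit each new EFCT (y in {0, 1})."""
    p = 1.0 / (1.0 + np.exp(-y_hat_raw))   # predicted probability
    g = p - y                              # gradient
    h = p * (1.0 - p)                      # Hessian
    return g, h

# EFXGB sketch: boosted trees on the enriched B-spline coefficient matrix S (N x D)
efxgb = XGBClassifier(
    n_estimators=300, max_depth=3, learning_rate=0.1,
    reg_lambda=1.0,   # lambda: L2 penalty on leaf weights
    gamma=0.0,        # gamma: penalty per additional leaf
)
# efxgb.fit(S_train, y_train); y_pred = efxgb.predict(S_test)   # hypothetical data
```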
§.§.§ Enriched Functional LightGBM (EFLGBM)
In this section, we extend the Light Gradient Boosting Machine (LightGBM) to functional data classification tasks by incorporating previously enriched features, including B-spline coefficients derived from the original function, its derivatives, curvature, radius of curvature, and elasticity. This extension, termed Enriched Functional LightGBM (EFLGBM), efficiently captures the structure of functional data while leveraging LightGBM's computational advantages.
Similar to EFXGB, EFLGBM constructs an ensemble of EFCTs to minimise the same loss function ℒ(𝐒, 𝐲) defined in Equation <ref>. The prediction for the i-th observation after m EFCTs follows the same formulation:
ŷ_i^(m) = ∑_k=1^mα_k f_k(𝐒_i)
where α_k are the weights associated with each EFCT, as described in the EFXGB section.
A critical difference between EFLGBM and EFXGB lies in the tree-growing strategy. While EFXGB grows EFCTs via a level-wise approach, EFLGBM employs a leaf-wise growth strategy in which, at each iteration, the model splits the leaf that yields the largest reduction in the loss function. This approach allows EFLGBM to explore more complex EFCTs, potentially capturing subtle patterns in the functional data. The objective function for EFLGBM is identical to the one used for EFXGB, with the regularisation term Ω(f_m) controlling the model's complexity through the number of leaves T and the leaf weights w_j:
Ω(f_m) = γ T + 1/2λ∑_j=1^T w_j^2
As in EFXGB, the model relies on first and second-order gradients g_i^(m) and Hessians h_i^(m) to fit each new EFCT, using g_i^(m) = ∂ l(y_i, ŷ_i^(m-1))/∂ŷ_i^(m-1) and
h_i^(m) = ∂^2 l(y_i, ŷ_i^(m-1))/∂ŷ_i^(m-1)^2.
The leaf-wise growth of EFCTs in EFLGBM, combined with the enriched functional features, enables the model to efficiently capture complex interactions in the functional data, leading to highly accurate and low-variance classification models. By focusing on leaves with the greatest potential to reduce loss, EFLGBM balances computational efficiency with model complexity, making it a robust choice for functional data classification. EFLGBM, like EFXGB, preserves the data's functional characteristics by using B-spline coefficients as input features, ensuring that the underlying structure of the functional data is leveraged throughout the boosting process. However, its leaf-wise strategy allows it to scale more efficiently, especially in large datasets where subtle functional patterns must be detected.
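A corresponding sketch for EFLGBM is given below; with leaf-wise growth, complexity is controlled mainly through the number of leaves rather than a fixed depth. Again, the lightgbm library call, the parameter values, and the data names are illustrative assumptions rather than the study's configuration.

```python
from lightgbm import LGBMClassifier   # assumed available

# EFLGBM sketch: LightGBM grows each tree leaf-wise, so model complexity is
# governed primarily by num_leaves (the T of the regularisation term).
eflgbm = LGBMClassifier(
    n_estimators=300,
    num_leaves=31,        # maximum number of leaves per tree (leaf-wise growth)
    learning_rate=0.1,
    reg_lambda=1.0,       # L2 penalty on leaf weights
    min_split_gain=0.0,   # minimum loss reduction required to split a leaf
)
# eflgbm.fit(S_train, y_train)   # S_train: enriched B-spline coefficients (hypothetical)
```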
§ APPLICATIONS
In subsections <ref> and <ref>, we use seven time series datasets from the Time Series Classification Repository <cit.>, covering various application domains such as electrocardiogram (ECG) signals, image analysis, and energy demand. Table <ref> summarises the main characteristics of these datasets, including the number of training and test samples, the length of the time series, and the number of classes. Supervised classification of functional data is performed using the methods described in Section 2. While we evaluate the performance of the proposed methods across all datasets, we focus particularly on the Car dataset to illustrate the methodology in detail. This includes the steps of data preparation, applying functional classification models, and interpreting the results. We will only present the final results for the other datasets, comparing our approach with existing methods in the literature.
In subsection <ref>, we test the method on six additional simulated datasets, each with a different number of classes. A detailed description of the simulation scenarios can be found in the same subsection. Table <ref> summarises key details of the simulated datasets, including the number of classes, time points, and the size of the training and test datasets.
§.§ Detailed Methodology Description using the Car Dataset
The Car dataset contains outlines of four different types of cars, extracted from traffic videos using motion information. These images were mapped onto a 1-D time series, and the vehicles were classified into one of the four categories: sedan, pickup, minivan, or SUV. Further details about the dataset are available in the work by <cit.>.
We utilise B-spline basis functions to approximate the original curves and extract enriched features, such as derivatives, curvature, radius of curvature, and elasticity, as described in Section 2.
Figures <ref> and <ref> present the functional data for the training and test sets. The original curves are shown together with the first and second derivatives, the curvature, the radius of curvature, and the elasticity. The first plot displays the original curves, highlighting the characteristic shapes of the four vehicle types. The other plots focus on enriched functional features, such as the rate of change captured by the first derivatives, acceleration and deceleration seen in the second derivatives, the degree of bending in the curvature and radius of curvature, and the responsiveness measured by elasticity.
Concerning the functional representation through the fixed B-spline basis system, we stress that, although the number of bases could be selected through cross-validation for each dimension considered, in this context we prefer to use the classic rule, i.e. the number of B-splines equals the number of time instants plus the order of the B-splines minus two <cit.>.
This guarantees that we have an identical number of bases for each dimension. Since the coefficients of the linear combinations are the functional classifiers' features, having a different number of coefficients to represent the dimensions of each statistical unit could introduce bias within the classifiers. In other words, we avoid an imbalance between the weights of the various dimensions, i.e., giving more importance to some dimensions than to others (we aim to prevent, for example, cross-validation recommending hundreds of B-splines for the second derivative and only a few for the first derivative or other dimensions). Instead, using a consistent number of B-spline basis functions across all curve dimensions ensures that the coefficient matrices used as features have the same dimensionality.
Once the same number of bases has been chosen for all dimensions, based only on the number of time instants and the order of the B-splines (here we always use cubic splines, i.e. of order equal to 4), we proceed to extract the scores and use them as features to train different functional classifiers.
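A minimal sketch of this score-extraction step is given below. It assumes SciPy (version 1.8 or later for BSpline.design_matrix), uses a least-squares fit and standard textbook formulas for curvature, radius of curvature, and elasticity, and is meant only to illustrate the idea; the exact definitions and fitting procedure of Section 2 may differ.

```python
import numpy as np
from scipy.interpolate import BSpline

def enriched_scores(x, t, k=3):
    """Project one curve and its enriched transformations onto a common cubic
    B-spline basis and return the concatenated coefficients ('scores')."""
    # Clamped knot vector with interior knots at the inner sampling points:
    # this gives len(t) + k - 1 basis functions, matching the
    # "time instants + order - 2" rule for cubic splines (order = k + 1 = 4).
    knots = np.r_[[t[0]] * (k + 1), t[1:-1], [t[-1]] * (k + 1)]
    B = BSpline.design_matrix(t, knots, k).toarray()       # requires scipy >= 1.8

    def scores(v):
        # least-squares (minimum-norm) fit; the paper's fitting criterion may differ
        return np.linalg.lstsq(B, v, rcond=None)[0]

    spl = BSpline(knots, scores(x), k)
    d1, d2 = spl.derivative(1)(t), spl.derivative(2)(t)     # speed and acceleration
    curv = np.abs(d2) / (1.0 + d1 ** 2) ** 1.5              # curvature (textbook formula)
    radius = 1.0 / np.maximum(curv, 1e-8)                   # radius of curvature
    elast = d1 * t / np.where(np.abs(x) > 1e-8, x, 1e-8)    # elasticity: x'(t) * t / x(t)
    return np.concatenate([scores(v) for v in (x, d1, d2, curv, radius, elast)])
```

Stacking enriched_scores for every curve row-wise yields the enriched feature matrix used by the classifiers.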
Although, from a conceptual point of view, we expect the proposed enrichment to perform poorly when extended to functional K-NN, we want to test its performance and compare it with a traditional functional K-NN without enriched features. This choice is driven both by the desire to understand whether performance deteriorates and by the need to compare the enriched tree-based classifiers with a functional classifier that, in previous studies, has often shown competitive performance compared to more advanced methods <cit.>.
The expectation that the proposed enrichment may perform poorly in functional K-NN is based on how K-NN operates as a distance-based classifier. Functional K-NN relies on calculating distances between entire functions to classify new observations based on their proximity to existing labelled data. The introduction of enriched features, such as derivatives, curvature, and elasticity, could alter the functional data's underlying geometry, leading to distorted distance metrics. Moreover, K-NN's sensitivity to local noise and outliers could further exacerbate this issue when working with enriched functional data, as the additional features might amplify minor variations in the functional curves that are irrelevant for classification.
Therefore, in this section, we adopt the Enriched Functional K-NN (EFKNN), EFCTs, EFRF, EFXGB, and EFLGBM.
The main goal is to compare the accuracy of each of these methods, using only the coefficients of the splines of the original functions (non-enriched functional classifiers) and then considering the enriched version, that is, our proposal using all the coefficients of each dimension (derivatives, curvature, radius, and elasticity).
Subsequently, to compare with other classical functional methods, which do not necessarily use splines, we refer to some known functional classifiers in the R package fda.usc <cit.>.
Although parameter optimisation could improve individual model performance, we intentionally avoid in-depth optimisation for each classifier. This decision is based on two reasons. First, we compare 15 different models, each requiring separate optimisation, which would lead to different configurations even between the pairs of methods we want to compare directly (for example, FRF with and without enrichment). Hence, we use the same configuration for each pair of classifiers and assess whether, under identical conditions and without optimising either one, the enrichment produces effects in terms of performance.
The second reason is that to produce more robust results and not limit ourselves to the trivial comparison between single accuracy values of each model, we introduce variability in the basic configurations of the hyperparameters to have a more robust comparison between accuracy distributions for each functional classifier. In practice, this randomisation we produce to compare the results turns out to be a sort of random search tuning because we can get a deeper understanding of the classifier's potential by examining the upper range of its accuracy distribution. It reveals how each model can push its performance without extensive parameter tuning. Additionally, the goal is not to achieve the best possible accuracy but to evaluate whether enriching the features improves classification and, if so, with which models it works best. Most importantly, this approach ensures that improvements or drops in performance are due to the enriched feature representations and not a result of optimised hyperparameters for any specific method. This controlled approach helps eliminate the potential confounding effect of hyperparameter tuning and creates a controlled environment to observe how including enriched functional features impacts classification accuracy directly. Therefore, while in-depth optimisation for each classifier is possible, we prioritise comparability and robustness across all methods.
Variability in the estimates is introduced in several ways across the different classifiers to ensure robust results that are not due to random chance or overfitting. For each classification method, randomness is injected primarily by randomly altering specific hyperparameters at each iteration and through the inherent randomness of the algorithms themselves.
For the EFCTs and FCTs, the maximum depth and minimum number of samples required to split nodes are randomly selected, ensuring that different models are generated for each run. EFRF and FRF utilise their natural variability in bootstrapping and feature selection. For EFXGB, FXGB, EFLGBM, and FLGBM, parameters like tree depth, learning rate, and sample ratios are randomly adjusted in each iteration. For EFKNN and FKNN, the number of neighbours varies to observe the impact on classification accuracy. This approach ensures that the models are evaluated under a wide range of conditions, allowing a more robust performance comparison across classifiers.
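The following sketch illustrates this randomisation scheme for one classifier pair. A scikit-learn random forest is used as a stand-in for the FRF/EFRF pair, and the hyperparameter ranges, the number of runs, and the data names are illustrative assumptions only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def accuracy_distribution(X_tr, y_tr, X_te, y_te, n_runs=30):
    """Accuracy over repeated fits with randomly perturbed hyperparameters,
    mirroring the randomisation used to compare enriched vs. non-enriched features."""
    accs = []
    for _ in range(n_runs):
        model = RandomForestClassifier(
            n_estimators=int(rng.integers(200, 600)),
            max_depth=int(rng.integers(3, 12)),
            min_samples_split=int(rng.integers(2, 10)),
            random_state=int(rng.integers(1_000_000)),
        )
        model.fit(X_tr, y_tr)
        accs.append(accuracy_score(y_te, model.predict(X_te)))
    return np.array(accs)

# Compare, e.g., accuracy_distribution on the enriched score matrix vs. the
# original-curve score matrix, and plot the two distributions as boxplots.
```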
In addition, several classical methods from the fda.usc package are used, including recursive partitioning (rpart), neural networks (nnet), Support Vector Machines (svm), Random Forest, and cross-validated elastic-net regression (cv.glmnet). For these last methods, we use the default starting parameters, introduce variability as before to ensure robustness, and do not work on B-splines or enriched features.
This comprehensive comparison of methods allows for a systematic evaluation of the performance of traditional and modern machine learning techniques applied to raw functional data and feature-enriched representations across all simulated scenarios.
The accuracy results for different classifiers applied to the Car dataset are summarised in Figure <ref>. EFRF, EFXGB, EFLGBM, and EFCTs show improvements with enriched features. Classical FDA methods implemented using the fda.usc package also yield competitive results but are inferior to EFRF and EFXGB.
As we expected, EFCTs have greater variability than ensemble methods.
As expected, EFKNN is poorly suited for handling enriched features, as its performance significantly declines when incorporated.
The enriched features significantly enhance classification accuracy, particularly in ensemble methods. EFRF and EFXGB show notable improvements, while classical methods, particularly SVM and Random Forest without enrichment and b-splines' scores, also achieve competitive results.
§.§ Application to other Real Data
This section presents the main results of comparisons across six additional datasets, each with a different number of classes. As shown in Table <ref>, the number of outcome classes ranges between 2 and 7, the time series lengths range from 24 to 275 time instants, and the training and test sets vary in size from a minimum of 23 to a maximum of 1029 instances. In all cases, the classes are well-balanced, ensuring fair comparisons.
Figure <ref> illustrates the original curve representations of the six datasets, showing the time series for each class. The visualisations highlight the varying characteristics of the datasets, from the simple patterns to the more intricate structures in datasets like Plane and Trace.
Figure <ref> shows the boxplots for accuracy across six datasets. The methods used are the same as those proposed for illustrating the Car dataset.
Figure <ref> highlights a sharp distinction between the enriched and non-enriched versions of several classifiers for the ECG200 dataset. EFRF demonstrates the best overall performance, surpassing all other methods in accuracy. In comparison, the non-enriched FRF, FXGB, and FLGBM show consistently lower performance, reinforcing the value of enriched features.
However, FKNN performs poorly with enriched features, showing a significant drop in accuracy compared to using original curves. Classical methods such as randomForest and cv.glmnet from the fda.usc package perform competitively.
In the results of the ECGFiveDays dataset, there is a clear advantage for the enriched version of the FCT, surpassing its non-enriched counterpart. On the other hand, FKNN and FLGBM perform extremely poorly with both original and enriched features, while FXGB shows an improvement in enriched and original features. The FRF demonstrates improved performance with enriched features, positioning itself as one of the top-performing models, together with nnet. This suggests that the enriched features generally contribute positively to the performance, especially for EFRF and EFXGB.
In the Italy Power Demand dataset, the enriched versions of FRF, FXGB, and FLGBM perform slightly worse than their original curve counterparts, making this an exception to the usual trend. FRF still delivers high accuracy, close to the best results, but the enriched features do not provide a clear advantage this time. On the other hand, the SVM from the fda.usc package shows a wide distribution of accuracy, indicating more variability in performance compared to other methods, particularly in this dataset.
In the Plane dataset, the distinction between the enriched and original curve methods is minimal and slightly in favour of the enriched features for FRF and the two functional boosting methods. As usual, FKNN fails to perform when enriched. FCT shows significant variability and even experiences a slight drop in performance when enriched. Despite these small shifts, EFRF remains the top-performing classifier, with consistent accuracy across the board, demonstrating its robustness even in this dataset.
In the results of the Trace dataset, we observe a similar trend to the previous datasets. While both EFRF and EFXGB show strong performance, the best-performing model, in this case, slightly favours nnet. The enriched features boost the performance of FRF compared to their original curve counterparts, but EFKNN continues to underperform. As before, FCT shows considerable variability, though still yielding relatively high accuracy with enriched and original features.
Finally, for the TwoLeadECG dataset, FRF and FXGB show significant improvement when enriched. EFXGB achieves the highest overall accuracy in this dataset, surpassing other classifiers. On the other hand, FKNN and FLGBM exhibit poor performance, especially in the enriched versions. This highlights the limitations of FKNN and FLGBM in handling enriched features compared to methods like FRF and FXGB, which greatly benefit from the enriched representation.
§.§ Application to Simulated Datasets
To gather stronger evidence of the effectiveness of our methods, which already emerges from the analysis of the seven datasets presented, we conduct a simulation study. This approach allows us to evaluate performance across a broader range of scenarios, enhancing the reliability of our conclusions. Through controlled simulations, we can further assess how the enriched features interact with various classifiers and verify whether the observed improvements are consistent and significant across different synthetic data settings.
To evaluate the classifier's performance, we modify and adapt several models previously considered by <cit.> to generate functional data with distinct shapes.
Figure <ref> presents six simulated scenarios, each involving a different number of classes. The first three panels (Simulations 1, 2, and 3) represent binary classification problems, where the functional data are divided into two groups. Simulation 4 introduces a scenario with four distinct classes, while Simulations 5 and 6 involve three classes each. In binary classification scenarios, 100 curves per class are generated, and the functional data are plotted over 50 time observations. The only difference for multi-class classifications is that in these cases we generate 50 curves per class, instead of 100.
Scenario 1: To simulate the first scenario, we generated two groups of functional data using a model based on a Gaussian process. The base model is defined as X_i(t) = μ t + e_i(t), where t ∈ [0, 1] and e_i(t) is a Gaussian process with zero mean and covariance function γ(s,t) = αexp(-β |t - s|^ν). The two groups are created by adjusting some parameters in a way that introduces moderate differences between them, making the classification task non-trivial but not too simple.
For the first group, we set μ = 8 and generated 100 curves over 50 time points. For the second group, we slightly modified the base model by introducing a shift in the function, defined as X_i(t) = μ t + q k_i I_T_i ≤ t + e_i(t), where q = 3 and k_i ∈{-1, 1} with equal probability, and T_i is drawn from a uniform distribution over [a, b] = [0.5, 0.9]. This shift creates functional curves for the second group that differ moderately from those in the first group, ensuring the classification task is not overly simple.
The covariance structure for both groups is controlled by the parameters α = 1, β = 1, and ν = 1, ensuring consistent variability across the curves. The probability of the shift being positive or negative is set to 0.5 to avoid overly distinct group separation.
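A minimal simulation sketch for Scenario 1 is given below; it follows the formulas above and is illustrative of, not identical to, the authors' generating code.

```python
import numpy as np

def gp_noise(t, alpha=1.0, beta=1.0, nu=1.0, rng=None):
    """Zero-mean Gaussian process with covariance alpha * exp(-beta * |t - s|**nu)."""
    rng = np.random.default_rng() if rng is None else rng
    cov = alpha * np.exp(-beta * np.abs(t[:, None] - t[None, :]) ** nu)
    return rng.multivariate_normal(np.zeros(len(t)), cov)

def scenario1(n_per_class=100, n_time=50, mu=8.0, q=3.0, a=0.5, b=0.9, seed=0):
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 1.0, n_time)
    X, y = [], []
    for _ in range(n_per_class):                 # group 1: X(t) = mu*t + e(t)
        X.append(mu * t + gp_noise(t, rng=rng)); y.append(0)
    for _ in range(n_per_class):                 # group 2: random shift q*k after time T_i
        k = rng.choice([-1.0, 1.0])              # P(k = +1) = P(k = -1) = 0.5
        T = rng.uniform(a, b)
        X.append(mu * t + q * k * (t >= T) + gp_noise(t, rng=rng)); y.append(1)
    return np.array(X), np.array(y), t
```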
Scenario 2: For the second scenario, we generated two groups of periodic functional data using a model based on sinusoidal components with Gaussian noise. The base model for the first group is defined as X_i(t) = a_1isin(π t) + a_2icos(π t) + e_i(t), where e_i(t) is a Gaussian process with zero mean and covariance function γ(s,t) = αexp(-β |t - s|^ν). The parameters a_1i and a_2i were set to a_1i = 1 and a_2i = 8, respectively, to generate periodic curves for the first group.
For the second group, we applied a slight variation to the model by adding a shift, resulting in the modified model X_i(t) = (b_1isin(π t) + b_2icos(π t)) (1 - u_i) + (c_1isin(π t) + c_2icos(π t)) u_i + e_i(t), where u_i follows a Bernoulli distribution with P(u_i = 1) = 0.60, and b_1i∈ [1.5, 2.5], c_1i∈ [5, 10.5], creating functional curves that have subtle differences from those in the first group, while still retaining the periodic nature of the data.
The covariance structure remains the same for both groups, with α = 1, β = 1, and ν = 1. The variations introduced by the parameters b_1i and c_1i, combined with the probabilistic shift governed by u_i, create a moderate difference between the two groups, ensuring that the classification problem is non-trivial.
Scenario 3: For the third scenario, we generated two groups of functional data using a model that introduces differences in the shape of the curves over a specific portion of the domain. The base model for the first group is defined as X_i(t) = μ t + e_i(t), where e_i(t) is a Gaussian process with zero mean and covariance function γ(s,t) = αexp(-β |t - s|^ν). For this group, we set μ = 8.
The second group is generated by applying a shift and a change in the shape of the function, modeled as X_i(t) = μ t + (-1)^u q + (-1)^1-u( 1/√(π r)) exp(-z(t - v)^w) + e_i(t), where u follows a Bernoulli distribution with probability P(u = 1) = 0.1. In this scenario, we set q = 1.8, r = 0.02, z = 90, and w = 2. The parameter v is drawn from a uniform distribution over the interval [0.45, 0.55], introducing a localized change in the shape of the curve for the second group.
The covariance structure for both groups is controlled by the parameters α = 1, β = 1, and ν = 1, ensuring consistent variability across the curves. The slight shift and shape changes in the second group make the classification task more challenging, as the groups are not trivially distinguishable.
Scenario 4: For the fourth scenario, we used the same model described in the third simulation. The model introduces differences in the shape of the curves for a portion of the domain. The base model is given by X_i(t) = μ t + e_i(t), where e_i(t) is a Gaussian process with zero mean and covariance function γ(s,t) = αexp(-β |t - s|^ν).
For the first two groups (Group 1 and Group 2), we set μ = 0, q = 1, and controlled the timing of the shift and shape change using a uniform distribution for v, drawn from the interval [0.45, 0.45]. The other parameters governing the shape change were r = 0.02, w = 2, and z = 90. The covariance parameters were set to α = 1.3, β = 1.2, and ν = 1.
For the second set of groups (Group 3 and Group 4), we introduced further variations. Here, we used μ = -2, q = 1.8, and controlled the shift with v drawn from the interval [0.15, 0.15]. The shape-related parameters were set to r = 0.01, w = 5, and z = 90. The covariance parameters were adjusted to α = 0.8, β = 0.8, while keeping ν = 1.
Scenario 5: For the fifth scenario, we used the same model described in the previous simulations, but generated three distinct groups.
For Groups 1 and 2, the parameters were set as follows: μ = 0, q = 1.8, and the timing of the shape and shift was controlled by drawing v from the interval [0.45, 0.45]. The shape-related parameters were configured as r = 0.02, w = 2, and z = 90. The covariance parameters were set to α = 1, β = 1, and ν = 1.
For Group 3, we introduced different parameter values to create a distinct third group. Specifically, we set μ = 1, q = 0.8, and drew v from the interval [0.65, 0.65] to control the shift. The other parameters remained the same: r = 0.02, w = 2, and z = 90, while the covariance parameters were kept at α = 1, β = 1, and ν = 1.
Scenario 6: For the sixth scenario, we adapted the model used in the first simulation to generate three distinct classes by adjusting the parameters for each class.
For the first two groups (Group 1 and Group 2), the model was configured with μ = 2, q = 3, and a uniform distribution for T_i drawn from the interval [0.6, 0.75]. The covariance parameters were set to α = 2, β = 1, and ν = 0.5.
For the third group, we applied further parameter variations to create a distinct class. Here, μ = 2 and q = 3 were kept the same, but the uniform distribution for T_i was drawn from a narrower interval [0.8, 0.9]. The covariance parameters remained unchanged, with α = 2, β = 1, and ν = 0.5. This group was generated with 50 curves, introducing more pronounced differences compared to the first two, adding complexity to the classification task.
Figure <ref> presents the accuracy results for each of the six simulated scenarios, comparing the performance of various classification methods applied to both the original curves and enriched feature representations alongside classical functional data analysis methods.
The binary classification task in Scenario 1 shows that the enriched features generally outperform the original curve methods. The enriched version of FCT provides better accuracy than the original curves. Similarly, for FKNN and FLGBM, the enriched feature versions yield higher accuracy than their counterparts. The enriched FRF outperforms all other methods, achieving the best overall accuracy. FXGB slightly improves with enriched features, though its performance remains behind FRF. Classical methods perform well but are outperformed by the enriched FRF, which leaves them as the second-best approach.
In Scenario 2, the enriched features yield better performance for FCT, showing clear improvement compared to the original curves. FRF, on the other hand, performs similarly for both original and enriched features, with no difference in accuracy. However, other methods such as FKNN, FLGBM, and FXGB show a slight decrease in performance when enriched features are used. Notably, FKNN with enriched features performs significantly worse than the original curve version, highlighting that this approach does not work well with FKNN.
The binary classification problem in Scenario 3 highlights a more robust performance from the enriched feature methods. FCT, FKNN, and FLGBM all perform better with enriched features than their original curve counterparts. FRF and FXGB, while already performing well with original curves, show slight improvements when enriched features are used. Only one other method reaches performance comparable to the enriched FRF.
Scenario 4 deals with a four-class problem. The enriched features improve performance for several methods. FRF with enriched features emerges as the best performer, achieving the highest overall accuracy. FLGBM and FXGB also benefit from enriched features, showing clear improvements compared to their original curve counterparts. FCT and FKNN, on the other hand, do not exhibit substantial gains from the enriched features. Among the classical methods, the best performer still remains behind the enriched FRF.
In the three-class classification problem of Scenario 5, FRF, FCT, and FXGB show improved performance when using enriched features. FRF remains highly competitive, but FXGB, with enriched features, emerges as the best performer. FCT also benefits from enriched features, achieving better accuracy than its original curve counterpart. However, FKNN continues to perform poorly with enriched features, reinforcing the expectation that this method is not well-suited for feature enrichment.
In the final three-class scenario (Scenario 6), enriched feature methods again tend to outperform their original curve counterparts. FCT and FLGBM show better results when using enriched features, while FRF and FXGB remain competitive with minimal differences between the two approaches.
§ ON ENRICHED FUNCTIONAL TREE-BASED CLASSIFIERS INTERPRETABILITY AND EXPLAINABILITY
There is no doubt that enriching functional features provides clear benefits in terms of accuracy. However, within the proposed framework, we must pay particular attention to two distinct aspects: interpretability and explainability. Although the primary focus of this paper is not on explainable artificial intelligence (XAI) but rather on introducing methodologies to enhance performance, some considerations regarding interpretability and explainability within the proposed framework can help deepen the understanding of the model and suggest potential avenues for future research.
§.§ Enriched Functional Classification Trees' Interpretability
From the perspective of interpretability, it is evident that, as with all ensemble methods, we lose the ability to interpret the classification rules easily. Simpler models, such as regression models or classification trees, allow for a straightforward interpretation of how predictors affect the outcome, but they lose credibility when assumptions are violated (in the former) or because of high variability (in the latter). Consequently, a model that is perfect in terms of interpretability, accuracy, and low variance does not exist.
Focusing on the EFCT model, however, interpretability is still possible. The EFCT is an extension to enriched functional data of the un-enriched FCT (Functional Classification Tree) proposed by <cit.>. Therefore, with appropriate considerations for derivatives, curvature, radius of curvature, and elasticity, it is always possible to interpret the splitting rules in EFCT. Similarly, following the approach of <cit.>, it is also feasible to construct both theoretical separation curves (based on the formula that reconstructs the separation curve as a linear combination of basis functions) and empirical separation curves (based on the actual curve that is closest to the theoretical separation curve). The main difference here is that the reconstruction is based on splines rather than functional principal components, and the interpretation must be made based on the specific functional transformation involved in each node's splitting rule.
Thus, some splitting rules between groups of curves that end in a left or right node must be interpreted based on the original curves, speed, acceleration, curvature and radius of curvature, elasticity, and their intrinsic meanings.
Let 𝐒_i = (s_i1, s_i2, …, s_iD) represent the B-spline coefficients of the i-th functional observation in the D-dimensional enriched feature space. Each dimension d corresponds to different aspects of the functional data, such as the original function, derivatives, curvature, radius of curvature, or elasticity.
At each node z in the classification tree (EFCT), a decision is made to split the data based on a specific feature or combination of features, which can include scores from any dimension (e.g., original functions, derivatives, or geometric features like curvature and radius of curvature). The theoretical separation curve at node z, denoted by f_sep, z(t), is defined as a linear combination of the B-spline basis functions and the corresponding generalised coefficients selected up to that specific node.
We introduce γ_zs as the generalised coefficients representing any feature in the enriched feature matrix, whether corresponding to the original function, derivatives, or any geometric features such as curvature or elasticity.
For the z-th node, the theoretical separation curve is expressed as:
f_sep, z(t) = ∑_s ∈Ω_zγ_zsϕ_s(t)
where
Ω_z = {k_1, …, k_z} is the set of B-spline basis functions and corresponding generalised coefficients selected up to the split at node z,
γ_zs are the generalised coefficients associated with the s-th basis function ϕ_s(t), where γ_zs generalises the coefficients for any functional transformation (original function, first/second derivatives, curvature, etc.),
ϕ_s(t) is the B-spline basis function corresponding to the feature involved in the split.
In this notation, γ_zs is the generic element of the enriched feature matrix, which we previously defined in Equation <ref>. γ_zs refers to any coefficient from this expanded feature matrix, covering all possible dimensions (original functions, derivatives, and geometric features like curvature, elasticity, etc.). As each split occurs, the specific coefficients involved are determined by the splitting rule, which can span different dimensions of the feature matrix.
The interpretation of the theoretical separation curve f_sep, z(t) depends on the type of B-spline basis functions involved in the split, which can vary across different dimensions.
If the basis functions correspond to the original function, the separation curve reflects the overall shape of the functional signal.
If the basis functions represent first or second derivatives, the separation curve highlights the signal's local speed or acceleration.
If the split involves curvature or radius of curvature, the separation curve focuses on how sharply the signal bends at different points.
If the split is based on elasticity, the separation curve captures the function’s responsiveness to changes in its input.
Thus, the type of functional feature that drives the split dictates the specific interpretation of the theoretical separation curve. The generalised coefficient γ_zs ensures that the combination of different functional transformations is flexible and accurately represents the decision boundary at each node of the classification tree.
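As an illustration of how such a curve can be reconstructed in practice, the sketch below assembles and evaluates f_sep, z(t) from a set of selected basis indices and their coefficients. How the indices Ω_z and the generalised coefficients γ_zs are obtained from the splitting rules follows the cited FCT construction and is not reproduced here; the function names and inputs are hypothetical.

```python
import numpy as np
from scipy.interpolate import BSpline

def separation_curve(t_eval, knots, k, selected, gamma):
    """Theoretical separation curve at a node:
    f_sep(t) = sum over s in Omega_z of gamma_s * phi_s(t),
    where 'selected' are the basis indices used in the splits up to the node
    and 'gamma' the corresponding (generalised) coefficients."""
    n_basis = len(knots) - k - 1
    coef = np.zeros(n_basis)
    coef[np.asarray(selected)] = np.asarray(gamma)   # only the selected bases contribute
    return BSpline(knots, coef, k)(t_eval)
```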
§.§ Enriched Functional Tree-Based Classifiers Explainability
Focusing on ensemble models, it is natural that interpretability is diminished, and we must instead rely on explainability as a tool to understand what is happening within the black-box model. The unique aspect of the proposed framework is that introducing these enriched features can result in correlations between variables. While it is widely accepted that multicollinearity does not pose significant issues in tree-based methods, unlike in multiple regression where it can artificially inflate R-squared values, it is essential to acknowledge that the presence of multicollinearity can distort explainability measures in black-box models.
In other words, while from a performance standpoint, the introduction of enriched features has a purely positive effect by improving accuracy through the observation of various functional characteristics and increasing the diversity of the ensemble, which further boosts accuracy and significantly reduces variance, there is a trade-off when it comes to explainability measures. This results in a potential bias, as the complexity added by the enriched features can make it more challenging to understand the model's internal decision-making processes fully.
Specifically, feature importance measures may become skewed when correlated scores are introduced.
In reality, while it is true that enrichment exacerbates this bias in explainability measures, the solution is relatively straightforward. The key is that when performing enrichment, as proposed in this study, it is essential to rely on variable importance measures that account for correlations between features. Despite the fixed basis system producing uncorrelated functions, the substantial increase in predictors when enrichment introduces numerous new features across many dimensions, including curvature, radius of curvature, and elasticity, can lead to correlations between coefficients across different splines and dimensions.
To address this issue, we can use two possible approaches. The first option is to condition on the scores of the same B-spline across different dimensions when calculating the importance of a feature. For example, when assessing the importance of the B-spline scores for the radius of curvature, we could condition on the B-spline scores for the derivatives, curvature, elasticity, and original functions, as there is likely a significant association between these coefficients. The second alternative is not to assume any a priori correlation between the scores of the same B-spline across different dimensions and instead to condition on all variables correlated beyond a certain threshold when assessing feature importance via classical methods. The latter approach is similar to those used in bioinformatics, where the number of independent variables often far exceeds the number of observations, resulting in high dimensionality <cit.>. By accounting for all potential correlations, this method provides a more robust and reliable measure of model explainability. Extending the two approaches to the context described is quite immediate.
Let 𝐒_i = (s_i1, s_i2, …, s_iD) represent the B-spline coefficients of the i-th functional observation in the D-dimensional enriched feature space. Each dimension corresponds to a different aspect of the functional data, such as original function coefficients, first derivatives, second derivatives, curvature, radius of curvature, and elasticity.
Let I(f, S_j) represent the importance of a feature S_j (e.g., the B-spline scores for the radius of curvature) in predicting the outcome y.
The first approach can be summarised as follows. Let C_j represent the set of coefficients for these associated dimensions (i.e., associated in the sense that we deal with the scores of the same B-spline function used to reconstruct different transformations of the functional data). The conditional feature importance of S_j (e.g., radius of curvature) is given by:
I(f, S_j | C_j) = 𝔼_S_j | C_j[ L(f(S_j, C_j), y) ] - 𝔼_S_j | C_j[ L(f(C_j), y) ]
where
f(S_j, C_j) represents the model including both the feature S_j and the conditioning set C_j,
f(C_j) represents the model excluding S_j but including C_j,
L(f, y) is the loss function used to evaluate the model (e.g. cross-entropy),
𝔼_S_j | C_j denotes the expectation conditioned on C_j.
Equation <ref> quantifies the difference in performance between the full model (with S_j and C_j) and the reduced model (without S_j, but conditioned on C_j). This approach provides the conditional importance of S_j by controlling for correlations with the associated dimensions.
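A minimal sketch of this conditional importance is given below. It uses a random forest as a generic stand-in for f, fits and evaluates on the same data for brevity (in practice the expectation would be estimated on held-out data), and follows the sign convention of the equation above, under which negative values indicate that adding S_j lowers the loss.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import log_loss

def conditional_importance(S, y, j_cols, c_cols, seed=0):
    """I(f, S_j | C_j) = loss(full model with S_j and C_j) - loss(reduced model with C_j only),
    as in the equation above; j_cols / c_cols are column indices of the score matrix S."""
    cols_full = np.concatenate([np.asarray(j_cols), np.asarray(c_cols)])
    full = RandomForestClassifier(n_estimators=300, random_state=seed).fit(S[:, cols_full], y)
    reduced = RandomForestClassifier(n_estimators=300, random_state=seed).fit(S[:, c_cols], y)
    loss_full = log_loss(y, full.predict_proba(S[:, cols_full]))
    loss_reduced = log_loss(y, reduced.predict_proba(S[:, c_cols]))
    return loss_full - loss_reduced
```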
Alternatively, the second approach does not assume any direct correlation between the scores of different dimensions. Instead, it conditions on all variables correlated beyond a certain threshold. Let ρ_ij denote the correlation between two feature dimensions S_i and S_j. We define the set of conditioning variables C_j based on a threshold τ as:
C_j = { S_i : |ρ_ij| > τ}
The importance of S_j is then calculated by conditioning on the set C_j as in Equation <ref>, but in this case the conditioning set C_j includes all variables that exceed the correlation threshold τ, providing a more flexible method to account for the correlations within the enriched feature space. This approach is particularly useful when the correlations are not restricted to certain dimensions but are instead scattered across the feature space.
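In practice, the threshold-based conditioning set can be obtained directly from the correlation matrix of the scores, as in the short sketch below (the threshold value is illustrative); the resulting column indices can then be passed as the conditioning set to the importance computation sketched above.

```python
import numpy as np

def conditioning_set(S, j, tau=0.7):
    """C_j = { i : |corr(S_i, S_j)| > tau }: columns of the score matrix S used as the
    conditioning set when evaluating the importance of column j."""
    rho = np.corrcoef(S, rowvar=False)   # (D, D) correlation matrix of the scores
    mask = np.abs(rho[j]) > tau
    mask[j] = False                      # exclude the feature itself
    return np.flatnonzero(mask)
```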
§ DISCUSSION AND CONCLUSIONS
The evolving field of supervised curve classification has made significant advances in recent decades, yet integrating Functional Data Analysis (FDA) with tree-based classifiers remains an area ripe for further development. While some previous studies have examined this combination from various perspectives, critical areas still require deeper exploration. Key areas for enhancement include improving the accuracy of functional classifiers, developing advanced graphical tools for interpreting classification rules, conducting comprehensive simulation studies, and designing effective strategies for optimising parameters in the supervised classification of functional data.
This paper introduces a novel supervised classification strategy that synergises FDA with tree-based ensembles to extract richer insights from curve analysis. The proposed Enriched Functional Tree-Based Classifiers (EFTCs) address the challenges associated with high-dimensional data, focusing on improving classification performance. By incorporating additional features derived from various functional transformations, such as sequential derivatives, curvature, the radius of curvature, and elasticity, the enriched functional data strategy captures detailed information about the functional data's global and local behaviour.
Extensive experimental evaluations on real-world and simulated datasets underscore the effectiveness of the proposed approach. The results demonstrate substantial improvements in classification performance over existing methods, confirming the value of the enriched functional features in managing high-dimensional data. The proposed classifiers effectively capture local characteristics often overlooked by traditional methods, highlighting the importance of these additional features in achieving accurate classification, even in scenarios involving multiple classes and complex curve shapes.
Furthermore, the enhanced performance observed in the EFTCs can also be attributed to introducing diversity, a crucial factor in ensemble methods. By incorporating multiple perspectives of the functional data, such as derivatives, curvature, and elasticity, into the model, we effectively increase the variety of decision patterns available to the ensemble, allowing it to capitalise on the complementary strengths of each feature and ultimately boost classification accuracy.
This significant result, achieved through a truly original approach to introducing diversity in the ensemble by leveraging the available functional tools, aligns with insights from the broader machine learning literature in non-functional contexts. This innovative integration strengthens the model's performance and opens new pathways for exploring ensemble methods in functional data analysis.
While this study uses B-splines for feature extraction, the underlying methodology can be extended to other functional transformations, functional classifiers, and basis functions. The fixed-basis system here offers a consistent framework for training and testing, avoiding the complications associated with data-driven basis systems, where the basis functions may vary between datasets.
Future research could delve into enhancing the interpretability and explainability of these models, or explore the integration of a weighted selection of the number of splines, potentially guided by cross-validation criteria, a choice deliberately avoided in this context, as explained in Section 2.
At the same time, once it has been demonstrated that the enrichment performs well in terms of accuracy, future studies can further explore parameter optimisation by focusing on specific functional classifiers. As thoroughly explained in Section 2, we avoided deep optimisation to maintain an experimental setup conducive to comparison, which was the primary objective of the study.
§.§ Declarations
The authors declare that they received no funding for this study and have no affiliations or involvement with any organization that has a financial or non-financial interest in the subject matter of this manuscript.
§.§ Funding and/or Conflicts of Interest/Competing Interests
The authors confirm that they received no support from any organization for the submitted work. They also declare no affiliations or involvement with any organization or entity that has a financial or non-financial interest in the subject matter of this manuscript.
§.§ Use of Generative AI in Scientific Writing
AI-assisted technologies were used only in the writing process to improve the readability and language of the manuscript. The authors reviewed and edited the content as necessary and took full responsibility for the publication's content.
§.§ Data availability statement
The authors used publicly available data for real-world applications. Simulation data can be provided upon request.
| In today's world, data is collected from diverse sources such as biomedical devices, smartphones, and environmental sensors, and used across applications in healthcare, environmental monitoring, and more. Technological advancements have improved our capacity to store and process this data, but managing high-dimensional datasets remains a challenge. Dimensionality reduction and classification techniques are essential for effectively handling such data in fields like medicine, environmental monitoring, security, and robotics. Key issues include irregularly spaced time points, computational complexity, the bias-variance trade-off, and the need for interpretable models with strong performance metrics such as accuracy, precision, and recall.
In the supervised classification literature, one of the most well-known challenges is the curse of dimensionality, which arises whenever dealing with a large number of variables or, in the context of time series, when there are many time points. This issue impacts numerous statistical aspects, such as distance measures, the identification of causal relationships, or finding the best-performing model when many models with similar performance exist but rely on different variables. It also introduces problems like data sparsity and multicollinearity. For these reasons, the challenge of both supervised and unsupervised classification in high-dimensional data, whether it involves numerous different variables or many time points for the same variable, remains a complex and relevant area of research in mathematics, statistics, and computer science.
Functional data analysis (FDA) is a research area that has actively tackled many of these challenges over the past decades. In FDA, dimensionality reduction is inherent, as it is achievable simply through the representation of the data itself.
More generally, FDA represents a statistical domain focused on the theory and application of statistical methods in scenarios where data can be expressed as functions, contrasting with the traditional representation using real numbers or vectors. FDA introduces a paradigm shift in statistical concepts, representations, modeling, and predictive techniques by treating functions as single entities. The benefits of employing FDA have been extensively discussed in contemporary literature, including the utilization of derivatives when they provide more insight than the original functions due to the nature of the phenomena, the adoption of non-parametric strategies without restrictive assumptions, data dimensionality reduction, and the exploitation of critical sources of pattern and variation <cit.>.
The literature on FDA is currently dynamic and highly relevant, especially in regression, ANOVA, unsupervised classification, supervised classification, and outlier detection. Within this broad framework, we focus on supervised classification with functional predictors and a scalar response variable.
Recent research has explored the development of classification methods that combine the strengths of FDA and tree-based techniques. <cit.> advocated using spline trees for functional data, applying them to analyze time-of-day patterns for international call customers. Assessing variable importance in the fusion of FDA and tree-based methods was the focus of the work by <cit.>. <cit.> proposed a classification approach based on random forests for functional covariates. Investigating the construction of a classifier for dose-response predictions involving curve outcomes was the aim of <cit.>. <cit.> proposed using functional principal components to train a classification tree. <cit.> suggested combining clustering and supervised tree-based classification to enhance prediction model accuracy. Finally, <cit.> proposed an innovative evaluation of leaves' quality for functional classification trees applied to biomedical signals with binary outcomes.
<cit.> explore methods for classifying multivariate functional data adapting and extending PLS techniques to handle the complexity of functional data across varying domains.
<cit.> propose a mixture-based segmentation approach for heterogeneous functional data, aimed at identifying hidden structures and subgroups within complex functional datasets by combining multiple segmentations.
Recently, <cit.> proposed to exploit functional representation to increase diversity in ensemble methods and improve the accuracy of classifiers. Finally, <cit.> suggested a new algorithm to exploit the previous idea but further improving the accuracy and variance of estimates.
Building on the established foundation of combining FDA and statistical learning techniques, significant exploration is still needed to handle large datasets and interpret results from both statistical and causal perspectives. Research in this area is rapidly evolving and holds great potential. We expect a growing focus on improving functional classifiers' precision, interpretability, and explainability in the coming years.
Leveraging this landscape and its vast research opportunities, this paper introduces a novel functional supervised classification framework, namely the Enriched Functional Tree-Based Classifiers (EFTCs). To address the challenge of learning from high-dimensional data and to enhance functional classification performance by leveraging additional characteristics of the original data,
derivatives, curvature, radius of curvature, and elasticity are used to enrich the information provided to functional classification tree ensembles. In other words, we use the term EFTCs to denote the joint utilisation of sequential transformations for extracting unexplored features from the original signals. In essence, this approach involves viewing functions from diverse perspectives to capture additional aspects that can contribute to enhancing classification performance. Practically, it is like using a magnifying glass to reveal attributes that the original functions may miss. Moreover, the motivation behind this proposal is also driven by the well-known fact that ensemble methods, such as tree-based classifiers, benefit significantly from introducing diversity, as it tends to improve generalisation and performance. By enriching the feature space with diverse functional characteristics, EFTCs can leverage this diversity to further enhance classification accuracy, exploiting the strengths of each transformation to capture complementary information from the data.
The paper conducts extensive experimental evaluations on seven real-world datasets and six simulated signals to measure the proposed methodology's efficacy. Comparative analyses with existing methods reveal promising results in terms of classification performance.
The study yields promising results, indicating that the enrichment approach significantly improves performance with certain methods. While our approach has been tested on functional classification trees, KNN, random forests, XGBoost, and LightGBM, it can be extended to other tree-based or non-tree-based classifiers, with appropriate adjustments based on our findings. This framework demonstrates notable improvements over traditional methods, offering valuable insights into applying FDA in complex, high-dimensional learning problems.
The paper's structure is as follows: Section 2 introduces the core concepts of FDA, Enriched Functional Data, and the Enriched Functional Classification frameworks, including trees, random forests, XGBoost, and LightGBM. Section 3 covers applying the proposed methods to real and simulated data.
Section 4 discusses key issues related to model explainability.
Finally, Section 5 concludes the paper by discussing the main findings and highlighting directions for future research. | null | null | null | null | null |
http://arxiv.org/abs/2409.17659v1 | 20240926091416 | Hierarchical End-to-End Autonomous Driving: Integrating BEV Perception with Deep Reinforcement Learning | [
"Siyi Lu",
"Lei He",
"Shengbo Eben Li",
"Yugong Luo",
"Jianqiang Wang",
"Keqiang Li"
] | cs.AI | [
"cs.AI"
] |
Entanglement witnesses with local partial ordering
Eric A. Galapon
September 28, 2024
==================================================
§ ABSTRACT
End-to-end autonomous driving offers a streamlined alternative to the traditional modular pipeline, integrating perception, prediction, and planning within a single framework. While Deep Reinforcement Learning (DRL) has recently gained traction in this domain, existing approaches often overlook the critical connection between feature extraction of DRL and perception. In this paper, we bridge this gap by mapping the DRL feature extraction network directly to the perception phase, enabling clearer interpretation through semantic segmentation. By leveraging Bird’s-Eye-View (BEV) representations, we propose a novel DRL-based end-to-end driving framework that utilizes multi-sensor inputs to construct a unified three-dimensional understanding of the environment. This BEV-based system extracts and translates critical environmental features into high-level abstract states for DRL, facilitating more informed control. Extensive experimental evaluations demonstrate that our approach not only enhances interpretability but also significantly outperforms state-of-the-art methods in autonomous driving control tasks, reducing the collision rate by 20%.
§ INTRODUCTION
End-to-end autonomous driving has garnered significant attention for its ability to unify perception, prediction, and planning into a single, integrated model, offering an alternative to the traditional modular approach <cit.>. This contrasts with the classic pipeline, in which separate modules for perception, prediction, and planning are prone to error propagation and high computational complexity <cit.>. Because it can significantly reduce the amount of hand-written rule code, the end-to-end paradigm has gradually become the mainstream trend in the development of intelligent connected vehicles <cit.>.
Recent advancements have seen Deep Reinforcement Learning (DRL) applied to end-to-end autonomous driving <cit.>, where the system encodes environmental and vehicular state information into high-dimensional latent feature representations. From these representations, a DRL agent outputs driving strategies for autonomous navigation. However, existing research has often treated feature extraction as an isolated component, without explicitly connecting it to the perception module, which is crucial in traditional driving systems. In this paper, we bridge this gap by mapping DRL’s feature extraction network to the perception phase, and more importantly, utilizing semantic segmentation decoding to interpret the extracted features in a structured manner.
Bird’s-Eye-View (BEV) representations have emerged as an effective means for capturing driving scenarios, particularly in urban settings <cit.>. BEV consolidates multi-sensor inputs in a unified three-dimensional space, providing a comprehensive understanding of the vehicle’s surroundings. In our framework, BEV features serve as the input to the DRL policy, which outputs driving control signals. While BEV offers a robust means of feature representation, the process of extracting and translating these features into abstract states suitable for DRL remains challenging. To overcome this, we propose an expressive yet efficient neural network to extract relevant features from BEV inputs and map them directly to the perception phase of the autonomous driving pipeline. By incorporating semantic segmentation into the feature decoding process, we aim to provide a clearer interpretation of the environment, making the DRL agent’s decisions more transparent and informed.
In this paper, we propose a DRL-based end-to-end autonomous driving framework that integrates BEV. The system incorporates input from cameras facing different directions and constructs a BEV representation of the driving environment. A neural network module is designed to extract salient features from the BEV data, capturing relevant information about the surroundings and the vehicle's own state. The extracted BEV features are then fed into a DRL agent, which learns to decode an appropriate driving strategy directly from the sensory inputs, without the need for explicit modeling of the environment. By incorporating the BEV representation, the proposed framework aims to provide the DRL agent with a more holistic and structured understanding of the driving scenario and to enhance the agent's ability to reason about the environment and make more informed decisions, leading to improved autonomous driving performance. To the best of our knowledge, this paper presents the first solution that combines BEV and deep reinforcement learning for end-to-end autonomous driving.
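For illustration only, the following PyTorch sketch shows the kind of BEV-encoder-plus-policy pipeline described above. The layer sizes, the rasterised BEV input, and the two-dimensional continuous action head are our assumptions and do not correspond to the architecture detailed later in the paper.

```python
import torch
import torch.nn as nn

class BEVFeatureExtractor(nn.Module):
    """Toy stand-in for the BEV feature-extraction network: a small CNN mapping a
    rasterised BEV grid (C x H x W) to a compact state vector for the DRL policy."""
    def __init__(self, in_channels=6, feat_dim=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(128, feat_dim)

    def forward(self, bev):
        return self.proj(self.backbone(bev).flatten(1))

class DrivingPolicy(nn.Module):
    """Actor head mapping BEV features to continuous controls (e.g. steer, throttle/brake)."""
    def __init__(self, feat_dim=256, n_actions=2):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(),
                                 nn.Linear(128, n_actions), nn.Tanh())

    def forward(self, feat):
        return self.mlp(feat)

# Example usage with a dummy batch of rasterised BEV inputs:
# bev = torch.randn(1, 6, 128, 128)
# action = DrivingPolicy()(BEVFeatureExtractor()(bev))
```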
The main contributions of this paper are as follows:
* We propose a DRL feature extraction network based on bird's-eye-view representations of surround-camera inputs. It captures complete environmental information around the vehicle, unifies the coordinate systems of the vehicle, road, and image inputs, and substantially improves the performance of end-to-end autonomous driving control methods.
* Based on semantic segmentation, a classic autonomous driving perception task, we decode the high-dimensional environmental features extracted by the proposed network from the surround cameras and visualize the decoded information as the other vehicles in the environment, improving the interpretability of DRL.
* We evaluate the proposed algorithm on seven maps in the CARLA simulator and compare it with DRL algorithms based on traditional feature extraction networks. The experiments show that the BEV-based feature extraction network enables the DRL policy network to obtain higher rewards, greatly improving the performance of the autonomous driving control strategy.
§ RELATED WORK
§.§ Traditional Modular Approach
The traditional modular approach to autonomous driving consists of four main modules: perception, prediction, planning, and control <cit.>. Each of these modules impacts overall performance.
The early perception module relied on traditional computer vision algorithms, such as edge detection <cit.>, corner detection <cit.>, and target tracking <cit.>.
Vehicle trajectory prediction methods based on traditional machine learning include the Kalman filter <cit.>, Bayesian networks <cit.>, and Markov methods <cit.>. In contrast, deep learning approaches often use long short-term memory (LSTM) encoder-decoder structures <cit.>.
The planning module is divided into global and local path planning, calculating trajectory points for the vehicle's low-level controller <cit.>. The control module generates safe and reliable real-time instructions based on the driving trajectory <cit.>.
A key advantage of this modular design is its interpretability; it breaks down a complex system into independent yet interrelated modules, each focusing on a specific task, making understanding and analysis easier.
§.§ Deep Reinforcement Learning for Autonomous Driving
Deep reinforcement learning is a powerful and effective method for obtaining end-to-end autonomous driving policies with superior performance, and there are many works on end-to-end autonomous driving using reinforcement learning <cit.>. Ref. <cit.> proposed a framework designed to facilitate model-free deep reinforcement learning within complex urban autonomous driving environments. Ref. <cit.> used reinforcement learning within a simulation environment to develop a driving system capable of controlling a full-scale real-world vehicle; the driving policy utilizes RGB images captured from a single camera along with their semantic segmentation as input data. Ref. <cit.> proposed a comprehensive framework for decision-making that amalgamates the principles of planning and learning, leveraging Monte Carlo tree search and deep reinforcement learning to address challenges such as environmental diversity, sensor information uncertainty, and intricate interactions with other road users.
§.§ Explainability of Autonomous Driving
Autonomous driving is a high-stakes, safety-critical application. Explainability, which combines interpretability (human comprehensibility) and completeness (exhaustive explanations) <cit.>, is essential for users and traffic participants to trust and accept autonomous systems <cit.>. Researchers also rely on explainability to optimize and enhance the performance of driving algorithms <cit.>. As end-to-end autonomous driving develops, the need for interpretability becomes increasingly important. Deep reinforcement learning models, consisting of multiple layers and complex neural networks, often make their decision-making processes and feature representations difficult to understand <cit.>. Visual analysis is a key method for enhancing the interpretability of these models <cit.>. This paper proposes a deep reinforcement learning feature extraction network from a bird's eye view (BEV) that integrates perception tasks with feature decoding and visualization, improving both the performance and interpretability of end-to-end autonomous driving algorithms.
§ APPROACH
The end-to-end framework has attracted great attention in the field of autonomous driving due to its more concise pipeline and stronger generalization performance. Building on previous end-to-end autonomous driving approaches <cit.>, <cit.>, we use the images from multiple cameras mounted on the vehicle as the input to the end-to-end autonomous driving algorithm and output control signals for the accelerator, brake, and steering angle.
§.§ Problem Formulation
Our work focuses on designing end-to-end autonomous driving algorithms aimed at efficiently reaching a target location while avoiding collisions with other traffic participants. This problem can be modeled as a Partially Observable Markov Decision Process (POMDP) <cit.>.
A POMDP can be represented by a tuple <𝒜,𝒮,ℛ,𝒫,𝒪,𝒵,γ>, where 𝒜 denotes the action space, 𝒮 denotes the state space, R represents the reward function, 𝒫 is the state transition function, 𝒪 is the observation space, 𝒵 is the observation function, and γ is the discount factor. In the context of autonomous driving, the state transition function 𝒫 and the observation function 𝒵 are often not available in closed-form, rendering this a model-free POMDP problem. The following sections will outline the POMDP formulation for the autonomous driving task.
* State Space 𝐒: we use the CARLA driving simulator <cit.> as the environment for the agent. The state space 𝐒 is defined by CARLA and cannot be directly obtained by the agent. At time step t, the state of the environment is represented by s_t.
* Observation Space 𝐎: similar to <cit.>, the observation of the agent at time step t is o_t=⋃_m=1^M{< I,R,V,N > _m }, where I is a 6×3×128×352 image tensor obtained from six RGB cameras covering the front and rear views. R is a 9-dimensional vector representing road features, V is a 4-dimensional vector embedding vehicle features, and N is a 5-dimensional vector containing navigational features.
* Action Space 𝐀: the actions of the agent include an acceleration or deceleration (braking) value and a steering angle. Both values lie in the range [-1, 1].
* Reward Function 𝐑: the reward function is designed as follows:
𝐑(s_t, a_t) = r_t =
-k_c, if a collision occurs;
v_m - v_c, if v_c - v_m > 0;
4 v_c · v_s / ‖p_c - p_w‖_2^2, otherwise,
where v_m is the maximum speed limit of the vehicle, v_c is the current speed of the vehicle, and v_s is the cosine similarity between the vehicle's direction of movement and the direction toward the next waypoint w. k_c is the penalty value applied when a collision occurs, and p_c and p_w represent the current position of the vehicle and the position of waypoint w, respectively.
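For concreteness, the following is a minimal sketch of this piecewise reward in Python. The function name, the default penalty k_c=10.0, and the small epsilon terms are illustrative assumptions; the paper only specifies the functional form above.

import numpy as np

def compute_reward(collided, v_current, v_max, heading, waypoint_dir, p_current, p_waypoint, k_c=10.0):
    # Piecewise reward sketched from the definition above (constants are assumptions).
    if collided:                                   # collision -> fixed penalty -k_c
        return -k_c
    if v_current > v_max:                          # speeding -> negative reward v_m - v_c
        return v_max - v_current
    # v_s: cosine similarity between the heading and the direction to the next waypoint
    v_s = np.dot(heading, waypoint_dir) / (np.linalg.norm(heading) * np.linalg.norm(waypoint_dir) + 1e-8)
    dist_sq = float(np.sum((np.asarray(p_current) - np.asarray(p_waypoint)) ** 2))
    return 4.0 * v_current * v_s / (dist_sq + 1e-8)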
§.§ Deep Reinforcement Learning-Based Autonomous Driving
Reinforcement learning has proven to be a powerful technique for solving Partially Observable Markov Decision Processes. By modeling the autonomous driving process as a POMDP, reinforcement learning can be leveraged to derive optimal driving strategies.
In this paper, we adopt the Proximal Policy Optimization (PPO) <cit.> algorithm as the core reinforcement learning method. PPO is known for its stability and efficiency in continuous control tasks, making it suitable for autonomous driving applications.
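To make the policy update concrete, the snippet below sketches the standard PPO clipped surrogate objective that such an agent optimizes; the clipping threshold of 0.2 and the tensor names are illustrative assumptions rather than values reported in this paper.

import torch

def ppo_policy_loss(log_probs_new, log_probs_old, advantages, clip_eps=0.2):
    # Probability ratio between the updated policy and the behaviour policy.
    ratio = torch.exp(log_probs_new - log_probs_old)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # PPO maximises the clipped surrogate; its negation is minimised as a loss.
    return -torch.min(unclipped, clipped).mean()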
Our approach adopts an Actor-Critic architecture; specific details are shown in Fig. <ref>. The input to the deep reinforcement learning system includes not only road features (such as road conditions and lane markings), vehicle features (such as speed and direction), and navigation features, but also images from the surround-view cameras. Each observation has a separate channel to extract features, and an RNN captures the temporal dependency of the features. Finally, the features from each channel are concatenated and handed over to the Critic network for value estimation and the Actor network for decision making.
The feature extractors for the road and vehicle features are based on an MLP backbone, while the images from the surround-view cameras are processed by a BEV feature extraction network named SC Block. By integrating the BEV feature extraction network into the actor network, our DRL-based autonomous driving system gains a clearer and more comprehensive understanding of its surroundings, which significantly enhances decision-making performance. The next section discusses the details of the BEV feature extraction network and its implementation.
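A minimal PyTorch sketch of this multi-branch layout is given below. Only the branch structure and the input dimensions (9-d road, 4-d vehicle, and 5-d navigation vectors) follow the observation space described above; the hidden sizes, the choice of a GRU for the recurrent component, the Gaussian action parameterization, and the class name are assumptions, and the BEV encoder (SC Block) is passed in as an opaque module.

import torch
import torch.nn as nn

class MultiBranchActorCritic(nn.Module):
    # Sketch of the actor-critic layout described above; layer sizes are assumptions.
    def __init__(self, bev_encoder, bev_dim=256, hidden=128):
        super().__init__()
        self.bev_encoder = bev_encoder                       # SC Block / BEV feature extractor
        self.road_mlp = nn.Sequential(nn.Linear(9, 64), nn.ReLU())
        self.vehicle_mlp = nn.Sequential(nn.Linear(4, 64), nn.ReLU())
        self.nav_mlp = nn.Sequential(nn.Linear(5, 64), nn.ReLU())
        self.rnn = nn.GRU(bev_dim + 3 * 64, hidden, batch_first=True)
        self.actor = nn.Linear(hidden, 2 * 2)                # mean and log-std for (accel, steer)
        self.critic = nn.Linear(hidden, 1)                   # state-value estimate

    def forward(self, images, road, vehicle, nav, rnn_state=None):
        feats = torch.cat([self.bev_encoder(images),         # assumed to return (B, bev_dim)
                           self.road_mlp(road),
                           self.vehicle_mlp(vehicle),
                           self.nav_mlp(nav)], dim=-1)
        out, rnn_state = self.rnn(feats.unsqueeze(1), rnn_state)
        h = out.squeeze(1)
        return self.actor(h), self.critic(h), rnn_state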
§.§ BEV Feature Extraction Network
Traditional image feature extraction algorithms usually operate in the same coordinate frame as the input image, without any coordinate transformation. However, the other inputs in the perception space of the autonomous driving algorithm are expressed in the BEV coordinate system, and mixing different coordinate systems introduces errors in the feature fusion process.
The core idea behind this network is to transform raw image data into a 3D representation and project it into the BEV grid similar to <cit.>. The process can be divided into two main steps: Lift and Splat.
Lift Step. In the Lift step, the transformation from a 2D image to a 3D representation is achieved by predicting a depth distribution for each pixel. Given an image I∈ℝ ^3× H× W, where H and W are the height and width of the image, we learn a depth distribution α_h,w∈Δ_|D|-1 for each pixel at location (h,w). This distribution is over a predefined set of depth bins D={d_1,d_2,⋯,d_n}. The transformation can be formulated as
I(h,w,d)=α_h,w(d)· f_h,w(d),  for each d∈ D,
where f_h,w(d) is the context vector associated with pixel (h,w) at depth d, and α_h,w(d) is the learned probability that the pixel corresponds to that depth. This step effectively produces a frustum of points for each pixel in the image across multiple depth values, leading to a 3D point cloud. The depth distribution α_h,w(d) is learned through a supervised or self-supervised approach, typically using depth ground truth or monocular depth estimation techniques. This allows the network to predict scene geometry, improving the perception of distance and object positioning.
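The snippet below is a minimal sketch of this lift operation; the network that predicts the depth logits and the per-pixel context vectors is abstracted away, and the tensor shapes and function name are illustrative assumptions.

import torch

def lift(depth_logits, context):
    # depth_logits: (B, D, H, W) raw scores over the |D| depth bins
    # context:      (B, C, H, W) per-pixel context vectors f_{h,w}
    # returns:      (B, C, D, H, W) frustum features alpha_{h,w}(d) * f_{h,w}(d)
    alpha = depth_logits.softmax(dim=1)                      # depth distribution per pixel
    return alpha.unsqueeze(1) * context.unsqueeze(2)         # outer product over depth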
Splat Step. The Splat step projects all the frustums onto a BEV grid. Each frustum is mapped onto a predefined BEV grid using the camera's intrinsic and extrinsic parameters. This allows features from multiple camera views to be fused into a common BEV plane. This step is crucial for generating a consistent representation that facilitates tasks like motion planning and obstacle detection. Sum pooling is employed to assign each 3D point to the nearest voxel in the BEV grid
b_j=∑_{i∈𝒱_j} EGO_Ts( u_i ),
where u_i represents the context vector associated with the i-th 3D point, 𝒱_j denotes the set of frustum points assigned to BEV voxel j, and EGO_Ts is the transformation to the ego-vehicle's coordinate system. This step ensures that the 3D points from different views are correctly combined into a consistent BEV representation.
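A minimal sketch of this sum pooling is shown below; it assumes the frustum points have already been transformed to the ego frame and assigned a flat voxel index, and the function and argument names are illustrative.

import torch

def splat(frustum_feats, voxel_idx, num_voxels):
    # frustum_feats: (N, C) context vectors u_i for N frustum points (already in ego coordinates)
    # voxel_idx:     (N,) long tensor with the flat BEV-voxel index assigned to each point
    # returns:       (num_voxels, C) sum-pooled BEV features
    bev = torch.zeros(num_voxels, frustum_feats.shape[1], dtype=frustum_feats.dtype)
    bev.index_add_(0, voxel_idx, frustum_feats)              # scatter-add implements sum pooling
    return bev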
By using this approach, we ensure the model effectively learns to combine data from different camera views into a cohesive BEV representation. The Lift-Splat architecture respects the symmetries in multi-view data, such as permutation invariance and translation equivariance, making it robust to calibration errors and able to fuse inputs from various viewpoints into a unified BEV output.
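The mapping of frustum points into the common ego frame relies on the camera intrinsics and extrinsics mentioned above; a minimal sketch of that back-projection is given below, with the function name and argument conventions as illustrative assumptions.

import torch

def pixels_to_ego(u, v, depth, K, cam_to_ego):
    # u, v, depth: (N,) pixel coordinates and metric depths
    # K:           (3, 3) camera intrinsic matrix
    # cam_to_ego:  (4, 4) extrinsic transform from the camera frame to the ego frame
    pix = torch.stack([u * depth, v * depth, depth], dim=0)          # (3, N) scaled homogeneous pixels
    cam = torch.linalg.inv(K) @ pix                                  # back-project to camera-frame 3D points
    cam_h = torch.cat([cam, torch.ones(1, cam.shape[1])], dim=0)     # homogeneous coordinates (4, N)
    return (cam_to_ego @ cam_h)[:3].T                                # (N, 3) points in the ego frame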
§.§ Semantic Segmentation of Latent Feature
The input feature extraction network in deep reinforcement learning significantly impacts the algorithm's overall performance, but its relationship with the final results is often unclear. To address this, we decode and visualize intermediate features using semantic segmentation to evaluate the performance of our proposed BEV Feature Extraction Network.
Semantic segmentation is a key perception task in autonomous driving. It provides essential contextual information, allowing the system to understand road layouts and identify pedestrians, vehicles, and obstacles. In deep reinforcement learning, the input feature extraction network processes information to derive additional environmental features, aligning with the goals of the traditional perception module. We propose a decoding mechanism that utilizes semantic segmentation to transform the latent features from the extraction network into interpretable outputs. In this article, we leverage semantic segmentation to visualize the performance of the BEV Feature Extraction Network.
We use the pre-trained ResNet as the decoder backbone network for semantic segmentation. The latent features output by the BEV Feature Extraction Network are first processed by the convolutional layer for simple feature extraction, and then further processed by the backbone network to obtain higher-level features. Then, through upsampling and feature concatenation, the low-level features are combined to preserve spatial information. This network can effectively generate bird's-eye view semantic segmentation results.
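The following is an illustrative sketch of such a decoder head; the paper uses a pre-trained ResNet backbone with skip connections, which is abstracted here, and the channel sizes, number of classes, and class name are assumptions.

import torch.nn as nn

class BEVSegDecoder(nn.Module):
    # Upsamples latent BEV features into a bird's-eye-view segmentation mask (sizes are assumptions).
    def __init__(self, in_channels=256, num_classes=2):
        super().__init__()
        self.decode = nn.Sequential(
            nn.Conv2d(in_channels, 128, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
            nn.Conv2d(64, num_classes, 1),                   # per-cell class logits
        )

    def forward(self, bev_feats):                            # (B, in_channels, H, W)
        return self.decode(bev_feats)                        # (B, num_classes, 4H, 4W)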
§ EXPERIMENTAL RESULTS
To verify the effect of BEV-space representation features on reinforcement-learning-based autonomous driving, we tested two variants of our proposed algorithm (Ours-3 and Ours-6), which use three cameras and six cameras, respectively. They are compared with other reinforcement-learning-based autonomous driving algorithms (DRL and DRL-pan) <cit.>.
Experiments are conducted with different maps and different traffic participant densities to verify the generalization performance of our proposed algorithm. At the same time, the interpretability of the representation features in the BEV space is visually demonstrated.
§.§ Experimental Setup
We use CARLA as the simulator for training and testing the autonomous driving algorithms, and the autonomous vehicles are equipped with RGB cameras to perceive their surroundings. The DRL method uses three cameras with a 60-degree field of view (FOV), which together observe the 180 degrees in front of the vehicle. Building on the DRL method, DRL-pan uses three cameras with a FOV of 120 degrees, enabling a 360-degree view of the vehicle's surroundings. The camera setup of Ours-3 is exactly the same as that of DRL-pan. Ours-6 uses six cameras with a FOV of 60 degrees to obtain a 360-degree view of the vehicle's surroundings. Fig. <ref> shows the evolution of the reward during training for the DRL method with three cameras as input and for our proposed method.
We selected the Town03 map in CARLA with low-congestion traffic (50 pedestrians and 50 cars) to train the four reinforcement-learning-based autonomous driving algorithms (DRL, DRL-pan, Ours-3, Ours-6). Testing was conducted in CARLA's Town01 to Town07 under both low-congestion and high-congestion traffic. During autonomous driving, the mission fails if a collision occurs and succeeds if no collision occurs within 128 steps.
The evaluation indicators are Collision Rate, Similarity, Timesteps, and Waypoint Distance. Collision Rate refers to the probability of a collision while driving; Similarity refers to the average cosine similarity between the direction of vehicle movement and the direction from the current vehicle position to the next planned route waypoint; Timesteps refers to the driving time before the success or failure of the driving task; and Waypoint Distance refers to the average distance between the vehicle position and the next planned route waypoint during driving.
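A minimal sketch of how these per-episode indicators can be computed is given below; the function and argument names are illustrative, and Collision Rate is obtained by averaging the collision flag over episodes.

import numpy as np

def summarize_episode(collided, headings, waypoint_dirs, positions, waypoints):
    # headings/waypoint_dirs and positions/waypoints are per-timestep pairs of vectors.
    cos = [np.dot(h, w) / (np.linalg.norm(h) * np.linalg.norm(w) + 1e-8)
           for h, w in zip(headings, waypoint_dirs)]
    dist = [np.linalg.norm(np.asarray(p) - np.asarray(w))
            for p, w in zip(positions, waypoints)]
    return {
        "collision": bool(collided),            # averaged over episodes to give Collision Rate
        "similarity": float(np.mean(cos)),      # Similarity
        "timesteps": len(positions),            # Timesteps (capped at 128 per task)
        "waypoint_distance": float(np.mean(dist)),
    }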
§.§ Evaluation of autonomous driving in different maps
To comprehensively evaluate the performance of our proposed autonomous driving algorithm, we trained the reinforcement learning agent on the Town03 map and verified its performance on the seven maps from Town01 to Town07. The results are shown in Table <ref>: our method achieves the best results on most maps, as well as on average, for the three indicators Collision Rate, Similarity, and Timesteps. Averaged over the seven maps, Ours-6 reduces the collision rate by 22%, improves similarity by 3%, and increases timesteps by 11.92 compared to the DRL method. On Waypoint Distance, our method also achieves the best results on five maps. This strongly suggests that the feature representation in the BEV space enhances the spatial understanding of the reinforcement learning agent, which greatly improves autonomous driving performance.
Unexpectedly, although DRL and DRL-pan use the same feature extraction network and the same number of cameras, and DRL-pan's wider field of view provides more environmental information to assist decision-making, DRL-pan performs much worse than DRL on three indicators: Collision Rate, Similarity, and Timesteps. This indirectly indicates that the expressive power of the ShuffleNet feature extraction network is limited, which in turn limits the performance of the overall reinforcement learning system. In contrast, when our proposed method uses three-camera and six-camera inputs respectively, increasing the number of cameras greatly improves the overall performance of the autonomous driving algorithm.
§.§ Evaluation of autonomous driving in high-congestion environments
In high-congestion environments, the performance of autonomous driving systems faces greater challenges due to the increased number of dynamic traffic participants. To evaluate the robustness of our proposed BEV-based autonomous driving algorithms under such conditions, we conducted tests using both low and high traffic densities across various CARLA maps. High-congestion scenarios involve 100 pedestrians and 100 vehicles, doubling the complexity compared to the low-congestion settings.
The results of these experiments are shown in Table <ref>, where we compare our algorithms (Ours-3 and Ours-6) with the baseline methods (DRL and DRL-pan). As expected, the collision rate increases under high traffic densities for all methods due to the higher likelihood of encountering obstacles and unpredictable behaviors of other traffic participants. However, our proposed methods, particularly Ours-6, demonstrate significantly better collision avoidance capabilities. Ours-6 reduces the collision rate by an average of 18% compared to DRL across the tested maps, proving that the enhanced spatial understanding from the BEV feature representation is effective even in high-traffic situations.
In addition to collision rate, other evaluation metrics such as similarity, timesteps, and waypoint distance further illustrate the superior performance of our methods in handling congestion. Ours-6 maintains a high degree of similarity, even when surrounded by numerous other vehicles, ensuring the vehicle follows the planned route more precisely. The timesteps achieved by Ours-6 are consistently longer, indicating that the algorithm can successfully navigate through congested environments for extended periods without collisions. Lastly, the waypoint distance remains low for our method, proving that the vehicle stays closer to the optimal route in complex traffic situations.
§.§ Interpretability
To assess the interpretability of our proposed framework, we conducted experiments using several randomly sampled frames. Fig. <ref> shows the bird's-eye-view semantic segmentation obtained by decoding the latent variables produced by the BEV feature extraction network. Due to the domain gap between the CARLA simulator and the nuScenes dataset, pre-training on nuScenes alone does not generalize well to the simulated environment in CARLA: the differences in sensor configurations, traffic scenarios, and environment dynamics between these datasets may lead to poor decoding accuracy when transferring to the new domain. Fig. <ref> shows that the decoding quality is significantly improved after fine-tuning the model using deep reinforcement learning. Fine-tuning adapts the model to the specific features of the CARLA environment, enabling it to generate clearer and more accurate BEV masks. These masks effectively capture the spatial layout of objects and obstacles, providing interpretable insights into the decision-making process.
§ CONCLUSION
In this paper, we present a novel end-to-end control framework for autonomous driving that utilizes a DRL-based approach to integrate perception and control. Our method employs a BEV feature extraction network to convert visual input into latent features, which are then decoded using semantic segmentation for improved interpretability. We tackle the challenges of partial observability by framing the problem as a partially observable
Markov decision process, enhancing the system's ability to make informed control decisions despite incomplete environmental data.
Our approach demonstrates significant advancements in autonomous driving by providing a robust feature extraction and explanation mechanism. It not only improves the interpretability of the end-to-end control strategy but also contributes to making autonomous systems more transparent and reliable. Future work will focus on refining depth prediction and camera parameter integration to enhance the accuracy and robustness of BEV feature extraction. Additionally, we plan to explore real-world implementations to evaluate the practical viability of our approach in diverse driving environments.
IEEEtran
http://arxiv.org/abs/2409.17467v1 | 20240926015727 | What is the social benefit of hate speech detection research? A Systematic Review | [
"Sidney Gig-Jan Wong"
] | cs.CL | [
"cs.CL"
] |
§ ABSTRACT
While NLP research into hate speech detection has grown exponentially in the last three decades, there has been minimal uptake or engagement from policy makers and non-profit organisations. We argue that the absence of ethical frameworks has contributed to this rift between current practice and best practice. By adopting appropriate ethical frameworks, NLP researchers may enable the social impact potential of hate speech research. This position paper is informed by reviewing forty-eight hate speech detection systems associated with thirty-seven publications from different venues.
§ INTRODUCTION
Social impact is a conceptual model used to determine the practice and science of social good, factoring in: 1) social good domains (including diversity and inclusion; environmental justice and sustainability; and peace and collaboration); 2) unconventional systems of change; and 3) innovative technologies <cit.>. Indeed, one area of natural language processing (NLP) which seamlessly unites all three elements of social impact is hate speech detection <cit.>. In the last three decades, we have seen exponential growth in hate speech research, with rapid developments in the last decade alone as a result of methodological advancement in NLP <cit.>.
The main contribution of NLP research in combating hate speech is through the development of hate speech detection training data sets. This is because hate speech detection is often treated as a text classification task, and the development of hate speech detection systems follows a similar workflow: a) data set collection and preparation; b) feature engineering; c) model training; and lastly d) model evaluation <cit.>. A systematic review of hate speech literature has identified over sixty-nine hate speech detection systems <cit.>. However, these systems pose a number of ethical challenges and risks to the vulnerable communities they are meant to protect <cit.>.
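To make the workflow in (a)-(d) concrete, the sketch below shows a minimal text-classification pipeline of the kind these systems typically instantiate; the load_annotated_posts helper, the feature choices, and the classifier are illustrative assumptions rather than any specific published system.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# a) data set collection and preparation
texts, labels = load_annotated_posts()   # hypothetical helper returning texts and binary hate labels
X_tr, X_te, y_tr, y_te = train_test_split(texts, labels, test_size=0.2, stratify=labels)

# b) feature engineering and c) model training
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=2),
                    LogisticRegression(max_iter=1000))
clf.fit(X_tr, y_tr)

# d) model evaluation
print("macro F1:", f1_score(y_te, clf.predict(X_te), average="macro"))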
As an area of research enquiry, hate speech research is highly productive. For example, the flagship publisher of computational linguistics and natural language processing research, ACL Anthology, returned 6,570 results for `hate speech' as of June 2024. This number pales in comparison to the staggering 116,000 publications indexed by Google Scholar. While hate speech research has been purported as a valuable resource in policing anti-social behaviour online <cit.>, some researchers are beginning to question the social benefits of proposed NLP solutions in combating hate speech <cit.>.
The efforts of NLP researchers are rarely used to combat hate speech. In a review of hate speech policies, the key players in this space were non-profit organisations, social media platforms, and government agencies <cit.>. Hate speech detection research rarely appears in policy documents. As an example, the most cited hate speech publication had 2,861 citations on Google Scholar <cit.>, but appeared only twice in Overton, a database of policy documents and working papers for 188 countries. The absence of NLP research suggests that methodological innovations are incongruent with the legal and ethical concerns of this social issue <cit.>.
NLP researchers do not seem to be concerned that their hate speech systems are not being widely applied or implemented. This is because the primary concern in hate speech research is poor model performance, which is often attributed to noisy training data <cit.>. <cit.> argued that the `datafication' of hate speech research has become an unnecessary distraction for NLP researchers in combating this social issue. This is a well-attested issue in NLP research for positive social impact <cit.>.
Because hate speech detection is a relatively new field of academic enquiry <cit.>, there remains a paradigmatic rift between current practice and evidence-based best practice. <cit.> expressed their concerns about the negative social impacts of NLP research. Historically, NLP research was treated as exempt from research ethics because NLP approaches did not directly involve human subjects; researchers are now increasingly aware they are not immune from ethical dilemmas. As an example, recent work has identified racial bias in hate speech systems <cit.>.
If NLP researchers wish to enable the intended positive social impact of hate speech detection systems, then there must be a re-orientation of how the problem of hate speech detection is conceived, from a methods-based problem towards a collaborative solution <cit.>. This view is shared by the broader field of NLP for social good, whereby the needs of users and communities are centred over the methods <cit.>. One proposed approach is to determine the responsibility of NLP solutions and systems by considering their broader impact on target users and communities.
§.§ Responsible Innovation in AI
As strands of AI, including NLP, become more intertwined with society, researchers must consciously reflect on the broader ethical implications of their solutions and systems. The ACM Code of Ethics exists to support computing professionals <cit.>. However, the perceived opacity in AI research (i.e., poor transparency, explainability, and accountability) led to the recent development of a proposed deliberative framework on responsible innovation <cit.>. The proposed dimensions of the deliberative framework include:
* Responsibility to Prevent Harm: AI researchers are required to implement risk management strategies in preventing potentially negative outcomes for humans, society, and the environment.
* Obligation to `do good': AI researchers and systems are required to improve the conditions for humans, society, and the environment.
* Responsibility to Govern: AI researchers are stewards of responsible AI systems.
The conceptual model was influenced by the Principlist approaches in biomedical ethics <cit.>. Just as the Principlist principles are used to guide medical professionals in cases of conflict or confusion, the framework was developed to address some of the challenges in AI research at a systemic level. The first dimension corresponds with the Principlist principles of respect for autonomy and non-maleficence, while the second dimension corresponds with beneficence and justice.
When we evaluate existing hate speech research against the proposed deliberative framework, we begin to see where the existing hate speech systems may fall short in terms of social benefits. For example, known biases in hate speech detection systems (e.g., <cit.>) may further exacerbate inequities for target groups and communities. Additionally, socially or culturally agnostic hate speech systems may offer limited value when applied without considering the sociocultural context of target groups and communities <cit.>.
§.§ Responsible NLP
Building on the proposed deliberative framework for responsible innovation in AI <cit.>, <cit.> proposed a conceptual model entitled Responsible Natural Language Processing (rnlp) to determine the social benefits of NLP systems throughout its operational life-cycle. The conceptual model was developed from semi-structured interviews with NLP researchers in the health, finance, and retail and e-commerce industries to understand the efficacy of the framework. The NLP researchers found the rnlp a suitable tool for ethical decision making at the structural level.
Principle 1: Human-Centred Values NLP systems should respect individual autonomy, diversity, and uphold human rights. NLP systems should not be used to replace cognitive functions (i.e., reasoning, learning, problem solving, perception, and rationality). This also means the perspectives of target communities should be included in the development of the system (i.e., data collection, annotation, deployment). An example of this may involve co-creating NLP informed solutions with target communities <cit.>.
Principle 2: Transparency NLP systems should include responsible disclosures, especially if a system may have substantial influence on individuals <cit.>. Within a hate speech detection context, disclosures should include a detailed description of the research design, including decision-making processes and possible biases or data quality issues. NLP researchers are encouraged to provide data statements profiling participants or annotators and their affiliation to a target group <cit.>.
Principle 3: Well-being NLP systems should be used to benefit humans, society, and the environment; more importantly, there should be no negative impacts to humans, society, or the environment. These benefits should be explicitly defined and justified. An example of this may involve contextualising the research using the Researcher Impact Framework which highlights key achievements in the generation of knowledge, the development of individuals and collaborations, supporting the research community, and supporting broader society <cit.>.
Principle 4: Privacy and Security NLP systems should uphold and respect the private rights of individuals. Individuals should not be identified within the system and the system is stored securely. Where appropriate, anonymisation, confidentialisation, or homomorphic encryption should be applied. An example of this may include publishing numerical identifiers of social media posts and not the content without consent <cit.>.
Principle 5: Reliability NLP systems should operate in a consistent manner (i.e., precise, dependable, and repeatable) in accordance with the intended purpose. An example of this may include publishing code and training data securely as well as relevant model evaluation metrics <cit.>. NLP systems should not pose safety risks to individuals.
Principle 6: Fairness NLP systems should be inclusive and accessible (i.e., user-centric) of marginalised or vulnerable communities. Furthermore, NLP systems should not perpetuate existing prejudice towards marginalised and vulnerable communities. An example of this may include additional assessments for social bias <cit.>. Systems should be deployed on no-code or low-code development platforms as target communities may not have the capability to deploy the system from the source code. Within the context of hate speech detection research, this principle is correlated with Principle 2: Transparency and Principle 8: Accountability.
Principle 7: Interrogation There should be effective and accessible methods that enable individuals to challenge NLP systems. Shared tasks is a useful approach to determine the limitations of the system <cit.>.
Principle 8: Accountability There should be human oversight over the development and deployment of NLP systems throughout various phases of the NLP system life-cycle. Evidence of this principle may include participatory design process with stakeholders <cit.>; and ethics or internal review board approval obtained.
§.§ Summary
As target communities continue to experience online hate despite these opaque strategies <cit.>, NLP researchers may still play a significant role in unleashing the social impact potential of NLP research - to enable equitable digital inclusion and to close the `digital divide' <cit.>. The introduction of the deliberative framework for responsible innovation in AI <cit.> and the Responsible NLP (rnlp) conceptual model <cit.> provide a useful tool to understand the current state of hate speech detection systems. The main contribution of this position paper is a systematic review of existing hate speech detection systems to determine possible areas of improvement with the aim to enable positive social benefits for target groups or communities. We posit the low social impact of hate speech detection research, as evident from the lack of engagement from key stakeholders <cit.>, may stem from the lack of ethical decision making in the development of these NLP systems.
§ ANALYSIS
We retroactively apply the rnlp conceptual model to evaluate the ethical and responsible performance of hate speech systems. Each system is rated on a three-point scale: no evidence (not met), some evidence (partially met), and good evidence (met). While the rnlp evaluates an NLP system in its entirety, we restrict our analysis to the training data sets used to train these systems. As part of our systematic review, we only refer to publicly available publications (or, in some instances, pre-prints) and the associated data or metadata repositories for evidence when evaluating each system.
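As an illustration, the per-principle percentages reported in the Results section can be tallied from such ratings with a few lines of code; the file name and column layout below are hypothetical.

import pandas as pd

# One row per system, one column per principle, values in {"not met", "partially met", "met"}.
ratings = pd.read_csv("review_ratings.csv")           # hypothetical export of the review spreadsheet

summary = (ratings.apply(pd.Series.value_counts)       # counts of each rating per principle
                  .fillna(0)
                  .div(len(ratings))                    # proportions over the reviewed systems
                  .mul(100).round(1))                   # percentages as reported in the Results
print(summary)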
§.§ Data
Even though there are hundreds (possibly thousands) of hate speech detection systems, we have included the forty-eight hate speech detection systems that were also reviewed as part of <cit.>. The list of systems, with limited corpus information, is presented in Table <ref> in the Appendix. For a technical summary of the sample, refer to Tables 11 and 12 in <cit.>. The systems are associated with thirty-eight publications published between 2016 and 2020. Furthermore, these hate speech data sets span multiple language conditions.
§ RESULTS
A summary of the results from our systematic evaluation is presented in Table <ref>. The evaluation for each hate speech detection system is presented in Table <ref> of the Appendix. We do not provide a ranking of the systems in our analysis as the purpose of the systematic review is not to determine the ethical robustness of individual systems. Some systems associated with one publication may appear to have duplicate results as they were developed with a similar methodology.
Most systems (68.8%) partially met Principle 1: Well-being (P1) by explicitly stating the contribution of the system; however, almost a third (27.1%) of systems did not. Over half (56.3%) of the systems partially met Principle 2: Human-Centred Values (P2) by recruiting manual annotators from relevant sociocultural or linguistic backgrounds, while a third (35.4%) relied on anonymous crowd-sourcing platforms. Only a third (33.3%) of systems met Principle 3: Fairness (P3) by providing a discussion of possible biases, limitations, or data quality issues. The remaining systems did not include a discussion of limitations at all.
Nineteen systems (39.6%) met Principle 4: Privacy and Security (P4) and twenty-one systems (43.8%) partially met this principle. The systems which met this principle published de-identified data with a small number stored securely with approval required. Eight systems (16.7%) did not meet this principle which raises both ethical and legal concerns. Thirty-nine systems (81.3%) met Principle 5: Reliability (P5) while nine systems (18.8%) partially met this principle. Thirty-one systems (64.6%) did not meet Principle 6: Fairness (P6) as there were no responsible disclosures. The remaining systems (33.3%) partially met this principle with limited information about the annotators. Over half (52.1%) of the systems met Principle 7: Interrogation (P7). Lastly, the majority (95.8%) of systems did not meet Principle 8: Accountability (P8).
§ DISCUSSION
While the systematic review provides useful insights of hate speech detection systems from a structural perspective, it does not provide insights into systemic issues. We therefore organise our discussion using the deliberative framework on responsible innovation in AI <cit.> to determine the broader ethical implications of the sample of hate speech detection systems as highlighted from our systematic review.
Responsibility to Prevent Harm
The principles associated with this dimension are Principle 2: Human-Centred Values and Principle 6: Transparency. Based on the systematic review, the sample of systems performed poorly for this dimension. Evidence for Principle 2: Human-Centred Values was largely determined by the annotation process, which heavily relied on anonymous crowd-sourcing when labelling the training data sets. Anonymous crowd-sourcing decreases the reliability of the annotated data <cit.>. Manual annotators who do not affiliate with a target group may over-generalise linguistic features (i.e., slurs) as hate speech. This dimension requires researchers to implement risk management strategies to prevent negative outcomes for humans, society, and the environment. Only <cit.> co-created the detection system alongside target groups and communities. Even though the use of crowd-sourced annotators may seem innocuous from a research design perspective, there is a growing body of evidence that content moderators (in this case manual annotators) are unnecessarily exposed to secondary trauma from harmful content with limited mental health support <cit.>. This means annotators, whether recruited from within a target group/community or anonymously, may experience harm through the system development process. In terms of evidence for Principle 6: Transparency, only one system provided both disclosures and detailed profiles of annotators <cit.>. Poor documentation may reinforce existing biases against target communities <cit.>.
Obligation to `do good'
The principles associated with this dimension are Principle 1: Well-being and Principle 4: Privacy and Security. The evidence for Principle 1: Well-being was largely determined by the aims and research questions. There was little discussion of the suitability of these systems, or of the roles of target communities and annotators, in combating online hate speech. Only two systems, both associated with <cit.>, had clear contributions to target communities. While this dimension requires researchers to improve the conditions for humans, society, and the environment, the contributions for most systems were largely methodological and the social benefits were negligible. This reinforces the belief that methodological innovations are incongruent with the social or ethical concerns <cit.>. In terms of evidence for Principle 4: Privacy and Security, this was largely determined by data management practices. The systems which met this principle published de-identified data, with a small number stored securely and requiring approval from the researchers. It is important to note that identifiable social media data contravenes the data use policy of most social media platforms. This means that making these data sets available with limited security poses both ethical and legal issues. The social benefits of the systems resulting from the research should be clear to target groups and communities.
Responsibility to Govern
The remaining four principles are associated with this dimension. The systematic review revealed a high degree of polarity in the performance of the principles associated with this dimension. The evidence for Principle 5: Reliability was largely determined by the available documentation (i.e., journal article, conference proceeding, or pre-print). We can attribute the high performance of systems on this principle to the fact that all associated publications were required to undergo peer review. The high performance of this principle is in direct contrast to Principle 6: Fairness, which performed poorly as a majority of systems were not deployed beyond publishing the training data. This meant none of the systems met this principle in its entirety, as they are not accessible to target communities. Similarly, all systems performed poorly for Principle 8: Accountability, as participatory design approaches were not evident and ethics and internal review board approvals were rarely obtained for these studies. In terms of evidence for Principle 7: Interrogation, over half the systems met this principle, as the datasets were indexed in Papers with Code or involved in shared tasks, both of which are effective methods to enable robust interrogation of the systems. Crucially, this is where NLP researchers can enable positive social benefits, as this dimension requires researchers to be stewards of responsible AI systems. Social media platforms (such as X (Twitter) and Facebook) remove harmful content using in-house detection algorithms and content moderators <cit.>. This suggests NLP researchers may play a role in challenging these opaque systems and promoting the transparency, explainability, and accountability of in-house detection algorithms which continue to fail and expose target groups and communities to hate speech.
§ CONCLUSION
While the systematic review cannot determine why there is a lack of engagement from key stakeholders of target groups and communities, it offers insights into how NLP researchers can improve ethical decision making in the development of hate speech detection systems. Based on the systematic review, NLP researchers working in the field of hate speech detection are consistently meeting Principle 5: Reliability, Principle 7: Interrogation, and Principle 4: Privacy and Security. The two principles which require the most attention are Principle 8: Accountability and Principle 3: Fairness. Some of these ethical concerns may be addressed systemically and structurally through the adoption of ethical frameworks (such as <cit.> or <cit.>); however, true positive social benefits may only be achieved by working alongside the target groups and communities most impacted by this social issue.
§ ETHICS STATEMENT
The purpose of this position paper is not to take a punitive view of hate speech detection research, but to determine how NLP researchers can enable ethical research practices in this area. As demographic bias in language models may have unintended downstream impacts on vulnerable and marginalised communities <cit.>; research practices of existing and former hate speech detection systems may also perpetuate unintentional harms on vulnerable and marginalised communities. Even though this position paper is not an NLP system in itself, it does contribute to the development of ethical research practices for NLP systems; therefore, we will use the RNLP <cit.> conceptual model to reinforce current best practice in NLP research.
Principle 1: Well-being We use the Researcher Impact Framework proposed by <cit.> to determine the contributions of this position paper. This position paper contributes to the generation of knowledge in NLP research by evaluating current research practices in hate speech research and the steps needed to enable best practice and ethical research practices. This position paper supports the development of individuals and the research community by synthesising different ethical conceptual models and frameworks to support best practice in NLP research. While this position paper does not involve vulnerable and marginalised groups, its main contribution is to support NLP researchers in effectively addressing the social issues of broader society by encouraging researcher reflexivity on existing research practices.
Principle 2: Human-Centred Values This position paper is a systematic review of existing hate speech detection systems. These are subjective ratings based on the perspectives and experiences of the authors and the ratings have not been automated. We have not used AI assistants in research or writing as this will replace the cognitive functions of the authors. The authors intersect communities often targeted by online hate speech which in turn brings a unique and nuanced perspective on the efficacy of NLP solutions in combating this social issue. The positionality of the authors will be released following anonymous peer-review.
Principle 3: Fairness This position paper does not perpetuate existing prejudice towards marginalised and vulnerable communities. We are aware that ethical research practice may differ between social, cultural, linguistic, or political affiliations; therefore, we have not associated hate speech systems and their research practices as more or less ethical. We have focused our discussion on social benefits and enabling digital inclusion to avoid taking a deficit approach towards hate speech detection research. We have written this paper in plain language to ensure full accessibility of the content.
Principle 4: Privacy and Security This position paper does not contain individually identifying information or examples of hate speech or offensive language. All hate speech detection systems and associated documentation which we have explicitly referenced are available in the public domain.
Principle 5: Reliability We have identified no potential risks of this position paper; however, we have not included the complete evaluation of individual systems as this may cause reputational risks for both the developers of the individual systems and the authors of this position paper. As this position paper is largely a qualitative assessment of hate speech detection systems, there are no model evaluation metrics or statistics and we have not included any experimental settings or hyper-parameters.
Principle 6: Transparency We have included a brief description of the forty-eight hate speech detection systems which can be located in Table 11 and Table 12 of <cit.>. We have not involved human subjects or external annotators in our systematic review of hate speech detection systems.
Principle 7: Interrogation We encourage other NLP researchers to conduct a similar systematic review based on their own perspectives and experiences. The evaluation with supporting evidence can be made available by contacting the authors.
Principle 8: Accountability This position paper does not include human subjects or external annotators; therefore, ethics or internal review board approval have not been sought. However, we encourage NLP researchers working in hate speech detection to contact the authors to discuss the contents of the position paper. We believe there is value in taking a participatory design approach to determine the needs of NLP researchers in hate speech detection to enable ethical research practices.
§ LIMITATIONS
This position paper evaluates a sample (48) of existing hate speech detection systems. Naturally, this is not a true reflection of all hate speech detection systems developed or available in the public domain. We suggest extending this position paper to a bibliometric evaluation of hate speech detection systems to capture the evidence needed to support its claims. Furthermore, the qualitative evaluation in this position paper is limited to the perspectives and experiences of the authors; therefore, we do not expect that the views expressed here can be generalised across the NLP research community, whose members may hold differing views on ethical research practice depending on their social, cultural, linguistic, or political affiliations. This position paper uses one ethical conceptual model and may benefit from the inclusion of other ethical frameworks.
§ ACKNOWLEDGEMENTS
The author wants to thank Dr. Benjamin Adams (University of Canterbury | Te Whare Wānanga o Waitaha) and Dr. Jonathan Dunn (University of Illinois Urbana-Champaign) for their supervision and mentorship. The author wants to thank the three anonymous peer reviewers, the area chair, the programme chair, and James Kay for their constructive feedback. Lastly, the author wants to thank Fulbright New Zealand | Te Tūāpapa Mātauranga o Aotearoa me Amerika and their partnership with the Ministry of Business, Innovation, and Employment | Hīkina Whakatutuki for their support through the Fulbright New Zealand Science and Innovation Graduate Award.
§ APPENDIX
| Social impact is a conceptual model used to determine the practice and science of social good factoring: 1) social good domains (including diversity and inclusion; environmental justice and sustainability; and peace and collaboration); 2) unconventional systems of change; and 3) innovative technologies <cit.>. Indeed, one area of natural language processing (NLP) which seamlessly unites all three elements of social impact is hate speech detection <cit.>. In the last three decades, we have seen exponential growth in hate speech research, with rapid developments in the last decade alone as a result of methodological advancement in NLP <cit.>.
The main contribution of NLP research in combating hate speech is through the development of hate speech detection training data sets. This is because hate speech detection is often treated as a text classification task, and the development of hate speech detection systems follows a similar workflow: a) data set collection and preparation; b) feature engineering; c) model training; and lastly d) model evaluation <cit.>. A systematic review of hate speech literature has identified over sixty-nine hate speech detection systems <cit.>. However, these systems pose a number of ethical challenges and risks to the vulnerable communities they are meant to protect <cit.>.
As an area of research enquiry, hate speech research is highly productive. For example, the flagship publisher of computational linguistics and natural language processing research, ACL Anthology, returned 6,570 results for `hate speech' as of June 2024. This number pales in comparison to the staggering 116,000 publications indexed by Google Scholar. While hate speech research has been purported as a valuable resource in policing anti-social behaviour online <cit.>, some researchers are beginning to question the social benefits of proposed NLP solutions in combating hate speech <cit.>.
The efforts of NLP researchers are rarely used to combat hate speech. In a review of hate speech policies, the key players in this space were non-profit organisations, social media platforms, and government agencies <cit.>. Hate speech detection research rarely appears in policy documents. As an example, the most cited hate speech publication had 2,861 citations on Google Scholar <cit.>, but was cited only twice in Overton - a database of policy documents and working papers for 188 countries. The absence of NLP research suggests that methodological innovations are incongruent with the legal and ethical concerns of this social issue <cit.>.
NLP researchers do not seem to be concerned that their hate speech systems are not being widely applied or implemented. This is because the primary concern in hate speech research is poor model performance, which is often attributed to noisy training data <cit.>. <cit.> argued that the `datafication' of hate speech research has become an unnecessary distraction for NLP researchers in combating this social issue. This is a well-attested issue in NLP research for positive social impact <cit.>.
Hate speech detection is a relatively new field of academic enquiry <cit.>, and there remains a paradigmatic rift between current practice and evidence-based best practice. <cit.> expressed their concerns on the negative social impacts of NLP research. This is because NLP research was previously considered immune from research ethics, as NLP approaches did not directly involve human subjects. NLP researchers are increasingly aware that they are not immune from ethical dilemmas. As an example, recent work has identified racial bias in hate speech systems <cit.>.
If NLP researchers wish to enable the intended positive social impact of hate speech detection systems, then there must be a re-orientation of how the problem of hate speech detection is conceived, from a methods-based problem towards a collaborative solution <cit.>. This view is shared by the broader field of NLP for social good, whereby the needs of users and communities are centred over the methods <cit.>. One proposed approach is to determine the responsibility of NLP solutions and systems to consider their broader impact on target users and communities.
§.§ Responsible Innovation in AI
As strands of AI, including NLP, become more intertwined with society, researchers must consciously reflect on the broader ethical implications of their solutions and systems. The ACM Code of Ethics exists to support computing professionals <cit.>. However, the perceived opacity in AI research (i.e., poor transparency, explainability, and accountability) led to the recent development of a proposed deliberative framework on responsible innovation <cit.>. The proposed dimensions of the deliberative framework include:
* Responsibility to Prevent Harm: AI researchers are required to implement risk management strategies in preventing potentially negative outcomes for humans, society, and the environment.
* Obligation to `do good': AI researchers and systems are required to improve the conditions for humans, society, and the environment.
* Responsibility to Govern: AI researchers are stewards of responsible AI systems.
The conceptual model was influenced by the Principlist approaches in biomedical ethics <cit.>. Just as the Principlist principles are used to guide medical professionals in cases of conflict or confusion, the framework was developed to address some of the challenges in AI research at a systemic level. The first dimension corresponds with the Principlist principles of respect for autonomy and non-maleficence, while the second dimension corresponds with beneficence and justice.
When we evaluate existing hate speech research against the proposed deliberative framework, we begin to see where the existing hate speech systems may fall short in terms of social benefits. For example, known biases in hate speech detection systems (e.g., ) may further exacerbate inequities of target groups and communities. Additionally, socially or culturally agnostic hate speech systems may offer limited value when applied without considering the sociocultural context of target groups and communities <cit.>.
§.§ Responsible NLP
Building on the proposed deliberative framework for responsible innovation in AI <cit.>, <cit.> proposed a conceptual model entitled Responsible Natural Language Processing (rnlp) to determine the social benefits of NLP systems throughout its operational life-cycle. The conceptual model was developed from semi-structured interviews with NLP researchers in the health, finance, and retail and e-commerce industries to understand the efficacy of the framework. The NLP researchers found the rnlp a suitable tool for ethical decision making at the structural level.
Principle 1: Human-Centred Values NLP systems should respect individual autonomy, diversity, and uphold human rights. NLP systems should not be used to replace cognitive functions (i.e., reasoning, learning, problem solving, perception, and rationality). This also means the perspectives of target communities should be included in the development of the system (i.e., data collection, annotation, deployment). An example of this may involve co-creating NLP informed solutions with target communities <cit.>.
Principle 2: Transparency NLP systems should include responsible disclosures, especially if a system may have substantial influence on individuals <cit.>. Within a hate speech detection context, disclosures should include a detailed description of the research design, including decision-making processes and possible biases or data quality issues. NLP researchers are encouraged to provide data statements profiling participants or annotators and their affiliation to a target group <cit.>.
Principle 3: Well-being NLP systems should be used to benefit humans, society, and the environment; more importantly, there should be no negative impacts to humans, society, or the environment. These benefits should be explicitly defined and justified. An example of this may involve contextualising the research using the Researcher Impact Framework which highlights key achievements in the generation of knowledge, the development of individuals and collaborations, supporting the research community, and supporting broader society <cit.>.
Principle 4: Privacy and Security NLP systems should uphold and respect the private rights of individuals. Individuals should not be identified within the system and the system is stored securely. Where appropriate, anonymisation, confidentialisation, or homomorphic encryption should be applied. An example of this may include publishing numerical identifiers of social media posts and not the content without consent <cit.>.
Principle 5: Reliability NLP systems should operate in a consistent manner (i.e., precise, dependable, and repeatable) in accordance with the intended purpose. An example of this may include publishing code and training data securely as well as relevant model evaluation metrics <cit.>. NLP systems should not pose safety risks to individuals.
Principle 6: Fairness NLP systems should be inclusive and accessible (i.e., user-centric) of marginalised or vulnerable communities. Furthermore, NLP systems should not perpetuate existing prejudice towards marginalised and vulnerable communities. An example of this may include additional assessments for social bias <cit.>. Systems should be deployed on no-code or low-code development platforms as target communities may not have the capability to deploy the system from the source code. Within the context of hate speech detection research, this principle is correlated with Principle 2: Transparency and Principle 8: Accountability.
Principle 7: Interrogation There should be effective and accessible methods that enable individuals to challenge NLP systems. Shared tasks are a useful approach to determining the limitations of a system <cit.>.
Principle 8: Accountability There should be human oversight over the development and deployment of NLP systems throughout various phases of the NLP system life-cycle. Evidence of this principle may include participatory design process with stakeholders <cit.>; and ethics or internal review board approval obtained.
§.§ Summary
As target communities continue to experience online hate despite these opaque strategies <cit.>, NLP researchers may still play a significant role in unleashing the social impact potential of NLP research - to enable equitable digital inclusion and to close the `digital divide' <cit.>. The introduction of the deliberative framework for responsible innovation in AI <cit.> and the Responsible NLP (rnlp) conceptual model <cit.> provide a useful tool to understand the current state of hate speech detection systems. The main contribution of this position paper is a systematic review of existing hate speech detection systems to determine possible areas of improvement with the aim to enable positive social benefits for target groups or communities. We posit the low social impact of hate speech detection research, as evident from the lack of engagement from key stakeholders <cit.>, may stem from the lack of ethical decision making in the development of these NLP systems. | null | null | A summary of the results from our systematic evaluation is presented in Table <ref>. The evaluation for each hate speech detection system is presented in Table <ref> of the Appendix. We do not provide a ranking of the systems in our analysis as the purpose of the systematic review is not to determine the ethical robustness of individual systems. Some systems associated with one publication may appear to have duplicate results as they were developed with a similar methodology.
Most systems (68.8%) partially met Principle 1: Well-being (P1) by explicitly stating the contribution of the system; however, almost a third (27.1%) of systems did not. Over half (56.3%) of the systems partially met Principle 2: Human-Centred Values (P2) by recruiting manual annotators from relevant sociocultural or linguistic backgrounds, while a third (35.4%) relied on anonymous crowd-sourcing platforms. Only a third (33.3%) of systems met Principle 3: Fairness (P3) by providing a discussion of possible biases, limitations, or data quality issues. The remaining systems did not include a discussion of limitations at all.
Nineteen systems (39.6%) met Principle 4: Privacy and Security (P4) and twenty-one systems (43.8%) partially met this principle. The systems which met this principle published de-identified data with a small number stored securely with approval required. Eight systems (16.7%) did not meet this principle which raises both ethical and legal concerns. Thirty-nine systems (81.3%) met Principle 5: Reliability (P5) while nine systems (18.8%) partially met this principle. Thirty-one systems (64.6%) did not meet Principle 6: Fairness (P6) as there were no responsible disclosures. The remaining systems (33.3%) partially met this principle with limited information about the annotators. Over half (52.1%) of the systems met Principle 7: Interrogation (P7). Lastly, the majority (95.8%) of systems did not meet Principle 8: Accountability (P8). | While the systematic review provides useful insights of hate speech detection systems from a structural perspective, it does not provide insights into systemic issues. We therefore organise our discussion using the deliberative framework on responsible innovation in AI <cit.> to determine the broader ethical implications of the sample of hate speech detection systems as highlighted from our systematic review.
Responsibility to Prevent Harm
The principles associated with this dimension are Principle 2: Human-Centred Values and Principle 6: Transparency. Based on the systematic review, the sample of systems performed poorly for this dimension. Evidence for Principle 2: Human-Centred Values was largely determined by the annotation process, which heavily relied on anonymous crowd-sourcing when labelling the training data sets. Anonymous crowd-sourcing decreases the reliability of the annotated data <cit.>. Manual annotators who may not affiliate with a target group may over-generalise linguistic features (i.e., slurs) as hate speech. This dimension requires researchers to implement risk management strategies in preventing negative outcomes for humans, society, and the environment. Only <cit.> co-created the detection system alongside target groups and communities. Even though the use of crowd-sourced annotators may seem innocuous from a research design perspective, there is a growing body of evidence that content moderators (in this case manual annotators) are unnecessarily exposed to secondary trauma from harmful content with limited mental health support <cit.>. This means annotators, whether recruited from within a target group/community or anonymously, may experience harm through the system development process. In terms of evidence for Principle 6: Transparency, only one system provided both disclosures and detailed profiles of annotators <cit.>. For example, poor documentation may reinforce existing biases against target communities <cit.>.
Obligation to `do good'
The principles associated with this dimension are Principle 1: Well-being and Principle 4: Privacy and Security. The evidence for Principle 1: Well-being was largely determined by the aims and research questions. There was little discussion on the suitability of these systems, the role of target communities, or the role of annotators in combating online hate speech. Only two systems, both associated with <cit.>, had clear contributions to target communities. While this dimension requires researchers to improve the conditions for humans, society, and the environment, the contributions for most systems were largely methodological and the social benefits were negligible. This reinforces the belief that methodological innovations are incongruent with the social or ethical concerns <cit.>. In terms of evidence for Principle 4: Privacy and Security, this was largely determined by data management practices. The systems which met this principle published de-identified data, with a small number stored securely and requiring approval from the researchers. It is important to note that identifiable social media data contravenes the data use policy of most social media platforms. This means that making these data sets available with limited security poses ethical and legal issues. The social benefits of the systems resulting from the research should be clear to target groups and communities.
Responsibility to Govern
The remaining four principles are associated with this dimension. The systematic revealed a high degree of polarity in the performance of the principles associated with this dimension. The evidence for Principle 5: Reliability was largely determined by the available documentation (i.e., journal article, conference proceeding, or pre-print). We can attribute the high performance of systems in this principle as all associated publications were required to undergo peer-review. The high performance of this principle is in direct contrasts Principle 6: Reliability which performed poorly as a majority of systems were not deployed beyond publishing the training data. This meant none of the systems met this principle in its entirety as they are not accessible to target communities. Similarly, all systems performed poorly for Principle 8: Accountability as participatory design approaches were non-evident and ethics and internal review board approvals were rarely obtained for these studies. In terms of evidence for Principle 7: Interrogation, over half the systems met this principle as the datasets were indexed in Papers with Code or involved with shared tasks which are both effective methods to enable robust interrogation of the systems. Crucially, this is where NLP researchers can enable positive social benefits as this dimension requires researchers to be stewards of responsible AI systems. Social media platforms (such as X (Twitter) and Facebook) remove harmful content using in-house detection algorithms and content moderators <cit.>. This suggests NLP researchers may play a role in challenging these opaque systems and promote transparency, explainability, and accountability of these in-house detection algorithms which continue to fail and expose target groups and communities to hate speech. | While the systematic review cannot determine why there is a lack of engagement from key stakeholders of target groups and communities, the insights on how NLP researchers can improve ethical decision making in the development of hate speech detection systems. Based on the systematic review, NLP researchers working in the field of hate speech detection are consistently meeting the principles of Principle 5: Reliability, Principle 7: Interrogation, and Principle 4: Privacy and Security. The two principles which require the most attention are Principle 8: Accountability and Principle 3: Fairness. Some of these ethical concerns may be addressed systemically and structurally through the adoption of ethical frameworks (such as or ); however, true positive social benefits may only be achieved by working alongside target groups and communities most impacted by this social issue. |
http://arxiv.org/abs/2409.17505v1 | 20240926032459 | Sequential Kernelized Stein Discrepancy | ["Diego Martinez-Taboada", "Aaditya Ramdas"] | stat.ML | ["stat.ML", "cs.LG"] |
§ ABSTRACT
We present a sequential version of the kernelized Stein discrepancy, which allows for conducting goodness-of-fit tests for unnormalized densities that are continuously monitored and adaptively stopped. That is, the sample size need not be fixed prior to data collection; the practitioner can choose whether to stop the test or continue to gather evidence at any time while controlling the false discovery rate. In stark contrast to related literature, we do not impose uniform boundedness on the Stein kernel. Instead, we exploit the potential boundedness of the Stein kernel at arbitrary point evaluations to define test martingales, that give way to the subsequent novel sequential tests. We prove the validity of the test, as well as an asymptotic lower bound for the logarithmic growth of the wealth process under the alternative. We further illustrate the empirical performance of the test with a variety of distributions, including restricted Boltzmann machines.
§ INTRODUCTION
Many statistical procedures heavily rely on the assumptions made about the distribution of the empirical observations, and prove invalid if such conditions are violated. For instance, a high-energy physicist may develop a generative model seeking to produce synthetic observations of particle collisions, motivated by the high cost of experimenting on a real particle collider. If the model is not accurate, then any conclusion that derives from such synthetic data will lack any scientific interest. The question of whether the data follows a particular distribution may be posed in terms of goodness-of-fit testing. Formally, the simple goodness-of-fit testing problem considers the null hypothesis H_0: Q = P against the alternative H_1: Q ≠ P, for a given P and access to data X_1, X_2, …∼ Q. It may also be the case that we have a set of given distributions as candidates and we wish to test if any of them accurately models the empirical data. In such a case, the goodness-of-fit problem is posed in terms of a composite null hypothesis H_0: Q ∈𝒫, against the alternative hypothesis H_1: Q ∉𝒫.
In this work, we focus on a particularly challenging scenario that arises from not having full access to P (or 𝒫). We handle classes of distributions whose densities are only known up to normalizing constants. This is a common scenario when working with general energy-based models, such as Ising models and restricted Boltzmann machines, and in Bayesian statistics, where normalizing factors are generally neither known in closed form nor computable. The existing works in the literature regarding goodness-of-fit for unnormalized densities have focused on the batch or fixed sample size setting, where the number of samples n is decided before conducting the analysis, which is then carried out using observations X_1, …, X_n. Nonetheless, this approach comes with several major drawbacks. Because the number of observations in batch tests is determined prior to data collection, there is a risk that too many observations may be allocated to simpler problem instances, thus wasting resources, or that insufficient observations are assigned to more complex instances, which can yield insufficient evidence against the null hypothesis. Furthermore, when test outcomes appear encouraging but not definitive (for instance, if a p-value is marginally higher than a specified significance threshold), one may be tempted to augment the dataset and undertake the investigation again. Traditional batch testing methods, however, do not allow for this approach.
In contrast, we seek to develop tests that are anytime valid. That is, we can repeatedly decide whether to collect more data based on the current state of the procedure without compromising later assessments, halting the process at any time and for any reason. Formally, a sequential test which is stopped at any given arbitrary moment can be represented by a random stopping time τ taking values in { 1, 2, …}∪{∞}. The stopping time τ denotes the random time at which the null hypothesis is rejected. A sequential test is called level-α if it satisfies the condition (τ < ∞) ≤α under the null for any stopping time τ.
In this work, we develop level-α sequential goodness-of-fit tests that accommodate for unnormalized densities, building on the concept of kernel Stein discrepancies. The type I error is controlled even if the tests are continuously monitored and adaptively stopped, hence automatically adapting the sample size to the unknown alternative. In particular, we believe these anytime-valid tests will be of remarkable interest in the following scenarios:
* Measuring the quality of a proposed distribution: We may have a specific candidate distribution (or set of candidate distributions) to model the data that does not have a computable normalizing constant (e.g., some complex network that models the interaction between genes). Does this distribution accurately model the true data generation process?
* Measuring sample quality: We may have obtained an unnormalized density, such as some posterior distribution in Bayesian statistics, that we wish to draw from using Markov Chain Monte Carlo (MCMC) methods. Is the chosen MCMC algorithm yielding samples accurately from such a density?
The anytime-valid guarantees allow for minimizing the sample size required to reject the null if it does not hold, such as the number of expensive gene experiments that we have to run in a lab, or the number of samples that we have to draw from a costly MCMC algorithm.
The work is organized as follows. Section <ref> presents the related work. Section <ref> introduces preliminary work, with special emphasis on the kernel Stein discrepancy and testing by betting. These constitute the theoretical foundations of the novel goodness-of-fit tests, presented in Section <ref>. Section <ref> subsequently provides a general derivation for the lower bounds required in the test, alongside different examples. The empirical validity of the tests is presented in Section <ref>, followed by concluding remarks in Section <ref>.
§ RELATED WORK
Our contribution falls within the scope of `sequential, anytime-valid inference', a field which encompasses confidence sequences <cit.> and e-processes <cit.>. In particular, we exploit the techniques of `testing by betting', which was recently popularized by <cit.>, and is based on applying Ville's maximal inequality <cit.> to a test (super)martingale <cit.>. Any other approach for constructing confidence sequences can be, in principle, outperformed by applying maximal inequalities to (super)martingale constructions <cit.>. Interest in this line of research has recently exploded; we refer the reader to <cit.> and the references therein for a detailed introduction to the field.
Concurrently, kernel methods have received widespread attention due to their notable empirical performance and theoretical guarantees. Reproducing kernel Hilbert spaces provide the theoretical foundation for these methods <cit.>. We highlight three predominant applications of kernel methods. First, the maximum mean discrepancy (MMD) has given way to kernel-based two sample tests <cit.>. Second, independence tests have been developed based on the Hilbert Schmidt independence criterion (HSIC) <cit.>. Third, kernel Stein discrepancies (KSD) have led to novel goodness-of-fit tests <cit.>.
At the intersection of testing by betting and kernels, <cit.> developed a sequential MMD for conducting two sample tests that can be arbitrarily stopped. Similarly, <cit.> extended the HSIC to the sequential setting. Both works rely on the uniform boundedness of the chosen kernel, which is key for the construction of nonnegative martingales. While some kernels commonly used for the MMD and HSIC (such as the RBF or Laplace kernels) are uniformly bounded, this is very rarely the case for the Stein kernel exploited by the KSD. The methodology proposed in this contribution does not assume uniform boundedness of the Stein kernel, which poses a number of theoretical challenges that will be addressed in the subsequent sections.
§ BACKGROUND
§.§ The kernel Stein discrepancy
Throughout this contribution we will draw upon the kernel Stein discrepancy (KSD), which is a distributional divergence that builds on the concept of reproducing kernel Hilbert space (RKHS) and the general Stein's method <cit.>. The KSD may be understood as a kernelized version of score-matching divergence <cit.>.
Reproducing kernel Hilbert spaces (RKHS): Consider 𝒳≠∅ and a Hilbert space (ℋ, ⟨·, ·⟩_ℋ) of functions f: 𝒳→ℝ. If there exists a function k: 𝒳×𝒳→ℝ satisfying (i) k(·, x) ∈ℋ for all x ∈𝒳, (ii) ⟨ f, k(·, x) ⟩_ℋ = f(x) for all x ∈𝒳 and f ∈ℋ, then ℋ is called an RKHS and k a reproducing kernel. We denote by ℋ^d the product RKHS containing elements h := (h_1, …, h_d) with h_i ∈ℋ and ⟨ h, h̃⟩ = ∑_i = 1^d ⟨ h_i, h̃_i⟩_ℋ.
Kernel Stein discrepancies (KSD): Assume now that P has density p on 𝒳⊂^d, and let ℋ be an RKHS with reproducing kernel k such that ∇ k(·, x) exists for all x ∈𝒳. Based on the Stein operator
(T_p h)(x) = ∑_i = 1^d( (∂log p(x)/∂ x_i) h_i(x) + ∂ h_i(x)/∂ x_i ), h ∈ℋ^d,
the KSD is defined as
KSD(Q, P) = sup_f∈ℬ_KSD𝔼_X ∼ Q[(T_p f)(X)],
where ℬ_KSD = { h ∈ℋ^d: ‖h‖_ℋ^d≤ 1 }. Defining s_p(x) := ∇_x log p(x) and ξ_p(·, x) := [ s_p(x) k(·, x) + ∇ k(·, x)] ∈ℋ^d, it follows that
h_p(x, x̃) : = ⟨ξ_p (·, x), ξ_p (·, x̃) ⟩_ℋ^d
=⟨ s_p(x), s_p(x̃) ⟩_^d k(x, x̃) + ⟨ s_p(x̃), ∇_x k(x, x̃) ⟩_^d
+ ⟨ s_p(x), ∇_x̃ k(x, x̃) ⟩_^d + ∇_x·∇_x̃ k(x, x̃)
is a reproducing kernel based on the Moore-Aronszajn theorem. Again, if _X∼ Q√(h_p(X, X)) < ∞, then KSD_(Q, P) = _X ∼ Q[ξ_p(·, X)]_ℋ^d. If k is universal and under certain regularity conditions, then KSD_(Q, P) = 0 if, and only if, Q = P <cit.>. In the batch setting and simple null hyposthesis, a test statistic is generally taken as a V-statistic or U-statistic, and parametric or wild bootstrap is used to calibrate the test. We refer the reader to <cit.> for the more challenging composite null hypothesis case.
§.§ Testing by betting
We now present the theoretical foundation of the sequential testing strategy that will be presented in Section <ref>, which is commonly referred to as testing by betting. The key idea behind this general, powerful concept relies on defining a test (super)martingale, and couple it with a betting interpretation <cit.>. Test (super)martingales allow for exploiting Ville's inequality <cit.> and hence they yield level-α sequential tests. In turn, this enables practitioners to peek at the data at anytime and decide whether to keep gathering data or stop accordingly while preserving theoretical guarantees.
Intuitively, let H_0 be the null hypothesis to be tested. A fictional bettor starts with an initial wealth of _0=1. The fictional bettor then bets sequentially on the outcomes (X_t)_t ≥ 1. To do so, at each round t, the fictional bettor chooses a payoff function _t: 𝒳→ [0, ∞) so that 𝔼_H_0[_t(X_t)|_t-1] ≤ 1, where _t-1 = σ(X_1, …, X_t-1) (this ensures a fair bet if the null is true).
After the outcome X_t is revealed, the bettor's wealth grows or shrinks by a factor of _t(X_t). The bettor's wealth is _t = _0 ∏_i=1^t _i(X_i) after t rounds of betting.
Under the null hypothesis, these fair bets ensure that the sequence (_t)_ t≥ 0 forms a `test supermartingale' (i.e., a nonnegative supermartingale that starts at 1), and in particular a `test martingale' if 𝔼_H_0[_t(X_t)|_t-1] is 1 almost surely. Based on Ville's inequality, rejecting the null if _t ≥ 1/α, where α∈ (0,1) is the desired confidence level, leads to sequential tests that control the type-I error at level α. Under the alternative H_1, the payoff functions {_t: t ≥ 1} should seek a fast growth rate of the wealth, ideally exponentially.
§ SEQUENTIAL GOODNESS-OF-FIT BY BETTING
We now consider a stream of data X_1, X_2, …∼ Q. In the context of testing by betting, it is natural to consider test martingales of the form
_t = _t-1× (1 + λ_t g_t(X_t)), _0 = 1,
τ := min{t ≥ 1: _t ≥ 1/α},
where λ_t and g_t are predictable and
* 𝔼_H_0[ g_t(X_t) |_t-1] ≤ 0 (to ensure _t is a supermartingale under the null hypothesis),
* g_t ≥ -1 and λ_t ∈ [0, 1] (to ensure that _t is non-negative),
so that _t is a test supermartingale. Intuitively, g_t(X_t) defines the payoff function, and λ_t the proportion of the wealth that the bettor is willing to risk at round t. Usually, the payoff function g_t is assumed to belong to a set 𝒢 that is uniformly bounded by one in absolute value and such that 𝔼_H_0[ g(X) |_t-1] ≤ 0 for all g ∈𝒢, and is chosen predictably seeking to maximize _t under the alternative.
For the Stein kernel h_p, we note that, if [h_p(X, X)] < ∞ <cit.>,
𝔼_H_0[ 1/t-1∑_i = 1^t-1 h_p(X_i, X_t)|_t-1] = 𝔼_H_0[ 1/t-1∑_i = 1^t-1⟨ξ_p (·, X_i), ξ_p(·, X_t) ⟩_ℋ^d|_t-1]
= 1/t-1∑_i = 1^t-1⟨ξ_p (·, X_i), 𝔼_H_0[ ξ_p (·, X_t) |_t-1] ⟩_ℋ^d
=1/t-1∑_i = 1^t-1⟨ξ_p (·, X_i), 0 ⟩_ℋ^d
= 0.
Hence, if h_p was uniformly bounded by one, we could define g_t(x) = 1/t-1∑_i = 1^t-1 h_p(X_i, x). Similar payoff functions have been proposed in <cit.> for two sample testing, and in <cit.> for independence testing. In those settings, it is natural to consider kernels that are indeed uniformly bounded by one, and this bound is not too loose[<cit.> proposed an extension to unbounded kernels. They exploit the symmetry (around zero) of their payoff function under the null, which follows from the nature of two sample testing. Such a symmetry does not hold for us.]. This is the case for the ubiquitous Gaussian (or RBF) and Laplace kernels.
In stark contrast, the Stein kernel h_p need not be uniformly bounded even if built from a uniformly bounded k (or it may be uniformly bounded by an extremely large constant, which would imply a remarkable loss in power if simply normalizing by that constant). Consider the inverse-multi quadratic (IMQ) kernel
k_IMQ(x, y) = (1 + x - y ^2)^-1/2
The IMQ kernel has been extensively employed as the base kernel for constructing the Stein kernel. The success of the IMQ kernel over other common characteristic kernels can be attributed to its slow decay rate <cit.>. For this reason, we build the Stein kernel h_p from k_IMQ throughout. Now take P = 𝒩(0, 1) to be a standard normal distribution.
In such a case, s_p(x) = -x, and h_p(x, x) = x^2 + 1. This implies that sup_x ∈ h_p(x, x) = ∞, i.e., the kernel h_p is not bounded.
Nonetheless, it is very possible that x ↦ h_p(x, x̃) is bounded for every fixed x̃. We can rewrite
h_p(x, x̃) = ⟨ξ_p (·, x), ξ_p (·, x̃) ⟩_ℋ^d = ‖ξ_p (·, x)‖_ℋ^d ‖ξ_p (·, x̃)‖_ℋ^dcosβ(x, x̃),
where β(x, x̃) is the angle between ξ_p (·, x) and ξ_p (·, x̃) in the Hilbert space ℋ^d. Interestingly, while the norms ‖ξ_p (·, x)‖_ℋ^d of the embeddings may not be uniformly bounded (this is equivalent to h_p not being uniformly bounded), cosβ(x, x̃) may decay to zero faster than ‖ξ_p (·, x)‖_ℋ^d grows to infinity (for any given x̃). From now on, we assume that we have access to a function M_p: 𝒳→_≥ 0 such that
M_p(x̃) ≥ -inf_x ∈𝒳 h_p(x, x̃).
The upper bound M_p plays a key role in the proposed test, as it allows for defining martingales that are nonnegative. We defer to Section <ref> a general method to derive such an upper bound, as well as specific examples of such derivation.
§.§ Simple null hypothesis
We are now ready to present the novel sequential goodness-of-fit tests. Let P be a distribution which is known up to its normalizing constant, and let us consider the null hypothesis H_0: Q = P against the alternative H_1: Q ≠ P. Note that the scores s_p(x) of P do not depend on such normalizing factors and thus the reproducing kernel h_p is computable.
Again, we emphasize that defining a wealth process based on the payoff function 1/t-1∑_i = 1^t-1 h_p(X_i, x) does not, in principle, yield a nonnegative martingale under the null. Nonetheless, the normalized payoff function
g_t(x) = ( (1/(t-1)) ∑_i = 1^t-1 h_p(X_i, x) ) / ( (1/(t-1)) ∑_i=1^t-1 M_p(X_i) ).
is lower bounded by -1. Hence, as long as the betting strategy λ_t is nonnegative and bounded above by 1, we are able to define a wealth process that forms a test martingale. The following theorem formally establishes such a result; the proof is deferred to Appendix <ref>.
Assume that [h_p(X, X')]=0, and let λ_t ∈ [0, 1] be predictable. The wealth process
_t = _t-1× (1 + λ_t g_t(X_t)), _0 = 1,
where g_t is defined as in (<ref>),
is a test martingale. The stopping time
τ := min{t ≥ 1: _t ≥ 1/α}
defines a level-α sequential test.
Algorithm <ref> summarizes the procedure introduced in this section, with a complexity of O(T^2) for T rounds. Note that [h_p(X, X')] = 0 under the null if [h_p(X, X)] < ∞, and thus in that case Theorem <ref> establishes the validity of Algorithm <ref> for any betting strategy that is predictable. However, the power of the test will heavily depend on the chosen betting strategy. For instance, if we take λ_t = 0 for all t (that is, we never bet any money), then the wealth will remain constant at _t = 1, and so the test is powerless. There exist a variety of betting strategies that have been studied in the literature. In this contribution, we focus on aGRAPA (`approximate GRAPA') and LBOW (`Lower-Bound On the Wealth'). We refer the reader to <cit.> for a detailed presentation of different betting strategies.
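For concreteness, a minimal Python sketch of this loop is given below; it reuses the stein_kernel_imq function from the earlier snippet, and the betting strategy is passed in as a function of the past payoffs (concrete choices follow). The interface is our own and only illustrative.

import numpy as np

def sequential_ksd_test(stream, score, M_p, bet, alpha=0.05):
    # stream: iterable of observations X_1, X_2, ...
    # score:  score function s_p of the unnormalized target density
    # M_p:    function satisfying M_p(x) >= -inf_{x'} h_p(x', x)
    # bet:    betting strategy mapping the list of past payoffs to lambda_t in [0, 1]
    xs, payoffs, wealth = [], [], 1.0
    for t, x_t in enumerate(stream, start=1):
        if t > 1:
            num = np.mean([stein_kernel_imq(x_i, x_t, score) for x_i in xs])
            den = np.mean([M_p(x_i) for x_i in xs])
            g_t = num / den                      # payoff g_t(X_t), lower bounded by -1
            lam = bet(payoffs)                   # predictable bet, uses only past payoffs
            wealth *= 1.0 + lam * g_t
            payoffs.append(g_t)
            if wealth >= 1.0 / alpha:
                return t, wealth                 # stop and reject the null
        xs.append(x_t)
    return None, wealth                          # data exhausted without rejection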
Let (g_t(X_t))_t ≥ 1∈ [-1, ∞)^ℕ denote a sequence of outcomes. Initialize λ_1^aGRAPA = 0. For each round t = 1, 2, …, observe the payoff g_t(X_t) and update
λ_t+1^aGRAPA = min( 1, max( 0, ( (1/(t-1)) ∑_i = 1^t-1 g_i(X_i) ) / ( (1/(t-1)) ∑_i = 1^t-1 g_i^2(X_i) ) ) ).
Let (g_t(X_t))_t ≥ 1∈ [-1, ∞)^ℕ denote a sequence of outcomes. Initialize λ_1^LBOW = 0. For each round t = 1, 2, …, observe the payoff g_t(X_t) and update
λ_t+1^LBOW = max( 0, ( (1/(t-1)) ∑_i = 1^t-1 g_i(X_i) ) / ( (1/(t-1)) ∑_i = 1^t-1 g_i(X_i) + (1/(t-1)) ∑_i = 1^t-1 g_i^2(X_i) ) ).
We have introduced versions of the betting strategies where we force λ_t to be nonnegative. This need not always be the case. The idea of not allowing for negative bets was exploited by <cit.> as well, motivated by the fact that positive payoffs are expected under the alternative. The motivation is double in this work, as we also expect positive payoffs under the alternative, but the nonnegativity of λ_t allows us to only having to lower bound the payoff function g_t, instead of lower and upper bounding it. While the aGRAPA strategy shows better empirical performance, LBOW allows for providing theoretical guarantees of the asymptotic wealth under the alternative.
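In code, the two nonnegative betting strategies can be sketched as follows; either function can be plugged in as the bet argument of the sequential_ksd_test sketch above.

import numpy as np

def agrapa(payoffs):
    # aGRAPA bet for the next round, clipped to [0, 1], computed from the past payoffs.
    if not payoffs:
        return 0.0
    g = np.asarray(payoffs, dtype=float)
    m1, m2 = float(g.mean()), float(np.mean(g ** 2))
    return 0.0 if m2 == 0.0 else float(np.clip(m1 / m2, 0.0, 1.0))

def lbow(payoffs):
    # LBOW bet for the next round, computed from the past payoffs; it never exceeds the aGRAPA bet.
    if not payoffs:
        return 0.0
    g = np.asarray(payoffs, dtype=float)
    m1, m2 = float(g.mean()), float(np.mean(g ** 2))
    return 0.0 if m1 <= 0.0 else float(m1 / (m1 + m2))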
Assume [h_p(X, X)] < ∞ and [M_p(X)] < ∞. Denote g^*(x) := [ h_p(X, x)] / [M_p(X)]. Under H_1, if [g^*(X)] > 0, the LBOW betting strategy yields
lim inf_t →∞ (log_t)/t ≥ ( ([g^*(X)])^2 / 2 ) / ( [g^*(X)] + [(g^*(X))^2] ) =: r^*.
It follows that _t a.s.→∞ and _H_1 (τ < ∞) = 1.
Informally, the above theorem states that under the alternative, _t ≥exp(r^* t (1-o(1))), meaning that up to asymptotically negligible terms, the wealth grows exponentially fast (in the number of data points t) at the rate r^*. Note that under the null, [h_p(X, X')] = 0, implying [g^*(X)] = 0, and thus r^* = 0, which accords with our claim that under the null, the wealth is a nonnegative martingale (whose expectation stays constant with t) and thus does not grow with t. This then implies that the stopping time of the test, which is the time at which the wealth exceeds 1/α, is (up to leading order) given by the expression log(1/α)/r^*.
The proof of Theorem <ref> is deferred to Appendix <ref>. We point out that the unboundedness of the Stein kernel does not allow for easily extending the arguments presented in <cit.>, which rely on a different betting strategy whose guarantees stem from uniformly bounded payoff functions. Furthermore, we highlight the mildness of the assumptions in Theorem <ref>. Assumption [M_p(X)] < ∞ only requires the first moment of the upper bounds to exist. This condition usually reduces to the existence of a specific moment of the original distribution, which is often easily verifiable. Assumption [h_p(X, X)] < ∞ is an ubiquitous assumption in the KSD theory; it is equivalent to the existence of the second moment of ξ_p(·, X)_ℋ^d, which is precisely the theoretical object that the KSD builds on. Finally, note that P ≠ Q does not necessarily imply [g^*(X)] > 0 unless some regularity conditions hold, such as C_0-universality of k and [∇log (p(X)/q(X))] < ∞. These conditions have been extensively studied in the batch setting <cit.>, and are inherent to the KSD approach.
§.§ Composite null hypothesis
Let now 𝒫 = { P_θ: θ∈Θ} be a set of distributions with unknown normalizing constants parametrized by θ. By a slight abuse of notation, we denote s_θ = s_p_θ, h_θ = h_p_θ, M_θ = M_p_θ, and so on. Consider the null hypothesis H_0: Q ∈𝒫 against H_1: Q ∉𝒫. We define
g_t^θ(x) = ( (1/(t-1)) ∑_i = 1^t-1 h_θ(X_i, x) ) / ( (1/(t-1)) ∑_i=1^t-1 M_θ(X_i) ) ∀θ∈Θ.
We note that, under the null hypothesis, there exists θ_0 ∈Θ such that Q = P_θ_0. Inspired by universal inference, we propose to consider the wealth process
_t^C = min_θ∈Θ _t^θ.
_t^C is dominated by a test supermartingale, and τ is a level-α sequential test.
The minimizer may be computed differently depending on the nature of the problem. If Θ is finite, then it is obtained as the minimum of a discrete set. For arbitrary Θ, it may be computed using numerical optimisation algorithms.
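For a finite candidate set Θ, a minimal sketch of the composite-null procedure simply tracks one wealth process per θ and compares their minimum to 1/α; for a continuous Θ, the minimization would instead be carried out numerically, as noted above. The snippet reuses the earlier sketches and its interface is our own.

import numpy as np

def sequential_ksd_composite(stream, thetas, score_of, M_of, bet, alpha=0.05):
    # thetas:   finite collection of candidate parameters
    # score_of: maps theta to the score function s_theta
    # M_of:     maps theta to the bound function M_theta
    xs = []
    payoffs = {th: [] for th in thetas}
    wealth = {th: 1.0 for th in thetas}
    for t, x_t in enumerate(stream, start=1):
        if t > 1:
            for th in thetas:
                num = np.mean([stein_kernel_imq(x_i, x_t, score_of(th)) for x_i in xs])
                den = np.mean([M_of(th)(x_i) for x_i in xs])
                g_t = num / den
                wealth[th] *= 1.0 + bet(payoffs[th]) * g_t
                payoffs[th].append(g_t)
            if min(wealth.values()) >= 1.0 / alpha:   # wealth of the composite test
                return t, wealth
        xs.append(x_t)
    return None, wealth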
§ DERIVATION OF SENSIBLE BOUNDS
§.§ A general approach
We highlight that the derivation of the bounds M_p(x) is key to this approach. While these bounds heavily depend on the distribution P through its score function s_p, we explore general ways of deriving bounds M_p(x). We focus on deriving bounds for the IMQ kernel. Nonetheless, we highlight that similar derivations would follow for other kernels, such as the RBF or Laplace kernels. We start by noting that, for the IMQ kernel,
∇_xk(x, x̃) = -(1 + x - x̃^2)^-3/2(x - x̃),
∇_x·∇_x̃ k(x, x̃) = -3(1 + x - x̃^2)^-5/2x-x̃^2 + d(1 + x - x̃^2)^-3/2.
Hence, for a fixed x̃,
* k(x, x̃) is O(x - x̃^-1),
* ∇_x k(x, x̃) is O( x - x̃^-2) ,
* ∇_x·∇_x̃ k(x, x̃) ≥min(-3 + d, 0).
In order to explicitly obtain a bound, we work with each of the terms (i) |⟨ s_p(x), s_p(x̃) ⟩_^d k(x, x̃)| ≤ ‖s_p(x)‖ ‖s_p(x̃)‖ | k(x, x̃)|, (ii) ⟨ s_p(x̃), ∇_x k(x, x̃) ⟩_^d + ⟨ s_p(x), ∇_x̃ k(x, x̃) ⟩_^d, (iii) ∇_x·∇_x̃ k(x, x̃) separately. For term (i), we upper bound ‖s_p(x)‖ by γ(‖x - x̃‖) + γ'(x̃), where γ and γ' are appropriate functions. For term (ii), we note that
⟨ s_p(x̃), ∇_x k(x, x̃) ⟩_^d + ⟨ s_p(x), ∇_x̃ k(x, x̃) ⟩_^d = ⟨ s_p(x̃) - s_p(x), ∇_x k(x, x̃) ⟩_^d
= -⟨ s_p(x̃) - s_p(x), (1 + ‖x - x̃‖^2)^-3/2(x - x̃)⟩_^d
= -(1 + ‖x - x̃‖^2)^-3/2⟨ s_p(x̃) - s_p(x), x - x̃⟩_^d,
so it suffices to work with s_p(x̃) - s_p(x) and upper bound ‖s_p(x̃) - s_p(x)‖ by Γ(‖x - x̃‖) + Γ'(x̃), where Γ and Γ' are again appropriate functions. Note that (iii) has already been lower bounded. The choices of γ, γ', Γ, Γ' will become clear in the next subsection, where we provide specific examples.
§.§ Specific examples of bound derivations
We present specific instances of bound derivations in order to elucidate the use of the general approach. The bounds obtained here will be later used in the experiments displayed in Section <ref>.
Gaussian distribution: Let us consider P_θ = 𝒩(θ, 1), i.e., a normal distribution with mean θ and unit variance, and k to be the inverse multi-quadratic kernel. Given that s_θ(x) = - (x - θ), we first derive
|⟨ s_θ(x), s_θ(x̃) ⟩ k(x, x̃)| ≤ | x - θ | | x̃ - θ | k(x, x̃)
≤ ( | x - x̃ | + | x̃ - θ | ) | x̃ - θ | k(x, x̃)
≤ | x̃ - θ | ( 1 + | x̃ - θ | ),
where the last inequality can be easily derived by separately considering the two cases |x - x̃| ≤ 1 and |x - x̃| > 1. Secondly,
-(1 + | x - x̃ |^2)^-3/2⟨ s_p(x̃) - s_p(x), x - x̃⟩ = -(1 + | x - x̃ |^2)^-3/2 | x - x̃ |^2 ∈ [-1, 0].
For a fixed x̃, it thus follows that
* |⟨ s_θ(x), s_θ(x̃) ⟩ k(x, x̃)| ≤ |x̃ - θ | { 1 + | x̃ - θ |},
* ⟨ s_θ(x̃), ∇_x k(x, x̃) ⟩ + ⟨ s_θ(x), ∇_x̃ k(x, x̃) ⟩∈ [-1, 0],
* ∇_x·∇_x̃ k(x, x̃) ≥ -2.
Consequently, it suffices to define M_θ(x̃) = |x̃ - θ | { 1 + | x̃ - θ |} + 3. Note that [M_θ(X)] < ∞ given that the first two moments of a Gaussian distribution are finite.
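As an illustration, the bound for this Gaussian example can be coded and checked numerically as follows; the check reuses the stein_kernel_imq sketch from Section 3 and is only a sanity check of the inequality h_p(x, x̃) + M_θ(x̃) ≥ 0, not part of the test itself.

import numpy as np

def M_gaussian(x_tilde, theta=0.0):
    # M_theta(x~) = |x~ - theta| (1 + |x~ - theta|) + 3 for the N(theta, 1) target.
    r = abs(np.asarray(x_tilde, dtype=float).item() - theta)
    return r * (1.0 + r) + 3.0

theta = 0.0
score = lambda x: -(np.asarray(x, dtype=float) - theta)   # score of N(theta, 1)
grid = np.linspace(-10.0, 10.0, 101)
vals = [stein_kernel_imq(np.array([a]), np.array([b]), score) + M_gaussian(b, theta)
        for a in grid for b in grid]
assert min(vals) >= 0.0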
Intractable model: Following the examples in <cit.> and <cit.>, we consider the intractable model with density p_θ(x) ∝exp(θ_1 tanh x_1 + θ_2 tanh x_2 - 0.5 ‖x‖^2), where θ = (θ_1, θ_2)∈^2, x ∈^3. For θ = 0, we recover the density of a Gaussian distribution 𝒩(0, I_3). Given that
s_θ(x) = (θ_1(1 - tanh^2 x_1), θ_2(1 - tanh^2 x_2), 0)^T - x,
we first derive
‖ (θ_1(1 - tanh^2 x_1), θ_2(1 - tanh^2 x_2), 0)^T - x ‖ ≤ ‖ (θ_1(tanh^2 x̃_1 - tanh^2 x_1), θ_2(tanh^2 x̃_2 - tanh^2 x_2), 0)^T ‖
+ ‖ x̃ - (θ_1(1 - tanh^2 x̃_1), θ_2(1 - tanh^2 x̃_2), 0)^T ‖ + ‖ x̃ - x ‖
≤ ‖θ‖ + ‖ s_θ(x̃) ‖ + ‖ x̃ - x ‖,
and so
|⟨ s_θ(x), s_θ(x̃) ⟩ k(x, x̃)| ≤ ‖ s_θ(x) ‖ ‖ s_θ(x̃) ‖ k(x, x̃)
≤ ( ‖θ‖ + ‖ s_θ(x̃) ‖ + ‖ x̃ - x ‖ ) ‖ s_θ(x̃) ‖ k(x, x̃)
≤ ( ‖θ‖ + ‖ s_θ(x̃) ‖ + 1 ) ‖ s_θ(x̃) ‖.
Secondly,
‖ s_p(x̃) - s_p(x) ‖ ≤ ‖θ‖ + ‖ x - x̃ ‖,
and so
| (1 + ‖ x - x̃ ‖^2)^-3/2⟨ s_p(x̃) - s_p(x), x - x̃⟩ | ≤ (1 + ‖ x - x̃ ‖^2)^-3/2 ‖ x - x̃ ‖ ( ‖θ‖ + ‖ x - x̃ ‖ )
≤ ‖θ‖ + 1.
For a fixed x̃, it thus follows that
* |⟨ s_θ(x), s_θ(x̃) ⟩ k(x, x̃)| ≤ ( ‖θ‖ + ‖ s_θ(x̃) ‖ + 1 ) ‖ s_θ(x̃) ‖,
* |⟨ s_θ(x̃), ∇_x k(x, x̃) ⟩ + ⟨ s_θ(x), ∇_x̃ k(x, x̃) ⟩ | ≤ ‖θ‖ + 1,
* ∇_x·∇_x̃ k(x, x̃) ≥ 0,
Hence it suffices to define M_θ(x̃) = ( ‖θ‖ + ‖ s_θ(x̃) ‖ + 1 ) ‖ s_θ(x̃) ‖ + ‖θ‖ + 1. Note that [M_θ(X)] < ∞ given that the distribution has Gaussian-type tails, and so its first two moments are finite.
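The corresponding score and bound for this intractable model can be sketched as follows (function names are ours):

import numpy as np

def score_intractable(x, theta):
    # Score of p_theta(x) proportional to exp(theta_1 tanh x_1 + theta_2 tanh x_2 - 0.5 ||x||^2), x in R^3.
    x = np.asarray(x, dtype=float)
    s = -x
    s[0] += theta[0] * (1.0 - np.tanh(x[0]) ** 2)
    s[1] += theta[1] * (1.0 - np.tanh(x[1]) ** 2)
    return s

def M_intractable(x_tilde, theta):
    # M_theta(x~) = (||theta|| + ||s_theta(x~)|| + 1) ||s_theta(x~)|| + ||theta|| + 1.
    s_norm = float(np.linalg.norm(score_intractable(x_tilde, theta)))
    th_norm = float(np.linalg.norm(np.asarray(theta, dtype=float)))
    return (th_norm + s_norm + 1.0) * s_norm + th_norm + 1.0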
Gaussian-Bernoulli Restricted Boltzmann Machine: We consider P to be a Gaussian-Bernoulli Restricted Boltzmann Machine, following related contributions in the literature <cit.>. It is a graphical model that includes a binary hidden variable h, taking values in {1, -1}^d_h, and a continuous observable variable x within ℝ^d. These variables are linked by the joint density function
p(x, h) = (1/Z) exp( (1/2) x^T B h + b^T x + c^T h - (1/2) ‖x‖_2^2 ),
where Z is the normalizing constant. It follows that the density p of x is
p(x) = ∑_h ∈{ - 1, 1}^d_h p(x, h).
The computation of p for large dimension d_h is intractable; nonetheless, the score function is computable as
s_p(x) = b - x + (B/2) ϕ(B^T x/2 + c), ϕ(y) = (e^2y - 1)/(e^2y + 1).
We first derive
‖ (B/2) ϕ(B^T x/2 + c) - (B/2) ϕ(B^T x̃/2 + c) ‖ (i)≤ ‖B‖_op √(d_h),
where (i) is obtained given that ϕ(y) ∈ [-1, 1]^d_h for all y. Thus
|⟨ s_p(x), s_p(x̃) ⟩ k(x, x̃)| ≤ ‖ s_p(x) ‖ ‖ s_p(x̃) ‖ k(x, x̃)
≤ ( ‖ s_p(x̃) ‖ + ‖ x̃ - x ‖ + ‖ (B/2) ϕ(B^T x/2 + c) - (B/2) ϕ(B^T x̃/2 + c) ‖ ) ‖ s_p(x̃) ‖ k(x, x̃)
≤ ( ‖ s_p(x̃) ‖ + 1 + ‖B‖_op √(d_h) ) ‖ s_p(x̃) ‖.
Secondly,
‖ s_p(x̃) - s_p(x) ‖ ≤ ‖B‖_op √(d_h) + ‖ x - x̃ ‖,
and so
| (1 + ‖ x - x̃ ‖^2)^-3/2⟨ s_p(x̃) - s_p(x), x - x̃⟩ | ≤ (1 + ‖ x - x̃ ‖^2)^-3/2 ‖ x - x̃ ‖ ( ‖B‖_op √(d_h) + ‖ x - x̃ ‖ )
≤ ‖B‖_op √(d_h) + 1.
Lastly, we have that ⟨∇_x k(·, x), ∇_x̃ k(·, x̃) ⟩_ℋ^d≥ 0. Hence it suffices to define M_p(x̃) = ( ‖ s_p(x̃) ‖ + 1 + ‖B‖_op √(d_h) ) ‖ s_p(x̃) ‖ + ‖B‖_op √(d_h) + 1. We can further upper bound ‖B‖_op ≤ ‖B‖_fr, with the Frobenius norm being easily computable. Note that [M_p(X)] < ∞ given that the distribution has Gaussian-type tails, and so its first two moments are finite.
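A sketch of the RBM score and of the resulting bound, using the Frobenius norm in place of the operator norm as suggested above (function names are ours):

import numpy as np

def score_rbm(x, B, b, c):
    # s_p(x) = b - x + (B / 2) tanh(B^T x / 2 + c); note that phi(y) = (e^{2y} - 1) / (e^{2y} + 1) = tanh(y).
    x = np.asarray(x, dtype=float)
    return b - x + 0.5 * B @ np.tanh(0.5 * B.T @ x + c)

def M_rbm(x_tilde, B, b, c):
    # M_p(x~) = (||s_p(x~)|| + 1 + ||B|| sqrt(d_h)) ||s_p(x~)|| + ||B|| sqrt(d_h) + 1.
    d_h = B.shape[1]
    s_norm = float(np.linalg.norm(score_rbm(x_tilde, B, b, c)))
    B_term = float(np.linalg.norm(B)) * np.sqrt(d_h)   # Frobenius norm upper bounds the operator norm
    return (s_norm + 1.0 + B_term) * s_norm + B_term + 1.0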
§ EXPERIMENTS
We present here the empirical performance of the test. We consider the distributions introduced in the previous section, using the upper bounds derived therein. Gaussian distribution: We take θ = 0 under the null, and θ = 1 under the alternate. Intractable model: We take θ = (0, 0) under the null, and θ = (1, 1) under the alternate. Gaussian-Bernoulli Restricted Boltzmann Machine: We take d_h = 10, and d = 50. We sample from it using Gibbs sampling with a burn-in of 1000 iterations. Under the null, we take b = 0, c = 0, and matrix B such that B_ij is one if visible node i is connected to hidden node j, and zero otherwise, with each hidden node connected to five visible nodes. We consider two different alternatives. For one of them, each entry of B is shifted by 0.5. For the other one, b = 1.
Figure <ref> exhibits the performance of the proposed test under the four alternatives considered, with α = 0.05. We emphasize the exponential growth of the wealth process. This behaviour is expected, in view of Theorem <ref> and the fact that all these examples fulfil regularity conditions so that _H_1[g^*(X)] > 0. Furthermore, we emphasize that aGRAPA empirically outperforms LBOW. This is due to the fact that λ_t+1^aGRAPA > λ_t+1^LBOW, so aGRAPA bets more aggressively. While not shown, we highlight that all the tests control the type-1 error at the desired level 0.05. We defer to Appendix <ref> a discussion concerning the empirical performance of the proposed test and the fixed sample size kernel Stein discrepancy test, and to Appendix <ref> an empirical verification of the lower bound derived in Theorem <ref>.
§ CONCLUSION
We have developed a sequential version of the kernel Stein discrepancy, which gives way to goodness-of-fit tests that can handle distributions with unknown normalizing constants and can be continuously monitored and adaptively stopped. We have done so by combining tools from testing by betting with RKHS theory, while avoiding assuming uniform boundedness of the Stein reproducing kernel. We have proved the validity of the test, as well as exponential growth of the wealth process under the alternative and mild regularity conditions. Our experiments have exhibited the empirical performance of the test in a variety of scenarios.
In this contribution, we have presented a novel martingale construction that does neither exploit nor need uniform boundedness of the kernel. While the theory presented here has been developed for and motivated by the Stein kernel, such a martingale construction may be exploited by any kernel that is either unbounded or uniformly bounded by a constant that is too loose. This opens the door to develop sequential two sample and independence tests with kernels that are currently unexplored, which conforms an exciting direction of research.
§.§.§ Acknowledgements
DMT gratefully acknowledges that the project that gave rise to these results received the support of a fellowship from `la Caixa' Foundation (ID 100010434). The fellowship code is LCF/BQ/EU22/11930075. AR was funded by NSF grant DMS-2310718.
§ ADDITIONAL EXPERIMENTS
§.§ Comparison to batch setting
In Section <ref>, we exhibited the performance of the proposed, sequential test. We emphasize that the proposed test is anytime valid, counting with the subsequent substantial advantages that have been discussed in the main body of the work. Thus, it is expected to be outperformed in the batch setting by algorithms that are tailored to fixed sample sizes. Nonetheless, it is of interest to explore how the proposed test compares with the more classical kernel Stein discrepancy test. We present in this appendix the empirical performance of both the sequential and fixed sample size algorithms.
Figure <ref> exhibits the proportion of rejections for the Gaussian distribution considered in Section <ref> with different alternatives. As expected, the classical batch setting test outperforms the proposed test for fixed sample sizes, with the proportions of rejections of the former being always larger than those of the latter. Nonetheless, the figure illustrates the convenience of employing anytime valid tests. If using the classical kernel Stein discrepancy test, 50 observations suffice to reject the null hypothesis for every sample in the right plot, but they only allow to reject for around 80% of the samples in the left plot. The null hypothesis cannot be ever rejected for the samples encompassed in the remaining 20%: based on the interpretation of p-values, it is not possible to keep gathering evidence to rerun the test. In stark contrast, all the null hypotheses are eventually rejected by the proposed test, being always able to collect more data and keep running the experiment with anytime validity.
§.§ Empirical verification of the lower bound of the wealth
In Section <ref>, we stressed that the stopping time τ of the proposed test (i.e. τ is the smallest t verifying _t ≥ 1/α) is roughly upper bounded by log(1/α) / r^* (where r^* is defined as in Theorem <ref>). We devote this section to exhibit the empirical validity of this claim. Note that, while r^* is unknown in practice, it can be easily estimated via a Monte Carlo approach when we have access to the ground truth distribution (as it is the case in our experimental settings), given that it only depends on the first and second moments of the h_p and M_p evaluations.
Figure <ref> displays the stopping times τ of the proposed test for the Gaussian setting (presented in Section <ref> and Section <ref>) as a function of r^*, both for the LBOW and aGRAPA strategies. We take θ_0 = 0 under the null, and a range of θ_1 under different alternatives. Intuitively, the larger the distance between the means θ_0 and θ_1 is, the larger the theoretical quantities r^* become. Thus, this setting provides a range of r^* for which we can study the empirical stopping times τ. Figure <ref> shows that the average stopping time curve, as well as its empirical 95% confidence interval, are empirically dominated by the upper bound log(1/α)/r^*
(and once more, we note that the aGRAPA strategy empirically outperforms the LBOW strategy).
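For completeness, the Monte Carlo estimate of r^* mentioned above can be sketched as follows; it plugs sample means into the expression for r^*, reuses the stein_kernel_imq sketch, and assumes the samples are i.i.d. draws from the ground-truth Q.

import numpy as np

def estimate_r_star(samples, score, M_p):
    # g^*(x) is approximated by mean_i h_p(X_i, x) / mean_i M_p(X_i), and then
    # r^* by ((mean g^*)^2 / 2) / (mean g^* + mean (g^*)^2).
    samples = [np.atleast_1d(np.asarray(s, dtype=float)) for s in samples]
    M_bar = float(np.mean([M_p(x) for x in samples]))
    g_star = np.array([np.mean([stein_kernel_imq(x_i, x_j, score) for x_i in samples]) / M_bar
                       for x_j in samples])
    m1, m2 = float(g_star.mean()), float(np.mean(g_star ** 2))
    return (m1 ** 2 / 2.0) / (m1 + m2)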
§ PROOFS
§.§ Notation
Throughout, we denote
g_i(x) = ( (1/(i-1)) ∑_k = 1^i-1 h_p(X_k, x) ) / ( (1/(i-1)) ∑_k=1^i-1 M_p(X_k) ), g^*(x) = [ h_p(X, x)] / [M_p(X)],
f_i(x) = (1/(i-1)) ∑_k = 1^i-1 h_p(X_k, x), f^*(x) = [ h_p(X, x)],
M^i_p = (1/(i-1)) ∑_k=1^i-1 M_p(X_k), M_p = E[M_p(X)].
§.§ Preliminary results
For completeness, we enunciate the following theorems, which will be exploited in subsequent proofs.
Let B be a separable Banach space with norm ‖·‖_B. Let (χ_t)_t ≥ 1 be a sequence of i.i.d. integrable B-valued random variables. Then
‖ (1/t) ∑_i = 1^t χ_i - [χ] ‖_B a.s.→ 0.
Let (S_t)_t ≥ 0 be a nonnegative supermartingale process adapted to a filtration {_t ≥ 0 }. It holds that
(∃ t ≥ 1 : S_t ≥ x) ≤[S_0]/x
for any x > 0.
§.§ Auxiliary results
We present here two results that Theorem <ref> builds on.
If [√(h_p(X, X))] < ∞ and M_p < ∞, then
1/t∑_i=1^t | g_i(X_i) - g^*(X_i) | a.s.→ 0, 1/t∑_i=1^t g_i(X_i) a.s.→[g^*(X)].
Without loss of generality, we can assume that M_p > 0. We first highlight that, based on 0 < [√(h_p(X, X))] < ∞,
* [ f^*(X) ] is finite, as [ f^*(X) ] = _X, X'[ h_p(X', X)] ≤_X, X'[ ξ_p(·, X) _ℋ^dξ_p(·, X') _ℋ^d] = _X[ ξ_p(·, X) _ℋ^d]^2 = [√(h_p(X, X))]^2 < ∞,
* ξ_p(·, X) _ℋ^d = [√(h_p(X, X))] is finite.
* [ g^*(X)] is finite, as g^*(x) = f^*(x) / M_p, [ f^*(X) ] is also finite and M_p > 0.
We now note that 1/t∑_i=1^t | g_i(X_i) - g^*(X_i) | a.s.→ 0 implies 1/t∑_i=1^t g_i(X_i) a.s.→[g^*(X)], i.e., 1/t∑_i=1^t g_i(X_i) - [g^*(X)] a.s.→ 0. To see this, we decompose
1/t∑_i=1^t g_i(X_i) - [g^*(X)] = 1/t∑_i=1^t g_i(X_i) - 1/t∑_i=1^t g^*(X_i) + 1/t∑_i=1^t g^*(X_i) - [g^*(X)] ,
and we highlight that the second term converges to zero almost surely in view of the strong law of large numbers (SLLN) and [ g^*(X)] < ∞. Further, 1/t∑_i=1^t | g_i(X_i) - g^*(X_i) | a.s.→ 0 implies 1/t∑_i=1^t g_i(X_i) - 1/t∑_i=1^t g^*(X_i) a.s.→ 0.
Hence, it remains to prove that
1/t∑_i=1^t | g_i(X_i) - g^*(X_i) | a.s.→ 0.
That is, for any given ϵ > 0 and δ > 0, there exists N ∈ℕ such that
ℙ( sup_t ≥ N (1/t) ∑_i=1^t |g_i(X_i) - g^*(X_i) | > ϵ ) ≤δ.
In order to prove the result, we are going to decompose 1/t∑_i=1^t |g_i(X_i) - g^*(X_i) |, and then combine the scalar-valued SLLN and the Banach space-valued SLLN. We will finish the proof by taking a union bound over the different terms.
Introducing the SLLN terms and the probabilistic bounds: By the scalar valued SLLN and the finiteness of M_p > 0, [ f^*(X) ], and ξ_p(·, X) _ℋ^d, we have that
* M_p^i a.s.→ M_p, and so 1/M_p^ia.s.→1/M_p (given that M_p ≠ 0),
* 1/t∑_i=1^t|f^*(X_i)| a.s.→[ f^*(X) ],
* 1/t∑_i=1^tξ_p(·, X_i) _ℋ^da.s.→ξ_p(·, X) _ℋ^d.
From the Banach space-valued SLLN (Theorem <ref>) and the finiteness of ξ_p(·, X) _ℋ^d, it also follows that
* 1/i-1∑_k = 1^i-1ξ_p (·, X_k) - [ξ_p (·, X)] _ℋ^d 0.
Hence there exist B > 0 and N_1 ∈ such that
ℙ( sup_i ≥ N_1 | 1/M_p^i | > B ) ≤δ/10,
ℙ( sup_t ≥ N_1 (1/t) ∑_i=1^t |f^*(X_i)| > B ) ≤δ/10,
ℙ( sup_t ≥ N_1 (1/t) ∑_i=1^t ‖ξ_p(·, X_i)‖_ℋ^d > B ) ≤δ/10,
ℙ( sup_i ≥ N_1 | 1/M_p^i - 1/M_p | > ϵ/(4B) ) ≤δ/10,
ℙ( sup_i ≥ N_1 ‖ (1/(i-1)) ∑_k = 1^i-1ξ_p (·, X_k) - [ξ_p (·, X)] ‖_ℋ^d > ϵ/(4B^2) ) ≤δ/10.
Note that ℙ( sup_t ≥ N_1 (1/t) ∑_i=1^t ‖ξ_p(·, X_i)‖_ℋ^d > B ) ≤δ/10 and ℙ( sup_t ≥ N_1 (1/t) ∑_i=1^t |f^*(X_i)| > B ) ≤δ/10 imply that ℙ( sup_t ≥ 2N_1 (1/(t-N_1)) ∑_i=N_1^t ‖ξ_p(·, X_i)‖_ℋ^d > B ) ≤δ/10 and ℙ( sup_t ≥ 2N_1 (1/(t-N_1)) ∑_i=N_1^t |f^*(X_i)| > B ) ≤δ/10, given that the data are iid.
Combining the probabilistic bounds: We start by noting that
1/t∑_i=1^t |g_i(X_i) - g^*(X_i) | = 1/t∑_i=1^N_1-1| g_i(X_i) - g^*(X_i) | + 1/t∑_i=N_1^t| g_i(X_i) - g^*(X_i) |.
Consider the sequence of random variables Y_t = 1/t(∑_i=1^N_1-1| g_i(X_i) - g^*(X_i) |). Clearly, Y_t a.s.→ 0. Thus there exists N_2 such that sup_t ≥ N_2 Y_t ≤ϵ/2 with probability at least 1 - δ /2.
Moreover,
1/t∑_i=N_1^t| g_i(X_i) - g^*(X_i) | = 1/t∑_i=N_1^t| 1/M_p^i f_i(X_i) - 1/M_pf^*(X_i) |
= 1/t∑_i=N_1^t| 1/M_p^i f_i(X_i) - 1/M_p^i f^*(X_i) + 1/M_p^i f^*(X_i) - 1/M_pf^*(X_i) |
≤1/t∑_i=N_1^t| 1/M_p^i f_i(X_i) - 1/M_p^i f^*(X_i) |_(I) +
+1/t∑_i=N_1^t|1/M_p^i f^*(X_i) - 1/M_pf^*(X_i) |_(II).
We now handle terms (I) and (II) separately. For term (II), we observe that
sup_t ≥ 2N_11/t∑_i=N_1^t|1/M_p^i f^*(X_i) - 1/M_pf^*(X_i) |
= sup_t ≥ 2N_11/t∑_i=N_1^t| 1/M_p^i - 1/M_p| |f^*(X_i)|
≤sup_t ≥ 2N_1|1/M_p^i - 1/M_p| sup_t ≥ 2N_11/t∑_i=N_1^t|f^*(X_i)|
≤sup_t ≥ N_1|1/M_p^i - 1/M_p| sup_t ≥ 2N_11/t-N_1∑_i=N_1^t|f^*(X_i)|.
Note that this is upper bounded by ϵ/4B B = ϵ/4 with probability 1 - 2δ/10 in view of the union bound.
For term (I), we have that
sup_t ≥ 2N_11/t∑_i=N_1^t| 1/M_p^i f_i(X_i) - 1/M_p^i f^*(X_i) | ≤sup_t ≥ N_1|1/M_p^i| sup_t ≥ 2N_11/t∑_i=N_1^t|f_i(X_i) - f^*(X_i)|.
Now we observe that
sup_t ≥ 2N_11/t∑_i=N_1^t|f_i(X_i) - f^*(X_i)| = sup_t ≥ 2N_11/t∑_i=N_1^t⟨1/i-1∑_k = 1^i-1ξ_p (·, X_k) - [ξ_p (·, X)], ξ_p(·, X_i) ⟩_ℋ^d
≤sup_t ≥ 2N_11/t∑_i=N_1^t1/i-1∑_k = 1^i-1ξ_p (·, X_k) - [ξ_p (·, X)] _ℋ^dξ_p(·, X_i) _ℋ^d
≤sup_i ≥ 2N_1{1/i-1∑_k = 1^i-1ξ_p (·, X_k) - [ξ_p (·, X)] _ℋ^d}sup_t ≥ 2N_1{1/t∑_i=N_1^tξ_p(·, X_i) _ℋ^d}
≤sup_i ≥ N_1{1/i-1∑_k = 1^i-1ξ_p (·, X_k) - [ξ_p (·, X)] _ℋ^d}sup_t ≥ 2N_1{1/t-N_1∑_i=N_1^tξ_p(·, X_i) _ℋ^d}
Note that this is upper bounded by (ϵ/4B^2) B = ϵ/4B with probability 1 - 2δ/10 in view of the union bound. Thus term (I) is upper bounded by (ϵ/4B) B = ϵ/4 with probability 1 - 3δ/10 in view of the union bound.
Concluding the step by considering the union bound over all the terms: Taking M:= max(2N_1, N_2), we conclude that
sup_t ≥ M1/t∑_i=1^t | g_i(X_i) - g^*(X_i) | ≤ϵ/2 + ϵ/4 + ϵ/4 = ϵ
with probability 1 - (δ/2 + 2δ/10 + 3δ/10) = 1 - δ in view of the union bound.
If 𝔼[h_p(X, X)] < ∞ and M_p < ∞, then
1/t∑_i=1^t g_i^2(X_i) a.s.→𝔼[ g^*(X)^2].
Without loss of generality, we can further assume that M_p > 0. We first highlight that, based on 0 < [h_p(X, X)] < ∞,
* both [ f^*(X) ] and [(f^*(X))^2] are finite, as [(f^*(X))^2] = _X[ _X'[ h^2_p(X', X)]] = _X, X'[ h^2_p(X', X)] ≤_X, X'[ ξ_p(·, X) _ℋ^d^2 ξ_p(·, X') _ℋ^d^2 ] = _X[ ξ_p(·, X) _ℋ^d^2 ]^2 = [h_p(X, X)]^2 < ∞ and ^2[f^*(X)] ≤[(f^*(X))^2],
* both ξ_p(·, X) _ℋ^d and ξ_p(·, X) ^2_ℋ^d is finite, as _X[ ξ_p(·, X) _ℋ^d^2 ] = [h_p(X, X)] < ∞ and ^2ξ_p(·, X) _ℋ^d≤ξ_p(·, X) ^2_ℋ^d.
* both [ g^*(X)] and [ g^*(X)^2] are finite, as g^*(x) = f^*(x) / M_p, [ f^*(X) ] and [ f^*(X) ^2] are also finite, and M_p > 0.
We want to show that 1/t∑_i=1^t g_i^2(X_i) a.s.→[(g^*(X))^2], i.e. 1/t∑_i=1^t g_i^2(X_i) - [(g^*(X))^2] a.s.→ 0. We decompose
1/t∑_i=1^t g_i^2(X_i) - [(g^*(X))^2] = 1/t∑_i=1^t g_i(X_i) - g^*(X_i) + g^*(X_i)^2 - [(g^*(X))^2]
= 1/t∑_i=1^t g_i(X_i) - g^*(X_i)^2 - 2/t∑_i=1^t g_i(X_i) - g^*(X_i) g^*(X_i) +
+ 1/t∑_i=1^t g^*(X_i)^2 - [(g^*(X))^2].
The third term converges to zero almost surely in view of [ g^*(X) ^2] < ∞ and the SLLN. For the second term, we apply Cauchy-Schwarz inequality to obtain
2/t∑_i=1^t g_i(X_i) - g^*(X_i) g^*(X_i) ≤2/t∑_i=1^t g_i(X_i) - g^*(X_i) g^*(X_i)
≤ 2 1/t∑_i=1^t g_i(X_i) - g^*(X_i)^2 ^1/21/t∑_i=1^t g^*(X_i) ^2 ^1/2
Given that 1/t∑_i=1^t g^*(X_i) ^2 a.s.→𝔼[ g^*(X) ^2], we derive
( 1/t∑_i=1^t g^*(X_i) ^2 )^1/2 a.s.→𝔼^1/2[ g^*(X) ^2].
Note that if 1/t∑_i=1^t g_i(X_i) - g^*(X_i)^2 0, it also follows that 1/t∑_i=1^t g_i(X_i) - g^*(X_i)^2 ^1/2 0, implying that the second term converges to zero almost surely. Thus it suffices to show that the first term converges to zero almost surely, i.e.
1/t∑_i=1^t g_i(X_i) - g^*(X_i)^2 0, to conclude the result. We prove so similarly to Proposition <ref>.
Introducing the SLLN terms and the probabilistic bounds: By the SLLN and the finiteness of M_p, [ f^*(X) ^2], and ξ_p(·, X) _ℋ^d^2, we have that
* M_p^i a.s.→ M_p, and so 1/M_p^ia.s.→1/M_p (given that M_p ≠ 0),
* 1/t∑_i=1^t f^*(X_i)^2 a.s.→𝔼[ f^*(X)^2],
* 1/t∑_i=1^tξ_p(·, X_i) _ℋ^d^2 a.s.→ξ_p(·, X) _ℋ^d^2.
From the Banach space-valued SLLN (Theorem <ref>) and the finiteness of ξ_p(·, X) _ℋ^d, it also follows that
* 1/i-1∑_k = 1^i-1ξ_p (·, X_k) - [ξ_p (·, X)] _ℋ^d 0.
Hence there exist B > 0 and N_1 ∈ such that
sup_t ≥ N_11/M_p^i^2 > B ≤δ/10,
sup_t ≥ N_11/t∑_i=1^t f^*(X_i)^2 > B ≤δ/10,
sup_t ≥ N_11/t∑_i=1^tξ_p(·, X_i) _ℋ^d^2 > B ≤δ/10,
sup_t ≥ N_11/M_p^i - 1/M_p^2 > ϵ/8B≤δ/10.
sup_i ≥ N_11/i-1∑_k = 1^i-1ξ_p (·, X_k) - [ξ_p (·, X)] _ℋ^d^2 > ϵ/8B^2≤δ/10.
Note that sup_t ≥ N_11/t∑_i=1^tξ_p(·, X_i) _ℋ^d^2 > B ≤δ/10 and sup_t ≥ N_11/t∑_i=1^t f^*(X_i)^2 > B ≤δ/10 imply that sup_t ≥ 2N_11/t-N_1∑_i=N_1^tξ_p(·, X_i) _ℋ^d^2 > B ≤δ/5 and sup_t ≥ 2N_11/t-N_1∑_i=N_1^t f^*(X_i)^2 > B ≤δ/10, given that the data are iid.
Combining the probabilistic bounds: We start by noting that
1/t∑_i=1^t g_i(X_i) - g^*(X_i) ^2 = 1/t∑_i=1^N_1-1 g_i(X_i) - g^*(X_i) ^2 + 1/t∑_i=N_1^t g_i(X_i) - g^*(X_i) ^2.
Consider the sequence of random variables Y_t = 1/t(∑_i=1^N_1-1 g_i(X_i) - g^*(X_i) ^2). Clearly, Y_t a.s.→ 0. Thus there exists N_2 such that sup_t ≥ N_2 Y_t ≤ϵ/2 with probability 1 - δ / 2.
Moreover,
1/t∑_i=N_1^t g_i(X_i) - g^*(X_i)^2 = 1/t∑_i=N_1^t( 1/M_p^i f_i(X_i) - 1/M_p^i f^*(X_i) + 1/M_p^i f^*(X_i) - 1/M_pf^*(X_i) )^2
(i)≤2/t∑_i=N_1^t( 1/M_p^i f_i(X_i) - 1/M_p^i f^*(X_i) )^2_(I) +
+2/t∑_i=N_1^t(1/M_p^i f^*(X_i) - 1/M_pf^*(X_i) )^2_(II),
where (i) is obtained in view of (a + b)^2 ≤ 2a^2 + 2b^2. It now remains to prove that terms (I) and (II) converge almost surely to zero. For term (II), we observe that
2sup_t ≥ 2N_11/t∑_i=N_1^t1/M_p^i f^*(X_i) - 1/M_pf^*(X_i) ^2
= 2sup_t ≥ 2 N_11/t∑_i=N_1^t1/M_p^i - 1/M_p^2 f^*(X_i)^2
≤ 2sup_t ≥ 2N_11/M_p^i - 1/M_p^2 sup_t ≥ 2N_11/t∑_i=N_1^t f^*(X_i)^2
≤ 2sup_t ≥ N_11/M_p^i - 1/M_p^2 sup_t ≥ 2N_11/t-N_1∑_i=N_1^t f^*(X_i)^2.
Note that this is upper bounded by 2 ϵ/8B B = ϵ/4 with probability 1 - 2δ/10 in view of the union bound.
For term (I), we have that
2sup_t ≥ 2N_11/t∑_i=N_1^t1/M_p^i f_i(X_i) - 1/M_p^i f^*(X_i) ^2 ≤ 2sup_t ≥ N_11/M_p^i^2sup_t ≥ 2N_11/t∑_i=N_1^t f_i(X_i) - f^*(X_i)^2.
Now we observe that
sup_t ≥ 2N_11/t∑_i=N_1^t f_i(X_i) - f^*(X_i)^2 = sup_t ≥ 2N_11/t∑_i=N_1^t⟨[ξ_p (·, X)] - 1/i-1∑_k = 1^i-1ξ_p (·, X_k), ξ_p(·, X_i) ⟩_ℋ^d^2
≤sup_t ≥ 2N_11/t∑_i=N_1^t[ξ_p (·, X)] - 1/i-1∑_k = 1^i-1ξ_p (·, X_k) _ℋ^d^2ξ_p(·, X_i) _ℋ^d^2
≤sup_t ≥ 2N_1[ξ_p (·, X)] - 1/i-1∑_k = 1^i-1ξ_p (·, X_k) _ℋ^d^2 sup_t ≥ 2N_11/t∑_i=N_1^tξ_p(·, X_i) _ℋ^d^2
≤sup_t ≥ N_1[ξ_p (·, X)] - 1/i-1∑_k = 1^i-1ξ_p (·, X_k) _ℋ^d^2 sup_t ≥ 2N_11/t-N_1∑_i=N_1^tξ_p(·, X_i) _ℋ^d^2
Note that this is upper bounded by ϵ/8B^2 B = ϵ/8B with probability 1 - 2δ/10 in view of the union bound, and hence term (I) is upper bounded by 2ϵ/8B B = ϵ/4 with probability 1 - 3δ/10 in view of the union bound.
Concluding the step by considering the union bound over all the terms:
Taking N:= max(2N_1, N_2), we conclude that
sup_t ≥ N1/t∑_i=1^t( g_i(X_i) - g^*(X_i) )^2 ≤ϵ/2 + ϵ/4 + ϵ/4 = ϵ
with probability 1 - (δ/2 + 2δ/10 + 3δ/10) = 1 - δ in view of the union bound.
§.§ Proofs of main theorems
We first note that
𝔼_H_0[ g_t(X_t) |_t-1] = 𝔼_H_01/1/t-1∑_i=1^t-1 M_p(X_i)(1/t-1∑_i = 1^t-1 h_p(X_i, X_t)) |_t-1
= 1/1/t-1∑_i=1^t-1 M_p(X_i)𝔼_H_0[ 1/t-1∑_i = 1^t-1 h_p(X_i, X_t)|_t-1]
(i)=1/1/t-1∑_i=1^t-1 M_p(X_i)× 0
= 0,
where (i) is obtained in view of (<ref>).
Hence
𝔼_H_0[_t|_t-1] = 𝔼_H_0[_t-1× 1 + λ_t g_t(X_t) |_t-1]
= _t-1× 1 + λ_t𝔼_H_0[ g_t(X_t) |_t-1]
= _t-1× 1 + 0
= _t-1,
which implies that _t-1 is a martingale. Furthermore,
g_t(x) = 1/1/t-1∑_i=1^t-1 M_p(X_i)(1/t-1∑_i = 1^t-1 h_p(X_i, x))
≥1/1/t-1∑_i=1^t-1 M_p(X_i)( 1/t-1∑_i=1^t-1 -M_p (X_i))
= -1.
Given that λ_t ∈ [0, 1], λ_t g_t is also lower bounded by -1, and so _t is non-negative. Thus, _t is a test martingale. In view of 𝔼_H_0[_0] = 1 and Ville's inequality, we conclude that τ is a level-α sequential test.
We want to prove that lim inf_t →∞log_t/t≥(𝔼_H_1[g^*(X)])^2 / ( 2( 𝔼_H_1[g^*(X)] + 𝔼_H_1[(g^*(X))^2] ) ) =: L.
Following the LBOW strategy, we take
λ_t = max0, 1/t-1∑_i = 1^t-1 g_i(X_i)/1/t-1∑_i = 1^t-1 g_i(X_i) + 1/t-1∑_i = 1^t-1 g_i^2(X_i).
Given that 𝔼[h_p(X, X)] < ∞, it also holds that 𝔼[√(h_p(X, X))] < ∞. Consequently, Proposition <ref> and Proposition <ref> yield 1/t-1∑_i = 1^t-1 g_i(X_i) a.s.→𝔼[g^*(X)], as well as 1/t-1∑_i = 1^t-1 g_i^2(X_i) a.s.→𝔼[ g^*(X)^2]. Given that 𝔼[g^*(X)] > 0, it follows that
λ_t a.s.→𝔼[g^*(X)]/( 𝔼[g^*(X)] + 𝔼[ g^*(X)^2] ) =: λ^* ∈ (0, 1).
For y ≥ -1 and λ∈ [0, 1), it holds that
log(1 + λ y) ≥λ y + y^2 ( log(1 - λ) + λ).
Further, for λ∈ [0, 1), it holds that
log(1 - λ) + λ≥ -λ^2/2(1 - λ).
Thus, for y ≥ -1 and λ∈ [0, 1),
log(1 + λ y) ≥λ y - y^2λ^2/2(1 - λ).
Consequently, we derive that
log_t/t = 1/t∑_i = 1^t log 1 + λ_i g_i(X_i)
≥1/t∑_i = 1^t λ_i g_i(X_i) - g_i^2(X_i)λ_i^2/2(1 - λ_i).
It thus suffices to prove that 1/t∑_i = 1^t λ_i g_i(X_i) - g_i^2(X_i)λ_i^2/2(1 - λ_i) L to conclude the result. To see this, we denote κ(λ) = λ^2/2(1 - λ) and highlight that
1/t∑_i = 1^t λ_i g_i(X_i) - g_i^2(X_i)λ_i^2/2(1 - λ_i) = 1/t∑_i = 1^t λ_i g_i(X_i) - g_i^2(X_i)κ(λ_i)
= 1/t∑_i = 1^t (λ_i - λ^* + λ^*) g_i(X_i) - κ(λ_i) - κ(λ^*) + κ(λ^*) g_i^2(X_i)
= 1/t∑_i = 1^t λ^* g_i(X_i) - κ(λ^*) g_i^2(X_i)_(I) +
+ 1/t∑_i = 1^t (λ_i - λ^*) g_i(X_i)_(II) - 1/t∑_i = 1^tκ(λ_i) - κ(λ^*) g_i^2(X_i)_(III).
Now we note that term (I) converges almost surely to λ^* [g^*(X)] - κ(λ^*) [ g^*(X)^2] = L, given that 1/t∑_i = 1^t g_i(X_i) [g^*(X)] and 1/t∑_i = 1^t g_i^2(X_i) [ g^*(X)^2]. Thus, it suffices to prove that (II) and (III) converge almost surely to zero to derive that
1/t∑_i = 1^t λ_i g_i(X_i) - g_i^2(X_i)λ_i^2/2(1 - λ_i) L,
hence concluding that
lim inf_t→∞log_t/t ≥lim inf_t→∞1/t∑_i = 1^t λ_i g_i(X_i) - g_i^2(X_i)λ_i^2/2(1 - λ_i) = L.
Let us now prove that (II) and (III) converge almost surely to 0. Let ϵ > 0 and δ > 0 be arbitrary. We ought to show that there exists N ∈ℕ such that
sup_t ≥ N|1/t∑_i = 1^t (λ_i - λ^*) g_i(X_i) | > ϵ≤δ, sup_t ≥ N|1/t∑_i = 1^tκ(λ_i) - κ(λ^*) g_i^2(X_i) | > ϵ≤δ.
Based on 1/t∑_i=1^t | g_i(X_i) - g^*(X_i) | a.s.→ 0, we derive that that
1/t∑_i = 1^t | g_i(X_i) | = 1/t∑_i = 1^t | g_i(X_i) - g^*(X_i) + g^*(X_i) |
≤1/t∑_i = 1^t g_i(X_i) - g^*(X_i) _ 0+ 1/t∑_i = 1^t g^*(X_i) _[ g^*(X)],
and so lim sup_t→∞1/t∑_i = 1^t | g_i(X_i) | ≤𝔼[ g^*(X)]. Furthermore, we know that 1/t∑_i = 1^t g_i^2(X_i) a.s.→𝔼[ g^*(X)^2]. Given that λ_t a.s.→λ^* and κ is continuous, it follows that κ(λ_t) a.s.→κ(λ^*). Thus, by the SLLN, there exist B>0 and N_1 ∈ℕ such that
sup_t ≥ N_11/t∑_i = 1^t | g_i(X_i) | > B ≤δ/3, sup_t ≥ N_11/t∑_i = 1^t g_i^2(X_i) > B ≤δ/3,
sup_t ≥ N_1|λ_i - λ^*| > ϵ/2B≤δ/3, sup_t ≥ N_1|κ(λ_i) - κ(λ^*)| > ϵ/2B≤δ/3.
Note that sup_t ≥ N_11/t∑_i = 1^t | g_i(X_i) | > B ≤δ/3 and sup_t ≥ N_11/t∑_i = 1^t g_i^2(X_i) > B ≤δ/3 imply that sup_t ≥ 2N_11/t-N_1∑_i = N_1^t | g_i(X_i) | > B ≤δ/3 and sup_t ≥ 2N_11/t-N_1∑_i = N_1^t g_i^2(X_i) > B ≤δ/3, given that the data are iid.
Thus,
sup_t ≥ 2N_1|1/t∑_i = N_1^t (λ_i - λ^*) g_i(X_i) | ≤sup_t ≥ 2N_11/t∑_i = N_1^t | (λ_i - λ^*) g_i(X_i) |
≤sup_t ≥ N_1| λ_i - λ^*| sup_t ≥ 2N_11/t∑_i = N_1^t | g_i(X_i) |
≤sup_t ≥ N_1| λ_i - λ^*| sup_t ≥ 2N_11/t-N_1∑_i = N_1^t | g_i(X_i) |
≤ϵ/2BB
= ϵ/2
with probability 1 - 2/3δ, as well as
sup_t ≥ 2N_1|1/t∑_i = N_1^tκ(λ_i) - κ(λ^*) g_i^2(X_i) | ≤sup_t ≥ 2N_11/t∑_i = N_1^t|κ(λ_i) - κ(λ^*) g_i^2(X_i) |
≤sup_t ≥ 2N_1|κ(λ_i) - κ(λ^*) |
sup_t ≥ 2N_11/t∑_i = N_1^t g_i^2(X_i)
≤sup_t ≥ N_1|κ(λ_i) - κ(λ^*) |
sup_t ≥ 2N_11/t-N_1∑_i = N_1^t g_i^2(X_i)
≤ϵ/2BB
= ϵ/2
with probability 1 - 2/3δ.
Consider now the sequence of random variables
Y_t = 1/t∑_i = 1^N_1-1 (λ_i - λ^*) g_i(X_i), Ỹ_t = 1/t∑_i = 1^N_1 - 1κ(λ_i) - κ(λ^*) g_i^2(X_i).
Clearly, Y_t a.s.→ 0 and Ỹ_t a.s.→ 0. Thus there exists N_2 such that sup_t ≥ N_2 Y_t ≤ϵ/2 and sup_t ≥ N_2Ỹ_t ≤ϵ/2, both with probability 1 - δ/3. Taking N = max(2N_1, N_2) and in view of the union bound, we derive that
sup_t ≥ N|1/t∑_i = 1^t (λ_i - λ^*) g_i(X_i) | ≤sup_t ≥ N|1/t∑_i = 1^N_1 - 1 (λ_i - λ^*) g_i(X_i) + 1/t∑_i = N_1^t (λ_i - λ^*) g_i(X_i) |
≤sup_t ≥ N|1/t∑_i = 1^N_1 - 1 (λ_i - λ^*) g_i(X_i)| + sup_t ≥ N| 1/t∑_i = N_1^t (λ_i - λ^*) g_i(X_i) |
≤ϵ/2 + ϵ/2
= ϵ
with probability 1 - (2/3δ+ 1/3δ) = 1-δ, as well as
sup_t ≥ N|1/t∑_i = 1^tκ(λ_i) - κ(λ^*) g_i^2(X_i) | ≤sup_t ≥ N|1/t∑_i = 1^N_1 - 1κ(λ_i) - κ(λ^*) g_i^2(X_i) + 1/t∑_i = N_1^tκ(λ_i) - κ(λ^*) g_i^2(X_i) |
≤sup_t ≥ N|1/t∑_i = 1^N_1 - 1κ(λ_i) - κ(λ^*) g_i^2(X_i)| + sup_t ≥ N| 1/t∑_i = N_1^t κ(λ_i) - κ(λ^*) g_i^2(X_i)|
≤ϵ/2 + ϵ/2
= ϵ
with probability 1 - (2/3δ+ 1/3δ) = 1-δ.
Let θ_0 ∈Θ be such that P_θ_0 = P. By definition of _t^C, we have that _t^C ≤_t^θ_0, with _t^θ_0 being a test martingale. The validity of the test is concluded by noting that
ℙ( ∃ t ≥ 1 : _t^C ≥1/α) ≤ℙ( ∃ t ≥ 1 : _t^θ_0≥1/α) ≤α,
where the second inequality is obtained in view of Ville's inequality.
| Many statistical procedures heavily rely on the assumptions made about the distribution of the empirical observations, and prove invalid if such conditions are violated. For instance, a high-energy physicist may develop a generative model seeking to produce synthetic observations of particle collisions, motivated by the high cost of experimenting on a real particle collider. If the model is not accurate, then any conclusion that derives from such synthetic data will lack any scientific interest. The question of whether the data follows a particular distribution may be posed in terms of goodness-of-fit testing. Formally, the simple goodness-of-fit testing problem considers the null hypothesis H_0: Q = P against the alternative H_1: Q ≠ P, for a given P and access to data X_1, X_2, …∼ Q. It may also be the case that we have a set of given distributions as candidates and we wish to test if any of them accurately models the empirical data. In such a case, the goodness-of-fit problem is posed in terms of a composite null hypothesis H_0: Q ∈𝒫, against the alternative hypothesis H_1: Q ∉𝒫.
In this work, we focus on a particularly challenging scenario that arises from not having full access to P (or 𝒫). We handle classes of distributions whose densities are only known up to normalizing constants. This is a common scenario when working with general energy-based models, such as Ising models and restricted Boltzmann machines, and in Bayesian statistics, where normalizing factors are generally neither known in closed form nor computable. The existing works in the literature regarding goodness-of-fit for unnormalized densities have focused on the batch or fixed-sample-size setting, where the number of samples n is decided before conducting the analysis, which is then carried out using the observations X_1, …, X_n. Nonetheless, this approach comes with several major drawbacks. Because the number of observations in batch tests is determined prior to data collection, there is a risk that too many observations may be allocated to simpler problem instances, thus wasting resources, or that insufficient observations are assigned to more complex instances, which can yield insufficient evidence against the null hypothesis. Furthermore, when test outcomes appear encouraging but not definitive (for instance, if a p-value is marginally higher than a specified significance threshold), one may be tempted to augment the dataset and undertake the investigation again. Traditional batch testing methods, however, do not allow for this approach.
In contrast, we seek to develop tests that are anytime valid. That is, we can repeatedly decide whether to collect more data based on the current state of the procedure without compromising later assessments, halting the process at any time and for any reason. Formally, a sequential test which is stopped at any given arbitrary moment can be represented by a random stopping time τ taking values in { 1, 2, …}∪{∞}. The stopping time τ denotes the random time at which the null hypothesis is rejected. A sequential test is called level-α if it satisfies the condition (τ < ∞) ≤α under the null for any stopping time τ.
In this work, we develop level-α sequential goodness-of-fit tests that accommodate for unnormalized densities, building on the concept of kernel Stein discrepancies. The type I error is controlled even if the tests are continuously monitored and adaptively stopped, hence automatically adapting the sample size to the unknown alternative. In particular, we believe these anytime-valid tests will be of remarkable interest in the following scenarios:
* Measuring the quality of a proposed distribution: We may have a specific candidate distribution (or set of candidate distributions) to model the data that does not have a computable normalizing constant (e.g., some complex network that models the interaction between genes). Does this distribution accurately model the true data generation process?
* Measuring sample quality: We may have obtained an unnormalized density, such as some posterior distribution in Bayesian statistics, that we wish to draw from using Markov Chain Monte Carlo (MCMC) methods. Is the chosen MCMC algorithm yielding samples accurately from such a density?
The anytime-valid guarantees allow for minimizing the sample size required to reject the null if it does not hold, such as the number of expensive gene experiments that we have to run in a lab, or the number of samples that we have to draw from a costly MCMC algorithm.
The work is organized as follows. Section <ref> presents the related work. Section <ref> introduces preliminary work, with special emphasis on the kernel Stein discrepancy and testing by betting. These constitute the theoretical foundations of the novel goodness-of-fit tests, presented in Section <ref>. Section <ref> subsequently provides a general derivation for the lower bounds required in the test, alongside different examples. The empirical validity of the tests is presented in Section <ref>, followed by concluding remarks in Section <ref>. | §.§ The kernel Stein discrepancy
Throughout this contribution we will draw upon the kernel Stein discrepancy (KSD), which is a distributional divergence that builds on the concept of reproducing kernel Hilbert space (RKHS) and the general Stein's method <cit.>. The KSD may be understood as a kernelized version of score-matching divergence <cit.>.
Reproducing kernel Hilbert spaces (RKHS): Consider 𝒳≠∅ and a Hilbert space (ℋ, ⟨·, ·⟩_ℋ) of functions f: 𝒳→ℝ. If there exists a function k: 𝒳×𝒳→ℝ satisfying (i) k(·, x) ∈ℋ for all x ∈𝒳, (ii) ⟨ f, k(·, x) ⟩_ℋ = f(x) for all x ∈𝒳 and f ∈ℋ, then ℋ is called an RKHS and k a reproducing kernel. We denote by ℋ^d the product RKHS containing elements h := (h_1, …, h_d) with h_i ∈ℋ and ⟨ h, h̃⟩ = ∑_i = 1^d ⟨ h_i, h̃_i⟩_ℋ.
Kernel Stein discrepancies (KSD): Assume now that P has density p on 𝒳⊂ℝ^d, and let ℋ be an RKHS with reproducing kernel k such that ∇ k(·, x) exists for all x ∈𝒳. Based on the Stein operator
(T_p h)(x) = ∑_i = 1^d( ∂log p(x)/∂ x_i h_i(x) + ∂ h_i(x)/∂ x_i), h ∈ℋ^d,
the KSD is defined as
KSD(Q, P) = sup_f∈ℱ_KSD𝔼_X ∼ Q[(T_pf)(X)],
where ℱ_KSD = { h ∈ℋ^d: ‖ h ‖_ℋ^d≤ 1 }. Defining s_p(x) := ∇_x log p(x) and ξ_p(·, x) := [ s_p(x) k(·, x) + ∇ k(·, x)] ∈ℋ^d, it follows that
h_p(x, x̃) : = ⟨ξ_p (·, x), ξ_p (·, x̃) ⟩_ℋ^d
=⟨ s_p(x), s_p(x̃) ⟩_ℝ^d k(x, x̃) + ⟨ s_p(x̃), ∇_x k(x, x̃) ⟩_ℝ^d
+ ⟨ s_p(x), ∇_x̃ k(x, x̃) ⟩_ℝ^d + ∇_x·∇_x̃ k(x, x̃)
is a reproducing kernel based on the Moore-Aronszajn theorem. Again, if 𝔼_X∼ Q√(h_p(X, X)) < ∞, then KSD(Q, P) = ‖𝔼_X ∼ Q[ξ_p(·, X)]‖_ℋ^d. If k is universal and under certain regularity conditions, then KSD(Q, P) = 0 if, and only if, Q = P <cit.>. In the batch setting and simple null hypothesis, a test statistic is generally taken as a V-statistic or U-statistic, and parametric or wild bootstrap is used to calibrate the test. We refer the reader to <cit.> for the more challenging composite null hypothesis case.
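For concreteness, the Stein kernel h_p above can be evaluated numerically as in the following sketch. It assumes a Gaussian (RBF) base kernel k(x, x̃) = exp(-‖x - x̃‖^2/(2ℓ^2)) and a user-supplied score function s_p = ∇ log p; the bandwidth and the standard-normal score used in the example are illustrative assumptions, not choices made in this work.

```python
import numpy as np

def stein_kernel(x, y, score_p, ell=1.0):
    """Stein kernel h_p(x, y) built from an RBF base kernel with bandwidth ell.

    x, y    : arrays of shape (d,)
    score_p : callable returning the score grad log p at a point, shape (d,)
    """
    d = x.shape[0]
    diff = x - y
    k = np.exp(-diff @ diff / (2.0 * ell**2))
    grad_x_k = -diff / ell**2 * k                        # nabla_x k(x, y)
    grad_y_k = diff / ell**2 * k                         # nabla_y k(x, y)
    div_xy_k = (d / ell**2 - diff @ diff / ell**4) * k   # nabla_x . nabla_y k
    sx, sy = score_p(x), score_p(y)
    return sx @ sy * k + sy @ grad_x_k + sx @ grad_y_k + div_xy_k

# Illustrative score: standard normal, s_p(x) = -x.  Only grad log p is needed,
# which is exactly why the KSD handles unnormalized densities.
h = stein_kernel(np.array([0.3, -1.2]), np.array([1.0, 0.5]), lambda x: -x)
```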
§.§ Testing by betting
We now present the theoretical foundation of the sequential testing strategy that will be presented in Section <ref>, which is commonly referred to as testing by betting. The key idea behind this general, powerful concept relies on defining a test (super)martingale, and couple it with a betting interpretation <cit.>. Test (super)martingales allow for exploiting Ville's inequality <cit.> and hence they yield level-α sequential tests. In turn, this enables practitioners to peek at the data at anytime and decide whether to keep gathering data or stop accordingly while preserving theoretical guarantees.
Intuitively, let H_0 be the null hypothesis to be tested. A fictional bettor starts with an initial wealth of _0=1. The fictional bettor then bets sequentially on the outcomes (X_t)_t ≥ 1. To do so, at each round t, the fictional bettor chooses a payoff function _t: 𝒳→ [0, ∞) so that 𝔼_H_0[_t(X_t)|_t-1] ≤ 1, where _t-1 = σ(X_1, …, X_t-1) (this ensures a fair bet if the null is true).
After the outcome X_t is revealed, the bettor's wealth grows or shrinks by a factor of _t(X_t). The bettor's wealth is _t =_0 ∏_i=1^t _i(X_i) after t rounds of betting.
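A minimal sketch of this betting loop is given below; the payoff function is a placeholder that each concrete test must supply (with nonnegative values and conditional mean at most 1 under H_0), and the rejection threshold 1/α anticipates the Ville's-inequality stopping rule discussed next.

```python
def sequential_test(stream, payoff, alpha=0.05):
    """Generic testing-by-betting loop (schematic).

    stream : iterable of observations X_1, X_2, ...
    payoff : callable(history, x) -> nonnegative payoff whose conditional
             expectation given the past is at most 1 under the null.
    """
    wealth, history = 1.0, []
    for t, x in enumerate(stream, start=1):
        wealth *= payoff(history, x)   # fair bet under H_0
        history.append(x)
        if wealth >= 1.0 / alpha:      # reject as soon as the wealth is large
            return {"reject": True, "stopping_time": t, "wealth": wealth}
    return {"reject": False, "stopping_time": None, "wealth": wealth}
```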
Under the null hypothesis, these fair bets ensure that the sequence (_t)_ t≥ 0 forms a `test supermartingale' (i.e., a nonnegative supermartingale that starts at 1), and in particular a `test martingale' if 𝔼_H_0[_t(X_t)|_t-1] is 1 almost surely. Based on Ville's inequality, rejecting the null if _t ≥ 1/α, where α∈ (0,1) is the desired confidence level, leads to sequential tests that control the type-I error at level α. Under the alternative H_1, the payoff functions {_t: t ≥ 1} should seek a fast growth rate of the wealth, ideally exponentially. | null | null | null | We have developed a sequential version of the kernel Stein discrepancy, which gives way to goodness-of-fit tests that can handle distributions with unknown normalizing constants and can be continuously monitored and adaptively stopped. We have done so by combining tools from testing by betting with RKHS theory, while avoiding assuming uniform boundedness of the Stein reproducing kernel. We have proved the validity of the test, as well as exponential growth of the wealth process under the alternative and mild regularity conditions. Our experiments have exhibited the empirical performance of the test in a variety of scenarios.
In this contribution, we have presented a novel martingale construction that neither exploits nor needs uniform boundedness of the kernel. While the theory presented here has been developed for and motivated by the Stein kernel, such a martingale construction may be exploited by any kernel that is either unbounded or uniformly bounded by a constant that is too loose. This opens the door to developing sequential two-sample and independence tests with kernels that are currently unexplored, which constitutes an exciting direction of research.
§.§.§ Acknowledgements
DMT gratefully acknowledges that the project that gave rise to these results received the support of a fellowship from `la Caixa' Foundation (ID 100010434). The fellowship code is LCF/BQ/EU22/11930075. AR was funded by NSF grant DMS-2310718.
apalike |
http://arxiv.org/abs/2409.17813v1 | 20240926131050 | Spin-Dependent Signatures of Majorana Vortex Fusion within Planar Josephson Junctions | [
"Krishnan Ganesh",
"Derek K. K. Lee",
"Jiannis K. Pachos"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall",
"cond-mat.supr-con"
] |
[Correspondence email address: ][email protected]
Blackett Laboratory, Imperial College London, London SW7 2AZ, United Kingdom
Blackett Laboratory, Imperial College London, London SW7 2AZ, United Kingdom
School of Physics and Astronomy, University of Leeds, Leeds LS2 9JT, United Kingdom
§ ABSTRACT
We investigate the magnetic characteristics and tunnelling signatures of a planar Josephson junction with Rashba spin-orbit coupling during the fusion of two Majorana vortices. By employing the topological phase diagram and conducting tight-binding simulations of the proposed device, we demonstrate that this fusion process induces a parity-dependent magnetic moment aligned with the junction axis. We further propose a method to probe the spin properties of the fusing Majorana zero modes through spin-resolved Andreev conductance measurements at the junction endpoints. To support our findings, we derive a low-energy effective Hamiltonian that provides a detailed microscopic description of the numerically observed phenomena. Our analysis enables the detection of Majorana fusion outcome from accessible spin current measurements, thus paving the way for future experimental verification and potential applications in topological quantum computation.
Spin-Dependent Signatures of Majorana Vortex Fusion within Planar Josephson Junctions
Jiannis K. Pachos
September 28, 2024
=====================================================================================
§ INTRODUCTION
One of the defining features of non-Abelian anyons is their ability to fuse into different particle types. For example, Majorana zero modes (MZMs), denoted as γ, can fuse to form either the vacuum state, denoted as 1, or a fermionic quasiparticle, denoted as ψ. This fusion process is mathematically captured by the fusion rule <cit.>
γ×γ = 1 + ψ.
Over the past decade, MZMs have gained significant interest since they have been theoretically predicted to emerge in topological superconductors (TSCs) <cit.>. These MZMs are typically localised at topological defects, such as the ends of a topological superconductor or within Abrikosov vortices in the bulk <cit.>. The fusion of two MZMs can result in either a fully paired state with an even number of particles, thus corresponding to the vacuum 1, or a state with an unpaired quasiparticle, ψ. Distinguishing between these two fusion outcomes will enable the demonstration of non-Abelian statistics in topological superconductors and thus open the way for realising topological qubits.
Numerous proposals for realising topological superconductors have concentrated on nanowires with strong spin-orbit coupling, which are proximitized to s-wave superconductors <cit.>. The literature also offers a wide range of possible methods for measuring the fusion channel of two Majorana zero modes in these systems. These approaches include coupling the nanowire to a quantum dot <cit.>, embedding the nanowire within a Josephson flux qubit <cit.>, and integrating the wire into an Aharonov-Bohm interferometer <cit.>. However, to date there have been very few experimental implementations of these protocols due to challenges in identifying the topological phase in these devices. Thus, there is an active interest in investigating alternative devices and searching for new signatures of the topological properties of Majorana zero modes that can conclusively demonstrate their non-Abelian character.
In this letter, we focus on a recent proposal to realise topological superconductivity within a planar Josephson junction <cit.>. This system consists of a two-dimensional electron gas (2DEG) with strong Rashba spin-orbit coupling, contacted by two s-wave superconductors and subjected to a magnetic field B⃗, as shown in Figure <ref>. This device exhibits extensive regions of topological superconductivity as the phase bias φ and the in-plane magnetic field B_x are varied <cit.>. Further theoretical studies demonstrated that an out-of-plane magnetic field can generate Josephson vortices which induce topological domain walls that exponentially localise Majorana zero modes <cit.>.
Our focus here is on the challenge of reading out the fusion channel of the Majoranas. In contrast to previous studies of Majorana fusion, such as Ref. <cit.>, which focus on a charge based signature, we investigate the potential to distinguish between the even and odd parity states of two topological domain walls by probing the magnetisation of the Josephson junction.
In particular, we investigate the spin properties and tunable coupling of Majorana zero modes in a planar Josephson junction with strong Rashba spin-orbit coupling. We show that the in-plane Zeeman field enables control over the separation and coupling of Majorana modes, influencing their energy splitting. By deriving an effective Hamiltonian, we describe their localization near topological domain walls and analyze how spin-dependent Andreev conductance can be used to experimentally detect the spin characteristics of the fusing Majorana zero modes.
§ JOSEPHSON VORTICES AND TOPOLOGICAL DOMAIN WALLS
To investigate the behaviour of the planar Josephson junction depicted in Fig. <ref>, we first consider the case where the out-of-plane magnetic field B_z = 0. Hence, the applied magnetic field is purely in the x-direction and decays exponentially into the superconducting leads due to Meissner screening. Assuming the London penetration depth of the superconducting leads is small compared to the width of the junction W and the coherence length of the superconductor, we can approximate the in-plane magnetic field as
B_x(y) = B ϑ(W/2 - |y|),
where ϑ(y) is the Heaviside step function. We also assume the system is in the quasi-2D electron gas regime, so orbital effects of the in-plane magnetic field are negligible. The Hamiltonian written in the Nambu basis Ψ(r) = ( c_↑(r) , c_↓(r) , c^†_↓(r) , -c^†_↑(r))^T is
H = 1/2∫ d^2r Ψ^†(r)ℋΨ(r),
The Bogoliubov-de-Gennes Hamiltonian for the system reads (ħ = 1):
ℋ = (-∇^2/2m - μ)σ_0τ_z + α( k×σ)·ẑτ_z
+ Δ(y) σ_0τ_+ + Δ^*(y)σ_0τ_- + E_Z(y)σ_xτ_0,
where m is the effective electron mass, μ is the chemical potential, α is the Rashba spin-orbit coupling energy and Δ(y) is the superconducting pair potential, which is approximated by
Δ(y) = ϑ(y - W)Δ_0e^iφ + ϑ(- y) Δ_0.
The Pauli matrices σ_i and τ_i act on spin and particle-hole space respectively, so the notation σ_iτ_j is shorthand for the Kronecker product σ_i⊗τ_j. We have also defined the raising and lowering operators τ_± = (τ_x± i τ_y)/2.
The Hamiltonian ℋ anticommutes with the particle-hole symmetry operator, which is
P = σ_yτ_yK,
in the Nambu basis where K stands for complex conjugation. We perform our numerical investigation on a square lattice with lattice constant a, hopping parameter t = 1/2ma^2 and the tight-binding Hamiltonian given in Equation <ref>. All energies are given in units of the hopping parameter t.
As described in Ref.<cit.> and further detailed in Appendix <ref>, this model exhibits topological phase transitions at Josephson phase differences of
φ_± = π±2E_ZW/v_F,
where v_F is the Fermi velocity. As shown in Figure <ref>(a), this gives rise to the diamond regions of topological superconductivity as φ and E_Z are varied.
Let us now consider the presence of a small out-of-plane field, in which case the Josephson phase difference winds linearly in the x-direction <cit.><cit.>
φ(x) = 2πΦ/Φ_S Lx - θ,
where Φ = B_zLW is the flux through the junction, Φ_S = h/2e and θ is a global phase shift. This winding generates topological domain walls at positions where φ(x) = φ_±, each binding a single Majorana zero mode.
§ TUNABLE COUPLING BETWEEN MAJORANA ZERO MODES
The in-plane Zeeman coupling provides a useful experimental control for fusing topological domain walls. As illustrated in the top panel of Figure <ref>(b), varying E_Z controls the separation between the localised Majorana zero mode wavefunctions. When E_Z≈ 0, the Majorana zero modes strongly couple, forming a localised fermionic mode. This behaviour is also reflected in the low-energy spectrum of the Josephson junction, as shown in the bottom panel of Figure <ref>(b). As the Zeeman field is reduced, the energy splitting between the Majorana zero modes grows exponentially, accompanied by oscillations.
These features can be heuristically understood by considering the case of a single Josephson vortex phase distribution, φ(x) = 2π x/L, with both E_Z and the spin-orbit coupling α set to zero. In this scenario, a discrete quasiparticle spectrum ε_n emerges, which is approximately spin-degenerate at E_Z = 0, up to a small Zeeman field in the z-direction which we neglect <cit.>. As the in-plane Zeeman coupling is increased, the spin-degenerate levels split resulting in a spectrum E_n,σ(E_Z) = ε_n +σ E_Z, where σ = ± 1 labels the σ_x eigenvalue of the state. Level crossings occur at zero energy for Zeeman coupling values E_z, n = ε_n. Upon increasing α, level repulsions occur between σ = +1 states and σ = -1 states, which eventually separates two states, shown in purple in Figure <ref>(b) (bottom), from the rest of the spectrum, shown in blue. Therefore, the oscillations in the energy splitting appear to originate from the discrete level structure of the Josephson vortex in the absence of Zeeman splitting and spin-orbit coupling.
§ SPIN CHARACTERISTICS
For Φ = Φ_S, close to the coordinates of the topological domain wall the Majorana zero mode wavefunctions can be approximated by
ψ_±(x) ∼exp(-1/ξ|x - x_±|),
where x_± = Lφ_±/2π and ξ is the localisation length, which is related to the bulk topological gap. Therefore, the tunnel splitting between the zero modes can be estimated to be
δ E ∼exp(-|x_+ - x_-|/ξ) = exp(-4LE_ZW/πξ v_F).
The coupling between Majorana zero modes γ̂_1 and γ̂_2 introduces the term
Ĥ_tun. = iδ E/2γ̂_1γ̂_2
in the Hamiltonian. Given that δ E has an exponential dependence on the in-plane field B_x, a clear physical observable that depends on the joint parity of two Majorana zero modes is the magnetisation of the Andreev bound states, which we may define as
m = Ω∂Ĥ_tun./∂BΩ,
where |Ω⟩ is a many-body ground state of the superconductor. Since the total Hamiltonian commutes with the parity operator 𝒫̂ = iγ̂_1γ̂_2, |Ω⟩ must be an eigenstate of 𝒫̂ with eigenvalue +1 or -1. Using the Feynman-Hellmann theorem, the magnetisation can be expressed as
m_± = ±∂(δ E)/∂B
where ± is the parity eigenvalue. Since the out-of-plane magnetic field in our model is very small, the magnetisation should predominantly point in the ± x directions depending on the parity of the Majorana zero modes.
Furthermore, we numerically calculate the spin of the many-body state as a function of the in-plane Zeeman coupling E_Z. The spin-operator in second quantised form is given by
Ŝ^μ_i = ∑_σ , σ'ĉ^†_iσS^μ_σσ'ĉ_iσ'
where μ∈{x , y , z}, i is a site index and σ∈↑ , ↓. We calculate the expectation value of this operator in the even and odd parity ground states |Ω_+⟩ and |Ω_-⟩, respectively. The diagonalised many-body Hamiltonian reads
Ĥ = ∑_n = 0^2N-1ε_nb̂^†_nb̂_n
where ε_n > 0 and N is the number of lattice sites in the system. The even-parity ground state |Ω_+⟩ is annihilated by all the Bogoliubov destruction operators b̂_n. At zero temperature, the odd-parity state will have the lowest energy quasiparticle occupied:
|Ω_-⟩ = b̂^†_0|Ω_+⟩.
The expectation values ⟨Ŝ^μ_i⟩_± = ⟨Ω_±|Ŝ^μ_i|Ω_±⟩ are calculated using the inverse Bogoliubov transformation
ĉ_i↑ = ∑_n = 0^2N - 1u^n_i↑b̂_n - (v^n_i ↑)^*b̂^†_n
ĉ_i↓ = ∑_n = 0^2N - 1 u^n_i↓b̂_n + (v^n_i ↓)^*b̂^†_n
ĉ^†_i↓ = ∑_n = 0^2N - 1 (u^n_i↓)^*b̂^†_n + v^n_i ↓b̂_n
ĉ^†_i↑ = ∑_n = 0^2N - 1(u^n_i↑)^*b̂^†_n - v^n_i ↑b̂_n,
where the eigenvector corresponding to the n^th positive energy state is given by (u^n_i↑ , u^n_i↓ , v^n_i↓ , v^n_i↑)^T. Using the property b̂_n|Ω_+⟩ = 0 ∀ n, the expectation values are
⟨Ŝ^x_i⟩_+ = -∑_n = 0^2N - 1( v_i↑^nv^n *_i↓ +v_i↓^nv^n *_i↑)
⟨Ŝ^y_i⟩_+ = -∑_n = 0^2N - 1( -iv_i↑^nv^n *_i↓ + iv_i↓^nv^n *_i↑)
⟨Ŝ^z_i⟩_+ = ∑_n = 0^2N - 1( v_i↑^nv^n *_i↑ - v_i↓^nv^n *_i↓).
The expectation values in the state |Ω_-⟩ are given by
⟨Ŝ^x_i⟩_- = ⟨Ŝ^x_i⟩_+ + ∑_σσ'u_iσ^0*S^x_σσ'u_iσ'^0 + v_iσ^0S^x_σσ'v_iσ'^0*
⟨Ŝ^y_i⟩_- = ⟨Ŝ^y_i⟩_+ + ∑_σσ'u_iσ^0*S^y_σσ'u_iσ'^0 + v_iσ^0S^y_σσ'v_iσ'^0*
⟨Ŝ^z_i⟩_- = ⟨Ŝ^z_i⟩_+ + ∑_σσ'u_iσ^0*S^z_σσ'u_iσ'^0 - v_iσ^0S^z_σσ'v_iσ'^0*.
In Figure <ref> we plot the value of ⟨Ŝ_x⟩ = ∑_i⟨Ŝ^x_i⟩ for the even and odd parity states, as well as their absolute values in the insets. We do not plot ⟨Ŝ_y⟩ and ⟨Ŝ_z⟩ because they were found to be zero. In Figure <ref>(a) a clear difference in ⟨ S_x⟩ for small but finite E_Z is observed, whereas for large E_Z the states of different parity are indistinguishable. We also repeated the calculation for a trivial Josephson junction in Figure <ref>(b) by setting α = 0. In contrast to the case with topological domain walls, in Figure <ref>(b) there is always a clear difference in the values of ⟨ S_x⟩_±. At exactly E_Z = 0, however, the difference between ⟨ S_x⟩ _+ and ⟨ S_x⟩_- is zero for both the topological and trivial junction because the bound states are spin-degenerate. Hence, the spin signature of Majorana fusion can only be observed as E_Z→ 0.
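The summed expectation value ⟨Ŝ_x⟩ = ∑_i⟨Ŝ^x_i⟩ plotted above can be evaluated directly from the positive-energy Bogoliubov eigenvectors, as in the schematic below. It assumes the components are supplied as separate (levels × sites) arrays ordered with the lowest-energy level first, following the Nambu ordering (u_↑, u_↓, v_↓, v_↑) used in the text, and it omits the conventional factor of 1/2 in the spin matrices, consistent with the expressions above.

```python
import numpy as np

def total_spin_x(u_up, u_dn, v_dn, v_up):
    """<S_x> in the even- and odd-parity ground states (schematic).

    Each argument has shape (n_levels, n_sites): the component of the n-th
    positive-energy BdG eigenvector on site i; level n = 0 is the lowest one.
    """
    # Even parity: all positive-energy quasiparticle levels empty.
    sx_even_i = -np.sum(v_up * v_dn.conj() + v_dn * v_up.conj(), axis=0).real
    # Odd parity: occupy the lowest quasiparticle level (n = 0).
    extra_i = (u_up[0].conj() * u_dn[0] + u_dn[0].conj() * u_up[0]
               + v_up[0] * v_dn[0].conj() + v_dn[0] * v_up[0].conj()).real
    return sx_even_i.sum(), (sx_even_i + extra_i).sum()
```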
§ SPIN DEPENDENT ANDREEV CONDUCTANCE
The spin characteristics of the Josephson vortex energy levels can be effectively probed by coupling a lead to one side of the junction and measuring the Andreev conductance. However, when this measurement is performed in the presence of an out-of-plane flux Φ_S, it fails to provide information about the sub-gap modes, as these modes are exponentially localised near the centre of the junction, as shown in Figure <ref>(b)(Top). To obtain a non-zero Andreev conductance, at least one of the Majorana zero modes must be located at the edge of the junction. This can be achieved by tuning the out-of-plane flux to 0.5Φ_S (a `half-vortex'), which generates a single topological domain wall in the junction, hosting a Majorana zero mode at one end <cit.>. As the in-plane Zeeman coupling is reduced, the topological domain wall shifts towards the edge, resulting in the hybridisation of the two Majoranas. In the limit of large E_Z, the two Majorana zero modes become well separated, and coupling a lead to one end of the junction will result in an Andreev reflection with unit probability at zero voltage bias, due to the presence of an unpaired Majorana zero mode at the end <cit.>.
Figure <ref>(a) shows the quasi-particle energy levels colour-coded with the expectation value of the operator σ̂_xτ̂_0. As E_Z is reduced, the Majoranas hybridise to form spinful quasiparticle states. Consequently, the Andreev reflection probability becomes highly spin-dependent. This can be tested using the setup shown in Figure <ref>(b), where a metallic lead, spin polarized in the x-direction by an underlying ferromagnet, is tunnel-coupled to one end of the Josephson junction. By varying the voltage V_L and measuring the current in the wire, one can obtain the Andreev conductance as a function of V_L for different in-plane Zeeman fields. It is expected that the Andreev conductance will exhibit a strong dependence on the sign of the spin splitting for low E_Z.
The spin-split metallic lead is modelled by the Hamiltonian
ℋ_lead = (-∇^2/2m - μ)σ̂_0τ̂_z + E_Z,Lσ̂_xτ̂_0,
where E_Z, L is the Zeeman splitting in the lead. A potential barrier of height V_B is included at the interface between the semi-infinite lead and the Josephson junction. We use the Kwant toolbox to evaluate the scattering matrix of the lead-junction system at energy eV_L <cit.>:
ŝ(eV_L)= [ r̂_ee(eV_L) r̂_eh(eV_L); r̂_he(eV_L) r̂_hh(eV_L) ],
where the diagonal blocks correspond to normal reflection amplitudes for electrons and holes of the potential barrier, and the off-diagonal blocks correspond to Andreev reflection amplitudes. The Andreev conductance G_A(eV_L) is computed using the trace formula <cit.>
G_A(eV_L)= e^2/h[ N_e - Tr{r̂_ee^†r̂_ee} + Tr{r̂_he^†r̂_he}],
where N_e is the number of occupied electronic bands at voltage bias V_L.
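In Kwant this evaluation amounts to a few lines, sketched below for a finalized system `fsyst` with a single normal lead (lead 0) declared with a conservation law that splits its modes into an electron block (0) and a hole block (1); the construction of the system and lead is assumed to be done elsewhere.

```python
import kwant

def andreev_conductance(fsyst, energies, params=None):
    """G_A(eV_L) in units of e^2/h from the trace formula above (schematic)."""
    G = []
    for E in energies:
        s = kwant.smatrix(fsyst, energy=E, params=params)
        N_e = s.submatrix((0, 0), (0, 0)).shape[0]   # number of electron modes
        R_ee = s.transmission((0, 0), (0, 0))        # normal reflection
        R_he = s.transmission((0, 1), (0, 0))        # Andreev reflection
        G.append(N_e - R_ee + R_he)
    return G
```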
As shown in Figure <ref>, the Andreev conductance depends strongly on the orientation of the spin polarisation of the metallic lead, controlled by the sign of E_Z,L. The spin splitting E_Z,L is chosen so that the incident electrons are fully spin-polarised for the range of voltages shown, and the barrier height is set to V_B = 2.0. In Figure <ref>(a), we set E_Z,L = 2.0, which polarises all electron spins to be in the |←⟩ state, whilst in Figure <ref>(b) E_Z,L= -2.0 and all spins are in the |→⟩ state.
We can gain a phenomenological understanding of these differences using the tunnelling characteristics of two Majorana zero modes <cit.>
G_A(eV_L) = 2e^2/h(2eV_LΓ)^2/(e^2V^2_L - 4δ^2)^2 + (eV_LΓ)^2,
where Γ is the tunnel coupling between the lead and the junction, and δ is the tunnel coupling between the Majorana zero modes within the junction. The coupling Γ is expected to decrease as the potential barrier V_B is increased, while δ is expected to scale as exp(-E_Z/ξ) as discussed in Section <ref>. In the limit of E_Z≫ξ we can take δ→ 0, resulting in a conductance resonance at eV_L = 0 with peak height of 2e^2/h, as observed in both Figures <ref>(a) and (b). However, the difference in the width of the zero-bias resonances indicates that the |←⟩ electrons undergo stronger Andreev reflection than |→⟩ electrons, suggesting that the isolated Majorana zero mode at the edge is a coherent superposition of |←⟩ species only: γ̂(x) ∼∫ dx f(x)( ĉ_←(x) + ĉ^†_←(x)). In the limit of weak E_Z, the tunnel coupling between MZMs increases, leading to two resonances at eV_L = ±δ. In this regime, we observe negligible conductance for |←⟩ electrons (Figure <ref>(a)) and a pair of sharp resonances for |→⟩ electrons (Figure <ref>(b)). This suggests that |←⟩ electrons undergo complete normal reflection whilst |→⟩ electrons undergo complete Andreev reflection. This behaviour demonstrates that the resulting quasiparticle has a definite spin-polarisation in the x-direction.
§ CONCLUSIONS AND OUTLOOK
In this paper, we explored the spin properties and tunable coupling of Majorana zero modes in a planar Josephson junction with strong Rashba spin-orbit coupling. Our aim was to find experimentally measurable quantities that can conclusively determine the non-Abelian fusion outcome of Majorana zero modes. Our analysis reveals that the in-plane Zeeman coupling serves as a powerful tool to control the separation between Majorana zero modes, which in turn affects their coupling and the resulting energy splitting. We demonstrated that the magnetisation of the Andreev bound states, which depends on the parity of the Majorana zero modes, offers a concrete physical observable to probe these spin-dependent properties. In principle, the magnetisation may be probed by coupling the Josephson junction to a quantum dot with spin-polarised levels or an STM tip <cit.>.
We derived an effective Hamiltonian that accurately describes the localisation of Majorana zero modes near the center of topological domain walls. The wavefunction overlap between these modes leads to an exponentially small energy splitting, which is strongly dependent on the in-plane Zeeman field. This tunable splitting allows for a precise control of the hybridisation between Majorana modes, and thus their fusion process, thereby enabling potential applications in topological quantum computation.
Furthermore, we investigated the spin-dependent Andreev conductance as a method to probe the spin states of the sub-gap Majorana modes in the Josephson junction. By coupling a spin-polarised metallic lead to the junction, we showed that the Andreev conductance is highly sensitive to the spin polarisation of the incident electrons, especially at low in-plane Zeeman fields. This spin dependence provides a viable approach for experimentally detecting the spin characteristics of Majorana modes and their associated parity states resulting from their fusion.
In summary, our work provides a detailed understanding of the interplay between spin, parity, and coupling of Majorana zero modes in a planar Josephson junction. The insights gained from our theoretical analysis offer promising avenues for the design of spin-sensitive devices that can detect the fusion outcome of Majorana zero modes, which are critical for advancing the field of topological quantum computing.
§ ACKNOWLEDGEMENTS
J. K. Pachos acknowledges funding from EPSRC with Grant No. EP/R020612/1. K. Ganesh and D. K.K. Lee acknowledge funding from EPSRC with Grant No. EP/R513052/1 and Grant No. EP/T51780X/1. Discussions with C. Benjamin are gratefully acknowledged.
§.§ Tight Binding Hamiltonian
For our numerical simulations we use the following tight-binding Hamiltonian:
ℋ = ∑_r⃗[(4t - μ)σ_0τ_z +Δ(r)σ_0τ_+ + Δ^*(r)σ_0τ_-
+ E_Z(r)σ_xτ_0]⊗|r⟩⟨r|
+∑_r , i = {x , y}(-tσ_0τ_z⊗|r⟩⟨r + ê_i| + h.c.)
+ α/2a∑_r(i σ_yτ_z⊗|r⟩⟨r + ê_x|
- i σ_xτ_z⊗|r⟩⟨r + ê_y| + h.c. )
where r labels lattice sites, t = 1/2ma^2 and a is the lattice constant of the simulation. The tight-binding calculations were performed using the Kwant library <cit.>.
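As an illustration of how such a lattice model can be assembled with Kwant, a schematic builder is sketched below. The geometry, the parameter values, and the uniform phase bias are placeholder assumptions; the spatial profiles Δ(r) and E_Z(r), as well as the phase winding φ(x), would have to be supplied as in the main text.

```python
import numpy as np
import kwant

s0, sx = np.identity(2), np.array([[0, 1], [1, 0]])
sy, sz = np.array([[0, -1j], [1j, 0]]), np.diag([1, -1])
kron = np.kron   # first factor acts on spin, second on particle-hole space

def make_junction(Lx=80, Wn=10, Ws=20, a=1.0, t=1.0, mu=0.5,
                  alpha=0.2, delta0=0.3, Ez=0.1, phi=np.pi):
    """Finalized Kwant system for the S-2DEG-S junction (schematic)."""
    lat = kwant.lattice.square(a, norbs=4)
    syst = kwant.Builder()

    def onsite(site):
        _, y = site.pos
        H = (4 * t - mu) * kron(s0, sz)
        if 0 <= y < Wn:                            # normal strip: Zeeman only
            H = H + Ez * kron(sx, s0)
        else:                                      # superconducting regions
            D = delta0 * (np.exp(1j * phi) if y >= Wn else 1.0)
            H = H + np.real(D) * kron(s0, sx) - np.imag(D) * kron(s0, sy)
        return H

    syst[(lat(i, j) for i in range(Lx) for j in range(-Ws, Wn + Ws))] = onsite
    # Hopping values are <r + e|H|r>; the Rashba signs follow from taking the
    # hermitian conjugate of the |r><r + e| terms quoted above.
    syst[kwant.builder.HoppingKind((1, 0), lat)] = (
        -t * kron(s0, sz) - 1j * alpha / (2 * a) * kron(sy, sz))
    syst[kwant.builder.HoppingKind((0, 1), lat)] = (
        -t * kron(s0, sz) + 1j * alpha / (2 * a) * kron(sx, sz))
    return syst.finalized()
```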
§.§ Deriving the low-energy effective model
In this Appendix we outline the derivation of the low-energy effective model for the Andreev bound states in the junction, and the various approximations used to get there. This gives us access to the explicit form of the Majorana zero mode spinors localised to the topological domain walls. In what follows, we work in the short junction regime where the width of the junction W is much smaller than the superconducting coherence length ξ = v_F/Δ_0. We will also work in the limit where the spin-orbit momentum shift is much smaller than the Fermi momentum: mα≪ k_F.
§.§.§ Andreev Hamiltonian
As it stands, the Bogoliubov de Gennes Hamiltonian in Equation <ref> is a complicated differential operator to solve in all generality. To make progress, we will work in the Andreev approximation, which assumes that the coherence length ξ is much larger than the Fermi wavelength k_F^-1. In this limit, eigenspinors will generally have a rapid oscillatory factor e^ik_F·r, with smooth envelope functions u⃗(r), v⃗(r) which we wish to calculate. We will also assume the magnetic length ℓ_B≫ k_F^-1. One can obtain an `Andreev Hamiltonian' for the envelope functions u⃗(r) and v⃗(r) by Taylor expanding the Hamiltonian around the Fermi points (0 , ± k_F)^T, whilst dropping all terms 𝒪(∂^2_y), 𝒪(k_x^2) and higher. This gives us
ℋ_± k_F(k_x) ≈ -i(± v_Fσ_0 - ασ_x)τ_z∂_y
∓α k_Fσ_xτ_z +ασ_yτ_zk_x
+ E_Z(y)σ_xτ_0 + Δ(y) σ_0τ_+ + Δ^*(y)σ_0τ_-,
where the momentum operator -i∂_y:= k̂_y∓ k_F. We take the x-axis to be parallel to the junction and the y-axis to lie perpendicular to the junction of width W, as shown in Figure <ref>. ℋ_± k_F(k_x) is easier to solve since it is linear in spatial derivatives. Particle-hole symmetry for the Andreev Hamiltonian is defined as
P ℋ_k_F(k_x) P^-1 = -ℋ_-k_F(-k_x),
where P = σ_yτ_y K.
§.§.§ Topological Phase Transitions
In the continuum, the ℤ_2 topological index can only change when there are gap-closings at k_x = 0 <cit.>. In this section, we solve ℋ(k_x = 0) for sub-gap energies ε < Δ_0, and thus obtain the gap-closing points as a function of the Josephson phase difference φ. For clarity, we outline solutions in the vicinity of the Fermi point (0 , k_F)^T, since solutions around (0 , -k_F)^T are readily obtained using the particle-hole operation defined in Equation <ref>. In order to obtain the sub-gap spectrum, we must solve ℋ_k_F(k_x = 0)ψ⃗(y) = εψ⃗(y) in the different regions of the system: y < 0 , 0 ≤ y≤ W and y > W and match wave-functions at y = 0 , W.
We begin by noticing the spin conservation law at k_x = 0
[ℋ_k_F(k_x = 0) , σ_xτ_0] = 0,
which allows us to label wave-functions with their σ_xτ_0 eigenvalues σ = ± 1. Up to a normalization constant and overall phases, we obtain the following piece-wise spinor for the sub-gap states
ψ⃗_σ(y) ∼ e^i σ mα y [ 1; σ ]⊗[ Δ_0; ε + iv_Fκ ] e^κ y y < 0
A_σ[ 1; σ ]⊗[ 1; 0 ]exp[i(ε - σ E_Z)y/v_F] + B_σ[ 1; σ ]⊗[ 0; 1 ]exp[-i(ε - σ E_Z)y/v_F] 0 ≤ y ≤ W
[ 1; σ ]⊗[ Δ_0e^iφ; ε - iv_Fκ ] e^-κ(y - W) y > W
where A_σ, B_σ are complex amplitudes to be determined by wave-function matching and κ = √(Δ_0^2 - ε^2)/v_F is an energy-dependent decay constant for the quasiparticle wavefunction in the superconducting region.
Matching wavefunctions at y = 0, W determines the relative phase between electrons and holes, A_σ/B_σ, which results in the following transcendental equation for the Andreev bound state energies
2W/v_F(ε_σ - σ E_Z) = φ + 2nπ + 2cos^-1( ε_σ/Δ_0).
In the short junction limit, v_F/W ≫Δ_0, we obtain the Andreev spectrum
ε_σ(φ) = Δ_0cos( φ/2 + σ E_Z W/v_F),
which provides an explicit derivation for the result reported in <cit.>. Gap closings occur at phase differences
φ^k_F_σ = π - 2σ E_Z W/v_F,
which correspond to topological phase transitions where the ℤ_2 index changes sign. Note we have now incorporated an extra label `k_F' in order to specify that this solution is obtained in the vicinity of the (0 , k_F)^T Fermi point. We also obtain a branch of solutions in the vicinity of (0 , -k_F)^T with gap-closings at phase differences
φ^-k_F_σ = π + 2σ E_Z W/v_F.
§.§.§ Majorana spinors
Due to particle-hole symmetry, there are a pair of zero-energy states at each gap-closing point since the spinors ψ⃗^k_F_σ(y) and Pψ⃗^k_F_σ(y) satisfy
ℋ_k_F(0)ψ⃗^k_F_σ(y) = 0
ℋ_-k_F(0)Pψ⃗^k_F_σ(y) = 0
We define the Majorana spinor basis in this degenerate manifold using the transformation
Γ⃗_1 σ(y) = 1/√(2)( ψ⃗^k_F_σ(y) + Pψ⃗^k_F_σ(y) )
Γ⃗_2 σ(y) = i/√(2)( ψ⃗^k_F_σ(y) - Pψ⃗^k_F_σ(y) )
Furthermore, Γ⃗_iσ(y) should be eigenvectors of the particle-hole operator. Therefore we define the following orthonormal basis kets
|χ_σ⟩ = 1/2exp(iπσ/4σ̂_z⊗τ̂_0)[ 1; σ; σ; -1 ]
|η_σ⟩ = 1/2exp(iπσ/4σ̂_z⊗τ̂_0)[ σ; -1; -1; -σ ]
which satisfy P|χ_σ⟩ = |χ_σ⟩ and P|η_σ⟩ = |η_σ⟩. In this basis, the Majorana spinors read
Γ⃗_1σ(y) ∼[cos(θ_α y) |χ_σ⟩ + sin(θ_α y) |η_σ⟩]e^Δ_0y / v_F y < 0
cos(θ_Zy)[cos(θ_α y) |χ_σ⟩ + sin(θ_α y) |η_σ⟩]
+ σsin(θ_Zy)[cos(θ_αy)|χ_-σ⟩ -sin(θ_αy)|η_-σ⟩] 0 ≤ y ≤ W
cos(θ_ZW)[cos(θ_α y) |χ_σ⟩ + sin(θ_α y) |η_σ⟩]e^-Δ_0(y - W) / v_F
+ σsin(θ_ZW)[cos(θ_αy)|χ_-σ⟩ -sin(θ_αy)|η_-σ⟩]e^-Δ_0(y - W) / v_F y > W
Γ⃗_2σ(y) ∼[-sin(θ_α y) |χ_σ⟩ + cos(θ_α y) |η_σ⟩]e^Δ_0y / v_F y < 0
cos(θ_Zy)[-sin(θ_α y) |χ_σ⟩ + cos(θ_α y) |η_σ⟩]
- σsin(θ_Zy)[sin(θ_αy)|χ_-σ⟩ +cos(θ_αy)|η_-σ⟩] 0 ≤ y ≤ W
cos(θ_ZW)[-sin(θ_α y) |χ_σ⟩ + cos(θ_α y) |η_σ⟩]e^-Δ_0(y - W) / v_F
- σsin(θ_ZW)[sin(θ_αy)|χ_-σ⟩ +cos(θ_αy)|η_-σ⟩]e^-Δ_0(y - W) / v_F y > W
where θ_α = σ m α and θ_Z= σ E_Z/v_F. The normalization constant for these spinors is N = ( v_F/Δ_0 + W)^-1/2. These are real superpositions of the basis kets |χ_σ⟩ and |η_σ⟩ so we see that they are themselves Majorana modes.
§.§.§ Degenerate perturbation theory
Therefore, the most general Majorana wavefunction is given by
Γ⃗(y) = ∑_σ = ±γ_1σΓ⃗_1σ(y) + γ_2σΓ⃗_2σ(y)
where the amplitudes γ_iσ∈ℝ. If we now switch on a small k_x component, these amplitudes will inherit a dependence on the x coordinate. Likewise, perturbing the Josephson phase away from φ_± may cause a coupling between the Majorana spinors Γ⃗_1/2,σ(y). We treat such effects using the perturbation Hamiltonian
δℋ = δℋ_k_x + δℋ_φ,
where δℋ_k_x = α k_xσ_yτ_z and δℋ_φ =Δ_0(φ - φ_±)σ_0τ_yϑ(y - W). From this, a low energy effective Hamiltonian for the amplitudes γ_iσ(x) may be obtained by projecting δℋ onto the basis Γ⃗_iσ(y) <cit.>:
H_eff = ∫ dx ∑_ij,σσ'γ_iσ(x)⟨Γ_iσ|δℋ|Γ_jσ'⟩γ_jσ'(x).
To this end, it is useful to write down the matrix elements of σ_yτ_z in the basis {|χ_±⟩ , |η_±⟩}:
⟨χ_σ|σ_yτ_z|χ_σ'⟩ = -δ_σσ'
⟨η_σ|σ_yτ_z|η_σ'⟩ = +δ_σσ'
⟨χ_σ|σ_yτ_z|η_σ'⟩ = 0
Projecting δℋ_k_x onto the basis Γ⃗_iσ(y) we get
[δℋ^ij_k_x] = ṽk_x[ -cos(θ_αW) sin(θ_αW); sin(θ_αW) cos(θ_αW) ],
where we have defined the effective group velocity
ṽ≈αΔ^2_0[ mv_Fαcos(mα W) + Δ_0sin(mα W)]/mα v_F((mα v_F)^2 + Δ^2_0).
In the above expression we have dropped terms 𝒪(Δ_0W/v_F). Projecting δℋ_φ onto the basis Γ⃗_iσ(y) we get
[δℋ^ij_φ] = Δ_0/2(φ - φ_σ)cos(2WE_Z/v_F)[ 0 -i; i 0 ].
Finally we perform the following SO(2) basis rotation on (γ_1σ , γ_2σ)^T
[ γ̃_1σ; γ̃_2σ ]→[ sin(θ_αW/2) cos(θ_αW/2); cos(θ_αW/2) -sin(θ_αW/2) ][ γ_1σ; γ_2σ ]
which gives us the effective Hamiltonian for each spin sector σ = ±
ℋ^σ_eff = -iṽ∂_xν̂_z + Δ_0/2(φ - φ_σ)cos(2WE_Z/v_F) ν̂_y.
The Pauli matrices ν̂_i act on the basis (γ̃_1σ , γ̃_2σ)^T. Equation <ref> gives us an effective description for Majorana modes in the junction for a constant superconducting phase difference φ across the junction near the boundary
of the topological phase transition (diamond in Figure <ref>(a)).
| One of the defining features of non-Abelian anyons is their ability to fuse into different particle types. For example, Majorana zero modes (MZMs), denoted as γ, can fuse to form either the vacuum state, denoted as 1, or a fermionic quasiparticle, denoted as ψ. This fusion process is mathematically captured by the fusion rule <cit.>
γ×γ = 1 + ψ.
Over the past decade, MZMs have gained significant interest since they have been theoretically predicted to emerge in topological superconductors (TSCs) <cit.>. These MZMs are typically localised at topological defects, such as the ends of a topological superconductor or within Abrikosov vortices in the bulk <cit.>. The fusion of two MZMs can result in either a fully paired state with an even number of particles, thus corresponding to the vacuum 1, or a state with an unpaired quasiparticle, ψ. Distinguishing between these two fusion outcomes will enable the demonstration of non-Abelian statistics in topological superconductors and thus open the way for realising topological qubits.
Numerous proposals for realising topological superconductors have concentrated on nanowires with strong spin-orbit coupling, which are proximitized to s-wave superconductors <cit.>. The literature also offers a wide range of possible methods for measuring the fusion channel of two Majorana zero modes in these systems. These approaches include coupling the nanowire to a quantum dot <cit.>, embedding the nanowire within a Josephson flux qubit <cit.>, and integrating the wire into an Aharonov-Bohm interferometer <cit.>. However, to date there have been very few experimental implementations of these protocols due to challenges in identifying the topological phase in these devices. Thus, there is an active interest in investigating alternative devices and searching for new signatures of the topological properties of Majorana zero modes that can conclusively demonstrate their non-Abelian character.
In this letter, we focus on a recent proposal to realise topological superconductivity within a planar Josephson junction <cit.>. This system consists of a two-dimensional electron gas (2DEG) with strong Rashba spin-orbit coupling, contacted by two s-wave superconductors and subjected to a magnetic field B⃗, as shown in Figure <ref>. This device exhibits extensive regions of topological superconductivity as the phase bias φ and the in-plane magnetic field B_x are varied <cit.>. Further theoretical studies demonstrated that an out-of-plane magnetic field can generate Josephson vortices which induce topological domain walls that exponentially localise Majorana zero modes <cit.>.
Our focus here is on the challenge of reading out the fusion channel of the Majoranas. In contrast to previous studies of Majorana fusion, such as Ref. <cit.>, which focus on a charge based signature, we investigate the potential to distinguish between the even and odd parity states of two topological domain walls by probing the magnetisation of the Josephson junction.
In particular, we investigate the spin properties and tunable coupling of Majorana zero modes in a planar Josephson junction with strong Rashba spin-orbit coupling. We show that the in-plane Zeeman field enables control over the separation and coupling of Majorana modes, influencing their energy splitting. By deriving an effective Hamiltonian, we describe their localization near topological domain walls and analyze how spin-dependent Andreev conductance can be used to experimentally detect the spin characteristics of the fusing Majorana zero modes. | null | null | null | null | null |
http://arxiv.org/abs/2409.17746v1 | 20240926112216 | Paraformer-v2: An improved non-autoregressive transformer for noise-robust speech recognition | [
"Keyu An",
"Zerui Li",
"Zhifu Gao",
"Shiliang Zhang"
] | eess.AS | [
"eess.AS",
"cs.SD"
] |
Paraformer-v2
Keyu An et al.
Speech Lab, Alibaba Group, China
ankeyu.aky,lzr265946,zhifu.gzf,[email protected]
Paraformer-v2: An improved non-autoregressive transformer for noise-robust speech recognition
Keyu An, Zerui Li, Zhifu Gao, Shiliang Zhang
September 28, 2024
=============================================================================================
§ ABSTRACT
Attention-based encoder-decoder, e.g. transformer and its variants, generates the output sequence in an autoregressive (AR) manner. Despite its superior performance, AR model is computationally inefficient as its generation requires as many iterations as the output length. In this paper, we propose Paraformer-v2, an improved version of Paraformer, for fast, accurate, and noise-robust non-autoregressive speech recognition. In Paraformer-v2, we use a CTC module to extract the token embeddings, as the alternative to the continuous integrate-and-fire module in Paraformer. Extensive experiments demonstrate that Paraformer-v2 outperforms Paraformer on multiple datasets, especially on the English datasets (over 14% improvement on WER), and is more robust in noisy environments.
§ INTRODUCTION
The attention-based encoder-decoder (AED) model has significantly advanced the field of Automatic Speech Recognition (ASR). However, a key limitation of the autoregressive (AR) decoder is the inherent inefficiency; each output token depends on those generated before it, leading to a decoding time directly proportional to the length of the output sequence. This linear scalability can result in slow processing, particularly troublesome for lengthy outputs.
In response, non-autoregressive transformers (NATs) have emerged as a potential solution to this bottleneck. NAT models, as exemplified by works like Mask-CTC <cit.>, CASS-NAT <cit.>, Spike-triggered NAT <cit.>, and notably Paraformer <cit.>, revolutionize the process by concurrently generating all tokens, thereby sidestepping the sequential dependency and significantly accelerating inference.
Paraformer, in particular, stands out among NAT-based ASR models due to its success in Mandarin speech recognition. Paraformer uses a Continuous Integrate-and-Fire (CIF) based predictor to estimate token embeddings in parallel, functioning as the query input to a non-autoregressive decoder. Nevertheless, the CIF predictor confronts two principal challenges:
Multilingual Limitations: The CIF predictor's efficacy may wane when applied to languages such as English. Unlike Mandarin, English often employs byte-pair encoding (BPE) to tokenize text, resulting in a higher variability in token counts that the CIF struggles to accurately estimate.
Noise Sensitivity: The CIF mechanism exhibits reduced robustness against ambient noise, which is particularly problematic in settings prone to acoustic disturbances, like conference or meeting environments. This sensitivity can lead to a noticeable decline in recognition accuracy under suboptimal acoustic conditions.
In this paper, we introduce Paraformer-v2, a novel architecture designed to overcome the limitations observed in the original Paraformer. Central to our approach is the integration of a Connectionist Temporal Classification (CTC) module for extracting token embeddings, which performs better than CIF in language adaptability and noise-robustness. During training, the token embeddings are extracted based on the CTC alignment. During inference, we use the non-blank CTC prediction as the token embeddings to the non-autoregressive transformer decoder. Comprehensive experimental evaluations have validated the superiority of Paraformer-v2, showcasing its capability to achieve state-of-the-art performance among NAT models across multiple benchmark datasets. Specifically, our model sets new benchmarks on AISHELL-1 (a Mandarin Chinese dataset), LibriSpeech (an English corpus), and an in-house English dataset comprising 50,000 hours of speech. Notably, Paraformer-v2 rivals the performance of strong autoregressive models such as conformer AED and conformer transducers, a testament to its effectiveness. Moreover, Paraformer-v2 has demonstrated substantial improvements in noisy environments compared to Paraformer.
§ METHODS
We first introduce Paraformer, and then present our modifications in Paraformer-v2.
§.§ Paraformer
Given the speech feature X_1:T, the encoder produces a sequence of hidden representations:
H_1:T = Encoder( X_1:T),
and the predicted token embedding is obtained by a CIF module
E_1:U' = CIF( H_1:T).
Specifically, CIF predicts the weights α_1:T using
α_1:T = Sigmoid( Linear( Conv( H_1:T))).
Then, it forwardly accumulates the weights and integrates the encoder outputs until the accumulated weight reaches a given threshold β_ CIF. It then instantly fires the integrated acoustic information for token prediction and updates the accumulated weights. The reader is recommended to refer to CIF paper <cit.> for more details.
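For illustration, a minimal sketch of this integrate-and-fire step is given below; it is a simplified NumPy rendering under our own assumptions (no batching, simple tail handling) rather than the implementation used in Paraformer.

```python
import numpy as np

def cif_fire(alphas, hidden, beta=1.0):
    """Accumulate frame weights alpha_t; each time the running sum crosses the
    threshold beta, fire the integrated acoustic embedding and start a new one."""
    fired, acc_w = [], 0.0
    acc_h = np.zeros(hidden.shape[1])
    for a, h in zip(alphas, hidden):
        if acc_w + a < beta:
            acc_w += a
            acc_h = acc_h + a * h
        else:
            used = beta - acc_w              # portion of this frame completing the token
            fired.append(acc_h + used * h)   # fire the integrated embedding
            acc_w = a - used                 # leftover weight starts the next token
            acc_h = (a - used) * h
    return np.stack(fired) if fired else np.empty((0, hidden.shape[1]))
```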
Given the encoder hidden representations H_1:T and the token embedding E_1:U', the final prediction is obtained by
D'_1:U' = Decoder( E_1:U'; H_1:T; H_1:T).
Here Decoder is a bi-directional (non-causal) transformer decoder.
Note that in the training stage, the weights α_1:T are scaled by U/∑_t=1^Tα_t so that the predicted length U' of the predicted sequence D' is equal to the length U of the target sequence Y, and the model can be optimized using the cross-entropy (CE) loss. A quantity loss term | ∑_t=1^Tα_t - U | is added to the total loss to encourage the model to predict a length closer to the target. Thus, the total training loss is defined as
ℒ = CrossEntropyLoss( D'_1:U', Y_1:U) + | ∑_t=1^Tα_t - U |
While the CIF-based Paraformer achieved remarkable success in Mandarin speech recognition tasks, its generalizability to other languages and robustness against noisy environments were identified as key limitations. Mandarin, being a tonal language with structured syllables comprised mainly of an initial consonant and a final vowel, presents a more predictable pattern for CIF to estimate the number of tokens in a given speech segment. Conversely, languages like English that typically employ subword tokenization methods like Byte Pair Encoding (BPE) often result in tokens that don't neatly align with phonetic or acoustic boundaries. Consequently, the CIF's performance in estimating the correct number of tokens for these languages is compromised. Another major drawback lies in the CIF's vulnerability to environmental noise. The CIF calculates weights α_1:T independently of the actual token predictions. Noise in the input speech can produce high CIF weights, causing the model to interpret noise as meaningful tokens.
To address the drawbacks mentioned above, we propose Paraformer-v2, as detailed in the following section.
§.§ Paraformer-v2
Different from Paraformer, we utilize a CTC module to obtain the token embedding, which proved to have better multilingual adaptability and to be more noise-robust. Specifically, we obtain the frame-wise CTC posterior P^ CTC_1:T and CTC greedy decoding results Y^ CTC_1:T by
P^ CTC_1:T = Softmax( Linear( H_1:T))
Y^ CTC_1:T = argmax(P^ CTC_1:T)
Then we obtain the compressed CTC posterior P^ CTC-comp_1:U' by averaging frames with the same predictions and removing blank frames in P^ CTC_1:T according to Y^ CTC_1:T.
P^ CTC-comp_1:U' = RemoveBlanks( AverageRepeats(P^ CTC_1:T, Y^ CTC_1:T))
For example, given the CTC posterior P^ CTC_1:5 = [p_1, p_2, p_3, p_4, p_5] and the greedy decoding results Y^ CTC_1:5 = [a, <blank>, b, b, c], the compressed CTC posterior will be P^ CTC-comp_1:3 = [p_1, average([p_3, p_4]), p_5].
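A minimal sketch of this compression step is given below; the blank index, array shapes, and function names are illustrative assumptions rather than the actual implementation.

```python
import numpy as np

def compress_ctc_posterior(post, labels, blank_id=0):
    """post: (T, V+1) frame-wise CTC posteriors; labels: length-T frame labels
    (greedy predictions at inference, Viterbi alignment at training).
    Averages runs of identical labels and drops blank frames."""
    segments, start = [], 0
    for t in range(1, len(labels) + 1):
        if t == len(labels) or labels[t] != labels[start]:
            if labels[start] != blank_id:                      # RemoveBlanks
                segments.append(post[start:t].mean(axis=0))    # AverageRepeats
            start = t
    return np.stack(segments) if segments else np.empty((0, post.shape[1]))
```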
The token embedding is defined as
E_1:U' = Embed(P^ CTC-comp_1:U' ).
Here Embed∈ℝ^d_ Dec× (VocabSize + 1) is a linear layer that transforms the compressed CTC posterior to the decoder input. d_ Dec is the dimension of the decoder, and VocabSize + 1 stands for the size of token vocabulary plus one <blank> symbol.
Similar to Paraformer, the final prediction of Paraformer-v2 is obtained by
D'_1:U' = Decoder( E_1:U'; H_1:T; H_1:T).
Regarding the training process of Paraformer-v2, a non-trivial issue is the length mismatch between the predicted token length U' and the length of the ground truth U, which prohibits the calculation of the cross-entropy (CE) loss. Some previous studies work around this problem by scheduling the training process to bypass the decoder optimization <cit.>, or by using the ground truth as the decoder input <cit.> when the predicted token length is not equal to the ground truth. In contrast, we use the Viterbi algorithm to find the most probable CTC alignment 𝒜_1:T and then extract the compressed CTC posterior aligned with the target sequence based on 𝒜_1:T:
P^ CTC-comp_1:U' = RemoveBlanks( AverageRepeats(P^ CTC_1:T, 𝒜_1:T))
For example, given the CTC posterior P^ CTC_1:5 = [p_1, p_2, p_3, p_4, p_5], the target sequence Y_1:2 = [a, b], and the corresponding posterior-token alignment 𝒜_1:5 = [<blank>, a, <blank>, b, b] generated using the Viterbi algorithm, the compressed CTC posterior will be P^ CTC-comp_1:2 = [p_2, average([p_4, p_5])], which has the same length as the target sequence. The total training objective is defined as follows:
ℒ = CrossEntropyLoss( D'_1:U', Y_1:U) + CTCLoss(P^ CTC_1:T, Y_1:U)
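The compression sketch above can be reused at training time, with the Viterbi alignment taking the place of the greedy prediction. A toy check reproducing the worked example (token ids and posterior values are arbitrary):

```python
import numpy as np

BLANK, A, B = 0, 1, 2
post = np.random.rand(5, 4)                    # toy posteriors p_1 ... p_5
alignment = np.array([BLANK, A, BLANK, B, B])  # Viterbi alignment A_{1:5} for target [a, b]

emb = compress_ctc_posterior(post, alignment, blank_id=BLANK)
assert emb.shape == (2, 4)                           # one row per target token (U = 2)
assert np.allclose(emb[0], post[1])                  # p_2
assert np.allclose(emb[1], post[3:5].mean(axis=0))   # average([p_4, p_5])
```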
§.§ Experiments
We conduct experiments on the openly available 170-hour Mandarin AISHELL-1, 960-hour English LibriSpeech, and an in-house 50000-hour English task. For all datasets, we use filterbanks as input. The input features are extracted on a window of 25ms with a 10ms shift, and then subsampled by a factor of 4 using a convolutional input layer (on AISHELL-1 and LibriSpeech), or by a factor of 6 by stacking consecutive frames (on our in-house dataset). On AISHELL-1 and LibriSpeech, we adopt a conformer encoder <cit.> and a bidirectional transformer decoder. The configurations of the encoder and decoder are shown in Table. <ref>. On the in-house 50000-hour task, we adopt conformer or SAN-M encoder <cit.>, and a bidirectional transformer decoder.
To gauge the models' resilience to real-world noise interference, we compile a collection of 314 authentic noise samples. An ideal noise-robust model would recognize noise inputs and produce no output (null output). We quantify the noise robustness of different models by reporting the ratio of instances where the model correctly outputs nothing among the total 314 noise samples, thereby highlighting the model's capability to discriminate between speech and non-speech signals.
The results on AISHELL-1 and LibriSpeech are shown in Table <ref> and Table <ref>, respectively. It can be seen that Paraformer-v2 outperforms all NAR models in terms of character error rate (CER) and word error rate (WER). Notably, on AISHELL-1, Paraformer-v2 significantly outperforms the AR conformer and conformer transducer (with beam search). On LibriSpeech, Paraformer-v2 is comparable with the AR conformer AED and conformer transducer of similar model sizes.
The results of the in-house 50000-hour dataset are shown in Table. <ref>. SAN-M Paraformer-v2 is significantly better than the NAR SAN-M Paraformer (over 14% improvement on WER), and comparable with the AR conformer AED and conformer transducer. Conformer Paraformer-v2 performs slightly better than SAN-M Paraformer-v2.
We use real time factor (RTF, calculated by dividing the time taken by the ASR transcription by the audio duration, benchmarked on Aishell-1 test set) to measure the inference speed, which is evaluated on a Tesla V100 GPU using batch size 1. The results are shown in Table. <ref>. It can be seen that Paraformer-v2 performs comparably with Paraformer in speed, and is more than 20 times faster than the AR conformer AED.
We compare the noise robustness of Paraformer and Paraformer-v2 in Table. <ref>. It can be seen that Paraformer-v2 produces over 20% less undesired output given the noise input, presumably because CTC is able to resist the noise by considering the semantic information, while CIF is more noise-sensitive as the prediction of CIF weights is semantic-independent.
§ CONCLUSION
In this paper we propose Paraformer-v2, an advancement on the original Paraformer for fast, accurate, and noise-robust non-autoregressive speech recognition. Compared to Paraformer, Paraformer-v2 shows significantly better performance on recognition accuracy, especially on English benchmarks, and better performance on noise-robustness. Compared with the AR model such as conformer AED, Paraformer-v2 performs comparably in accuracy with over 20 times speedup.
99
self-attention A. Vaswani, N. M. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser and I. Polosukhin, Attention is all you need, in Proc. Neurips, Long Beach, CA, USA, 2017, pp. 5998-6008.
mask-ctc Y. Higuchi, S. Watanabe, N. Chen, T. Ogawa and T. Kobayashi, Mask CTC: non-autoregressive end-to-end ASR with CTC and mask predict, in Proc. INTERSPEECH, Shanghai, China, 2020, pp. 3655-3659.
cass-nat R. Fan, W. Chu, P. Chang and J. Xiao, CASS-NAT: CTC Alignment-Based single step non-autoregressive transformer for speech recognition, in Proc. ICASSP, Toronto, ON, Canada, 2021, pp. 5889-5893.
spike Z. Tian, J. Yi, J. Tao, Y. Bai, S. Zhang and Z. Wen, Spike-triggered non-autoregressive transformer for end-to-end speech recognition. in Proc. INTERSPEECH, Shanghai, China, 2020, pp. 5026-5030.
paraformer Z. Gao, S. Zhang, I. McLoughlin and Z. Yan, Paraformer: fast and accurate parallel transformer for non-autoregressive end-to-end speech recognition, in Proc. INTERSPEECH, Incheon, Korea, 2022, pp.2063–2067.
cif L. Dong and B. Xu, CIF: Continuous integrate-and-fire for end-to-end speech recognition, in Proc. ICASSP, Barcelona, Spain, 2020, pp.6079-6083.
ctc-nat X. Song, Z. Wu, Y. Huang, C. Weng, D. Su, H. Meng, Non-autoregressive transformer ASR with CTC-enhanced decoder input, in Proc. ICASSP, Toronto, Ontario, Canada, 2021, pp. 5894-5898.
conformer A. Gulati, C. C. Chiu, J. Qin, J. Yu, N. Parmar, R. Pang, S. Wang, W. Han, Y. Wu, Y. Zhang and Z. Zhang, Conformer: Convolution-augmented transformer for speech recognition, in Proc. INTERSPEECH, Shanghai, China, 2020, pp. 5036–5040.
bat K. An and X. Shi and S. Zhang, BAT: Boundary aware transducer for memory-efficient and low-latency ASR, in Proc. INTERSPEECH, Dublin, Ireland, 2023, pp.4963-4967.
sanm Z. Gao, S. Zhang, M. Lei and I. McLoughlin, SAN-M: memory equipped self-attention for end-to-end speech recognition, in Proc. INTERSPEECH, Shanghai, China, 2020, pp. 6-10.
espnet_rnnt F. Boyer, Y. Shinohara, T. Ishii, H. Inaguma, S. Watanabe, A study of transducer based end-to-end ASR with ESPnet: architecture, auxiliary loss and decoding strategies, in Proc. ASRU, Cartagena, Colombia, 2021, pp. 16-23.
improve-cass-nat R. Fan, W. Chu, P. Chang, J. Xiao and A. Alwan, An improved single step non-autoregressive transformer for automatic speech recognition. in Proc. INTERSPEECH, Incheon, Korea, 2022.
scctc J. Nozaki and T. Komatsu, Relaxing the conditional independence assumption of CTC-Based ASR by conditioning on intermediate predictions, in Proc. INTERSPEECH, Brno, Czechia, 2021, pp. 3735-3739.
align-refine E. A. Chi, J. Salazar and K. Kirchhoff, Align-Refine: non-autoregressive speech recognition via iterative realignment, in Proc. NAACL, virtual event, 2021, pp. 1920–1927.
imputer W. Chan, C. Saharia, G. Hinton, M. Norouzi and N. Jaitly, Imputer: Sequence modelling via imputation and dynamic programming, in Proc. ICML, virtual event, 2020, pp. 1403–1413.
A-FMLM N. Chen, et al., Non-autoregressive transformer for speech recognition, in Signal Processing Letters, vol. 28, pp. 121-125, 2021
| The attention-based encoder-decoder (AED) model has significantly advanced the field of Automatic Speech Recognition (ASR). However, a key limitation of the autoregressive (AR) decoder is the inherent inefficiency; each output token depends on those generated before it, leading to a decoding time directly proportional to the length of the output sequence. This linear scalability can result in slow processing, particularly troublesome for lengthy outputs.
In response, non-autoregressive transformers (NATs) have emerged as a potential solution to this bottleneck. NAT models, as exemplified by works like Mask-CTC <cit.>, CASS-NAT <cit.>, Spike-triggered NAT <cit.>, and notably Paraformer <cit.>, revolutionize the process by concurrently generating all tokens, thereby sidestepping the sequential dependency and significantly accelerating inference.
Paraformer, in particular, stands out among NAT-based ASR models due to its success in Mandarin speech recognition. Paraformer uses a Continuous Integrate-and-Fire (CIF) based predictor to estimate token embeddings in parallel, functioning as the query input to a non-autoregressive decoder. Nevertheless, the CIF predictor confronts two principal challenges:
Multilingual Limitations: The CIF predictor's efficacy may wane when applied to languages such as English. Unlike Mandarin, English often employs byte-pair encoding (BPE) to tokenize text, resulting in a higher variability in token counts that the CIF struggles to accurately estimate.
Noise Sensitivity: The CIF mechanism exhibits reduced robustness against ambient noise, which is particularly problematic in settings prone to acoustic disturbances, like conference or meeting environments. This sensitivity can lead to a noticeable decline in recognition accuracy under suboptimal acoustic conditions.
In this paper, we introduce Paraformer-v2, a novel architecture designed to overcome the limitations observed in the original Paraformer. Central to our approach is the integration of a Connectionist Temporal Classification (CTC) module for extracting token embeddings, which performs better than CIF in language adaptability and noise-robustness. During training, the token embeddings are extracted based on the CTC alignment. During inference, we use the non-blank CTC prediction as the token embeddings to the non-autoregressive transformer decoder. Comprehensive experimental evaluations have validated the superiority of Paraformer-v2, showcasing its capability to achieve state-of-the-art performance among NAT models across multiple benchmark datasets. Specifically, our model sets new benchmarks on AISHELL-1 (a Mandarin Chinese dataset), LibriSpeech (an English corpus), and an in-house English dataset comprising 50,000 hours of speech. Notably, Paraformer-v2 rivals the performance of strong autoregressive models such as conformer AED and conformer transducers, a testament to its effectiveness. Moreover, Paraformer-v2 has demonstrated substantial improvements in noisy environments compared to Paraformer. | null | null | null | null | In this paper we propose Paraformer-v2, an advancement on the original Paraformer for fast, accurate, and noise-robust non-autoregressive speech recognition. Compared to Paraformer, Paraformer-v2 shows significantly better performance on recognition accuracy, especially on English benchmarks, and better performance on noise-robustness. Compared with the AR model such as conformer AED, Paraformer-v2 performs comparably in accuracy with over 20 times speedup.
99
self-attention A. Vaswani, N. M. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser and I. Polosukhin, Attention is all you need, in Proc. Neurips, Long Beach, CA, USA, 2017, pp. 5998-6008.
mask-ctc Y. Higuchi, S. Watanabe, N. Chen, T. Ogawa and T. Kobayashi, Mask CTC: non-autoregressive end-to-end ASR with CTC and mask predict, in Proc. INTERSPEECH, Shanghai, China, 2020, pp. 3655-3659.
cass-nat R. Fan, W. Chu, P. Chang and J. Xiao, CASS-NAT: CTC Alignment-Based single step non-autoregressive transformer for speech recognition, in Proc. ICASSP, Toronto, ON, Canada, 2021, pp. 5889-5893.
spike Z. Tian, J. Yi, J. Tao, Y. Bai, S. Zhang and Z. Wen, Spike-triggered non-autoregressive transformer for end-to-end speech recognition. in Proc. INTERSPEECH, Shanghai, China, 2020, pp. 5026-5030.
paraformer Z. Gao, S. Zhang, I. McLoughlin and Z. Yan, Paraformer: fast and accurate parallel transformer for non-autoregressive end-to-end speech recognition, in Proc. INTERSPEECH, Incheon, Korea, 2022, pp.2063–2067.
cif L. Dong and B. Xu, CIF: Continuous integrate-and-fire for end-to-end speech recognition, in Proc. ICASSP, Barcelona, Spain, 2020, pp.6079-6083.
ctc-nat X. Song, Z. Wu, Y. Huang, C. Weng, D. Su, H. Meng, Non-autoregressive transformer ASR with CTC-enhanced decoder input, in Proc. ICASSP, Toronto, Ontario, Canada, 2021, pp. 5894-5898.
conformer A. Gulati, C. C. Chiu, J. Qin, J. Yu, N. Parmar, R. Pang, S. Wang, W. Han, Y. Wu, Y. Zhang and Z. Zhang, Conformer: Convolution-augmented transformer for speech recognition, in Proc. INTERSPEECH, Shanghai, China, 2020, pp. 5036–5040.
bat K. An and X. Shi and S. Zhang, BAT: Boundary aware transducer for memory-efficient and low-latency ASR, in Proc. INTERSPEECH, Dublin, Ireland, 2023, pp.4963-4967.
sanm Z. Gao, S. Zhang, M. Lei and I. McLoughlin, SAN-M: memory equipped self-attention for end-to-end speech recognition, in Proc. INTERSPEECH, Shanghai, China, 2020, pp. 6-10.
espnet_rnnt F. Boyer, Y. Shinohara, T. Ishii, H. Inaguma, S. Watanabe, A study of transducer based end-to-end ASR with ESPnet: architecture, auxiliary loss and decoding strategies, in Proc. ASRU, Cartagena, Colombia, 2021, pp. 16-23.
improve-cass-nat R. Fan, W. Chu, P. Chang, J. Xiao and A. Alwan, An improved single step non-autoregressive transformer for automatic speech recognition. in Proc. INTERSPEECH, Incheon, Korea, 2022.
scctc J. Nozaki and T. Komatsu, Relaxing the conditional independence assumption of CTC-Based ASR by conditioning on intermediate predictions, in Proc. INTERSPEECH, Brno, Czechia, 2021, pp. 3735-3739.
align-refine E. A. Chi, J. Salazar and K. Kirchhoff, Align-Refine: non-autoregressive speech recognition via iterative realignment, in Proc. NAACL, virtual event, 2021, pp. 1920–1927.
imputer W. Chan, C. Saharia, G. Hinton, M. Norouzi and N. Jaitly, Imputer: Sequence modelling via imputation and dynamic programming, in Proc. ICML, virtual event, 2020, pp. 1403–1413.
A-FMLM N. Chen, et al., Non-autoregressive transformer for speech recognition, in Signal Processing Letters, vol. 28, pp. 121-125, 2021 |
http://arxiv.org/abs/2409.17321v1 | 20240925200027 | Machine learning analysis of structural data to predict electronic properties in near-surface InAs quantum wells | [
"Patrick J. Strohbeen",
"Abtin Abbaspour",
"Amara Keita",
"Tarek Nabih",
"Aliona Lejuste",
"Alisa Danilenko",
"Ido Levy",
"Jacob Issokson",
"Tyler Cowan",
"William M. Strickland",
"Mehdi Hatefipour",
"Ashley Argueta",
"Lukas Baker",
"Melissa Mikalsen",
"Javad Shabani"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall",
"cond-mat.mtrl-sci"
] |
These authors contributed equally to this work
These authors contributed equally to this work
These authors contributed equally to this work
These authors contributed equally to this work
Now at NORDITA, Stockholm University
Center for Quantum Information Physics, Department of Physics, New York University, New York, NY 10003 USA
§ ABSTRACT
Semiconductor crosshatch patterns in thin film heterostructures form as a result of strain relaxation processes and dislocation pile-ups during growth of lattice mismatched materials. Due to their connection with the internal misfit dislocation network, these crosshatch patterns are a complex fingerprint of internal strain relaxation and growth anisotropy. Therefore, this mesoscopic fingerprint not only describes the residual strain state of a near-surface quantum well, but also could provide an indicator of the quality of electron transport through the material. Here, we present a method utilizing computer vision and machine learning to analyze AFM crosshatch patterns that exhibits this correlation. Our analysis reveals optimized electron transport for moderate values of λ (crosshatch wavelength) and ϵ (crosshatch height), roughly 1 μm and 4 nm, respectively, that define the average waveform of the pattern. Simulated 2D AFM crosshatch patterns are used to train a machine learning model to correlate the crosshatch patterns to dislocation density. Furthermore, this model is used to evaluate the experimental AFM images and predict a dislocation density based on the crosshatch waveform. Predicted dislocation density, experimental AFM crosshatch data, and experimental transport characterization are used to train a final model to predict 2D electron gas mean free path. This model shows electron scattering is strongly correlated with elastic effects (e.g. dislocation scattering) below 200 nm λ_MFP.
Machine learning analysis of structural data to predict electronic properties in near-surface InAs quantum wells
Javad Shabani
September 28, 2024
================================================================================================================
One of the most common features of strained-layer epitaxy is the surface crosshatch pattern that forms as the crystal undergoes misfit dislocation formation, glide, and pile-up <cit.>. However, as ubiquitous as these crosshatch patterns are, there are few studies attempting to bridge the gap between the observed surface morphologies and material parameters of interest, such as strain state or carrier scattering lifetimes <cit.>. In principle, internal elastic strain and the surface crosshatch pattern that forms as a result are intimately related via the elasticity tensor of the material system. Proper definition of this interdependence will enable researchers to more rapidly characterize and optimize complex heterostructure growth. For example, an atomic force microscopy (AFM) image of a surface takes over an order of magnitude less time to complete when compared to cryogenic 4-point transport measurements. Previous models from the late 1990s and early 2000s attempted to solve this problem considering either a single uniform strain value <cit.> or a single line of dislocations <cit.>. However, expansion to two-dimensional surfaces and the use of these models to correlate surface corrugation to the quality of electronic transport has yet to be unveiled.
InAs quantum well heterostructures have been extensively studied within the last decade for topological superconductivity <cit.> and gate-tunable superconducting circuits for quantum information <cit.>. These studies utilize material stacks in which the quantum well region is placed close to the surface of the wafer. In this case, the active region of the wafer is well-separated from the initial mismatched interface by the growth of substantial metamorphic buffer layers <cit.>. Compared to buried quantum well structures, which are well known to have carrier scattering dominated by background and ionized impurities <cit.>, modulation in the surface structure and surface dangling bonds has a much stronger impact on near-surface quantum well systems. As a result, understanding the surface strain state and strain fluctuations becomes of greater importance. Therefore, in these heterostructures it is of great significance to develop models with which we can quickly understand and predict how these materials will perform in cryogenic applications without the need for long fabrication, cooldown, and measurement cycles.
Here, we present a method of AFM image processing for MBE-grown near-surface InAs quantum well structures in which we relate the surface crosshatch pattern to quantum well transport quality. The surface crosshatch pattern is a direct result of misfit dislocation formation during the growth process and is a sign of strain relaxation <cit.>. Anisotropy in this pattern indicates anisotropic adatom diffusion along <110>-type directions in the (001) plane, which corresponds to anisotropic transport as well. We analyze the room temperature AFM images and extract a parametrization of the physical waveforms on the surface that define the crosshatch and corroborate this with the low temperature electron mean free path. By using the measured electron mean free path of the quantum well, we examine the relationship between the surface crosshatch and resulting transport properties in order to draw conclusions regarding quantum well transport behavior from surface morphology. Simulated 2D crosshatch patterns are used to correlate surface crosshatch patterns to an internal dislocation density for the experimental AFM images. The predicted dislocation densities are then used to model electron scattering mean free path values that are compared against the experimentally measured values. This study serves as a proof of concept for the applicability of computer analysis in the context of AFM image processing in which expanding to the utilization of machine learning is highly desirable.
InAs near-surface quantum wells are grown in a Varian Gen II molecular beam epitaxy (MBE) system on 50.8 mm InP (001) wafers. The InAs quantum well structures are grown on a strain-relaxed graded buffer layer with top-layer composition of In_0.81Al_0.19As. The quantum well is grown to be 4 nm thick and is separated from the wafer surface by a 10 nm thick In_0.81Ga_0.19As barrier layer. More details and specifics about this structure growth can be found in reference <cit.>. After growth, AFM imaging is done with a NanoMagnetics Instruments AFM operating in a dynamical tapping mode. Preliminary AFM image normalization and processing is done using the Gwyddion software package <cit.>. Afterwards, quantum Hall transport is measured in an Oxford TelsatronPT 1.5 K He4 cryostat. The 2-in wafers are diced into roughly 5×5 mm pieces which then are etched in Transene Type-D (Transene Company) aluminum etchant to remove the top Al layer. Samples are then cleaned in solvents, rinsed in de-ionized water, and then blow-dried with dry nitrogen. Measurements are done in a Van der Pauw wiring configuration.
We first present our one-dimensional analysis of the crosshatch pattern. Assuming cubic symmetry of the surface, we would not expect drastically different behavior between the [110] and [110] directions (parallel to the crosshatch peaks/valleys) within the (001) plane. Fig. <ref> presents the InAs quantum well heterostructure studied here, an exemplary AFM input image, a schematic of our model definitions, and an example of a 1D analysis relating the electron mobilities as a function of the crosshatch wavelength and amplitude. The structure of the near-surface InAs quantum well is presented in Fig. <ref>a, with an examplary input AFM image seen in Fig. <ref>b. For our AFM crosshatch analysis we define the “waveform” of the crosshatch pattern in 1D. We define the wavelength, λ, as the distance between peaks in the crosshatch and the amplitude, ϵ, is the height above the midpoint of the crosshatch. A schematic of this definition is presented in Fig. <ref>c, which is an adaptation from the definition presented in Fitzgerald et al. in 1997 <cit.>. From the analysis we present in Figs. <ref> and <ref>, we plot the 1D crosshatch definition presented here against quantum well 2D electron gas (2DEG) mobility for a series of 36 samples, seen in Fig. <ref>d-f. The μ_MAX and μ_LOW plots presented in Figs. <ref>e and f, respectively, are the mobilities calculated for the R_xx and R_yy magnetoresistance traces. The average mobility in plot Fig. <ref>d is the average for each sample as calculated from the high and low values seen in Fig. <ref>e,f.
Simple intuition would lead one to believe that the optimal regime in terms of physical structure would be in the limit of ϵ→ 0 and λ→∞. However, we note a general trend in our analysis suggesting an optimal point in the AFM crosshatch pattern in the region of moderate values of λ and ϵ. Thus, to more completely evaluate the AFM image in terms of electronic transport behavior, we extend our analysis software to incorporate the second dimension of the crosshatch pattern and fit our parametrization of the crosshatch to a linear regression model.
The AFM analysis is initialized by inputting the raw 20x20 μm images and removing any residual borders from the normalization routines. The initial pixel dimension is then converted into physical length units (nm) based on the AFM scan size. After this step, AFM images are trimmed to standardized pixel densities. The AFM images are then rotated to align the surface crosshatch pattern such that the peaks and valleys are vertical (horizontal). To preserve as much as possible of the original data-set upon rotation, image resolution is increased, however, some data must be cut-off. For example, with a 512x512 pixel image we calculate a 4% maximum possible loss depending on the degree of rotation.
After rotation, our AFM analysis software produces two images corresponding to the vertical (Y) and horizontal (X) channels and then applies a de-noising routine to enhance the signal-to-noise ratio (SNR). The de-noising procedure averages the pixel values in each column across the entire image, preserving the vertical crosshatch and condensing the original 2D image into an effectively 1D line. This line is then re-expanded to the original pixel size for the purpose of comparison to the original AFM image. This produces a smoothed version of the vertical crosshatch “lines”, as seen in Fig. <ref>a. This procedure is also done for the horizontal channel. We then apply a threshold such that height values are cut-off and replaced with empty pixels, examples of this for both the vertical and horizontal channels are shown in Fig. <ref>b and c, respectively. The thresholding algorithm takes as an input the de-noised waveform that was created in the previous step and calculates the absolute midpoint between the maximum and minimum heights. Then, the local maxima (minima) are calculated for each peak (valley) above (below) the global midpoint. The threshold is applied to the image such that only information below the threshold value is kept. To check this result, we also present an overlay of the recreated crosshatch pattern on the original AFM image it was generated from in Fig. <ref>d. In this image, we see good agreement between the “valleys” present in the crosshatch pattern and the red (vertical) and blue (horizontal) channels that our algorithm picks out.
Once the AFM images are processed and we obtain the thresholded channel plots as shown in Figs. <ref>b and c, we then extract the widths of each line and gap between lines. These values are plotted in separate histogram plots for the vertical and horizontal channels, examples of which are shown in Figs. <ref>a and b. The histogram shows extracted values for the AFM image presented in Fig. <ref>b and corresponds to the de-convoluted crosshatch graphs presented in Fig. <ref>b and c. We then fit this histogram to a Gaussian distribution to extract a center of mass (CoM) value and the full-width at half maximum (FWHM). In the context of a real material evaluation, the CoM is related to the crosshatch density (residual strain state), whereas the FWHM value is then related to the fluctuations in this strain state.
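A minimal sketch of such a fit is shown below; SciPy, the bin count, and the initial guesses are illustrative assumptions and do not reflect the exact settings of our analysis software.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def fit_width_histogram(widths_nm, bins=50):
    """Fit a Gaussian to the distribution of crosshatch line/gap widths and
    return the centre of mass (CoM) and full width at half maximum (FWHM)."""
    counts, edges = np.histogram(widths_nm, bins=bins)
    centres = 0.5 * (edges[:-1] + edges[1:])
    p0 = [counts.max(), centres[np.argmax(counts)], np.std(widths_nm)]
    (amp, mu, sigma), _ = curve_fit(gaussian, centres, counts, p0=p0)
    return mu, 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(sigma)   # CoM, FWHM
```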
To extract the height of the crosshatch, ϵ, we use the height information from the AFM images to map the color values of each pixel back to the physical value. For each line, the peak-to-valley heights for the individual X- and Y-crosshatch patterns are averaged together to get ϵ_x and ϵ_y, respectively. We define a mean value ϵ_x,y across all 512 lines in the AFM image, from which we calculate the standard deviation, σ^ϵ_x,y, for each AFM image. Thus, the resultant dataset for each AFM image contains λ, ϵ, σ^λ, and σ^ϵ for the X- and Y-channels for each sample analyzed in this method.
We then fit the variables from the compiled dataset across all 36 samples to a multiple linear regression model using the built-in linear model routine of the base R statistical software (ver 4.3.3) <cit.>. The results of two of these model fits are presented in Fig. <ref>. Figs. <ref>a,c are the regression model and calculated residuals for Model 1, and Figs. <ref>b,d are the same for Model 2. The difference between the models is that for Model 2, the λ_x and λ_y variables are removed from the fit. The resulting linear regressions for Models 1 and 2 are presented in Equations <ref> and <ref>, respectively.
λ_MFP = 60.59 - 0.01λ_x - 0.05λ_y - 35.98ϵ_x - 50.01ϵ_y + 0.39σ^λ_x - 0.29σ^λ_y + 169.72σ^ϵ_x + 35.59σ^ϵ_y
λ_MFP = 57.76 - 33.45ϵ_x - 49.2ϵ_y + 0.4σ^λ_x - 0.39σ^λ_y + 168.02σ^ϵ_x + 15.59σ^ϵ_y
The plots presented in Fig. <ref>a,b show the predicted (Model) values for electron mean free path, λ_MFP, as a function of the experimental λ_MFP value. The model λ_MFP value is calculated using the linear regression line shown in Equations <ref> and <ref> for Figs. <ref>a and b, respectively, using the original experimental dataset that is obtained through our computer vision algorithm. The R^2 and p-values for each model fit are presented in the insets in Figs. <ref>a and b.
When considering all variables in our dataset (Model 1), we calculate the coefficient of determination, R^2, to be 0.27, with a statistical significance (p-value) of 0.21. This is suggestive of some relationship in the dataset to our desired transport parameter, though due to the complexity of the dataset the direct relationship is unclear. However, when removing the λ values for both X- and Y-channels (Model 2), we calculate the R^2 value to be 0.26, i.e. roughly the same variance as in Model 1, except in the new model we calculate a p-value of 0.09. Thus, we conclude that there is a reasonably significant correlation between λ_MFP measured in transport and surface corrugation as defined by ϵ_x,y, σ^λ_x,y, and σ^ϵ_x,y (Model 2). The low R^2 value is likely due to our small sample size of only 36 samples. Furthermore, the improvement from the removal of the λ_x,y variables from the linear model suggests that the dependence of transport quality on the crosshatch wavelength itself is weak at best; what matters more is how non-uniform the crosshatch is. However, it is clear that a more in-depth model is required to better capture the correlation between the corrugated crosshatch surface and the resulting electronic transport quality.
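For concreteness, a minimal Python sketch of this type of fit is given below; the fits reported above were performed in R, and the statsmodels call and column names here are illustrative assumptions.

```python
import statsmodels.api as sm

FEATURES_M1 = ["lam_x", "lam_y", "eps_x", "eps_y",
               "sig_lam_x", "sig_lam_y", "sig_eps_x", "sig_eps_y"]
FEATURES_M2 = [f for f in FEATURES_M1 if f not in ("lam_x", "lam_y")]  # Model 2 drops lambda_x,y

def fit_mfp_model(df, features):
    """df: one row per sample with the AFM-derived features and measured lambda_MFP."""
    X = sm.add_constant(df[features])            # intercept term
    res = sm.OLS(df["lambda_mfp"], X).fit()
    return res.rsquared, res.f_pvalue, res.params

# r2, p_value, coeffs = fit_mfp_model(df, FEATURES_M2)   # corresponds to Model 2
```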
As seen in Figure <ref>, a linear regression model fails to accurately capture the complexity of AFM crosshatch patterns. This suggests that higher dimensionality solutions may be necessary to realize the relationship between the elastic behavior of the semiconductors and their electronic transport behavior. To realize such an analysis, we develop simulation tools to simulate two-dimensional surface crosshatch patterns. This allows for the creation of a large machine learning model with which to better understand the physical system.
Our AFM simulation utilizes the model for surface crosshatch proposed by Andrews et al. in 2002 <cit.>. The original model solves Hooke's law in one dimension for an arbitrary elastic medium. The model is initialized by defining some starting density of dislocations along the a 1D line, and then integrating the strain field caused by the dislocations to a specified distance away from this starting point. This results in us obtaining an expression for the displacement field of the elastic medium at an arbitrary distance away. We first expanded the model to include two dimensions to generate a surface crosshatch in InAs from a given dislocation density. This model randomly places dislocations within a 20x20 μm region based on an initial user-defined dislocation density. The model equations are presented below in Equations <ref> and <ref> for calculating the displacement field in x and y, respectively.
u_x = b/π[ h^2 - (x - x_0) · h/(x - x_0)^2 + (y - y_0)^2 + h^2 - arctan(x - x_0/h) ]
u_y = b/π[ h^2 - (y - y_0) · h/(x - x_0)^2 + (y - y_0)^2 + h^2 - arctan(y - y_0/h) ]
Here, b is the Burgers vector, h is the effective depth from the dislocation line, and x_0, y_0 are the dislocation positions along x and y, respectively. These equations allow us to compute the displacement field at any arbitrary (x,y) point in the material, considering the influence of dislocations spread across a 2D plane. For the purposes of this model, we assume the dislocations to be randomly placed in the 2D plane as well as non-interacting. An exemplary simulated crosshatch pattern is presented in Figure <ref>a. We also present the experimental AFM image from Fig. <ref>b again, now side-by-side with the 2D version of the simulated crosshatch pattern for comparison. We note good quantitative agreement regarding the λ values; however, the height at the dislocation density presented here is roughly one order of magnitude lower than that of the experimental data.
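A minimal sketch of this 2D superposition is given below; it assumes non-interacting dislocations placed uniformly at random with random line orientation, and the Burgers vector magnitude, grid size, and units are placeholders rather than the values used in our simulator.

```python
import numpy as np

def simulate_crosshatch(density_cm2, b=0.6e-9, h=1.5e-6, L=20e-6, npix=512, seed=0):
    """Superpose the displacement fields u_x and u_y of randomly placed,
    non-interacting misfit dislocations over an L x L (m) region."""
    rng = np.random.default_rng(seed)
    n = int(density_cm2 * (L * 100.0) ** 2)        # convert cm^-2 to a dislocation count
    xs = np.linspace(0.0, L, npix)
    X, Y = np.meshgrid(xs, xs)
    ux, uy = np.zeros_like(X), np.zeros_like(X)
    for x0, y0, along_y in zip(rng.uniform(0, L, n), rng.uniform(0, L, n),
                               rng.integers(0, 2, n)):
        r2 = (X - x0) ** 2 + (Y - y0) ** 2 + h ** 2
        if along_y:                                 # dislocation line along y contributes to u_x
            d = X - x0
            ux += (b / np.pi) * ((h ** 2 - d * h) / r2 - np.arctan(d / h))
        else:                                       # dislocation line along x contributes to u_y
            d = Y - y0
            uy += (b / np.pi) * ((h ** 2 - d * h) / r2 - np.arctan(d / h))
    return ux, uy                                   # displacement components on the pixel grid
```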
To generate the simulated dataset, for each dislocation density we simulate the surface 1.5 μm away from the misfit interface, consistent with the thickness of the material stack used in our experiments. For simplicity, we assume the majority of dislocations form at the interface with the InP substrate. The simulation was run for 1000 iterations per dislocation density, for 18 different densities ranging from 3 × 10^7 /cm^2 to 3 × 10^9 /cm^2. With this, we then extract the simulated λ and ϵ values for each simulation. The simulated dataset is plotted in Fig. <ref>b,c for λ_x,y and Fig. <ref>d,e ϵ_x,y, respectively.
The parameters extracted from these simulations – crosshatch wavelength λ and amplitude ϵ – are calculated the same as our previous analysis (schematic shown in Fig. <ref>c). We used these parameters to train a Random Forest machine learning model, which predicted the dislocation density from the AFM-derived λ and ϵ values. The best hyperparameters for the model were determined through cross-validation and yielded a test R^2 value of 0.976, indicating our model captures nearly all of the observed variance in predicting dislocation density. More details about the model can be found in the supplemental materials.
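A minimal sketch of such a model is given below; scikit-learn, the hyperparameter grid, and the feature layout are illustrative assumptions rather than the exact configuration described in the supplemental materials.

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, train_test_split

def train_density_model(features, densities):
    """features: (n_samples, 4) array of [lambda_x, lambda_y, eps_x, eps_y];
    densities: (n_samples,) simulated dislocation densities (1/cm^2)."""
    X_tr, X_te, y_tr, y_te = train_test_split(features, densities,
                                              test_size=0.2, random_state=0)
    grid = {"n_estimators": [100, 300], "max_depth": [None, 10, 20]}
    search = GridSearchCV(RandomForestRegressor(random_state=0), grid, cv=5)
    search.fit(X_tr, y_tr)
    model = search.best_estimator_
    return model, model.score(X_te, y_te)          # fitted model and held-out R^2
```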
We then take the model that defines ρ_disloc.(λ,ϵ) and use it to correlate the experimentally measured values of λ and ϵ in order to evaluate the dislocation density of our experimentally analyzed AFM surfaces. To evaluate the effectiveness of our model in evaluating the experimental dataset, we predict the dislocation densities for each of the 36 original samples. An example of this comparison is presented in Table <ref>.
Here, R_a is the calculated surface roughness and the mean height is the average valley-to-peak height of the crosshatch pattern. We note that while our model is rudimentary in its assumption of only one mismatched interface, we find that it suitably captures the behavior of the real system.
The experimental ρ_disloc.^exp. values are then correlated with the experimental electron mean free path values, λ_MFP, in order to generate a predictive model for λ_MFP. The results of this model are presented in Fig. <ref>.
The divergence from the model predictions near 200 nm indicates that the physics observed at larger mean free path values is not accurately captured by our model; in that regime, the dependence of λ_MFP on λ_x,y, ϵ_x,y, and ρ_disloc. is not strongly defined. We emphasize here that the model only accounts for information regarding the structure of the material, and does not consider any form of electronic scattering. This result indicates that λ_MFP ≈ 200 nm is a crossover point where the electron scattering goes from being dominated by dislocation scattering, induced by elastic strain and plastic deformation caused by epitaxial growth, to being dominated by ionized impurity and electron-electron scattering.
In conclusion, we present here a new method for AFM image analysis using computer image processing to quantitatively parametrize AFM crosshatch patterns and correlate the extracted parametrization with material quality. Using computer vision thresholding, we deconvolute crosshatch images into separate X- and Y- channels and calculate average crosshatch wavelengths, λ_x,y, and standard deviations, σ^λ_x,y, for both channels. AFM height data is used to calculate the average height variation, ϵ_x,y, and the standard deviations, σ^ϵ_x,y for both channels. We present newly developed simulation tools to generate AFM crosshatch patterns from a known dislocation density. Using this simulation, we are able to generate large datasets with which to train a machine learning model that relates the crosshatch λ and ϵ values to an initial dislocation density. From this model we relate the experimental AFM image to a predicted dislocation density in order to correlate electron mean free path, λ_MFP, to a dislocation density, ρ_disloc. We finally present a model based on predicted dislocation density that attempts to predict λ_MFP based solely on strain-induced displacement fields. We observe a divergence between the experimental values and predicted values near λ_MFP = 200 nm that points to this point as a crossover between transport that is dominated by scattering induced by strain-fields and ionized impurity scattering.
§ SUPPLEMENTAL MATERIAL
The supplemental material document contains further details regarding the machine learning model data curation, model selection, and training.
The authors would like to acknowledge support for this project from the Office of Naval Research award number N00014-22-1-2764.
§ DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the corresponding author upon reasonable request. The AFM analysis code is available on github: https://github.com/ShabaniLab.
| null | null | null | null | null | null |
http://arxiv.org/abs/2409.17369v1 | 20240925212911 | Evaluation of Spectrum Sharing Algorithms for Networks with Heterogeneous Wireless Devices | [
"Ankit Walishetti",
"Igor Kadota",
"Aidan Kim",
"Colin Ward",
"Eduardo Gutierrez",
"Randall Berry"
] | cs.NI | [
"cs.NI"
] |
Evaluation of Spectrum Sharing Algorithms for Networks with Heterogeneous Wireless Devices
Ankit Walishetti1, Igor Kadota2, Aidan Kim1, Colin Ward1, Eduardo Gutierrez1, Randall Berry2
1Illinois Mathematics and Science Academy, USA 2Northwestern University, USA
1{awalishetti, akim4, cward, egutierrez}@imsa.edu 2{kadota, rberry}@northwestern.edu
September 28, 2024
==============================================================================================================================================================================================================================================================================
§ ABSTRACT
As highlighted in the National Spectrum Strategy, Dynamic Spectrum Access (DSA) is key for enabling 6G networks to meet the increasing demand for spectrum from various, heterogeneous emerging applications. In this paper, we consider heterogeneous wireless networks with multiple 6G base stations (BS) and a limited number of frequency bands available for transmission. Each BS is associated with a geographical location, a coverage area, and a bandwidth requirement. We assume that clients/UEs are within the corresponding BS's coverage area. To avoid interference, we impose that BSs with overlapping coverage areas must use different frequency bands. We address the challenging problem of efficiently allocating contiguous frequency bands to BSs while avoiding interference. Specifically, we define performance metrics that capture the feasibility of the frequency allocation task, the number of BSs that can be allocated within the limited frequency bands, and the amount of resources utilized by the network. Then, we consider five different DSA algorithms that prioritize BSs based on different features – one of these algorithms is known in the graph theory literature as Welsh-Powell graph colouring algorithm – and compare their performance using extensive simulations. Our results show that DSA algorithms that attempt to maximize the chances of obtaining a feasible frequency allocation – which have been widely studied in the literature – tend to under-perform in all other metrics.
Dynamic spectrum access, 6G, Spectrum sharing, Coexistence, Wireless networks
§ INTRODUCTION
Emerging and future applications and wireless devices will increasingly rely on Dynamic Spectrum Access (DSA) algorithms that can manage the scarce spectrum resources efficiently.
The importance of DSA for 6G networks was recently highlighted in the National Spectrum Strategy <cit.> and its Implementation Plan <cit.>, as well as in reports by the Next G Alliance <cit.>, Qualcomm <cit.>, Ericsson <cit.>, and many others.
The development of DSA algorithms that can efficiently allocate frequency spectrum to wireless devices (e.g., 6G base stations) while avoiding harmful interference has been extensively investigated in the literature (see surveys <cit.>).
Spectrum sharing is a challenging problem even in simple settings.
For example, consider homogeneous networks in which every base station has the same bandwidth requirement. The problem of finding the minimum number of frequency bands that can accommodate multiple homogeneous base stations is known to be NP-hard <cit.>.
Several heuristic DSA algorithms for homogeneous networks have been proposed in the graph theory literature <cit.>.
Heterogeneous networks, in which base stations may have diverse bandwidth requirements, are more complex and less studied. Recent focus has been on developing machine learning-based DSA algorithms for heterogeneous networks, e.g. <cit.>.
In this paper, we evaluate different low-complexity DSA algorithms for heterogeneous 6G networks, and use extensive simulation results to gain insight into their performance trade-offs.
We define performance metrics that capture the feasibility of the frequency allocation task, the number of base stations that can be allocated within the limited frequency bands, and the amount of resources utilized by the network.
Then, we compare DSA algorithms that prioritize frequency allocation to base stations based on different features, including their number of potentially interfering neighbors, their coverage area, and their required bandwidth.
Our simulation results show that DSA algorithms that prioritize frequency allocation to base stations with higher number of potentially interfering neighbors are more likely to achieve a feasible frequency allocation (for all base stations in the network) within the limited available frequency. This result agrees with the literature on homogeneous networks <cit.>. However, our results also show that, when the frequency allocation task is unfeasible, these DSA algorithms are more likely to leave a high number of unallocated base stations. Performance trade-offs associated with other DSA algorithms are discussed in Section <ref>.
The rest of this paper is organized as follows.
Section <ref> formulates the DSA problem.
In Section <ref>, we develop five distinct sorting algorithms.
In Section <ref>, we describe the simulation platform and discuss our extensive numerical results.
Section <ref> concludes the paper.
§ PROBLEM FORMULATION
In this section, we discuss the network model, the DSA algorithm, and the performance metrics.
Network Model. We consider a network with multiple 6G base stations, called transmitters.[Though we refer to these as transmitters, our model can apply to either the uplink or the downlink in a Frequency Division Duplex (FDD) deployment or to a Time Division Duplex (TDD) deployment in which each base station operates as both a transmitter and receiver.]
Each transmitter is associated with a (potentially) different geographical location, coverage area, and frequency bandwidth.
Let N be the total number of (heterogeneous) transmitters in the network, and let F be the total number of (contiguous) bandwidth units available for transmission.
Each transmitter i∈{1,2…,N} is associated with a location within a two-dimensional region ℛ, a bandwidth requirement of B_i units, and a transmission coverage of R_i meters. We assume that there is a single band manager for this region that collects transmitter requirements and runs a DSA algorithm to allocate bandwidth.
We model the coverage of transmitter i as a circle with radius R_i centered at the location of the transmitter, as can be seen in Figure <ref>.[This models a scenario where transmitters have omni-directional antennas. Most of the following can be extended to allow for more general coverage regions.]
We assume that clients (also known as User Equipment) are within the corresponding transmitter's coverage area.[Depending on the requirements of the users, the user deployment could be constrained to be sufficiently inside the coverage area to allow for "guard regions" between different deployments to reduce interference.]
Hence, if two (or more) transmitters' coverage areas overlap, they may cause harmful interference to one another.
To avoid interference, we impose that transmitters with overlapping coverage areas must use different frequency bands.
Naturally, non-overlapping transmitters may use the same frequency bandwidth,
allowing for more efficient spectrum utilization.
The effects of aggregate interference and time-varying coverage are not considered.
Dynamic Spectrum Access algorithm. The DSA algorithm has two main components: a sorting algorithm and a frequency allocation mechanism.
The sorting algorithm takes the list of transmitters (1,2,…,N) and re-arranges it in a particular order (i_1,i_2,…,i_N).
Different sorting algorithms prioritize transmitters based on different performance metrics (introduced below).
The frequency allocation mechanism assigns a contiguous bandwidth to each transmitter, one at a time, following the order (i_1,i_2,…,i_N).
Specifically, for each transmitter i, the frequency allocation mechanism assigns the B_i units of available bandwidth with the lowest possible indices f∈{1,2,…,F}.[We assume that the bandwidth assigned to a transmitter must be contiguous. This can help to minimize the amount of spectrum needed for guard-bands, but does reduce flexibility in making assignments. We leave the issue of non-contiguous bandwidth for future work.] Recall that transmitter i may overlap with other transmitters, which may restrict its available bandwidth.
Intuitively, the frequency allocation mechanism is attempting to pack transmitter bandwidths as much as possible, leaving contiguous available bandwidth for other transmitters in the network.
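For illustration, a minimal sketch of this lowest-index, first-fit contiguous allocation is given below; the data structures and variable names are assumptions rather than the simulator's actual code.

```python
def allocate(sorted_txs, bandwidths, overlaps):
    """sorted_txs: transmitter ids in the order produced by the sorting algorithm;
    bandwidths: id -> required units B_i; overlaps: id -> set of overlapping ids.
    Returns id -> (first_band, last_band), 1-indexed and inclusive."""
    assigned = {}
    for tx in sorted_txs:
        bw, start = bandwidths[tx], 1
        while True:
            clash = False
            for nb in overlaps.get(tx, set()):
                if nb in assigned:
                    s, e = assigned[nb]
                    if not (start + bw - 1 < s or start > e):   # contiguous blocks intersect
                        clash, start = True, e + 1              # jump past the blocking block
                        break
            if not clash:
                assigned[tx] = (start, start + bw - 1)
                break
    return assigned
```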
The goal of the Dynamic Spectrum Access algorithm is to assign frequency bandwidths to all N transmitters while avoiding harmful interference. The computational complexity of these DSA algorithms is dominated by the sorting algorithm which has complexity O(Nlog(N)) when using the Heap Sort approach.
In Sec. <ref>, we introduce different sorting algorithms and, in Sec. <ref>, we evaluate the impact of these different sorting algorithms.
Performance Metrics. To evaluate DSA algorithms based on different sorting algorithms, we employ the five metrics defined below.
Given a DSA algorithm, a network with N transmitters can be classified as feasible or unfeasible. The network is feasible if the DSA algorithm allocates frequency bands within the range f∈{1,2,…,F} for every transmitter i∈{1,2,…,N}. In this case, all transmitters are classified as admissible. The network is unfeasible if there is at least one inadmissible transmitter that is allocated frequency bands outside of the range, i.e., f≥ F+1. Denote by 𝒩 the set of admissible transmitters and by 𝒩' the set of inadmissible transmitters. It follows that 𝒩∩𝒩'=∅ and 𝒩∪𝒩'={1,2,…,N}.
* Feasibility Indicator (FI∈{0,1}) is a binary metric that indicates whether the network is feasible (FI=1) or unfeasible (FI=0).
* Bandwidth Usage (BU∈ℕ^+) represents the highest bandwidth index f∈{1,2,…,F,F+1,…} allocated by the DSA algorithm to any of the transmitters i∈{1,2,…,N}. In Figure <ref>, we have BU=F=10. If the network is feasible (FI=1), then BU≤ F. If the network is unfeasible (FI=0), then BU≥ F+1. A low BU indicates that the DSA algorithm can effectively pack transmitters within the limited bandwidth.
* Coverage Area (CA∈ℝ^+) represents the sum of the coverage areas of every admissible transmitter
CA = ∑_i∈𝒩π R_i^2 C_i ,
where C_i∈[0,1] is a correction factor accounting for the fraction of the coverage area within the two-dimensional region ℛ. For instance, “w1” in Figure <ref> has a C_1≈ 0.5.
* Bandwidth-Coverage Product (BC∈ℕ^+) is the sum of the product of the coverage radius and the bandwidth requirement of every admissible transmitter[If the coverage regions were not circles, one can define an “effective radius” for each region and still use this algorithm. Alternatively, one could also consider a related metric given by the product of the bandwidth and the coverage area.]
BC = ∑_i∈𝒩 R_i B_i .
A high R_i B_i indicates a transmitter with stringent bandwidth requirement B_i and large coverage radius R_i, i.e., a transmitter that consumes a significant amount of resources. A high R_i B_i may also indicate an important transmitter that serves many clients and can support high quality of service.
* Total Transmitters while Feasible (TF∈ℕ^+) represents the maximum number of transmitters before the network becomes unfeasible. Recall that the sorting algorithm takes the list of transmitters (1,2,…,N) and sorts it in a particular sequence (i_1,i_2,…,i_N). If the network is feasible, then all N transmitters are admissible and Total Transmitters while Feasible is TF=N. Otherwise, let N' be the sorted index of the first inadmissible transmitter. It follows that transmitters in (i_1,i_2,…,i_N'-1) are all admissible, transmitter i_N' is inadmissible, and transmitters in (i_N'+1,i_N'+2,…,i_N) may be admissible or inadmissible. In this case, TF=N'-1.
These metrics provide a comprehensive framework for evaluating the performance of different DSA algorithms.
For instance, the Feasibility Indicator, FI, and the Bandwidth Usage, BU, have direct connections with the system objective and resource allocation efficiency. Measuring the Total Transmitters while Feasible, TF, helps us understand how close the DSA algorithm came to achieving network feasibility.
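Building on the allocation sketch above, the first three of these metrics can be computed directly from an allocation result (CA and BC additionally require the coverage geometry); a sketch under the same assumptions:

```python
def evaluate(assignment, sorted_txs, F):
    """assignment: id -> (first_band, last_band); sorted_txs: allocation order; F: available units."""
    bu = max(last for _, last in assignment.values())      # Bandwidth Usage
    fi = int(bu <= F)                                       # Feasibility Indicator
    tf = len(sorted_txs)                                    # Total Transmitters while Feasible
    for k, tx in enumerate(sorted_txs):
        if assignment[tx][1] > F:                           # first inadmissible transmitter
            tf = k
            break
    return fi, bu, tf
```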
§ SORTING ALGORITHMS
In this section, we develop five sorting algorithms
that prioritize transmitters according to different characteristics. The combination of each sorting algorithm with the frequency allocation mechanism described in Sec. <ref> composes a different DSA algorithm. We will use numerical results to evaluate these different DSA algorithms in Sec. <ref>. The description of the sorting algorithms is below:
* Most-Overlaps Sort prioritizes transmitters in descending order[Ties in every sorting algorithm are broken arbitrarily. For example, by prioritizing transmitters with lowest index i∈{1,2,…,N}] of number of coverage overlaps. The number of coverage overlaps associated with transmitter i represents the number of neighboring transmitters that can cause harmful interference to transmitter i. The number of overlaps is equivalent to the degree of the conflict graph <cit.> underlying the network under consideration. In the context of graph theory, the sorting algorithm that prioritizes transmitters in descending order of degree is called the Welsh-Powell graph colouring algorithm <cit.> and is widely used (together with its variants, including the DSATUR algorithm <cit.>) to find the minimum number of frequency bands F needed to make a network feasible. The main idea of this algorithm is to start the frequency allocation process with the most challenging transmitters.
* Bandwidth-Coverage Sort prioritizes transmitters in descending order of bandwidth-coverage product R_i B_i. Intuitively, this prioritizes transmitters that need the most resources. Transmitters with high R_i B_i may be considered more important, as they may be able to serve more clients and/or provide superior quality of service.
* Least-Bandwidth Sort prioritizes transmitters in ascending order of bandwidth requirement B_i. This algorithm contrasts with the previous algorithms in that it starts the frequency allocation process with the least challenging transmitters.
* Least-Coverage Sort prioritizes transmitters in ascending order of coverage radius R_i.
* Random Sort arranges the N transmitters randomly. Random sort serves as a benchmark for comparison with other sorting algorithms.
These sorting algorithms provide diverse strategies for transmitter allocation. Next, we simulate and compare these strategies, highlighting their performance tradeoffs
in various heterogeneous network scenarios. Recall that the computational complexity of these algorithms is O(N log(N)).
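Concretely, each of the five strategies amounts to sorting the transmitter indices by a different key before running the frequency allocation mechanism of Sec. <ref>. The sketch below illustrates this, reusing the transmitter representation assumed earlier; the overlap test (two transmitters conflict when their coverage discs intersect) is an assumption standing in for the conflict-graph construction.

```python
import math
import random

def count_overlaps(i, transmitters):
    """Neighbours whose coverage disc overlaps that of transmitter i (assumed conflict test)."""
    ti = transmitters[i]
    return sum(
        1 for j, tj in enumerate(transmitters)
        if j != i and math.hypot(ti['x'] - tj['x'], ti['y'] - tj['y']) < ti['R'] + tj['R']
    )

def sort_transmitters(transmitters, strategy, rng=random):
    """Return the processing order (i_1, ..., i_N); ties are broken by the lowest
    index because Python's sort is stable."""
    idx = list(range(len(transmitters)))
    if strategy == 'most_overlaps':        # descending number of coverage overlaps
        return sorted(idx, key=lambda i: -count_overlaps(i, transmitters))
    if strategy == 'bandwidth_coverage':   # descending R_i * B_i
        return sorted(idx, key=lambda i: -transmitters[i]['R'] * transmitters[i]['B'])
    if strategy == 'least_bandwidth':      # ascending B_i
        return sorted(idx, key=lambda i: transmitters[i]['B'])
    if strategy == 'least_coverage':       # ascending R_i
        return sorted(idx, key=lambda i: transmitters[i]['R'])
    if strategy == 'random':               # benchmark
        rng.shuffle(idx)
        return idx
    raise ValueError(f"unknown strategy: {strategy}")
```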
§ SIMULATION RESULTS
In this section, we evaluate five DSA algorithms in terms of the performance metrics described in Sec. <ref>, namely the Feasibility Indicator, FI, the Bandwidth Usage, BU, the Coverage Area, CA, and the Bandwidth-Coverage Product, BC. Each DSA algorithm is a combination of the frequency allocation mechanism described in Sec. <ref> with one of the five sorting algorithms developed in Sec. <ref>, namely Bandwidth-Coverage Sort, Most-Overlaps Sort, Least-Bandwidth Sort, Least-Coverage Sort, and Random Sort. The DSA algorithm is named after the sorting algorithm.
Network Simulator. To evaluate the DSA algorithms, we use an object-oriented, Python-based simulator based on Monte-Carlo methods. Specifically, given the total number of bandwidth units available for transmission F and the total number of transmitters N, the simulator randomly assigns a geographical location, a required bandwidth B_i, and a coverage radius R_i for each transmitter i∈{1,…,N}. The location is assigned uniformly at random within a two-dimensional region ℛ defined as a 100 × 100 meters square. The required bandwidth B_i and coverage radius R_i are assigned uniformly at random from specific ranges. The ranges of B_i and R_i and the values of N and F are as specified in Table <ref>, unless noted otherwise. After establishing the network setup, the simulator runs the five DSA algorithms and computes their associated performance metrics. To account for the effects of different network setups, for each set of network parameters, we create at least 50 randomly generated network setups and display in Figures <ref>, <ref>, and <ref> the average of the metrics associated with these multiple simulation runs. The error bars in Figures <ref>, <ref>, and <ref> represent the standard deviation.
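A minimal sketch of the random network generation step is given below. The ranges used for B_i and R_i are placeholders for illustration; the actual values are those specified in Table <ref>.

```python
import random

def random_network(N, region=100.0, B_range=(1, 4), R_range=(5.0, 15.0), seed=None):
    """One Monte-Carlo network setup: uniform locations in a square region and
    uniformly random B_i and R_i (ranges here are placeholders)."""
    rng = random.Random(seed)
    return [
        {
            'x': rng.uniform(0.0, region),
            'y': rng.uniform(0.0, region),
            'B': rng.randint(*B_range),
            'R': rng.uniform(*R_range),
        }
        for _ in range(N)
    ]
```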
Numerical Results. In Figures <ref>(a)-(d), we simulate networks with increasing number of transmitters N∈{5,6, …,30} and display the performance of DSA algorithms in terms of the Feasibility Indicator, FI, Bandwidth Usage, BU, and the Total Transmitters while Feasible, TF.
Figures <ref>(a), (b), and (c) show FI, BU, and TF, respectively, associated with networks with the exact same parameters.
Interestingly, the best-performing DSA algorithms in Figures <ref>(a) and (b) are the worst in Figure <ref>(c).
Figures <ref>(a) and (b) show that Most-Overlaps Sort and Bandwidth-Coverage Sort outperform Least-Bandwidth Sort and Least-Coverage Sort in terms of the average FI and BU in every simulation, with the performance gap becoming more pronounced for networks with large number of transmitters N>15.
This result agrees with the graph theory literature <cit.> in that starting frequency allocation with the most challenging transmitters (e.g., the transmitters with most overlaps) is a promising approach for minimizing the bandwidth usage, thus improving the chances of feasibility. In the graph theory literature, the minimum bandwidth usage is called the chromatic number of the graph/network.
In contrast, Figure <ref>(c) shows that
Least-Coverage Sort and Least-Bandwidth Sort outperform Most-Overlaps Sort and Bandwidth-Coverage Sort in terms of the Total Transmitters while Feasible TF in every simulation, with the performance gain being more prominent for networks with larger N. Interestingly, Most-Overlaps Sort and Bandwidth-Coverage Sort show a significant decline in their ability to allocate transmitters within the available bandwidth F as the number of transmitters N increases, while Least-Bandwidth Sort and Least-Coverage Sort continue to benefit from a larger number of transmitters.
Intuitively, this effect can be explained by the manner in which the different sorting algorithms prioritize the increasing number of transmitters N.
Least-Bandwidth Sort and Least-Coverage Sort prioritize transmitters with the lowest bandwidth requirements B_i and with the smallest coverage radius R_i, respectively, allowing early frequency allocations to occupy minimal resources. As a result, a large number of transmitters can be allocated while the network remains feasible, leading to a large TF. Later frequency allocations have high bandwidth requirements B_i and/or large coverage radii R_i, increasing the chances of the network becoming unfeasible, leading to a low FI.
Most-Overlaps Sort and Bandwidth-Coverage Sort prioritize transmitters with most coverage overlaps and highest Bandwidth-Coverage Product, respectively. By starting with the most challenging transmitters, these algorithms increase the chances of finding a feasible set of frequency allocations for all N transmitters, leading to a large FI. On the other hand, this prioritization also increases the chances of the network becoming unfeasible with only a few allocated transmitters, leading to a low TF. Prioritizing the most challenging transmitters leads to “all-or-nothing” results in which the network is either feasible (with TF=N) or unfeasible with low TF.
While Bandwidth-Coverage Sort prioritizes transmitters that consume most resources, it doesn't necessarily select those with the most overlaps, which have the lowest potential for spectrum sharing, explaining the relatively better performance of Bandwidth-Coverage Sort when compared to Most-Overlaps Sort.
To further investigate the behavior of Most-Overlaps Sort, in Figure <ref>(d) we show the results of a homogeneous network with R_i = 12 meters and B_i = 2 units, ∀ i. In this homogeneous network, Least-Coverage Sort, Least-Bandwidth Sort, Bandwidth-Coverage Sort, and Random Sort are rendered equivalent, and they all outperform Most-Overlaps Sort in terms of the Total Transmitters while Feasible TF, suggesting that DSA algorithms that prioritize transmitters with the most overlaps, or high degrees, can be extremely inefficient. Notice that as N increases, new transmitters with high degree are prioritized by Most-Overlaps Sort, affecting the frequency allocation of transmitters with lower degrees, thereby (potentially) further decreasing TF.
In Figures <ref>(a) and (b), we simulate networks with increasing total available bandwidth F∈{5,6,…,15} and show the performance of DSA algorithms with respect to Total Transmitters while Feasible, TF, and the Coverage Area, CA.
In general, as the available spectrum F increases, all DSA algorithms are able to allocate more admissible transmitters which, in turn, increases the coverage area.
When the available spectrum is large enough (F ≥ 14), most of the N=25 transmitters become admissible, and the performance gaps between the different DSA algorithms are reduced.
The results in Figure <ref>(a) align with Figure <ref>(b), showing that Least-Bandwidth Sort and Least-Coverage Sort outperform Most Overlaps Sort and Bandwidth-Coverage Sort in terms of TF.
As expected, the results in Figure <ref>(b) show that Least-Coverage Sort under-performs in terms of coverage area CA.
Least-Bandwidth Sort achieves the highest TF in every simulation and the highest CA in bandwidth-constrained networks with F ≤ 6
since it prioritizes transmitters with the lowest bandwidth requirements B_i, thereby maximizing the number of admissible transmitters within F and the corresponding coverage area.
When 6 < F ≤ 12, the coverage area CA of Least-Bandwidth Sort is comparable with the benchmark algorithm, Random Sort, and
Bandwidth-Coverage Sort leads by a significant margin. Bandwidth-Coverage Sort considers both B_i and R_i, ensuring, to some extent, that transmitters with large coverage areas are prioritized, leading to high CA.
In Figures <ref>(a) and (b), we simulate networks with increasing heterogeneity and display the performance of the DSA algorithms in terms of their Bandwidth-Coverage Product, BC=∑_i∈𝒩 R_i B_i. Recall that a high R_i B_i indicates a transmitter that consumes a significant amount of resources, which may represent an important transmitter that can (potentially) serve many receivers and can support high quality of service. Hence, BC can serve as an indicator of whether the DSA algorithm can allocate important transmitters. Figures <ref>(a) and (b) consider transmitters with increasing coverage radius R_i and bandwidth requirements B_i, respectively.
As expected, Bandwidth-Coverage Sort outperforms other DSA algorithms in every simulation, with
the improvement becoming more noticeable as the networks become more heterogeneous.
Least-Bandwidth Sort and Least-Coverage Sort attain the lowest BC, consistently performing below the benchmark algorithm.
Interestingly, Most-Overlaps Sort also under-performs in terms of BC, with a performance that is either close to or below Random Sort, as can be seen in Figure <ref>(b) when B_max > 4.
These results suggest that none of Least-Bandwidth Sort, Least-Coverage Sort, or Most-Overlaps Sort is a good option for accommodating important transmitters that support a higher quality of service.
Figure <ref>(b) shows that the DSA algorithms exhibit a sharp decrease in BC when B_max > 7. This decline arises from the increased likelihood of encountering transmitters with demanding bandwidth requirements.
These demanding transmitters limit the total number of transmitters that may be allocated feasibly TF, indirectly reducing the overall BC.
Notice that Bandwidth-Coverage Sort, which prioritizes these demanding transmitters, experiences this decline earlier than other DSA algorithms.
Summary of Numerical Results. Figures <ref>, <ref>, and <ref> compare different DSA algorithms in terms of feasibility, FI, number of transmitters allocated before the network becomes unfeasible, TF, and resource utilization, BU, BC, and CA. Least-Coverage Sort and Least-Bandwidth Sort maximize the number of transmitters allocated before the network becomes unfeasible, having a comparable performance in most simulations. As expected, Least-Bandwidth Sort outperforms Least-Coverage Sort in terms of coverage area CA. Most-Overlaps Sort and Bandwidth-Coverage Sort maximize the chances of feasibility. Bandwidth-Coverage Sort outperforms Most-Overlaps Sort – which is a popular algorithm in the literature – in the vast majority of the simulations.
§ FINAL REMARKS
In this paper, we considered heterogeneous 6G networks with N base stations (called transmitters) and a total of F frequency bands available for transmission. Each transmitter is associated with a geographical location, a coverage radius R_i, and a bandwidth requirement B_i. We used extensive simulation results to evaluate the performance of different DSA algorithms in various heterogeneous networks.
Based on the results, we separate the DSA algorithms in two classes:
(i) Most-Overlaps Sort and Bandwidth-Coverage Sort, which prioritize the most challenging (i.e., resource hungry) transmitters; and
(ii) Least-Coverage Sort and Least-Bandwidth Sort, which prioritize transmitters that use minimal resources.
Class (i) leads to “all-or-nothing” results. They achieve the highest values of a feasible frequency allocation (FI) for all N transmitters, which can be seen as the ultimate objective of DSA. However, when the network is unfeasible, these algorithms tend to significantly under-perform when compared with Class (ii), especially in terms of the number of transmitters, TF, allocated while the network was feasible.
Interesting extensions include base stations with directional antennas, time-varying bandwidth requirements, and base stations that can enter/leave the network.
| Emerging and future applications and wireless devices will increasingly rely on Dynamic Spectrum Access (DSA) algorithms that can manage the scarce spectrum resources efficiently.
The importance of DSA for 6G networks was recently highlighted in the National Spectrum Strategy <cit.> and its Implementation Plan <cit.>, as well as in reports by the Next G Alliance <cit.>, Qualcomm <cit.>, Ericsson <cit.>, and many others.
The development of DSA algorithms that can efficiently allocate frequency spectrum to wireless devices (e.g., 6G base stations) while avoiding harmful interference has been extensively investigated in the literature (see surveys <cit.>).
Spectrum sharing is a challenging problem even in simple settings.
For example, consider homogeneous networks in which every base station has the same bandwidth requirement. The problem of finding the minimum number of frequency bands that can accommodate multiple homogeneous base stations is known to be NP-hard <cit.>.
Several heuristic DSA algorithms for homogeneous networks have been proposed in the graph theory literature <cit.>.
Heterogeneous networks, in which base stations may have diverse bandwidth requirements, are more complex and less studied. Recent focus has been on developing machine learning-based DSA algorithms for heterogeneous networks, e.g. <cit.>.
In this paper, we evaluate different low-complexity DSA algorithms for heterogeneous 6G networks, and use extensive simulation results to gain insight into their performance trade-offs.
We define performance metrics that capture the feasibility of the frequency allocation task, the number of base stations that can be allocated within the limited frequency bands, and the amount of resources utilized by the network.
Then, we compare DSA algorithms that prioritize frequency allocation to base stations based on different features, including their number of potentially interfering neighbors, their coverage area, and their required bandwidth.
Our simulation results show that DSA algorithms that prioritize frequency allocation to base stations with higher number of potentially interfering neighbors are more likely to achieve a feasible frequency allocation (for all base stations in the network) within the limited available frequency. This result agrees with the literature on homogeneous networks <cit.>. However, our results also show that, when the frequency allocation task is unfeasible, these DSA algorithms are more likely to leave a high number of unallocated base stations. Performance trade-offs associated with other DSA algorithms are discussed in Section <ref>.
The rest of this paper is organized as follows.
Section <ref> formulates the DSA problem.
In Section <ref>, we develop five distinct sorting algorithms.
In Section <ref>, we describe the simulation platform and discuss our extensive numerical results.
Section <ref> concludes the paper. | null | null | null | null | null |
http://arxiv.org/abs/2409.17123v1 | 20240925173651 | On the Bivariate Characteristic Polynomial of the Shuffle Lattice | [
"Annabel Ma"
] | math.CO | [
"math.CO"
] |
Department of Mathematics, Harvard University, Cambridge, MA 02138 [email protected]
On the Bivariate Characteristic Polynomial of the Shuffle Lattice
Annabel Ma
September 28, 2024
=================================================================
§ ABSTRACT
The shuffle lattice was introduced by Greene in 1988 as an idealized model for DNA mutation, when he revealed remarkable combinatorial properties of this structure. In this paper, we prove an explicit formula for the M-triangle of the shuffle lattice, a bivariate refinement of the characteristic polynomial, as conjectured by McConville and Mühle in 2022, and find a relation between the M-triangle and the H-triangle, a bivariate refinement of the rank generating function.
§ INTRODUCTION
Let 𝐱 = x_1… x_m and 𝐲 = y_1… y_n be words of length m and n, respectively, where all m+n letters are distinct. Consider all ways to transform 𝐱 into 𝐲 using a mutation called an indel u → v, where the word v is reached by deleting some x_i from u or inserting some y_j into u. Each intermediate word that appears between 𝐱 and 𝐲 via some sequence of indel mutations is called a shuffle word. The shuffle lattice is the poset of intermediate words, where the order relation is defined by the indel operation.
In 1988, Greene introduced the shuffle poset as an idealized model for DNA mutation, when he discovered several relationships between different combinatorial invariants such as its characteristic polynomial, its zeta polynomial, and its rank generating function <cit.>. Since then, shuffle lattices have been studied extensively by Doran in 1994 <cit.>, by Simion and Stanley in 1999 <cit.>, by Hersh in 2002 <cit.>, and by Mühle in 2022 <cit.>.
Then, in 2022, McConville and Mühle introduced bivariate refinements of the characteristic polynomial and the rank generating function of the shuffle lattice, called the M-triangle and the H-triangle, respectively <cit.>. The authors proved a formula for theH-triangle and conjectured a formula for theM-triangle, which we prove in this paper.
theoremmexplicit
For m,n≥ 0, the M-triangle of the shuffle lattice (m,n) is given by
M_m,n(q,t) = ∑_{a≥ 0}\binom{m}{a}\binom{n}{a} t^a(1-t)^a(q-1)^a(qt-t+1)^{m+n-2a}.
To prove this theorem, we find the rational function representation of the series∑_m ≥0 ∑_n ≥0 x^my^nM_m,n(q,t),which we state in the following theorem:
theoremmtrianglerational
We have
∑_{m, n ≥ 0} x^m y^n M_m,n(q,t) = 1/[(1-x(qt - t + 1))(1-y(qt - t +1)) - t(1-t)(q-1)xy].
Then, using the explicit form of theM-triangle, we resolve another conjecture by McConville and Mühle on an intriguing relationship between theM-triangle andH-triangle.
corfromhtom
For m, n ≥ 0, we have
M_m,n(q,t) = (1-t)^{m+n} H_m,n(t(q-1)/(1-t), q/(q-1)).
§ PRELIMINARIES
For integersm, n ≥0,consider the two disjoint sets of letters
X = {x_1,…,x_m} and Y={y_1,…,y_n}.
A shuffle word is an order-preserving, simple word over X ∪ Y, which is a word using letters from X or Y without duplicates, such that if x_i and x_j both appear in the word, x_i appears before x_j and if y_i and y_j both appear in the word, y_i appears before y_j whenever i < j. Let (m,n) be the set of all shuffle words.
For example, the word y_1y_2x_2y_3x_5x_6 belongs to (6,3), but the words y_1y_1x_2y_3x_5x_6 and y_1y_2x_2y_3x_6x_5 do not.
The indel operation is the operation on a word u ∈ (m,n) that either deletes a letter from X or inserts a letter from Y.
This operation gives the set of shuffle words a lattice structure. We call this lattice the shuffle lattice (m,n), whose order relation is the reflexive and transitive closure of the indel operation <cit.>.
If u and v are shuffle words, let u_v be the subword of u consisting of the letters that appear in v. Additionally, let 𝐱 and 𝐲 denote the words x_1… x_m and y_1 … y_n, respectively.
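As a concrete illustration, the following Python sketch enumerates the elements of (m,n): a shuffle word is determined by the set of x-letters it keeps, the set of y-letters it keeps, and an interleaving of the two subsequences, since the relative order within each alphabet is fixed. The string encoding of the letters is only for illustration.

```python
from itertools import combinations

def shuffle_words(m, n):
    """Enumerate all shuffle words: choose which x- and y-letters appear and
    interleave the two subsequences in every possible way."""
    xs = [f"x{i}" for i in range(1, m + 1)]
    ys = [f"y{j}" for j in range(1, n + 1)]
    words = []
    for p in range(m + 1):
        for q in range(n + 1):
            for xsub in combinations(xs, p):
                for ysub in combinations(ys, q):
                    for xpos in combinations(range(p + q), p):
                        word, xi, yi = [], 0, 0
                        for k in range(p + q):
                            if k in xpos:
                                word.append(xsub[xi]); xi += 1
                            else:
                                word.append(ysub[yi]); yi += 1
                        words.append(tuple(word))
    return words

# The word y1 y2 x2 y3 x5 x6 from the example above is one element of shuffle_words(6, 3).
```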
§ THE M-TRIANGLE OF (M,N)
We now look at the M-triangle, which is a bivariate specialization of the characteristic polynomial of the shuffle lattice (m,n).
For a finite poset =(P,≤), the Möbius function μ_ P× P→ℤ is defined recursively by
μ_(u,v) = 1, if u=v,
μ_(u,v) = -∑_u≤r<v μ_(u,r), if u<v,
μ_(u,v) = 0, otherwise.
Additionally, for an element v of a graded poset, let (v) denote the rank of v.
In 2022, McConville and Mühle defined an analog of the characteristic polynomial of a graded poset using the Möbius function.
The (reverse) characteristic polynomial of a graded poset is
_(q) ∑_v∈Pμ_(0̂,v)q^(v),
where 0̂ is the minimum element of .
The M-triangle is defined by McConville and Mühle as the bivariate specialization of the reverse characteristic polynomial, providing us more combinatorial information than its single-variable analog:
The M-triangle of a graded poset is
M_(q,t) ∑_u≤vμ_(u,v)q^(u)t^(v).
In this paper, we are interested in the M-triangle of (m,n). So, let _m,n(q) denote _(m,n)(q), let μ_m,n(,̆)̌ denote μ_(m,n)(,̆)̌, and let M_m,n(q,t) denote M_(m,n)(q,t).
The closed form of the reverse characteristic polynomial of a shuffle lattice (m, n) looks like the following
For m,n≥ 0, we have
_m,n(q) = ∑_{a≥0}\binom{m}{a}\binom{n}{a}(-q)^a(1-q)^{m+n-a}.
McConville and Mühle also conjecture an analogous explicit form for the M-triangle of a shuffle lattice, which is our main result that we repeat here for the convenience of the reader.
For m,n≥ 0, the M-triangle of the shuffle lattice (m,n) is given by M_m,n(q,t) = ∑_{a≥ 0}\binom{m}{a}\binom{n}{a} t^a(1-t)^a(q-1)^a(qt-t+1)^{m+n-2a}.
To prove this theorem, we write the M-triangle of a shuffle lattice (m, n) in terms of the reverse characteristic polynomial, by applying the following lemma of McConville and Mühle:
We have
M_(q,t) = ∑_u∈ P(qt)^(u)_[u,1̂](t).
Then, we have that
M_m,n(q,t) = ∑_∈̆(m, n)(qt)^()̆_[,̆1̂_(m, n)](t).
In (m, n), the maximal element 1̂_(m, n) is . Additionally, it is easy to see that ()̆ = |_̆|+ m - |_̆|. Recall that |_̆| and |_̆| are the number of y_i's and x_i's in ,̆ respectively. Thus, we have that
M_m,n(q,t) = ∑_∈̆(m, n)(qt)^|_̆|+ m - |_̆|_[,̆](t).
Fix a shuffle word ∈̆(m, n). The subword _̆ can be written as
_̆ = y_i_1… y_i_k^,
where k^ = |_̆| and 1 ≤ i_1 < … < i_k^≤ n.
This subword y_i_1… y_i_k^ partitions the remaining letters in $̆ andintok^+1parts, possibly empty. Let the tupleλ^ = (λ_1^, …, λ_k^+1^)count the size of each of these parts in.That is, in,there areλ_1^letters to the left ofy_i_1,there areλ_k^+1^letters to the right ofy_i_k^,and there areλ_i^letters strictly betweeny_i-1andy_ifor2 ≤i ≤k^.We defineη^ = (η_1^, …, η_k^+1^)similarly, as the sizes of the subwords thaty_i_1…y_i_k^ = _̆divides_̆into.
For all ∈̆(m,n) and λ^, η^ defined as above, we have that
[,̆] ≅(η_1^, λ_1^) ×…×(η_k^+1^, λ_k^+1^).
Let _̆ = y_i_1… y_i_k^. Observe that every element of [,̆] can be written in the form
_1'y_i_1_2'…_k^'y_i_k^_k^+1'
in a unique way, where each word consists of letters x_a_s''s such that ∑_t = 1^j-1η^_t < a_s' ≤∑_t = 1^jη^_t and letters y_b''s such that i_j-1 < b' < i_j. For each x_a_s' in _j', let a_s = a_s' - ∑_t = 1^j-1η^_t. For each y_b', let b = b' - i_j-1. Let _j be the word consisting of x_a_s's and y_b's in the same relative order as our x_a_s''s and y_b''s. This is the word that results from shifting the indices of the x_a_s''s in _j' back by ∑_t = 1^j-1η^_t and the y_b''s back by i_j-1.
Now, notice that the map from [,̆] to (η_1, λ_1) ×…×(η_k^+1, λ_k^+1) defined by sending
_1'y_i_1_2'…_k^'y_i_k^_k^+1' ↦ (_1, _2, …, _k^+1)
is a bijection. That this bijection is order-preserving is a routine verification.
Additionally, we can show that the reverse characteristic polynomial of a product of posets decomposes nicely:
Let and be finite, graded posets. Then,
_×(q) = _(q)_(q).
From Proposition 3.8.2 in <cit.>, we know that for any two elements (s, t), (s', t') ∈×, we have that μ_×((s, t), (s', t')) = μ_(s, s')μ_(t, t'). It is also easy to see that _×((s, t)) = _(s) + _(t) for all (s, t) ∈×. Then, using the definition of the reverse characteristic polynomial, we have
_×(q) = ∑_(s, t) ∈ P × Qμ_×(0̂_×, (s, t))q^_×(s, t)
= ∑_(s, t) ∈ P × Qμ_(0̂_, s) μ_(0̂_, t) q^_(s) + _(t)
= (∑_s ∈ Pμ_(0̂_, s) q^_(s))(∑_t ∈ Qμ_(0̂_, t) q^_(t))
= _(q)_(q),
as desired.
Using Proposition <ref>, Lemma <ref> and Proposition <ref>, we can then compute an explicit form of the reverse characteristic polynomial of the subposet[,̆ ]of a shuffle lattice(m, n):_[,̆](q) = _(η_1^, λ_1^) ×…×(η_k^+1^, λ_k^+1^)(q)
= ∏_i = 1^k^+1_(η_i^, λ_i^)(q).
Thus, using the explicit formula of the reverse characteristic polynomial as found in Proposition <ref>, we have that
_[,̆](q) = ∏_i = 1^k^+1∑_a ≥ 0η_i^ aλ_i^ a (-q)^a (1-q)^a.
We can use this explicit formula in Equation <ref> to compute theM-triangle using Lemma <ref>:
M_m,n(q,t) = ∑_∈̆(m, n)(qt)^|_̆|+ m - |_̆|_[,̆](t)
= ∑_∈̆(m, n) (qt)^|_̆|+ m - |_̆|∏_i = 1^k^+1∑_a ≥ 0η_i^ aλ_i^ a (-t)^a (1-t)^a.
Let us now group this sum by the sequencesλ^ = (λ_1^, …, λ_k^+1^)andη^ = (η_1^, …, η_k^+1^).Fix an integerkand sequencesλ= (λ_1, …, λ_k+1)andη= (η_1, …, η_k+1)of nonnegative integers such thatλ_1 + …+ λ_k+1 = n-kandη_1 + …+ η_k+1 ≤m.We will now determine the number of∈̆(m,n)such thatλ^ = λandη^ = η.It ismj,wherej = η_1 + …+ η_k+1.This is because there is exactly one way to choosekletters fromto form_̆so that the remainingn-kletters inform consecutive subwords of lengthλ_1, …, λ_k+1.However, we can choose anyjof themletters ofto form_̆,which we know will be partitioned into consecutive subwords of sizeη_1, …, η_k+1in.̆Therefore, we can rewrite our expression forM_m,n(q,t)to sum overk = |_̆|andj = |_̆|,as well as the sequencesλandμ.M_m,n(q,t) = ∑_∈̆(m, n)(qt)^|_̆|+ m - |_̆|_[,̆](t)
= ∑_∈̆(m, n) (qt)^|_̆|+ m - |_̆|∏_i = 1^k+1∑_a ≥ 0η_i^ aλ_i^ a (-q)^a (1-q)^a
= ∑_j, k ≥ 0 (qt)^k + m - jm j∑_η_1 + … + η_k+1 = j
λ_1 + … + λ_k+1= n-k∏_i = 1^k+1∑_a ≥ 0λ_i aη_i a (-t)^a (1-t)^η_i + λ_i - a.
To simplify this expression, we setqandtto-qand-trespectively. This yields
M_m,n(-q,-t) = ∑_j, k ≥ 0 (qt)^k + m - jm j∑_η_1 + … + η_k+1 = j
λ_1 + … + λ_k+1= n-k∏_i = 1^k+1∑_a ≥ 0λ_i aη_i a t^a (t+1)^η_i + λ_i - a
= ∑_j, k ≥ 0 (qt)^j+km j∑_η_1 + … + η_k+1 = m-j
λ_1 + … + λ_k+1= n-k∏_i = 1^k+1∑_a ≥ 0λ_i aη_i a t^a (t+1)^η_i + λ_i - a.
To show the explicit form of theM-triangle as stated in Theorem <ref>, we show that∑_m, n ≥0 M_m, n(-q, -t) x^m y^nhas the following rational function expansion:
lemmalhs
We have that
∑_{m,n ≥ 0} M_m,n(-q, -t) x^m y^n = 1/[(1-x(qt + t+1))(1-y(qt + t+ 1)) - t(t+1)(q+1)xy].
We know that
=. ∑_m, n≥ 0 M_m, n(-q, -t) x^m y^n
= ∑_m, n ≥ 0 x^my^n ∑_j, k ≥ 0 (qt)^j+km j∑_η_1 + … + η_k+1 = m-j
λ_1 + … + λ_k+1= n-k∏_i = 1^k+1∑_a ≥ 0λ_i aη_i a t^a (t+1)^η_i + λ_i - a.
Then, re-indexing so there are no dependencies between the indices j, k, m and n, this expression is equal to
= ∑_j, k, m, n ≥ 0 x^m+j y^n+k (qt)^j+km+j j∑_η_1 + … + η_k+1 = m
λ_1 + … + λ_k+1= n∏_i = 1^k+1∑_a ≥ 0λ_i aη_i a t^a (t+1)^η_i + λ_i - a.
In this series, there is only one binomial coefficient that involves j. So, we can apply the power series expansion 1/(1-x)^n+1 = ∑_k ≥ 0n+k nx^k to eliminate the sum involving j:
= ∑_j, k, m, n ≥ 0 x^m+j y^n+k (qt)^j+km+j j∑_η_1 + … + η_k+1 = m
λ_1 + … + λ_k+1= n∏_i = 1^k+1∑_a ≥ 0λ_i aη_i a t^a (t+1)^η_i + λ_i - a
= ∑_k, m, n ≥ 0 x^m y^n (qty)^k (∑_η_1 + … + η_k+1 = m
λ_1 + … + λ_k+1= n∏_i = 1^k+1∑_a ≥ 0λ_i aη_i a t^a (t+1)^η_i + λ_i - a) ∑_j ≥ 0m+j j (qtx)^j
= ∑_k, m, n ≥ 0 x^m y^n (qty)^k (∑_η_1 + … + η_k+1 = m
λ_1 + … + λ_k+1= n∏_i = 1^k+1∑_a ≥ 0λ_i aη_i a t^a (t+1)^η_i + λ_i - a) 1/(1-qtx)^m+1
= 1/1- qtx∑_k, m, n ≥ 0(x/1-qtx)^m y^n (qty)^k ∑_η_1 + … + η_k+1 = m
λ_1 + … + λ_k+1= n∏_i = 1^k+1∑_a ≥ 0λ_i aη_i a t^a (t+1)^η_i + λ_i - a.
That is,
∑_m, n ≥ 0 M_m, n (-q, -t) x^m y^n = 1/1- qtx f(q, t, x, y),
where f(q, t, x, y) is defined to be
f(q, t, x, y) = ∑_k, m, n ≥ 0(x/1-qtx)^m y^n (qty)^k ∑_η_1 + … + η_k+1 = m
λ_1 + … + λ_k+1= n∏_i = 1^k+1∑_a ≥ 0λ_i aη_i a t^a (t+1)^η_i + λ_i - a.
We will now compute f(q, t, x, y). We remove the dependencies between m and η_1, …, η_k+1 and n and λ_1, …, λ_k+1 by summing over all η_i, λ_i ≥ 0 and setting m = η_1 + … + η_k+1 and n = λ_1 + … + λ_k+1, which gives us
= f(q, t, x, y)
= ∑_k, m, n ≥ 0(x/1-qtx)^m y^n (qty)^k ∑_η_1 + … + η_k+1 = m
λ_1 + … + λ_k+1= n∏_i = 1^k+1∑_a ≥ 0λ_i aη_i a t^a (t+1)^η_i + λ_i - a
= ∑_k ≥ 0 (qty)^k ∑_η_1, …, η_k+1,
λ_1, …, λ_k+1≥ 0(x/1-qtx)^η_1 + … +η_k+1 y^λ_1 + … + λ_k+1∏_i = 1^k+1∑_a ≥ 0λ_i aη_i a t^a (t+1)^η_i + λ_i - a
= ∑_k ≥ 0 (qty)^k ∏_i = 1^k+1∑_η_i, λ_i ≥ 0(x/1-qtx)^η_i y^λ_i∑_a ≥ 0λ_i aη_i a t^a (t+1)^η_i + λ_i - a.
Now, we re-index to remove the dependencies between a and η_i, λ_i:
= ∑_k ≥ 0 (qty)^k ∏_i= 1^k+1∑_η_i, λ_i ≥ 0(x/1-qtx)^η_i y^λ_i∑_a ≥ 0λ_i aη_i a t^a (t+1)^η_i + λ_i - a
= ∑_k ≥ 0 (qty)^k ∏_i = 1^k+1∑_η_i, λ_i, a ≥ 0(x/1-qtx)^η_i + a y^λ_i + aλ_i + a aη_i + a a t^a (t+1)^η_i + λ_i + a.
Again, we notice that there is only one binomial coefficient involving η_i and one binomial coefficient involving λ_i, so we can eliminate the sums involving these variables
= ∑_k ≥ 0 (qty)^k ∏_i = 1^k+1∑_η_i, λ_i, a ≥ 0(x(t+1)/1-qtx)^η_i (y(t+1))^λ_i + a(xyt(t+1)/1-qtx)^a λ_i + a aη_i + a a
= ∑_k ≥ 0 (qty)^k ∏_i = 1^k+1∑_a ≥ 0(xyt(t+1)/1-qtx)^a 1/(1 - x(t+1)/1-qtx)^a+1(1 - y(t+1))^a+1.
Next, we eliminate the sum involving a by using the closed form of an infinite geometric series.
= ∑_k ≥ 0 (qty)^k ∏_i = 1^k+11 - qtx/(1- qtx - x(t+1))(1 - y(t+1))∑_a ≥ 0(xyt(t+1)/(1 - qtx - x(t+1))(1 - y(t+1)))^a
= ∑_k ≥ 0 (qty)^k ∏_i = 1^k+1(1 - qtx/(1- qtx - x(t+1))(1 - y(t+1)))( 1/1- xyt(t+1)/(1 - qtx - x(t+1))(1 - y(t+1)))
= ∑_k ≥ 0 (qty)^k (1 - qtx/(1- qtx - x(t+1))(1 - y(t+1)) - xyt(t+1))^k+1.
That is,
f(q, t, x, y) = 1-qtx/(1- qtx - x(t+1))(1 - y(t+1)) - xyt(t+1) g(q, t, x, y),
where g(q, t, x, y) is defined to be
g(q, t, x, y) = ∑_k ≥ 0(qty(1 - qtx)/(1- qtx - x(t+1))(1 - y(t+1)) - xyt(t+1))^k.
Then, we can re-write the expression for ∑_m,n ≥ 0 M_m, n(-q, -t) x^m y^n to be in terms of g(q, t, x, y) using Equation <ref> and Equation <ref>:
∑_m,n ≥ 0 M_m, n(-q, -t) x^m y^n = 1/1-qtx f(q, t, x, y)
= 1/1-qtx1-qtx/(1- qtx - x(t+1))(1 - y(t+1)) - xyt(t+1) g(q, t, x, y)
= 1/(1- qtx - x(t+1))(1 - y(t+1)) - xyt(t+1)g(q, t, x, y).
Finally, we determine g(q, t, x, y) using the closed form of a geometric series, which yields the rational function
g(q, t, x, y) = ∑_k ≥ 0(qty(1 - qtx)/(1- qtx - x(t+1))(1 - y(t+1)) - xyt(t+1))^k
= 1/1 - qty(1 - qtx)/(1- qtx - x(t+1))(1 - y(t+1)) - xyt(t+1)
= (1- qtx - x(t+1))(1 - y(t+1)) - xyt(t+1)/(1- qtx - x(t+1))(1 - y(t+1)) - xyt(t+1) - qty(1 - qtx).
Therefore, we have
∑_m,n ≥ 0 M_m, n(-q, -t) x^m y^n = 1/(1- qtx - x(t+1))(1 - y(t+1)) - xyt(t+1)g(q, t, x, y)
= 1/(1- qtx - x(t+1))(1 - y(t+1)) - xyt(t+1) - qty(1 - qtx)
= 1/(1-x(qt + t+1))(1-y(qt + t+ 1)) - t(t+1)(q+1)xy,
as desired.
Then, using Lemma <ref>, we can find the rational function representation of the M-triangle by replacing q with -q and t with -t, which gives us
∑_{m, n ≥ 0} M_m,n(q, t) x^m y^n = 1/[(1-x(qt - t+1))(1-y(qt - t+ 1)) - t(1-t)(q-1)xy],
proving Theorem <ref> as introduced at the beginning of the paper.
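As a sanity check, the explicit formula and the rational function representation can be compared coefficient by coefficient for small m and n with a computer algebra system. The following SymPy sketch is one way to do so.

```python
from sympy import binomial, cancel, diff, factorial, symbols

q, t, x, y = symbols('q t x y')

def M_explicit(m, n):
    """Explicit form of the M-triangle from the theorem above."""
    return sum(binomial(m, a) * binomial(n, a) * t**a * (1 - t)**a * (q - 1)**a
               * (q*t - t + 1)**(m + n - 2*a) for a in range(min(m, n) + 1))

# Rational generating function; its coefficient of x^m y^n should be M_{m,n}(q, t).
G = 1 / ((1 - x*(q*t - t + 1)) * (1 - y*(q*t - t + 1)) - t*(1 - t)*(q - 1)*x*y)

for m in range(4):
    for n in range(4):
        deriv = G
        if m:
            deriv = diff(deriv, x, m)
        if n:
            deriv = diff(deriv, y, n)
        coeff = deriv.subs({x: 0, y: 0}) / (factorial(m) * factorial(n))
        assert cancel(coeff - M_explicit(m, n)) == 0
print("coefficients agree with the explicit formula for all m, n < 4")
```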
Alternatively, we can prove Lemma <ref> by first proving the following identity using a combinatorial argument, which we include in the appendix of the paper.
lemmatheidentity
For m, n, k ≥ 0, we have
∑_{η_1 + ⋯ + η_{k+1} = m, λ_1 + ⋯ + λ_{k+1} = n} ∏_{i=1}^{k+1}∑_{a ≥ 0}\binom{λ_i}{a}\binom{η_i}{a} t^a (t+1)^{η_i - a} = \binom{n+k}{k}∑_{ℓ≥ 0}\binom{m+k}{ℓ+k}\binom{n+k+ℓ}{ℓ} t^ℓ,
where the left hand side is a sum over all sequences of nonnegative integers (η_1, …, η_k+1) and (λ_1, …, λ_k+1) such that η_1 + … + η_k+1 = m and λ_1+ … + λ_k+1 = n.
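Before giving the proof, the identity can also be checked directly for small parameters by summing over all compositions; the following SymPy sketch performs this verification.

```python
from sympy import binomial, expand, symbols

t = symbols('t')

def compositions(total, parts):
    """All tuples of `parts` nonnegative integers summing to `total`."""
    if parts == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

def lhs(m, n, k):
    total = 0
    for eta in compositions(m, k + 1):
        for lam in compositions(n, k + 1):
            term = 1
            for e, l in zip(eta, lam):
                term *= sum(binomial(l, a) * binomial(e, a) * t**a * (t + 1)**(e - a)
                            for a in range(min(e, l) + 1))
            total += term
    return expand(total)

def rhs(m, n, k):
    return expand(binomial(n + k, k) * sum(binomial(m + k, ell + k) * binomial(n + k + ell, ell)
                                           * t**ell for ell in range(m + 1)))

for m in range(4):
    for n in range(4):
        for k in range(3):
            assert expand(lhs(m, n, k) - rhs(m, n, k)) == 0
print("identity verified for m, n < 4 and k < 3")
```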
By performing a similar process as in the proof of Lemma <ref>, we can show that the explicit formula for theM-triangle has the same rational function representation:
We have
= ∑_m, n ≥ 0x^m y^n ∑_a ≥ 0n am a t^a (t+1)^a (q+1)^a (qt + t + 1)^m+n-2a
= 1/(1-x(qt + t+1))(1-y(qt + t+ 1)) - t(t+1)(q+1)xy.
First, we change the order of summations, giving us
= ∑_m,n ≥ 0 x^m y^n ∑_a ≥ 0n am a t^a (t+1)^a (q+1)^a (qt + t + 1)^m+n-2a
= ∑_a ≥ 0(t(t+1)(q+1)/(qt+t+1)^2)^a ∑_m ≥ am a (x(qt+t+1))^m ∑_n ≥ an a (y(qt+t+1))^n.
Then, by applying the power series expansion 1/(1-x)^n+1 = ∑_k ≥ 0n+k nx^k twice, we have that this sum is equal to
= ∑_a ≥ 0(t(t+1)(q+1)/(qt+t+1)^2)^a ∑_m ≥ am a (x(qt+t+1))^m ∑_n ≥ an a (y(qt+t+1))^n
= ∑_a ≥ 0(t(t+1)(q+1)/(qt+t+1)^2)^a ((x(qt+t+1))^a/(1- x(qt+t+1))^a+1)( (y(qt+t+1))^a/(1- y(qt+t+1))^a+1).
This is an infinite geometric series with rational function
1/(1- x(qt+t+1))(1- y(qt+t+1))/1 - t(t+1)(q+1)xy/(1- x(qt+t+1))(1- y(qt+t+1)) = 1/(1-x(qt + t+1))(1-y(qt + t+ 1)) - t(t+1)(q+1)xy,
so we are done.
Since the rational function representation of the explicit formula found in Lemma <ref> matches with the rational function representation of theM-triangle in Equation <ref>, we can conclude that theM-triangle has the following explicit formula, which resolves <cit.>. We restate this result here for the reader's convenience:
For m,n≥ 0, the M-triangle of the shuffle lattice (m,n) is given by M_m,n(q,t) = ∑_{a≥ 0}\binom{m}{a}\binom{n}{a} t^a(1-t)^a(q-1)^a(qt-t+1)^{m+n-2a}.
§ RELATING THE M-TRIANGLE AND THE H-TRIANGLE
Finally, we offer a surprising connection between theM-triangle and theH-triangle, which is a bivariate variation of the rank generating function of the shuffle lattice.
To define the H-triangle, we must consider another operation on shuffle words called a (forward) transposition u → v, where the word v is obtained from u by reversing an adjacent pair x_i y_j. In <cit.>, McConville and Mühle define a bubble lattice (m,n) by placing a new partial order on (m,n) defined by both indels and transpositions. The order relation of (m,n) is the reflexive and transitive closure of the indel and transposition operations. McConville and Mühle also provide a characterization of the covering pairs of the bubble lattice in terms of transpositions and a particular type of indel. Let u_î denote the word obtained by deleting the letter u_i from u = u_1⋯u_k. Specifically, a right indel v → v' is defined as follows:
v = u and v' = u_î, if u_i ∈ X and either u_{i+1} ∈ X or i = k,
v = u_î and v' = u, if u_i, u_{i+1} ∈ Y.
If u_i ∈ X and u_{i+1} ∈ Y, then a (forward) transposition is a transformation u → u', where u' = u_1 u_2 ⋯ u_{i-1} u_{i+1} u_i u_{i+2} ⋯ u_k.
For ,̆∈̌(m,n) we have $̌ if and only if either$̌ or .̌
The in-degree of∈̌(m,n)counts the number of shuffle words that$̌ covers:
()̌ |{∈̆(m,n)}|.
Lemma <ref> tells us that the covering pairs in (m,n) can be split into two types. To realize this, the authors of <cit.> define the indel-degree of $̆ to be
()̆|{'̆∈(m,n)'̆}|,
and the transpose-degree of$̆ to be
()̆ |{'̆∈(m,n)'̆}|.
Clearly, we have ()̆ = ()̆ + ()̆.
We now define the H-triangle:
For m, n ≥ 0, the H-triangle of (m,n) is
H_m,n(q,t) ∑_∈̆(m,n)q^()̆t^()̆.
In <cit.>, the authors note that the polynomial H_m,n(q,1) appears already in as Corollary 4.8 in Greene's paper as the rank-generating polynomial of (m,n), and was used to establish the rank symmetry of (m,n) and for constructing a decomposition into symmetrically placed Boolean lattices <cit.>. They also find an explicit formula for H_m,n(q,t):
For m,n≥ 0, we have
H_m,n(q,t) = ∑_{a=0}^{min{m,n}}\binom{m}{a}\binom{n}{a} q^a(qt+1)^{m+n-2a}.
From this explicit form and the explicit formula for the characteristic polynomial in Proposition <ref>, the authors show the following relationship between the H-triangle and M-triangle:
For m,n≥ 0, we have
_m,n(q) = q^{m+n} H_m,n((q-1)/q, (1-2q)/(q-1)).
We can then show an analogous relationship between the H-triangle and M-triangle, resolving Conjecture 5.5 in <cit.>:
For m, n ≥ 0, we have M_m,n(q,t) = (1-t)^{m+n} H_m,n(t(q-1)/(1-t), q/(q-1)).
Corollary <ref> follows from comparing the explicit formulas found in Theorem <ref> and Theorem <ref>. However, we still are not aware of a conceptual explanation of this relation, nor of Corollary <ref>, which the author encourages the readers to find.
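For small m and n, the relation can be verified symbolically from the explicit formulas in Theorem <ref> and Theorem <ref>; the following SymPy sketch does exactly that.

```python
from sympy import binomial, cancel, symbols

q, t = symbols('q t')

def M(m, n):
    """Explicit M-triangle of the shuffle lattice."""
    return sum(binomial(m, a) * binomial(n, a) * t**a * (1 - t)**a * (q - 1)**a
               * (q*t - t + 1)**(m + n - 2*a) for a in range(min(m, n) + 1))

def H(m, n, Q, T):
    """Explicit H-triangle, evaluated at the pair (Q, T)."""
    return sum(binomial(m, a) * binomial(n, a) * Q**a * (Q*T + 1)**(m + n - 2*a)
               for a in range(min(m, n) + 1))

for m in range(4):
    for n in range(4):
        rhs = (1 - t)**(m + n) * H(m, n, t*(q - 1)/(1 - t), q/(q - 1))
        assert cancel(M(m, n) - rhs) == 0
print("the relation holds for all m, n < 4")
```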
§ ACKNOWLEDGMENTS
This research was conducted at the Duluth REU at the University of Minnesota Duluth, which is supported by National Science Foundation grant No. DMS-2409861, Jane Street Capital, and personal donations from Ray Sidney and Eric Wepsic. I would like to thank Evan Chen, Colin Defant, Noah Kravitz, Maya Sankar, and Carl Schildkraut for providing guidance during the research process. I would also like to extend special thanks to Mitchell Lee, Katherine Tung, and Noah Kravitz for their crucial feedback throughout the editing process. Finally, I am very grateful to Joe Gallian and Colin Defant for their support and for extending an invitation to participate in the Duluth REU.
§ APPENDIX
In this appendix, we provide an alternate proof to Theorem <ref> using Lemma <ref>, which we restate here for convenience:
For m, n, k ≥ 0, we have
∑_{η_1 + ⋯ + η_{k+1} = m, λ_1 + ⋯ + λ_{k+1} = n} ∏_{i=1}^{k+1}∑_{a ≥ 0}\binom{λ_i}{a}\binom{η_i}{a} t^a (t+1)^{η_i - a} = \binom{n+k}{k}∑_{ℓ≥ 0}\binom{m+k}{ℓ+k}\binom{n+k+ℓ}{ℓ} t^ℓ,
where the left-hand side is a sum over all sequences of nonnegative integers (η_1,…,η_{k+1}) and (λ_1,…,λ_{k+1}) such that η_1 + ⋯ + η_{k+1} = m and λ_1 + ⋯ + λ_{k+1} = n.
To begin, we expand the binomial on the left hand side of the equation and simplify, giving us
∑_a ≥ 0λ_i aη_i a t^a (t+1)^η_i - a = ∑_a ≥ 0λ_i aη_i a t^a ∑_b ≥ 0η_i - a b t^b
= ∑_j ≥ 0∑_a ≥ 0λ_i aη_i η_i - aη_i - a η_i - j t^j
= ∑_j ≥ 0η_i η_i - jλ_i + j j t^j
via a variable substitution and Vandermonde's identity. So, now we know that
∑_η_1 + … +η_k+1 = m
λ_1 + … + λ_k+1= n∏_i = 1^k+1∑_a ≥ 0λ_i aη_i a t^a (t+1)^η_i - a = ∑_η_1 + … +η_k+1 = m
λ_1 + … + λ_k+1= n∏_i = 1^k+1∑_j ≥ 0λ_i + j jη_i η_i - j t^j.
Suppose that we have a row of m distinguishable green balls and a row of n distinguishable red balls in a fixed order. We place k dividers to split the row of green balls into sections of size η_1, …, η_k+1, and place another k dividers to split the row of red balls into sections of size λ_1, …, λ_k+1. Notice that this placement of dividers induces a pair of compositions (η_1, …, η_k+1) and (λ_1, …, λ_k+1) of m and n, respectively, into k+1 parts. Then, the coefficient of t^ℓ in the expanded product
∏_i = 1^k+1∑_j≥ 0λ_i + j jη_i η_i - j t^j
is the number of ways to choose m - ℓ green balls to put a star sticker on, and ℓ balls from the remaining n+ℓ balls to put a triangle sticker on, such that within each pair of sections (η_i, λ_i), the number of red balls with a sticker in λ_i is equal to the number of green balls in η_i with no sticker. This additional restriction is due to the fact that within each pair of parts (η_i, λ_i), we put a sticker on a total of η_i balls.
Then, if we sum over all compositions of m and n into k+1 parts,
∑_η_1 + … +η_k+1 = m
λ_1 + … + λ_k+1= n∏_i = 1^k+1∑_j = 0^η_iλ_i + j jη_i η_i - j t^j,
the coefficient of t^ℓ in this sum is the number of ways to
* place the k dividers in a row of m green balls,
* place the k dividers in a row of n red balls,
* choose m-ℓ green balls to put a star sticker on,
* and choose ℓ balls to put a triangle sticker on from all the non-stickered balls,
all subject to the restriction that within each pair of parts (η_i, λ_i) in the compositions η_1 + … + η_k+1 = m and λ_1 + … + λ_k+1 =n induced by the dividers, the number of red balls with a sticker is equal to the number of green balls without a sticker.
Then, suppose that we chose r red balls to put stickers on, for some r ≥ 0. Notice that these stickers must all be triangles. Then, m - r of the green balls will have stickers on them, and ℓ - r of the green balls will have triangle stickers. For a fixed r, we can rewrite items (1) through (4) in the list above as the number of ways to
* place k dividers among m green balls in a line to induce a composition (η_1, …, η_k+1) of m into k+1 parts,
* place k dividers among n red balls in a line to induce a composition (λ_1, …, λ_k+1) of n,
* choose r red balls to put triangle stickers on,
* choose m - r green balls to put stickers on,
* and choose m - ℓ green balls to specifically have star stickers, while the other ℓ - r green balls have triangles,
all subject to the restriction that within each pair of parts (η_i, λ_i) within our compositions, the number of red balls with stickers is equal to the number of green balls without stickers. We find this number and then sum over all r to find the coefficient of t^ℓ in Equation <ref>.
Notice that the order we do the items above does not matter, as long as all of the restrictions are met.
* There are n+k k ways to place k dividers among our n red balls, splitting them into parts of size (λ_1, …, λ_k+1) (item 2).
* Then, there are n r ways to choose r of these red balls to put a triangle sticker on (item 3).
* We find the number of ways to do items 1 and 4 together. In any valid placement of k dividers among our m green balls inducing the composition (η_1, …, η_k+1), the number of stickered red balls in each λ_i must be equal to the number of non-stickered green balls in the corresponding section η_i. This means that the relative order of the k dividers and the r non-stickered green balls is determined by how we already placed the dividers among the red balls. Then, all that remains to do is to determine the positions of the stickered green balls relative to the dividers and non-stickered green balls. There are m-r + (r+k) r + k = m+k r + k ways to arrange the m-r stickered green balls among the r non-stickered green balls and k dividers.
* Finally, there are m-r m-ℓ ways choose m- ℓ of the stickered green balls to have a star sticker, and the remaining stickered green balls to have a triangle (item 5).
So, in total, for a fixed r, the number of ways to divide and put stickers on our red and green balls is
n rn + k km + kr + km - r m - ℓ.
If we sum over all r ≥ 0, we get that the coefficient of t^ℓ in Equation <ref> is
∑_r ≥ 0n rn + k km + kr + km - r m - ℓ.
Finally, we show that
∑_r ≥ 0n rm + kr + km - r ℓ - r = m+k ℓ + kn+k + ℓℓ
via another counting argument, again using balls and stickers. Suppose we have a collection of m+k green balls, and a collection of n red balls. Then, for a fixed a, the expression n am + ka + km - a ℓ - a counts the number of ways to choose a red balls to put a star sticker on, a+k green balls to put a triangle sticker on, and ℓ - a green balls from the remaining m-a green balls to put a star sticker on. So, in total, we are putting a sticker on ℓ+k green balls, and a star on ℓ balls out of the n red balls and ℓ + k stickered green balls, where ℓ - a of the balls with stars are red. Then, if we sum over all a, we are counting the number of ways to first choose ℓ+k green balls to put a triangle sticker on, and then ℓ objects to put a star sticker out of the n red balls and the ℓ + k green balls with a sticker, which is m + k ℓ + kn+ k + ℓℓ.
Using these simplifications, we have
∑_η_1+ …+ η_k+1 = m
λ_1+ …+ λ_k+1 = n∏_i = 1^k+1∑_a ≥ 0λ_i aη_i a t^a (t+1)^η_i - a = ∑_η_1+ …+ η_k+1 = m
λ_1+ …+ λ_k+1 = n∏_i = 1^k+1∑_j ≥ 0λ_i + j jη_i η_i - j t^j
= ∑_ℓ≥ 0∑_r ≥ 0n rn + k km + kr + km - r m - ℓt^ℓ
= n + k k∑_ℓ≥ 0m+k ℓ + kn+k + ℓℓt^ℓ,
as desired.
This identity simplifies our expression for M_m, n(-q, -t), which is
M_m,n(-q,-t) = ∑_j, k ≥ 0 (qt)^j+km j∑_η_1 + … + η_k+1 = m-j
λ_1 + … + λ_k+1= n-k∏_i = 1^k+1∑_a ≥ 0λ_i aη_i a t^a (t+1)^η_i + λ_i - a
= ∑_j, k ≥ 0 (qt)^j+km j (t +1)^n-k∑_η_1 + … + η_k+1 = m-j
λ_1 + … + λ_k+1= n-k∏_i = 1^k+1∑_a ≥ 0λ_i aη_i a t^a (t+1)^η_i - a
where the second equality lies in the fact that ∏_i = 1^k+1(t+1)^λ_i = (t+1)^n-k for all compositions λ of n-k.
By applying Lemma <ref> to our expression for the M-triangle, we have that
M_m, n(-q, -t) = ∑_j, k ≥ 0m j (qt)^k + j (t+1)^n-k∑_η_1+ …+ η_k+1 = m-j
λ_1+ …+ λ_k+1 = n-k∏_i = 1^k+1∑_a ≥ 0λ_i aη_i a t^a (t+1)^η_i - a
= ∑_j, k ≥ 0n km j (qt)^k+j (t+1)^n-k∑_ℓ≥ 0m-j+kℓ + kn+ ℓℓ t^ℓ.
We use this to provide an alternate proof of Lemma <ref>, which we restate here for convenience.
We have ∑_{m,n ≥ 0} M_m,n(-q, -t) x^m y^n = 1/[(1-x(qt + t+1))(1-y(qt + t+ 1)) - t(t+1)(q+1)xy].
We know that
=. ∑_m, n≥ 0 M_m, n(-q, -t) x^m y^n
= ∑_m, n ≥ 0 x^m y^n ∑_j, k ≥ 0n km j (-qt)^k+j (t+1)^n-k∑_ℓ≥ 0m-j+kℓ + kn+ ℓℓ t^ℓ.
Then, re-indexing so there are no dependencies between indices, this expression is equal to
= ∑_j, k, l, m, n ≥ 0 (qtx)^j (qty)^k x^m + ℓ (y(1+t))^n t^ℓn+k km + ℓ +j jm+ ℓ +kℓ + kn +k + ℓℓ,
where the 5 indices are independent and can be freely interchanged. Notice that in this series, there is only one binomial coefficient that involves j. So, we can apply the power series expansion 1/(1-x)^n+1 = ∑_k ≥ 0n+k nx^k to eliminate the sum involving j:
= ∑_j, k, ℓ, m, n ≥ 0 (qtx)^j (qty)^k x^m + ℓ (y(1+t))^n t^ℓn+k km + ℓ +j jm+ ℓ +kℓ + kn +k + ℓℓ
= ∑_k, ℓ, m, n ≥ 0 (qty)^k x^m (y(1+t))^n (xt)^ℓn+k km+ ℓ +kℓ + kn +k + ℓℓ∑_j ≥ 0m + ℓ +j j (qtx)^j
= ∑_k, ℓ, m, n ≥ 0 (qty)^k x^m (y(1+t))^n (xt)^ℓn+k km+ ℓ +kℓ + kn +k + ℓℓ1/(1-qtx)^m + ℓ + 1
= 1/1-qtx f(q, t, x, y),
where f(q, t, x, y) is defined to be
∑_k, ℓ, m, n ≥ 0 (qty)^k (xt/1-qtx)^ℓ (y(t+1))^n (x/1-qtx)^m n+ k kn+k + ℓ n+km+ ℓ +kℓ + k.
Then, notice that there is now only one binomial coefficient in our expression for f that involves m, so we can similarly eliminate the sum including m:
= f(q, t, x, y)
= ∑_k, ℓ, m, n ≥ 0 (qty)^k (xt/1-qtx)^ℓ (y(t+1))^n (x/1-qtx)^m n+ k kn+k + ℓ n+km+ ℓ +kℓ + k
= ∑_k, ℓ, n ≥ 0 (qty)^k (xt/1-qtx)^ℓ (y(t+1))^n n+ k kn+k + ℓ n+k∑_m ≥ 0m+ ℓ +kℓ + k(x/1-qtx)^m
= ∑_k, ℓ,n ≥ 0 (qty)^k (xt/1-qtx)^ℓ (y(t+1))^n n+ k kn+k + ℓ n+k1/(1-x/1-qtx)^ℓ + k + 1
= 1-qtx/1- qtx - x∑_k, ℓ, n ≥ 0(qty(1-qtx)/1- qtx - x)^k (y(t+1))^n (xt/1 - qtx -x)^ℓn+ k kn+k + ℓ n+k
= 1-qtx/1- qtx - x g(q, t, x, y),
where g(q, t, x y) is defined to be
∑_k, ℓ,n ≥ 0(qty(1-qtx)/1- qtx - x)^k (y(t+1))^n (xt/1 - qtx -x)^ℓn+ k kn+k + ℓ n+k.
Then, notice that there is only one binomial coefficient involving ℓ in this expression, so we can eliminate the sum including ℓ as follows:
= g(q, t, x, y)
= ∑_k, ℓ, n ≥ 0(qty(1-qtx)/1- qtx - x)^k (y(t+1))^n (xt/1 - qtx -x)^ℓn+ k kn+k + ℓ n+k
= ∑_k, n ≥ 0(qty(1-qtx)/1- qtx - x)^k (y(t+1))^n n+ k k∑_ℓ≥ 0n+k + ℓ n+k(xt/1 - qtx -x)^ℓ
= ∑_k, n ≥ 0(qty(1-qtx)/1- qtx - x)^k (y(t+1))^n n+ k k1/(1-xt/1 - qtx -x)^n+k+1
= 1-qtx - x/1 - qtx -x -xt∑_k, n ≥ 0n+ k k(y(t+1)(1-qtx - x)/1 - qtx - x - xt)^n(qty(1-qtx)/1 - qtx - x - xt)^k
= 1-qtx - x/1 - qtx -x -xt h(q, t, x, y),
where h(q, t, x, y) is defined to be
∑_k, n ≥ 0n+ k k(y(t+1)(1-qtx - x)/1 - qtx - x - xt)^n(qty(1-qtx)/1 - qtx - x - xt)^k.
Then, we can similarly eliminate the sum involving n, and also use the rational function form of an infinite geometric series, to get that h(q, t, x, y) has the following rational function representation:
= h(q, t, x, y)
= ∑_k, n ≥ 0n+ k k(y(t+1)(1-qtx - x)/1 - qtx - x - xt)^n(qty(1-qtx)/1 - qtx - x - xt)^k
= ∑_k ≥ 0(qty(1-qtx)/1 - qtx - x - xt)^k∑_n ≥ 0n+ k k(y(t+1)(1-qtx - x)/1 - qtx - x - xt)^n
= ∑_k ≥ 0(qty(1-qtx)/1 - qtx - x - xt)^k1/(1 - y(t+1)(1-qtx - x)/1 - qtx - x - xt)^k + 1
= 1 - qtx - x - xt/1 - qtx - x - xt - y(t+1)(1-qtx - x)∑_k ≥ 0(qty(1-qtx)/1 - qtx - x- xt - y(t+1)(1-qtx-x))^k
= (1 - qtx - x - xt/1 - qtx - x - xt - y(t+1)(1-qtx - x))( 1/1 - qty(1-qtx)/1 - qtx - x- xt - y(t+1)(1-qtx-x))
= 1 - qtx - x - xt/1 - qtx - x- xt - y(t+1)(1-qtx-x) - qty(1-qtx)
Therefore, we have
=. ∑_m,n ≥ 0 M_m, n(-q, -t) x^m y^n
= 1/1-qtx f(q, t, x, y)
= (1/1 - qtx)( 1-qtx/1- qtx - x) g(q, t, x, y)
= (1/1 - qtx - x)( 1-qtx - x/1 - qtx -x -xt) h(q, t, x, y)
= (1/1-qtx - x - xt)(1 - qtx - x - xt/1 - qtx - x- xt - y(t+1)(1-qtx-x) - qty(1-qtx))
= 1/[(1-x(qt + t+1))(1-y(qt + t+ 1)) - t(t+1)(q+1)xy],
as desired.
Then, using Lemma <ref>, we can find the rational function representation of the M-triangle by replacing q with -q and t with -t, which gives us
∑_{m, n ≥ 0} M_m,n(q, t) x^m y^n = 1/[(1-x(qt - t+1))(1-y(qt - t+ 1)) - t(1-t)(q-1)xy],
proving Theorem <ref> as introduced at the beginning of the paper.
| Let = x_1… x_m and = y_1… y_n be words of length m and n, respectively, where all m+n letters are distinct. Consider all ways to transform into using a mutation called an indel →̆,̌ where the word $̌ is reached by deleting somex_ior inserting somey_jinto.̆Each intermediate word that appears betweenandvia some sequence of indel mutations is called a shuffle word. The shuffle lattice is the poset of intermediate words, where the order relation is defined by the indel operation.
In 1988, Greene introduced the shuffle poset as an idealized model for DNA mutation, when he discovered several relationships between different combinatorial invariants such as its characteristic polynomial, its zeta polynomial, and its rank generating function <cit.>. Since then, shuffle lattices have been studied extensively by Doran in 1994 <cit.>, by Simion and Stanley in 1999 <cit.>, by Hersh in 2002 <cit.>, and by Mühle in 2022 <cit.>.
Then, in 2022, McConville and Mühle introduced bivariate refinements of the characteristic polynomial and the rank generating function of the shuffle lattice, called the M-triangle and the H-triangle, respectively <cit.>. The authors proved a formula for theH-triangle and conjectured a formula for theM-triangle, which we prove in this paper.
theoremmexplicit
For m,n≥ 0, the M-triangle of the shuffle lattice (m,n) is given by
M_m,n(q,t)
= ∑_a≥ 0manat^a(1-t)^a(q-1)^a(qt-t+1)^m+n-2a.
To prove this theorem, we find the rational function representation of the series∑_m ≥0 ∑_n ≥0 x^my^nM_m,n(q,t),which we state in the following theorem:
theoremmtrianglerational
We have
∑_m, n ≥ 0 x^my^nM_m,n(q,t) = 1/(1-x(qt - t + 1))(1-y(qt - t +1)) - t(1-t)(q-1)xy.
Then, using the explicit form of theM-triangle, we resolve another conjecture by McConville and Mühle on an intriguing relationship between theM-triangle andH-triangle.
corfromhtom
For m, n ≥ 0, we have
M_m,n(q,t) = (1-t)^m+nH_m,n(t(q-1)/1-t,q/q-1). | null | null | null | null | null |
http://arxiv.org/abs/2409.17790v1 | 20240926123722 | CASPFormer: Trajectory Prediction from BEV Images with Deformable Attention | [
"Harsh Yadav",
"Maximilian Schaefer",
"Kun Zhao",
"Tobias Meisen"
] | cs.LG | [
"cs.LG",
"cs.CV"
] |
CASPFormer: Trajectory Prediction from BEV Images with Deformable Attention
======================================================================
§ ABSTRACT
Motion prediction is an important aspect for ad and adas. Current state-of-the-art motion prediction methods rely on hd maps for capturing the surrounding context of the ego vehicle. Such systems lack scalability in real-world deployment as hd maps are expensive to produce and update in real-time. To overcome this issue, we propose caspformer, which can perform multi-modal motion prediction from rasterized bev images. Our system can be integrated with any upstream perception module that is capable of generating bev images. Moreover, caspformer directly decodes vectorized trajectories without any post-processing. Trajectories are decoded recurrently using deformable attention, as it is computationally efficient and provides the network with the ability to focus its attention on the important spatial locations of the bev images. In addition, we also address the issue of mode collapse for generating multiple scene-consistent trajectories by incorporating learnable mode queries. We evaluate our model on the nuScenes dataset and show that it reaches state-of-the-art across multiple metrics.
§ INTRODUCTION
In recent years, ad and adas technologies have gained huge attention as they can significantly improve the safety and comfort standards across the automotive industry <cit.>. The current approach to these self-driving tasks is to divide them into multiple independent sub-tasks, mainly i) perception, ii) motion prediction, and iii) motion planning, and optimize each task individually <cit.>. The perception task deals with the detection and segmentation of surrounding dynamic and static environment contexts. The dynamic context captures the motion of the dynamic agents in the scene e.g. pedestrians, cyclists, vehicles, traffic lights, etc., while the static context includes stationary elements of the scene e.g. road and lane boundaries, pedestrian crossings, traffic signs, parked vehicles, construction sites etc. As defined by Cui et al. <cit.>, the motion prediction task involves predicting multi-modal future trajectories for agents in a scene. The prediction of multiple future trajectories enables the model to account for uncertainties in the dynamic context. In addition, to ensure safety critical operation, the predicted trajectories must adhere to the static and dynamic contexts. Lastly, the objective of the motion planning task is to generate the control actions for the ego vehicle to navigate it through the scene while adhering to the traffic rules and dynamics of the vehicle.
Current state-of-the-art models <cit.> in motion prediction require hd maps for static context with centimeter-level accuracy. Such a strict constraint on hd maps leads to high production costs <cit.>. Thus, these models suffer from the problem of scalability in a real-world deployment. A cost-effective and scalable alternative is to construct bev images from a vision perception system deployed on the ego vehicle, as proposed by Li et al. <cit.> in their BEVFormer model. To efficiently decode trajectories and learn spatial attention on the feature maps of bev images, we opt for the deformable attention mechanism proposed in Deformable detr <cit.>. Furthermore, to generate a diverse set of modes in multi-modal trajectory prediction, we incorporate learnable embeddings into our architecture. Contrary to previous studies <cit.>, which use one set of learnable embeddings, our network consists of two sets of learnable embeddings. The first set, temporal queries, is responsible for capturing the temporal correlation in the output trajectories, and the second set, mode queries, aims to address the issue of mode collapse. Following the works <cit.>, we recurrently decode the multi-modal trajectories. This allows the network to update the reference point for deformable attention and the temporal queries through feedback loops of the recurrent decoder.
A depiction of our proposed network caspformer is shown in Figure <ref>. Furthermore, Figure <ref> highlights the components of the recurrent decoder. The contributions of our work are summarised as follows:
* A novel motion prediction architecture is introduced that generates multi-modal vectorized trajectories from bev images.
* It incorporates two sets of learnable embeddings: temporal queries for capturing the temporal correlation in the output trajectories and mode queries for overcoming the issue of mode collapse.
* The trajectory decoding is done recurrently using deformable attention where the feedback loops update the reference point for deformable attention and the temporal queries.
* We evaluate our method on the nuScenes motion prediction benchmark <cit.> and show that it achieves state-of-the-art performance across various metrics.
§ RELATED WORK
In this section, we highlight the corresponding related work. Section <ref> categorized the previous studies based on how their scene representation is constructed. Section <ref> highlights various methods for generating multi-modal prediction. Section <ref> discusses several transformer-based attention mechanisms that can be used to extract meaningful representations from bev images.
§.§ Input Scene Representation
The scene representation in the motion prediction task can be divided into two categories, rasterized scene representation and vectorized scene representation. The studies with rasterized scene representation <cit.> take advantage of matured practices in cnn to extract scene encodings. On the other hand, the vectorized representation was first introduced by LaneGCN <cit.> which identified that hd maps have an underlying graph structure that can be exploited to learn long-range and efficient static scene encodings with gnn. VectorNet <cit.> later showed that not only the static context, but also the dynamic context can also be represented in vectorized format. Follow-up studies <cit.> have provided several motion prediction methods that receive both static and dynamic contexts in vectorized form.
§.§ Multi-Modal Prediction
To accommodate uncertainties in traffic scenarios, autonomous vehicles must predict various scene-consistent trajectories adhering to the static and dynamic context. One approach <cit.> employs a variational auto-encoder to learn multiple latent representations of the entire scene and then decodes these latent representations generating multiple trajectories corresponding to each agent. However, these methods require multiple forward passes during both training and inference and are prone to mode collapse. Other approaches <cit.> use spatial-temporal grids to predict the future position for each agent and sample multiple goal positions. Thereafter, scene-consistent trajectories are generated which connect the proposed goal positions with the current position of the agents. These approaches learn multi-modality inherently without a specific training strategy, however, post-processing is required to generate trajectories from the grid. Alternatively, Multipath <cit.> utilizes fixed anchors corresponding to different modes. It constructs multiple trajectories by generating the offsets and probability distribution corresponding to each one of these anchors. A potential limitation of Multipath is that most of the fixed anchors are not relevant for particular scenes. This issue is addressed in the follow-up studies <cit.>, which learn the anchors during the training with the help of learnable embeddings and predict a diverse set of modes.
§.§ Transformer-based Attention in Image Domain
In recent years, transformer-based attention <cit.> mechanisms have achieved huge success in the image domain. The studies <cit.> establish the foundation for transformer-based encoders for image processing. Since these approaches lack decoder networks, their application is limited only to feature extraction. On the contrary, detr <cit.> introduces a transformer-based encoder-decoder architecture capable of end-to-end object detection. However, detr suffers from two major problems: slow convergence and low performance in detecting small objects, as its encoder is limited to processing features with very small resolution due to its quadratic computational complexity with the size of feature maps.
Deformable detr <cit.> overcomes these problems by sparsifying the selection of values and computing the attention solely based upon queries whilst eliminating the need for keys. The decrease in computational cost allows both the encoder and decoder to attend to every feature map in the feature pyramid generated by the backbone. Deformable detr thus significantly reduces training time while increasing performance in detecting small objects. Follow-up studies <cit.> on Deformable detr establish that a large part of its computational cost comes from the deformable self-attention module, and therefore propose to reduce this cost by limiting the number of queries which undergo self-attention. We compare training time with and without deformable self-attention modules in ablation studies because computational cost plays an important role in the deployment of models on edge devices operating in vehicles.
§ METHODS
This section explains the methods utilized in our work, in particular our contributions to the current state of the art. Section <ref> describes the formulation of the input and output of the network. Section <ref> focuses on the network architecture of caspformer and its components. Section <ref> illustrates the loss formulation.
§.§ Input-Output Formulation
caspformer receives static and dynamic contexts of the surrounding region of the ego vehicle and outputs multi-modal vectorized trajectories.
Static Context Input. The static context is rasterized into a grid-based input of shape (H, W). The feature dimension of the rasterized static context contains binary feature maps consisting of information about the drivable area, center lines, driving lanes, road boundaries, and pedestrian crossings. The input of static context can be depicted as follows:
I_s∈ℝ^H × W ×| F_s|,
where H is the height of the grid, W is the width of the grid, and | F_s | is the number of input features of static context.
Dynamic Context Input. The dynamic context contains the motion information of all the surrounding road agents for the past T_i time steps. Corresponding to each of these time steps, a grid of shape (H, W) is created. The feature dimension of these grids contains the velocity, acceleration, location offset, height, width, and heading information. The rasterized input of dynamic context is as follows:
I_d∈ℝ^T_i × H × W ×| F_d|,
where | F_d | is the number of input features of dynamic context.
Output. The predicted trajectories contain the position information i.e. (x,y) of the ego vehicle, and the output tensor can thus be represented as:
Y ∈ℝ^M× T_o × 2,
where M is the number of modes and T_o is the number of future time steps.
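To make these shapes concrete, a minimal sketch is given below (the grid size, horizons, and number of modes are taken from the experimental setup reported later; the numbers of static and dynamic features F_s and F_d are assumed values chosen only for illustration):

import torch

H, W = 152, 96          # grid height and width (1 m resolution over a 152 m x 96 m region)
F_s, F_d = 5, 8         # assumed numbers of static / dynamic input features
T_i, T_o, M = 3, 12, 5  # past steps, future steps, number of modes

I_s = torch.zeros(H, W, F_s)       # rasterized static context
I_d = torch.zeros(T_i, H, W, F_d)  # rasterized dynamic context, one grid per past time step
Y = torch.zeros(M, T_o, 2)         # M candidate trajectories of (x, y) positions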
§.§ Network Architecture
The overall network architecture is shown in Figure <ref>. The network consists of a backbone and a recurrent decoder. For our work, the backbone architecture is adopted from caspnet <cit.>, as it is currently state-of-the-art in the nuScenes dataset <cit.>. It receives static and dynamic contexts in rasterized formats to generate multi-scale scene encodings. It is important to note that caspformer is not limited to a particular backbone and can be extended to other transformer- or cnn-based backbones. The works <cit.> suggest that decoding the trajectory in a recurrent fashion results in better prediction capabilities. Inspired by this observation, we also decode the trajectory recurrently from the multi-scale scene encodings. A detailed schematic of the recurrent decoder is depicted in Figure <ref>.
The recurrent decoder employs deformable attention <cit.> to gather essential information from the scene encodings. The deformable attention module consists of deformable self-attention and deformable cross-attention modules. The scene encodings are first passed through the deformable self-attention module, which performs multi-scale feature fusion. The position information in the scene encodings is captured with non-learnable sinusoidal positional embeddings <cit.>. The fused scene encodings are then processed by a deformable cross-attention module, in which the attention map is learned through a linear transformation of queries. During our initial experiments, we only introduced temporal queries corresponding to each mode. The objective of the temporal queries was two-fold: first, they must learn the temporal correlation across the different time steps in the predicted trajectories; and second, they must distinguish between different modes, as illustrated in previous works <cit.>. However, our preliminary experiments showed that this setup results in mode collapse (see the left column of Figure <ref>). We observed that although the different modes do correspond to different speeds, they miss out on other possible scene-consistent trajectories. To overcome this issue, we use another set of queries, called mode queries, in our network architecture. The results show that mode queries significantly improve the diversity of modes (see the right column of Figure <ref>). Another aspect of the original deformable cross-attention <cit.> is that it utilizes reference points to help the network focus its attention on a particular location in the image. We exploit this property of deformable cross-attention and set the reference point to the ego vehicle position based on the recurrently predicted trajectory output.
The recurrent behavior in the decoder is achieved by incorporating a feedback loop into the deformable cross-attention module. It outputs queries corresponding to individual modes, which are then transformed into multi-modal trajectories using mlp. To capture the temporal correlation in the predicted trajectories, the temporal queries are updated to output queries of the previous iteration. In addition, the reference point is updated to the end point of the predicted trajectories from the previous iteration.
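A toy sketch of this recurrence is given below (hypothetical code: the stand-in deformable_cross_attention function, the tensor sizes, and the reduction of the reference point to a single coordinate are simplifications for illustration, not the actual implementation):

import torch

def deformable_cross_attention(queries, scene, ref_point):
    # stand-in for deformable cross-attention; the real module samples scene features
    # around ref_point with learned offsets and attention weights
    return queries + scene.mean(dim=(0, 1))

M, d, n_steps, t_chunk = 5, 64, 4, 3
scene_encodings = torch.randn(152, 96, d)     # fused multi-scale scene encodings (toy)
temporal_q = torch.zeros(M, d)                # learnable temporal queries, one per mode
mode_q = torch.randn(M, d)                    # learnable mode queries
to_traj = torch.nn.Linear(d, t_chunk * 2)     # mlp head producing (x, y) positions
ref_point = torch.zeros(2)                    # starts at the ego vehicle position
trajectory = []

for _ in range(n_steps):
    out_q = deformable_cross_attention(temporal_q + mode_q, scene_encodings, ref_point)
    chunk = to_traj(out_q).reshape(M, t_chunk, 2)
    trajectory.append(chunk)
    temporal_q = out_q                        # feedback 1: temporal queries <- output queries
    ref_point = chunk[:, -1, :].mean(dim=0)   # feedback 2: reference point <- trajectory end point

trajectory = torch.cat(trajectory, dim=1)     # (M, n_steps * t_chunk, 2)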
The working mechanism of the deformable cross-attention module is shown in Figure <ref>. It consists of multiple iterations of deformable cross-attention layers between queries and fused scene encodings. The mode queries are added to the temporal queries before every deformable cross-attention layer.
§.§ Loss Formulation
We use the loss function proposed by HiVT <cit.>. It encourages diversity in predicted trajectories by optimizing only the best mode. The selection of the best mode is done based on the minimum l_2 distance between the ground truth and the predicted trajectories, averaged over all time steps. The loss function comprises a regression loss ℒ_reg and a classification loss ℒ_cls:
ℒ = ℒ_reg + ℒ_cls,
The regression loss minimizes the negative log-likelihood under the probability density function of the Laplace distribution, 𝕃(·|·), as follows:
ℒ_reg = -1/T_o∑_t=1^T_o log[𝕃(P_t |μ_t, b_t)],
where μ_t, and b_t are the position and uncertainty at each time step of the predicted best mode trajectory respectively, and P_t are the ground truth trajectory positions. The classification loss aims to optimize only the mode probabilities π(k) corresponding to mode k using the cross-entropy loss:
ℒ_cls = - 1/M∑_k=1^M log(π(k)) 𝕃(P_T_o, k|μ_T_o, k, b_T_o, k).
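A minimal PyTorch sketch of this winner-takes-all loss is shown below (illustrative only and not the authors' implementation; the tensor shapes and the use of a soft end-point-likelihood target for the classification term are assumptions following the description above):

import torch

def laplace_nll(target, mu, b):
    # pointwise Laplace negative log-likelihood, summed over the (x, y) coordinates
    return (torch.log(2 * b) + (target - mu).abs() / b).sum(-1)

M, T_o = 5, 12
mu = torch.randn(M, T_o, 2)                  # predicted positions per mode
b = torch.rand(M, T_o, 2) + 0.1              # predicted uncertainties (Laplace scales)
pi = torch.softmax(torch.randn(M), dim=0)    # mode probabilities
gt = torch.randn(T_o, 2)                     # ground-truth trajectory

best = (mu - gt).norm(dim=-1).mean(dim=-1).argmin()   # best mode: minimum average l2 distance
loss_reg = laplace_nll(gt, mu[best], b[best]).mean()  # regression loss on the best mode only

with torch.no_grad():                                  # soft target from the end-point likelihood
    end_lik = torch.exp(-laplace_nll(gt[-1], mu[:, -1], b[:, -1]))
loss_cls = -(end_lik * torch.log(pi)).mean()           # cross-entropy over mode probabilities
loss = loss_reg + loss_cls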
§ EXPERIMENTS
This section focuses on the experiments conducted using caspformer. Section <ref> illustrates the dataset, metrics, and other experimental settings. Section <ref> provides a detailed comparison with the current state-of-the-art. Section <ref> explains the design context of the ablation studies and the corresponding results.
§.§ Experimental Setup
Dataset. We test caspformer on the publicly available nuScenes dataset <cit.>, which contains 1000 twenty-second-long traffic scenes from Boston and Singapore. The dataset consists of various traffic situations.
Metrics. We report the performance of caspformer using minADEk, MRk, minFDEk, and OffRoadRate. minADEk computes the average of pointwise l_2 distance in meters between the ground truth and the predicted modes and then chooses the minimum value across all k modes. minFDEk computes the l_2 distance between the ground truth and predicted modes for the last time step only, and then selects the minimum amongst all k modes. MRk is defined as the fraction of misses, where a miss occurs if the maximum pointwise l_2 distance between the ground truth and the predicted modes is more than two meters. OffRoadRate measures the fraction of predicted trajectories that lie outside the driving area.
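For a single sample these metrics can be computed as follows (an illustrative sketch; the official benchmark implementation, including how OffRoadRate queries the map, is not reproduced here):

import numpy as np

def min_ade(pred, gt):
    # pred: (k, T, 2), gt: (T, 2); average pointwise l2 error, minimized over the k modes
    return np.linalg.norm(pred - gt, axis=-1).mean(axis=-1).min()

def min_fde(pred, gt):
    # l2 error at the final time step only, minimized over the k modes
    return np.linalg.norm(pred[:, -1] - gt[-1], axis=-1).min()

def miss_rate(pred, gt, threshold=2.0):
    # a miss occurs if even the best mode deviates more than `threshold` meters at some time step
    max_err = np.linalg.norm(pred - gt, axis=-1).max(axis=-1)
    return float(max_err.min() > threshold)

pred = np.random.randn(5, 12, 2)   # 5 predicted modes, 12 future time steps
gt = np.random.randn(12, 2)
print(min_ade(pred, gt), min_fde(pred, gt), miss_rate(pred, gt))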
Implementation Details. caspformer is trained on an Nvidia A100 GPU with a batch size of 64 using the AdamW optimizer <cit.>. The static and dynamic contexts cover a region of size 152 m × 96 m with a resolution of 1 m, leading to input grid sizes of (152, 96). The ego vehicle is placed at (122, 48) pointing upward in this grid. We perform data augmentation on the rasterized inputs during training. The inputs are randomly rotated within [-60^∘, 60^∘] and randomly translated within [-3, 3] with a probability of 0.75. The number of past time steps for the dynamic context is set to T_i=3, which is equivalent to 1 s of input trajectory as the sampling rate is 2 Hz. The number of future time steps for the output is set to T_o=12, which is equivalent to 6 s of prediction. The number of modes is set to M=5. The number of repetitions N of the deformable attention layers, as depicted in Figure <ref>, is set to four. The number of feature levels in the feature pyramid is also set to four, and the hidden dimension of all feature maps is set to 64.
§.§ Results
We compare our work against the state-of-the-art on the nuScenes Motion Prediction Challenge <cit.> in Table <ref>. caspformer achieves the best performance in minADE5, MR5, and OffRoadRate. It should be noted that we have not included the work by Yao et al. <cit.> in our comparison, as their model Goal-LBP performs significantly worse on minFDE1 (9.20) and OffRoadRate (0.07) in comparison to all other methods mentioned in Table <ref>. Moreover, this study was published after the conclusion of our work and therefore its methods could not have been verified and considered in our approach. Our qualitative results are illustrated in Figure <ref>, which shows that caspformer can predict multiple modes consistent with the scene. In addition, we discover that each mode corresponds to a different driving speed of the ego vehicle. A potential limitation is that in some cases the trajectories are not well aligned with the lanes, and we aim to tackle this in our future work.
§.§ Ablation Studies
We perform ablation studies on the nuScenes prediction validation split. The results of our ablation study are shown in Table <ref>, where experiment #1 represents the baseline network architecture, which includes all modules, as presented in Figure <ref>. In the following, we discuss the experimental setting of all the ablation studies and their results:
Importance of mode queries. To show the significance of mode queries, we conduct an experiment in which the mode queries are not provided as input to the deformable cross-attention module, as presented in Figure <ref>. The results of this experiment illustrate that the network performs worse on all metrics, especially on minADE5, when the mode queries are not provided in comparison to when they are (see experiments #1 and #2 in Table <ref>). The corresponding qualitative results of experiment #2 are illustrated in Figure <ref>, which indicate that even though the modes retain the property of capturing various speeds of the ego vehicle, they follow the same path and miss out on other possible paths, thus leading to mode collapse. Therefore, we deduce that the introduction of mode queries helps avoid mode collapse in caspformer.
Effect of Deformable Self-Attention. The studies <cit.> point out that a significant computational cost in deformable attention comes from its deformable self-attention module. In our experiments, we also discover that if the deformable self-attention module is removed, the training time reduces by 60.3%, while minADE5, MR5 and minFDE1 increase by 11.5%, 15.2% and 7.6% respectively (see experiments #1 and #3 in Table <ref>). This can be a reasonable trade-off depending on the constraints for the motion prediction module. When removing the deformable self-attention module, we sum up the positional embeddings and scene encodings along the channel dimension and provide it directly as input into the deformable cross-attention module.
Importance of Recurrent Architecture.
We also test whether the recurrent feedback loops help the network in performing better across the various metrics. Thus we remove both feedback loops from our baseline network (as shown in Figure <ref>) and decode the complete 6 s trajectories in a single forward pass. The results of this experiment show that the performance of the network decreases across all the metrics when the feedback loops are not present in the network (see experiments #1 and #4 in Table <ref>). This confirms the findings of the works <cit.> that the recurrent architecture improves multimodal trajectory prediction.
Importance of Providing Ego Vehicle Position.
The results of our experiments show that setting the reference point to the ego vehicle position does not improve the network performance by any significant degree (see experiments #1 and #5 in Table <ref>), where in the experiment #5, the reference points are directly learned via linear transformation of mode embeddings as is the case with the original deformable attention <cit.>. Nevertheless, we speculate that setting the reference point to the position of the agent in the scene can play an important role in multi-agent joint motion prediction, and leave a detailed study of this for future work.
§ CONCLUSION
In this study, a novel network architecture, caspformer, is proposed which performs multi-modal trajectory prediction from bev images of the surrounding scene. caspformer employs a deformable attention mechanism to decode trajectories recurrently. Moreover, our work illustrates a mechanism to incorporate mode queries, which prevents the mode collapse and enables the network to generate scene-consistent multi-modal trajectories. We also identify that excluding the deformable self-attention module leads to a significant decrease in computational cost, without much effect on the network performance. Thus, in our future work, we aim to remove or modify the deformable self-attention module. Moreover, our future work would involve further study of the effect of vectorized dynamic context and the impact of reference points in multi-agent joint motion prediction.
| In recent years, ad and adas technologies have gained huge attention as they can significantly improve the safety and comfort standards across the automotive industry <cit.>. The current approach to these self-driving tasks is to divide them into multiple independent sub-tasks, mainly i) perception, ii) motion prediction, and iii) motion planning, and optimize each task individually <cit.>. The perception task deals with the detection and segmentation of surrounding dynamic and static environment contexts. The dynamic context captures the motion of the dynamic agents in the scene e.g. pedestrians, cyclists, vehicles, traffic lights, etc., while the static context includes stationary elements of the scene e.g. road and lane boundaries, pedestrian crossings, traffic signs, parked vehicles, construction sites etc. As defined by Cui et al. <cit.>, the motion prediction task involves predicting multi-modal future trajectories for agents in a scene. The prediction of multiple future trajectories enables the model to account for uncertainties in the dynamic context. In addition, to ensure safety critical operation, the predicted trajectories must adhere to the static and dynamic contexts. Lastly, the objective of the motion planning task is to generate the control actions for the ego vehicle to navigate it through the scene while adhering to the traffic rules and dynamics of the vehicle.
Current state-of-the-art models <cit.> in motion prediction require hd maps for static context with centimeter-level accuracy. Such a strict constraint on hd maps leads to high production costs <cit.>. Thus, these models suffer from the problem of scalability in a real-world deployment. A cost-effective and scalable alternative is to construct bev images from a vision perception system deployed on the ego vehicle, as proposed by Li et al. <cit.> in their BEVFormer model. To efficiently decode trajectories and learn spatial attention on the feature maps of bev images, we opt for the deformable attention mechanism proposed in Deformable detr <cit.>. Furthermore, to generate a diverse set of modes in multi-modal trajectory prediction, we incorporate learnable embeddings into our architecture. Contrary to previous studies <cit.>, which use one set of learnable embeddings, our network consists of two sets of learnable embeddings. The first set, temporal queries, is responsible for capturing the temporal correlation in the output trajectories, and the second set, mode queries, aims to address the issue of mode collapse. Following the works <cit.>, we recurrently decode the multi-modal trajectories. This allows the network to update the reference point for deformable attention and the temporal queries through feedback loops of the recurrent decoder.
A depiction of our proposed network caspformer is shown in Figure <ref>. Furthermore, Figure <ref> highlights the components of the recurrent decoder. The contributions of our work are summarised as follows:
* A novel motion prediction architecture is introduced that generates multi-modal vectorized trajectories from bev images.
* It incorporates two sets of learnable embeddings: temporal queries for capturing the temporal correlation in the output trajectories and mode queries for overcoming the issue of mode collapse.
* The trajectory decoding is done recurrently using deformable attention where the feedback loops update the reference point for deformable attention and the temporal queries.
* We evaluate our method on the nuScenes motion prediction benchmark <cit.> and show that it achieves state-of-the-art performance across various metrics. | In this section, we highlight the corresponding related work. Section <ref> categorized the previous studies based on how their scene representation is constructed. Section <ref> highlights various methods for generating multi-modal prediction. Section <ref> discusses several transformer-based attention mechanisms that can be used to extract meaningful representations from bev images.
§.§ Input Scene Representation
The scene representation in the motion prediction task can be divided into two categories: rasterized scene representation and vectorized scene representation. The studies with rasterized scene representation <cit.> take advantage of mature practices in cnn to extract scene encodings. On the other hand, the vectorized representation was first introduced by LaneGCN <cit.>, which identified that hd maps have an underlying graph structure that can be exploited to learn long-range and efficient static scene encodings with gnn. VectorNet <cit.> later showed that not only the static context but also the dynamic context can be represented in vectorized format. Follow-up studies <cit.> have provided several motion prediction methods that receive both static and dynamic contexts in vectorized form.
§.§ Multi-Modal Prediction
To accommodate uncertainties in traffic scenarios, autonomous vehicles must predict various scene-consistent trajectories adhering to the static and dynamic context. One approach <cit.> employs a variational auto-encoder to learn multiple latent representations of the entire scene and then decodes these latent representations generating multiple trajectories corresponding to each agent. However, these methods require multiple forward passes during both training and inference and are prone to mode collapse. Other approaches <cit.> use spatial-temporal grids to predict the future position for each agent and sample multiple goal positions. Thereafter, scene-consistent trajectories are generated which connect the proposed goal positions with the current position of the agents. These approaches learn multi-modality inherently without a specific training strategy, however, post-processing is required to generate trajectories from the grid. Alternatively, Multipath <cit.> utilizes fixed anchors corresponding to different modes. It constructs multiple trajectories by generating the offsets and probability distribution corresponding to each one of these anchors. A potential limitation of Multipath is that most of the fixed anchors are not relevant for particular scenes. This issue is addressed in the follow-up studies <cit.>, which learn the anchors during the training with the help of learnable embeddings and predict a diverse set of modes.
§.§ Transformer-based Attention in Image Domain
In recent years, transformer-based attention <cit.> mechanisms have achieved huge success in the image domain. The studies <cit.> establish the foundation for transformer-based encoders for image processing. Since these approaches lack decoder networks, their application is limited only to feature extraction. On the contrary, detr <cit.> introduces a transformer-based encoder-decoder architecture capable of end-to-end object detection. However, detr suffers from two major problems: slow convergence and low performance in detecting small objects, as its encoder is limited to processing features with very small resolution due to its quadratic computational complexity with the size of feature maps.
Deformable detr <cit.> overcomes these problems by sparsifying the selection of values and computing the attention solely based upon queries whilst eliminating the need for keys. The decrease in computational cost allows both the encoder and decoder to attend to every feature map in the feature pyramid generated by the backbone. Deformable detr thus significantly reduces training time while increasing performance in detecting small objects. Follow-up studies <cit.> on Deformable detr establish that a large part of its computational cost comes from the deformable self-attention module, and therefore propose to reduce this cost by limiting the number of queries which undergo self-attention. We compare training time with and without deformable self-attention modules in ablation studies because computational cost plays an important role in the deployment of models on edge devices operating in vehicles. | null | null | null | In this study, a novel network architecture, caspformer, is proposed which performs multi-modal trajectory prediction from bev images of the surrounding scene. caspformer employs a deformable attention mechanism to decode trajectories recurrently. Moreover, our work illustrates a mechanism to incorporate mode queries, which prevents the mode collapse and enables the network to generate scene-consistent multi-modal trajectories. We also identify that excluding the deformable self-attention module leads to a significant decrease in computational cost, without much effect on the network performance. Thus, in our future work, we aim to remove or modify the deformable self-attention module. Moreover, our future work would involve further study of the effect of vectorized dynamic context and the impact of reference points in multi-agent joint motion prediction.
|
http://arxiv.org/abs/2409.17349v1 | 20240925204948 | Quantum circuit for $\mathbb{Z}_3$ lattice gauge theory at nonzero baryon density | [
"Yoshimasa Hidaka",
"Arata Yamamoto"
] | hep-lat | [
"hep-lat",
"quant-ph"
] |
YITP-24-115
Yukawa Institute for Theoretical Physics, Kyoto University, Kyoto 606-8502, Japan
Interdisciplinary Theoretical and Mathematical Sciences Program (iTHEMS), RIKEN, Wako, Saitama 351-0198, Japan
Department of Physics, The University of Tokyo, Tokyo 113-0033, Japan
§ ABSTRACT
ℤ_3 lattice gauge theory is the simplest discrete gauge theory with three-quark bound states, i.e., baryons.
Since it has a finite-dimensional Hilbert space, it can be used for testing quantum simulation of lattice gauge theory at nonzero baryon density.
We discuss global and local gauge symmetries and their importance in quantum simulation.
We perform a quantum emulator calculation and demonstrate how to study the ground-state property of baryonic matter.
Quantum circuit for Z3 lattice gauge theory at nonzero baryon density
Arata Yamamoto
September 28, 2024
=====================================================================
§ INTRODUCTION
Quantum computing will be a technological innovation for lattice gauge theory <cit.>.
In the future, it might help us solve long-standing mysteries in particle and nuclear physics; e.g., properties of dense baryonic matter in quantum chromodynamics (QCD) <cit.>.
Although quantum simulation of QCD is too challenging for current technology, we should take a step closer to the ultimate goal.
The SU(3) gauge group is theoretically complicated and practically hard due to its infinite-dimensional Hilbert space <cit.>.
Discrete gauge groups, such as ℤ_N, are good starting points for near-term quantum computing.
ℤ_2 lattice gauge theory is most often used for designing quantum simulations because one ℤ_2 gauge link is encoded using one qubit <cit.>.
Because of its simplicity, simulations of ℤ_2 lattice gauge theory perform well on noisy intermediate-scale quantum (NISQ) devices <cit.>.
In ℤ_2 lattice gauge theory, however, a baryon is a bound state of two quarks.
Since such a bosonic baryon does not form a Fermi surface, the behavior of its dense matter is completely different from the fermionic baryonic matter in the real world.
ℤ_2 lattice gauge theory cannot be used for studying nonzero baryon density.
This discrepancy is resolved by considering ℤ_3 lattice gauge theory <cit.>.
In ℤ_3 lattice gauge theory, three quarks are charge neutral and form a fermionic bound state due to the confining force.
ℤ_3 lattice gauge theory can be used for a toy model for QCD at nonzero baryon density <cit.>.
The one-dimensional ℤ_3 lattice gauge theory will be a feasible setup for benchmarking quantum simulations on NISQ devices.
In this paper, we design a quantum simulation of ℤ_3 lattice gauge theory at nonzero baryon density.
We begin with the Hamiltonian and symmetries of the one-dimensional ℤ_3 lattice gauge theory in Sec. <ref>.
To understand the importance of gauge symmetry, we study the energy spectrum at nonzero baryon density in Sec. <ref>.
We construct gauge invariant protocols for a quantum circuit in Sec. <ref> and test ground-state calculations using the Qiskit simulator in Sec. <ref>.
Finally, we provide technical comments in Sec. <ref>.
All equations are written in the lattice unit and all quantities are dimensionless throughout the paper.
§ Z3 LATTICE GAUGE THEORY IN (1+1) DIMENSIONS
The theory is formulated on a one-dimensional lattice, x=1,2,⋯, L.
In ℤ_3 lattice gauge theory, the gauge link operator U(x) and the conjugate operator Π(x) satisfy the relation
Π(x) U(x) = ω U(x) Π(x)
with ω=exp (2π i/3) <cit.>.
The Dirac fermion operators ψ_q(x) satisfy the anticommutation relation
{ψ_qα(x), ψ_q'β^†(x) } = δ_qq'δ_αβ,
where the spinor indices α and β run from 1 to 2.
The Dirac fermion is three-flavor, q=u,d,s, such that the lightest baryon is s-wave <cit.>.
Adopting the Wilson fermion formalism, we consider the Hamiltonian
H = ∑_x g^2 [ 1 - 1/2{Π (x) + Π^† (x) }]
+ ∑_x,q{ (1+m) ψ_q^†(x) γ^0 ψ_q(x)
-1/2ψ_q^†(x) γ^0 (1-i γ^1) U(x) ψ_q(x+1)
-1/2ψ_q^†(x+1) γ^0 (1+i γ^1) U^†(x) ψ_q(x) }
with the gauge parameter g and the quark mass m.
Here, γ^μ are the Gamma matrices, which satisfy (γ^0)^2=-(γ^1)^2=1, and γ^0γ^1=-γ^1γ^0.
The theory has global U(3) flavor symmetry,
ψ_q(x)→ W_θψ_q(x), ψ_q^†(x)→ W_θ^†ψ_q^†(x),
and local ℤ_3 gauge symmetry,
ψ_q(x) →ω^n_xψ_q(x), ψ_q^†(x)→ω^-n_xψ_q^†(x),
U(x) →ω^n_x-n_x+1U(x),
where W_θ is a U(3) flavor matrix, and n_x is an integer depending on the space coordinate.
The flavor symmetry implies that the eigenstates of the Hamiltonian are classified by the irreducible representations of the U(3) group.
In particular, the diagonal part, U(1)× U(1)× U(1), protects quark numbers.
The quark number operator of flavor q is defined by
Q_q = ∑_x {ψ^†_q(x) ψ_q(x) -1 }.
Because of the commutation relation [H,Q_q]=0, the energy eigenstates have fixed quark numbers,
H|Ψ⟩ = E|Ψ⟩,
Q_q|Ψ⟩ =N_q|Ψ⟩,
for all q.
All the quark numbers (N_u,N_d,N_s) are individually conserved and the baryon number
B= 1/3∑_q N_q= 1/3 (N_u+N_d+N_s)
is conserved.
Note that these quantum numbers do not fully determine the irreducible representation. In the case of U(3), it is also necessary to specify the values of the second and third Casimir invariants in order to obtain an irreducible representation.
The gauge symmetry is defined by the Gauss law operator
G(x) = Π(x)Π^†(x-1) ω^-ρ(x)
with
ρ(x) = ∑_q{ψ^†_q(x)ψ_q(x)-1}.
Physical, i.e., gauge invariant, eigenstates satisfy the constraint
G(x)|Ψ⟩ =|Ψ⟩
for all x but unphysical eigenstates do not satisfy it.
The Gauss law operator commutes with the Hamiltonian, [H,G(x)]=0.
The operator G(x) corresponds to the exponential of the Gauss law in continuous gauge theory.
Π(x) is the exponential of an electric field and 2π/3ρ(x) is the local charge defined in mod 2π.
§ ENERGY SPECTRUM
Before discussing quantum computing, we study this theory by the brute-force matrix calculation on a classical computer.
For concreteness, we specify lattice geometry as shown in Fig. <ref>.
The lattice is one-dimensional with open boundaries.
The number of sites is L=3, and the number of links is L-1=2.
The maximum baryon number is B=3 and the minimum baryon number is B=-3.
The total dimension of the Hilbert space is D = 2^6L× 3^L-1=2,359,296.
Since this is too large for classical matrix calculation, we decompose the Hilbert space into fixed-quark-number sectors and use a fixed-quark-number basis, i.e., the basis that labels which lattice sites are occupied or unoccupied by quarks.
This reduces the fermion matrix size from 2^18 to C(6,3)^3 = 20^3 = 8,000 in the vacuum sector (N_u=N_d=N_s=0), etc.
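The counting can be reproduced in a few lines (illustrative):

from math import comb

L = 3
D = 2 ** (6 * L) * 3 ** (L - 1)           # 6 fermion modes per site, 3 states per link
vacuum_fermion_dim = comb(2 * L, L) ** 3  # N_u = N_d = N_s = 0: 3 of 6 modes filled per flavor
print(D, vacuum_fermion_dim)              # 2359296 8000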
We fix the gauge coupling constant at g^2=1 and the quark mass at m=0.1.
The flavor symmetric sectors, N_u=N_d=N_s, are the most important.
Figure <ref> shows the ground state energies obtained by the exact matrix diagonalization.
In the vacuum sector B=0, the physical ground state has a lower energy than the unphysical ground state.
This is consistent with our intuition; the vacuum is gauge invariant.
For nonzero densities, B=1 and 2, the physical ground states have slightly higher energy than the unphysical ground states.
This can be understood in the strong coupling picture.
In the strong coupling limit g →∞, electric fields must vanish, Π(x)|Ψ⟩=|Ψ⟩, to minimize electric field energy.
As for physical states, the Gauss law constraint forces up, down, and strange quarks to form a point-like baryon.
As for unphysical states, since quark positions are not constrained by electric fields, delocalized quarks are possible and energetically favored.
The delocalized quarks have lower fermion energy than the point-like baryon of the physical state.
This observation is crucial for the numerical simulation of the ground state at nonzero baryon density.
The lowest energy state in a fixed-quark-number sector is unphysical.
The physical ground state cannot be obtained by naive energy minimization in the total Hilbert space, but can be obtained only by restricting to the gauge-invariant subspace.
For example, in adiabatic simulations, the ground state is obtained by applying evolution operators to an arbitrary initial state.
The evolution operators must be gauge invariant and the initial state must be physical; otherwise the resulting state will be unphysical.
The ground state energies in other flavor sectors are also shown in Fig. <ref>.
Flavor asymmetric sectors are energetically disfavored due to the Pauli exclusion principle.
For example, the ground state energy of (N_u,N_d,N_s)=(2,1,0) is higher than that of (1,1,1), as shown in the figure.
That of (3,0,0) is even higher.
The B=1 sector consists of three quarks, whose representation is decomposed as 3⊗3⊗3= 10⊕8⊕8⊕1.
For a fixed quark number, (3,0,0) contains only 10 representation, while (2,1,0) contains 10 and 8 representations.
By contrast, the flavor symmetric sector (1,1,1) contains all possible representations:
10, 8, and 1.
These observations imply that the ground states for (N_u,N_d,N_s)=(3,0,0), (2,1,0), and (1,1,1) correspond to 10, 8, and 1 representations, respectively.
We focus only on the flavor symmetric sectors N_u=N_d=N_s in the following sections.
§ GAUGE INVARIANT PROTOCOLS
Preserving global and local gauge symmetries is important for designing quantum circuits.
If the symmetries are violated, a computed state might be a mixture of different quark numbers or unphysical states.
In many algorithms of quantum simulation, such as time evolution and ground-state calculation, we need the exponential operators of individual terms in a Hamiltonian.
We here construct gauge invariant circuits for the exponential operators in the ℤ_3 lattice gauge theory.
Each gauge link is three-dimensional,
and the state vector is represented by two qubits |g_0⟩⊗ |g_1⟩ as
|00⟩ =
[ 1; 0; 0 ]
, |01⟩ =
[ 0; 1; 0 ]
, |11⟩ =
[ 0; 0; 1 ].
For the gauge operators, we choose the U-diagonal basis
U =
[ 1 0 0; 0 ω 0; 0 0 ω^2; ]
, Π =
[ 0 1 0; 0 0 1; 1 0 0; ] .
The argument x is omitted here.
The gauge part of the Hamiltonian (<ref>) can be decomposed into three flip matrices as
Π + Π^† = X_a + X_b + X_c
with
X_a =
[ 0 1 0; 1 0 0; 0 0 0 ]
, X_b =
[ 0 0 0; 0 0 1; 0 1 0 ]
, X_c =
[ 0 0 1; 0 0 0; 1 0 0 ].
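These relations are easy to check numerically; the following short script (an illustrative check, not part of the original paper) verifies the commutation relation of U and Π and the flip-matrix decomposition in this basis:

import numpy as np

w = np.exp(2j * np.pi / 3)
U = np.diag([1, w, w**2])
Pi = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=complex)

X_a = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)
X_b = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex)
X_c = np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], dtype=complex)

assert np.allclose(Pi @ U, w * U @ Pi)                 # Pi U = omega U Pi
assert np.allclose(Pi + Pi.conj().T, X_a + X_b + X_c)  # gauge term decomposition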
The Hamiltonian (<ref>) consists of three parts: the gauge part, the fermion mass part, and the fermion hopping part.
Each part is individually global and local gauge invariant unless further decomposed into gauge variant components.
The exponential operator of the gauge part is rewritten as
e^iθ(Π+Π^†)= V Λ(θ) V^†,
where
V =1/√(3)[ 1 1 i; 1 ω iω^2; 1 ω^2 iω ]
=e^-iπ/4Y_b e^-iφ Y_a e^-i3π/4 X_b,
Λ =
[ e^2iθ 0 0; 0 e^-iθ 0; 0 0 e^-iθ ]
= e^iθ Z_a e^iθ Z_c,
with φ=cos^-1(1/√(3)).
The matrices Y_a, Y_b, Z_a, and Z_c are similarly defined as in Eq. (<ref>).
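The diagonalization and the gate decompositions of V and Λ can likewise be verified numerically (an illustrative check with an arbitrary test angle; not from the original paper):

import numpy as np
from scipy.linalg import expm

w = np.exp(2j * np.pi / 3)
Pi = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=complex)
X_b = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex)
Y_a = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]])
Y_b = np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]])
Z_a = np.diag([1, -1, 0]).astype(complex)
Z_c = np.diag([1, 0, -1]).astype(complex)

phi = np.arccos(1 / np.sqrt(3))
V = expm(-1j * np.pi / 4 * Y_b) @ expm(-1j * phi * Y_a) @ expm(-1j * 3 * np.pi / 4 * X_b)
assert np.allclose(V, np.array([[1, 1, 1j], [1, w, 1j * w**2], [1, w**2, 1j * w]]) / np.sqrt(3))

theta = 0.37  # arbitrary test angle
Lam = expm(1j * theta * Z_a) @ expm(1j * theta * Z_c)
assert np.allclose(expm(1j * theta * (Pi + Pi.conj().T)), V @ Lam @ V.conj().T)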
Two-qubit circuits acting on |g_0⟩⊗ |g_1⟩ are constructed for e^iθ X_b, e^iθ Y_a, e^iθ Y_b, e^iθ Z_a, and e^iθ Z_c [circuit diagrams omitted].
For the fermion part, taking γ^0=σ_1 and iγ^1=σ_3, we can rewrite the mass term as
ψ_q^†(x) γ^0 ψ_q(x) = ψ^†_2q(x) ψ_1q(x) + ψ^†_1q(x) ψ_2q(x),
and the hopping term as
1/2{ψ_q^†(x) γ^0 (1-i γ^1) U(x) ψ_q(x+1)
+ ψ_q^†(x+1) γ^0 (1+i γ^1) U^†(x) ψ_q(x) }
= ψ^†_1q(x)U(x)ψ_2q(x+1) + ψ^†_2q(x+1)U^†(x)ψ_1q(x) .
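With γ^0=σ_1 and iγ^1=σ_3, the spinor algebra behind these two identities reduces to a short matrix check (illustrative):

import numpy as np

s1 = np.array([[0, 1], [1, 0]])       # gamma^0
s3 = np.array([[1, 0], [0, -1]])      # i gamma^1
print(s1)                              # mass term: psi_2^dag psi_1 + psi_1^dag psi_2
print(0.5 * s1 @ (np.eye(2) - s3))     # [[0, 1], [0, 0]] -> psi_1^dag(x) U(x) psi_2(x+1)
print(0.5 * s1 @ (np.eye(2) + s3))     # [[0, 0], [1, 0]] -> psi_2^dag(x+1) U^dag(x) psi_1(x)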
The fermions are arranged on a one-dimensional chain and qubitized by the standard Jordan-Wigner transformation.
Both the mass and hopping terms act on two neighboring fermions.
The exponential operators acting on |f_0⟩⊗ |f_1⟩ are implemented as
e^iθ ( ψ^†_1ψ_0 + ψ^†_0ψ_1 ) and e^iθ ( ψ^†_0Uψ_1 + ψ^†_1U^†ψ_0 ) [circuit diagrams omitted]
for the U-diagonal basis (<ref>).
The exponential operators (<ref>), (<ref>), and (<ref>) commute with the Gauss law operator G(x) and the quark number operators Q_q.
The circuit constructed from them conserves the Gauss law and the quark numbers.
§ EMULATOR TEST
We tested the quantum simulation by utilizing the noiseless Qiskit simulator.
The lattice geometry, the gauge parameter, and the quark mass are the same as those in Sec. <ref>.
Although gauge fields can be eliminated in this geometry, we keep and treat the gauge fields to demonstrate the gauge invariant protocols.
The simulation requires 22 qubits: 18 qubits for fermions and 4 qubits for gauge fields.
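The qubit count simply reflects the register layout: 3 sites × 2 spinor components × 3 flavors for the fermions and 2 links × 2 qubits per ℤ_3 link for the gauge field, e.g. in Qiskit (illustrative):

from qiskit import QuantumCircuit, QuantumRegister

fermion = QuantumRegister(3 * 2 * 3, "f")  # 3 sites x 2 spinor components x 3 flavors
gauge = QuantumRegister(2 * 2, "g")        # 2 links x 2 qubits per Z3 link
qc = QuantumCircuit(fermion, gauge)
print(qc.num_qubits)                        # 22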
We simulated B=0, 1, and 2 for the flavor symmetric sectors.
The fully occupied state B=3 is analytically solvable and its energy is zero.
We computed the ground states at nonzero baryon density by the quantum adiabatic algorithm <cit.>.
The ground state |Ψ⟩ is obtained by the adiabatic evolution
|Ψ⟩ = e^-i ∫_0^S ds h (s) |Φ⟩,
h(s) = ( 1-s/S) H_0 + s/S H,
where |Φ⟩ is the ground state of the initial Hamiltonian
H_0 = ∑_x=1^2 g^2 [ 1 - 1/2{Π (x) + Π^† (x) }]
+ ∑_x=1^3 ∑_q {1+m+v(x)}ψ_q^†(x) γ^0 ψ_q(x)
with
v(x) =
0 (B = 0)
-mδ_x2 (B = 1)
+mδ_x2 (B = 2)
.
The initial state is given by zero electric fields, Π(x)|Φ⟩=|Φ⟩, and the flavor-singlet three quarks localized at x=2 (B=1) and at x=1 and 3 (B=2), which can be prepared by a shallow circuit.
This satisfies the eigenvalue equation Q_q|Φ⟩ =N_q|Φ⟩ and the Gauss law constraint G(x)|Φ⟩ =|Φ⟩.
The evolution operator is approximated as exp{ -i ∫_0^S ds h (s)}≃exp{-i δ h (S)}⋯exp{-i δ h (δ)} with step size δ=0.5, and each evolution operator exp{-i δ h (s)} is decomposed into the gauge part (<ref>), the fermion mass part (<ref>), and the fermion hopping part (<ref>) by the second-order Lie-Suzuki-Trotter formula.
As discussed above, all the decomposed operators commute with the quark number operators and the Gauss law operator.
The obtained ground state has the same quark numbers as the initial state, Q_q|Ψ⟩ =N_q|Ψ⟩, and satisfies the Gauss law constraint G(x)|Ψ⟩ =|Ψ⟩.
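Schematically, the adiabatic preparation amounts to the loop below (a toy statevector sketch in which small random Hermitian matrices stand in for H_0 and H; in the actual computation each step exp{-i δ h(s)} is further decomposed into the gauge, mass, and hopping circuits described above):

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
dim, S, delta = 8, 40.0, 0.5
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (A + A.conj().T) / 2                      # stand-in for the target Hamiltonian
H0 = np.diag(np.arange(dim)).astype(complex)  # stand-in for the initial Hamiltonian
psi = np.zeros(dim, dtype=complex)
psi[0] = 1.0                                  # ground state of H0

for s in np.arange(delta, S + delta / 2, delta):
    h = (1 - s / S) * H0 + (s / S) * H
    psi = expm(-1j * delta * h) @ psi          # one evolution step exp(-i delta h(s))

energy = np.real(psi.conj() @ H @ psi)         # energy estimate after the sweep
print(energy)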
Let us first check the gauge invariance along the evolution.
Figure <ref> shows the evolution of observables: the baryon number B=1/3⟨ Q_u+Q_d+Q_s ⟩, the Gauss law violation
C=∑_x ⟨ | G(x)-1 |^2 ⟩,
and the energy E=⟨ H ⟩.
The baryon number is conserved and the Gauss law violation is zero during the evolution.
The energy sufficiently converges to the exact ground-state energy at large S.
Therefore, the evolution is gauge invariant and the physical ground state is successfully obtained for a fixed baryon number.
The density dependence of physical observables can be studied from the obtained ground states, as shown in Fig. <ref>.
The energy density is defined by
ϵ = 1/L (E - E_ vac) ,
where the exact vacuum energy E_ vac is subtracted for visibility, and the chiral condensate is defined by
Σ = - 1/L∑_x,q⟨ψ_q^†(x) γ^0 ψ_q(x) ⟩ .
The simulation reproduces the exact values for all the baryon numbers.
Currently, we are limited to small volumes due to the constraints of classical computing resources.
If quantum computers are available in the future, the volume can be taken larger and the thermodynamic limit can be extrapolated.
It is also possible to convert the density dependence to the dependence on a quark chemical potential μ.
At zero temperature, the relation between B and μ is determined by minimizing
Ω(B,μ) = E - μ(N_u+N_d+N_s).
The second term is a constant without statistical error in the fixed-quark-number simulation.
The ground state for a given μ has the smallest Ω(B,μ) among all B.
A transition in B happens at μ where Ω(B,μ) and Ω(B+1,μ) intersect.
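In practice the transition points follow directly from the fixed-B ground-state energies, e.g. (the energy values below are purely hypothetical placeholders used only to illustrate the bookkeeping):

import numpy as np

E = {0: -3.0, 1: -2.0, 2: -0.8, 3: 0.7}      # hypothetical ground-state energies E(B)
mu = np.linspace(0.0, 1.0, 401)
Omega = np.array([[E[B] - m * 3 * B for B in range(4)] for m in mu])
B_of_mu = Omega.argmin(axis=1)               # ground-state baryon number for each mu
transitions = [mu[i] for i in range(1, len(mu)) if B_of_mu[i] != B_of_mu[i - 1]]
print(transitions)                           # values of mu where B jumps by one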
Figure <ref> shows the transition points obtained from the simulation data.
The plot is a sum of step functions in a finite box.
In the thermodynamic limit, it will become a frequently-seen plot of a nonzero-density phase transition.
§ COMMENTS
The above computation is based on the gauge invariant circuit of the evolution equation (<ref>).
Although the simulation is, in principle, possible even if the gauge invariance is lost, it will be less efficient.
If the global gauge symmetry is lost, the evolved state is not an eigenstate of the quark number operators.
One must compute the lowest eigenstate of H-μ (Q_u+Q_d+Q_s), instead of H, for many different values of μ.
The quantum adiabatic algorithm assumes a nonzero gap Δ between the ground state and the first excited state and its convergence speed is proportional to Δ^2 <cit.>.
The computational cost will be very large around the transition point of μ, where the gap between B and B+1 disappears.
The violation of the local gauge symmetry is more harmful.
The Gauss-law-violated evolution provides the lowest energy state in the total Hilbert space, including both physical and unphysical subspaces.
The obtained state is physical only when the unphysical ground state has higher energy than the physical ground state.
This condition is satisfied in the vacuum but not satisfied for nonzero density states.
The evolution must be modified with a projection operator onto the physical subspace.
This will cause extra computational costs.
The circuit is generalizable to ℤ_N lattice gauge theory with arbitrary N.
The N-component diagonal matrix Λ and its diagonalizing unitary matrix V can be written down and constructed in the same manner as Eq. (<ref>).
The increase of computational cost is mild.
For example, as for the comparison between ℤ_2 and ℤ_3, exp(iθ X) is replaced by Eq. (<ref>) and exp{iθ Z( ψ^†_0ψ_1 + ψ^†_1ψ_0 ) } is replaced by Eq. (<ref>).
The former increases the number of CNOT gates from two to three and the latter increases from eleven to nineteen if the circuit is compiled with U gates and CNOT gates.
ℤ_3 lattice gauge theory is still economical and the circuit is executable on near-term devices.
Note that the above circuit construction is general but not necessarily the most optimal.
More optimal circuits can exist; e.g., see Ref. <cit.> for the case of ℤ_2 lattice gauge theory.
The authors thank Yuya Tanizaki for fruitful discussions.
This work was supported by JSPS KAKENHI Grant No. 21H01084, 24H00975 (Y. H.), and 19K03841 (A. Y.).
The work of Y. H. was also partially supported by Center for Gravitational Physics and Quantum Information (CGPQI) at Yukawa Institute for Theoretical Physics.
The classical matrix diagonalization was performed on SQUID at the Cybermedia Center, Osaka University.
| Quantum computing will be a technological innovation for lattice gauge theory <cit.>.
In the future, it might help us solve long-standing mysteries in particle and nuclear physics; e.g., properties of dense baryonic matter in quantum chromodynamics (QCD) <cit.>.
Although quantum simulation of QCD is too challenging for current technology, we should take a step closer to the ultimate goal.
The SU(3) gauge group is theoretically complicated and practically hard due to its infinite-dimensional Hilbert space <cit.>.
Discrete gauge groups, such as ℤ_N, are good starting points for near-term quantum computing.
ℤ_2 lattice gauge theory is most often used for designing quantum simulations because one ℤ_2 gauge link is encoded using one qubit <cit.>.
Because of its simplicity, simulations of ℤ_2 lattice gauge theory perform well on noisy intermediate-scale quantum (NISQ) devices <cit.>.
In ℤ_2 lattice gauge theory, however, a baryon is a bound state of two quarks.
Since such a bosonic baryon does not form a Fermi surface, the behavior of its dense matter is completely different from the fermionic baryonic matter in the real world.
ℤ_2 lattice gauge theory cannot be used for studying nonzero baryon density.
This discrepancy is resolved by considering ℤ_3 lattice gauge theory <cit.>.
In ℤ_3 lattice gauge theory, three quarks are charge neutral and form a fermionic bound state due to the confining force.
ℤ_3 lattice gauge theory can be used for a toy model for QCD at nonzero baryon density <cit.>.
The one-dimensional ℤ_3 lattice gauge theory will be a feasible setup for benchmarking quantum simulations on NISQ devices.
In this paper, we design a quantum simulation of ℤ_3 lattice gauge theory at nonzero baryon density.
We begin with the Hamiltonian and symmetries of the one-dimensional ℤ_3 lattice gauge theory in Sec. <ref>.
To understand the importance of gauge symmetry, we study the energy spectrum at nonzero baryon density in Sec. <ref>.
We construct gauge invariant protocols for a quantum circuit in Sec. <ref> and test ground-state calculations using the Qiskit simulator in Sec. <ref>.
Finally, we provide technical comments in Sec. <ref>.
All equations are written in the lattice unit and all quantities are dimensionless throughout the paper. | null | null | null | null | null |
http://arxiv.org/abs/2409.17404v1 | 20240925222753 | Bayesian Covariate-Dependent Graph Learning with a Dual Group Spike-and-Slab Prior | [
"Zijian Zeng",
"Meng Li",
"Marina Vannucci"
] | stat.ME | [
"stat.ME"
] |
PSWF-Radon approach to
reconstruction from band-limited Hankel transform
Fedor Goncharov
Université Paris-Saclay
CEA, List, F-91120
Palaiseau, France
Mikhail Isaev
School of Mathematics
and Statistics
UNSW Sydney
Sydney, NSW, Australia
[email protected]
Roman G. Novikov
CMAP, CNRS, Ecole polytechnique
Institut Polytechnique de Paris
Palaiseau, France
IEPT RAS, Moscow, Russia
Rodion Zaytsev
Faculty of Mathematics, HSE University
Igor Krichever Center for Advanced Studies
Skolkovo Institute of Science and Technology
Moscow, Russia
=================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
Covariate-dependent graph learning has gained increasing interest in the graphical modeling literature for the analysis of heterogeneous data. This task, however, poses challenges to modeling, computational efficiency, and interpretability. The
parameter of interest can be naturally represented as a three-dimensional array with elements that can be grouped according to two directions, corresponding to node level and covariate level, respectively. In this article, we propose a novel dual group spike-and-slab prior that enables multi-level selection at covariate-level and node-level, as well as individual (local) level sparsity. We introduce a nested strategy with specific choices to address distinct challenges posed by the various grouping directions. For posterior inference, we develop a tuning-free Gibbs sampler for all parameters, which mitigates the difficulties of parameter tuning often encountered in high-dimensional graphical models and facilitates routine implementation. Through simulation studies, we demonstrate that the proposed model outperforms existing methods in its accuracy of graph recovery. We show the practical utility of our model via an application to microbiome data where we seek to better understand the interactions among microbes as well as how these are affected by relevant covariates.
Keywords: Bayesian inference, Gaussian graphical model, global-local prior, human microbiome, variable selection.
§ INTRODUCTION
Gaussian graphical models have been applied in a wide variety of fields to recover the dependence structure among data <cit.>. The idea dates back to <cit.>, who proposed the covariance selection method that estimates conditional independencies based on the inverse covariance matrix (a.k.a., precision matrix or concentration matrix) by linking the absence of an edge in an undirected graph to a zero entry in the precision matrix. Expanding upon this idea, <cit.> showed that neighborhood selection for each node in the graph is equivalent to perform variable selection in a Gaussian linear model, turning the edge detection problem into variable selection for independent regressions. This approach has inspired numerous studies with a focus on using different selection methods to recover edges within a graph <cit.>.
Recent work has demonstrated the value of incorporating covariates in the modeling of subject-specific graphs via Gaussian graphical regression models, in particular for characterizing and discovering interactions in complex biological systems and diseases such as cancer <cit.>. Most of the existing literature has focused on covariate-adjusted mean structures in Gaussian graphical models, with either constant graphs across subjects or group-specific graphs depending on discrete covariates; for a comprehensive review of this rich literature, see <cit.> and Section 1.3 of <cit.>, with <cit.> providing a recent example. In this article, instead, we focus on modeling the dependence of the precision matrix on covariates, a framework referred to as precision-on-scalar regression. This covariate-dependent graph learning task is comparatively much less studied and poses challenges to modeling, computational efficiency, and interpretability. Partition-based Bayesian approaches to model covariate-dependent graphs are explored by <cit.>,
while <cit.> consider an edge regression model for undirected graphs, which estimates conditional dependencies as a function of subject-level covariates, and employs shrinkage priors.
<cit.> introduce bi-level sparsity, where element- and group-wise sparsity are encouraged by lasso and group lasso, respectively. Also, <cit.> consider a conditional DAG model that allows the graph structure to vary with covariates. Their approach assumes a known hierarchical ordering of the nodes, a prior knowledge that may not always be available in practical settings.
Precision-on-scalar regression models are characterised by an ultra high-dimensional parameter space, which can be viewed according to more than one grouping direction, e.g. node or covariate. It is desirable to have both node-level and covariate-level group sparsity, in addition to individual (local) level sparsity. This simultaneous sparsity at the local level and the two group levels is crucial for interpretable graphical models, particularly in the presence of many nodes and covariates.
The majority of the existing works on heterogeneous graphs fail to model such structured sparsity, as they typically group parameters in one direction only, and there is a lack of efficient estimation strategies with easy parameter tuning to address the daunting computational challenges. Also, in the work of <cit.>, the authors use lasso and group lasso to induce covariate-level sparsity by imposing node-level sparsity. However, relying on one group level to induce the sparsity of another restricts the ability to flexibly capture interactions between the two group levels.
Here, we introduce a novel dual-group spike-and-slab prior as a general framework to encode group sparsity at both the covariate and the node level. At the covariate level, we allow for group (global) and individual (local) sparsity. Even though this general prior is complementary to a wide range of existing priors and empowers them into dual-group variants, modeling the two grouping directions in the context of graphical models has distinct challenges. To this end, we propose to use particular choices tailored to each grouping direction, leading to a dual-group spike-and-slab prior well suited for graphical models. We complete our proposed modeling construction with tuning-free posterior sampling, which aids model interpretability.
Overall, we are not aware of any work in the Bayesian literature addressing multiple covariates with the aforementioned structured sparsity. Through simulation studies, we demonstrate that the proposed model outperforms the method of <cit.> in its accuracy of graph recovery. We also compare performances to the Bayesian sparse group selection method of <cit.>.
In this paper, we propose a Bayesian Gaussian graphical regression model with a Nested Global-Local Spike-and-Slab prior that achieves multi-level selections, which we term covariate-level, node-level and local-level.
This prior initially employs the structure of the global-local spike-and-slab prior <cit.>, excluding a covariate (global-level selection) by its probability of influencing the entire graph. The probability is assessed by counting the number of affected edges in the graph (local-level selection) through a Beta-Binomial conjugacy. This step considers the perspective of the covariates on a graph and sets a penalty parameter for subsequent steps. We then nest this structure in the sparse group selection with spike-and-slab prior <cit.> to perform node-level selection in the manner of conventional Gaussian graphical regression models. This step handles node-level selection from the perspective of each independent node and takes advantage of the covariate selection results conditional on the outcome of the global-local spike-and-slab prior as the penalty parameters. We propose a tensor interpretation for this selection prior.
We design a Markov Chain Monte Carlo (MCMC) algorithm that incorporates uncertainty from hyper-parameters and is free of manual tuning.
For posterior inference, we obtain the marginal posterior probabilities of inclusion, allowing for uncertainty quantification and interpretable selection results.
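As a toy illustration of the covariate-level step only (hypothetical code: the hyper-parameter values, the graph size, and the sampling of the local indicators are placeholders and do not reproduce the actual Gibbs sampler):

import numpy as np

rng = np.random.default_rng(1)
p = 10                                      # number of nodes
n_edges = p * (p - 1) // 2                  # number of edges in the graph
gamma = rng.binomial(1, 0.1, size=n_edges)  # local spike-and-slab indicators for one covariate
a, b = 1.0, 1.0                             # Beta hyper-parameters

# Beta-Binomial conjugacy: update the probability that this covariate affects an edge
post_a = a + gamma.sum()
post_b = b + n_edges - gamma.sum()
pi_covariate = rng.beta(post_a, post_b)     # draw of the covariate-level inclusion probability
print(gamma.sum(), pi_covariate)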
We compare the proposed method to <cit.>, who proposed a Bayesian bi-level selection prior for Gaussian linear regression, and <cit.>, who studied a Frequentist sparse group lasso penalty for the Gaussian graphical regression model. In the context of graphical regression, both methods integrate sparsity solely in relation to a specific node and select edges that originate from this node. This is caused naturally by viewing this problem from the perspective of conditional independent regression <cit.>. Meanwhile, as we point out, if one views all regression coefficients as a precision coefficient tensor, the penalty considered by <cit.> and <cit.> will be only one of the possible directions to slice the precision coefficient tensor. The proposed method not only considers this direction of tensor slice but also adds the covariate-level selection, as a second direction to slice the tensor. We demonstrate in simulations that, when the sparsity is introduced from the covariates' perspective, the proposed method takes advantage of this modelling idea.
As an illustration of the utility of our method, we consider an application to multivariate data arising from microbiome studies.
The human microbiome has been implicated in many diseases including colorectal cancer, inflammatory bowel disease, and immunologically mediated skin diseases. Here, we apply the proposed method to real data from the Multi-Omic Microbiome Study-Pregnancy Initiative (MOMS-PI) study, to estimate the interaction between microbes in the vagina, as well as the interplay between vaginal cytokines and microbial abundances, providing insight into mechanisms of host-microbial interaction during pregnancy. These factors influence the microbiome by introducing new organisms, changing the abundance of metabolites, or altering the pH of their environment. Identifying factors that lead to the prevalence of different microbes can improve the understanding of the importance and the function of the microbiome. Our method identifies a large number of microbiome interactions (edges) that are simultaneously influenced by multiple cytokines. It also highlights a subnetwork of multiple microbes that belong to the same family (phylum) and that appear to be consistently detected as having covariate-dependent interactions for various cytokines, which aligns with previous findings.
The rest of the paper is organized as follows. In Section <ref>, we introduce the proposed prior construction and the sampling procedure. In Section <ref>, we conduct simulations and compare the proposed approach with existing methods. In Section <ref>, we apply the proposed model to a human microbiome study. In Section <ref> we provide some concluding remarks.
§ INTRODUCTION
— by Zijian
Gaussian graphical models have been applied in a wide variety of fields to recover the dependency and network among data <cit.>. Among these models, one model class shown to be useful is the Gaussian graphical regression model. This idea dates back to 1972, when Dempster proposed the covariance selection <cit.>, aiming at discovering the conditional independence restrictions using the inverse covariance matrix (a.k.a., precision matrix or concentration matrix). This links the absence of an edge in the undirected Gaussian graph with a zero entry in the precision. Expanding upon this idea, <cit.> shows that the neighborhood selection for the conditional independence restrictions of each node in the graph is equivalent to the variable selection for Gaussian linear models, turning the edge detection of a graph into the variable selection of independent regressions. This approach has inspired numerous studies with a focus on using different selection methods to study the connectivity within a graph <cit.>. However, unlike the mean regression models often developed with the covariates' effects, previous research of Gaussian graphical regression models mainly regresses one node on the other nodes without incorporating the covariates for the precision.
Recently, <cit.> and <cit.> have studied covariate-dependent precision regression, from both applied and methodological perspectives, for Gaussian graphical regression models.
<cit.> propose a Bayesian edge regression to study covariate-dependent edges in a Gaussian graph using hepatocellular carcinoma data.
They shrink a large number of correlation coefficients toward small values for all edges via a shrinkage prior, and then use a separate posterior thresholding step for the final edge selection.
<cit.> study Gaussian graphical regression models with covariates under bi-level sparsity, where element-wise and group-wise sparsity are encouraged by the lasso and group lasso penalties, respectively. <cit.> establish variable selection consistency and the ℓ_2 convergence rate for this combined sparsity penalty, a.k.a. the sparse group lasso penalty, and show that it outperforms using either the lasso or the group lasso penalty alone in highly sparse cases in simulation. The two works, although with different focuses, demonstrate the value of incorporating covariates in Gaussian graphical regression models, and reveal two main challenges. Firstly, the number of parameters is usually large relative to the sample size.
Secondly, the tuning parameters can be hard to specify.
Inspired by the recent work of <cit.>, we propose a Bayesian Gaussian graphical regression model with a Nested Global-Local Spike-and-Slab prior, which performs selection at multiple levels. This prior achieves global-level selection, excluding a covariate, by measuring the probability that the covariate influences the whole graph via Beta-Binomial conjugacy in the local-level spike-and-slab indicators. This step takes the perspective of the covariates on the graph and sets a penalty parameter for subsequent steps. We then nest this prior within the sparse group selection spike-and-slab prior <cit.> to perform node-level selection in the manner of conventional Gaussian graphical regression models. This step handles node-level selection from the perspective of each independent node and takes advantage of the covariate selection results, conditional on the outcome of the global-local spike-and-slab prior, as penalty parameters. We propose a tensor interpretation for this selection prior.
As a result, a single edge affected by a covariate is subject to three levels of selection: covariate-level, node-level and local-level. We design a Gibbs sampler in a full Markov chain Monte Carlo (MCMC) manner to automatically tune the hyper-parameters while incorporating their uncertainty. This Gibbs sampler provides the marginal posterior probability of inclusion for each pair of edge and covariate, allowing for uncertainty quantification and posterior inference (e.g., Bayesian false discovery rate control), which leads to interpretable and flexible selection results. We provide an Rcpp implementation of this sampler, available on a GitHub page.
Compared to conventional methods without covariates, the proposed method can take advantage of group label information and jointly estimate the graphs of all groups by using the group label encoding from <cit.>. We demonstrate this common advantage, shared by all Gaussian graphical regression models with covariates, in simulation.
We also compare to <cit.>, who proposed a Bayesian bi-level selection prior for Gaussian linear regression, and to <cit.>, who studied a frequentist sparse group lasso penalty for the Gaussian graphical regression model. Both methods consider sparsity only with respect to a given node. This arises naturally from viewing the problem through the lens of conditionally independent regressions <cit.>. Meanwhile, as we point out, if one views all regression coefficients as a precision coefficient tensor, the penalties considered by <cit.> and <cit.> correspond to only one of the possible directions along which this tensor can be sliced. The proposed method not only considers this slicing direction but also adds covariate-level selection as a second direction along which the tensor is sliced. We demonstrate in simulations that, when sparsity is introduced from the covariates' perspective, the proposed method takes advantage of this modelling idea.
The rest of the paper is organized as follows. In Section <ref>, we introduce the proposed method, the prior construction with its tensor interpretation, and the sampling procedure. In Section <ref>, we conduct simulations and compare the proposed approach with Gaussian graphical regressions with and without covariates. In Section <ref>, we apply the method to human gene expression data and microbiome data.
§ METHODS
§.§ Gaussian Graphical Regression Models with Covariates
Let Y = (Y^1, …, Y^p) be a p-dimensional outcome vector
and X = (X^1, …, X^q) a q-dimensional covariate vector. We denote N independent and identically distributed observations by y_n = (y^1_n, …, y^p_n ) and x_n = ( x^1_n, …, x^q_n ), for n = 1, …, N. For simplicity, we assume that the outcomes have been centered with zero mean. The covariate-dependent Gaussian graphical model can be written as
y_n | x_n ∼ N_p( 0, [ Ω(x_n) ]^-1 ),
where Ω(x_n) = ( ω^ij( x_n) )_i,j=1^p.
Similarly to the typical covariate-free setting studied in the Gaussian graphical model literature <cit.>, the covariate-dependent precision matrix Ω(X) encodes conditional independence of node i and node j given the other nodes Y^-(i,j), but in a covariate-dependent manner as:
ω^ij(X) = 0 ⟺ Y^i ⊥ Y^j | Y^-(i,j), X.
This adds flexibility to modeling the dependence structure of Y.
Under the Gaussian assumption (<ref>), the elements of the precision matrix, ω^ij( x_n), are related to the coefficients in the linear regression of y^i_n on the other y^j_n, 1 ≤ i ≠ j ≤ p, as
y^i_n = ∑_j ≠ i^p θ^ij( x_n) y^j_n + ϵ^i_n, ϵ^i_n ∼ N( 0, σ^2_i(x_n)),
where θ^ij(x_n) = - ω^ij( x_n)/ω^ii(x_n) and σ^2_i(x_n) = 1/ω^ii(x_n). This model generalizes the standard treatment of Gaussian graphical models <cit.> to a covariate-dependent regime. To complete the model specification, we consider specifying θ^ij( x_n) and ω^ii(x_n) using interpretable structures. In particular, we assume ω^ii( x_n) = ω^ii to be independent of covariates, as in <cit.> and <cit.>, and assume a linear structure θ^ij(x_n) = ∑^q_k=1 β^ij_k x^k_n for 1 ≤ i ≠ j ≤ p.
These assumptions lead to the following model
y^i_n = ∑_j ≠ i^p ∑^q_k=1 β^ij_k x^k_n y^j_n + ϵ^i_n, ϵ^i_n ∼ N( 0, σ^2_i), 1 ≤ i ≤ p, n = 1, ..., N.
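For concreteness, the following minimal Python sketch (our own illustration, not part of the paper's implementation; the coefficient values and the constant diagonal ω^ii = 2 are assumptions chosen only so that Ω(x_n) stays positive definite) simulates observations from this covariate-dependent model for toy dimensions.

import numpy as np

rng = np.random.default_rng(0)
p, q, N = 5, 3, 50                 # nodes, covariates, sample size (toy values)

# beta^{ij}_k stored as a p x p x q array; the non-zero entries are illustrative
B = np.zeros((p, p, q))
B[0, 1, 0] = B[1, 0, 0] = 0.4      # edge (1,2) modulated by covariate x^1
B[2, 3, 1] = B[3, 2, 1] = -0.35    # edge (3,4) modulated by covariate x^2

X = rng.uniform(size=(N, q))       # covariates x_n in [0,1]
Y = np.empty((N, p))
for n in range(N):
    Omega = B @ X[n]               # off-diagonal omega^{ij}(x_n) = sum_k beta^{ij}_k x^k_n
    np.fill_diagonal(Omega, 2.0)   # covariate-free diagonal omega^{ii}; 2.0 keeps Omega positive definite here
    Y[n] = rng.multivariate_normal(np.zeros(p), np.linalg.inv(Omega))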
§.§ Dual group spike-and-slab prior
The conditional regression in Eq. (<ref>) models the effect of covariate x^k_n on edge (i,j) via coefficient β^ij_k. Therefore, sparsity of the regression coefficients β^ij_k induces sparsity of the covariate-dependent precision Ω(x_n).
Let us collect the coefficients β^ij_k in Eq. (<ref>) into a p-by-p-by-q array ℬ = ( β^ij_k), for i, j = 1, …, p and k = 1, ..., q, with diagonal elements β^ii_k = 0. The elements of this multi-dimensional array ℬ can be grouped in different ways, i.e., as node-level and covariate-level groupings.
Simultaneous sparsity at the two group levels, as well as locally at individual level, can improve the interpretability and estimability of covariate-dependent graphical models, particularly in the case of many nodes and covariates <cit.>.
Here, we introduce a novel dual-group spike-and-slab prior as a general framework to encode group sparsity at both the covariate and the node level. At the covariate level, we allow for group (global) and individual (local) sparsity. We complete our proposed modeling construction with tuning-free posterior sampling, which aids model interpretability. Even though this general prior is complementary to a wide range of existing priors and can turn them into dual-group variants, the two grouping directions pose distinct challenges in the context of graphical models.
In our construction, we allow covariate-dependent directed effects between two nodes to be asymmetric; symmetrizing β^ij_k, if desired, can be achieved by enforcing the constraint β^ij_k = β^ji_k as in <cit.>, or via posterior summary, as commonly done in the literature <cit.>.
We start with a conventional spike-and-slab prior of the type:
β^ij_k | δ^ij_k, σ^ij_k ∼δ^ij_k N[ 0, (σ^ij_k )^2] + ( 1 - δ^ij_k) δ_0,
where δ^ij_k ∈{ 0, 1 } is the overall selection indicator for a combination of nodes (i,j) and covariate k, σ^ij_k represents the prior variance of the slab distribution, and δ_0 is the Dirac mass at 0 (see <cit.> for a comprehensive treatment of this class of priors).
To encode sparsity, we decompose the selection indicator δ^ij_k into three parts: δ^ij_k = δ^ij×δ_k ×γ^ij_k, where δ_k is the covariate-level effect, δ^ij is the node-level effect, and γ^ij_k represents the local-level effect. For each (i, j, k), the marginal prior on β^ij_k is
β^ij_k | δ^ij, δ_k, γ^ij_k, σ^ij_k ∼δ^ijδ_k γ^ij_k N[ 0, (σ^ij_k )^2] + ( 1 - δ^ijδ_k γ^ij_k ) δ_0.
One particular challenge in this dual-group approach is to build interpretable group structures into the prior on β^ij_k jointly across (i, j, k) beyond the marginal specification in (<ref>), which not only should encode two group structures but also account for the distinct challenges posed by high-dimensional graphical models.
The two sets of group indicators (δ^ij) and (δ_k) are symmetric in (<ref>) in that no particular order between the two groups is enforced when combined with the local-level indicator γ^ij_k. The spike-and-slab specification allows us to consider any sequential order of them, leading to a notion of nested decomposition of the two groups. Without loss of generality, below we focus on the sequence of first δ^ij then δ_k, and discuss the different corresponding model structures along with our specification of each group sparsity. To this end,
we let τ^ij_k = δ_k γ^ij_kσ^ij_k and reparameterize Eq (<ref>) as
β^ij_k | δ^ij, τ^ij_k ∼δ^ij N[ 0, (τ^ij_k )^2] + ( 1 - δ^ij) δ_0,
and describe our dual-group structured prior specification below.
§.§.§ Node-level group sparsity: outer-layer structured prior for scalar response
For a given pair of nodes (i,j), let the coefficient vector B^ij = (β^ij_k)_1≤ k ≤ q collect the coefficients grouped based on the paired indices (i,j). The model in Eq. (<ref>) leads to
y^i_n = ∑_j ≠ i^p ( x_n^T B^ij ) y^j_n + ε^i_n,
where B^ij = 0 implies that node j has no effect on node i.
The group sparsity at the node level is described by the sparsity of the vector B^i = (B^ij)^1 ≤ j ≤ p_j ≠ i with group label j. One challenge in defining a prior on B^ij is the need to achieve sparsity at both the group level and the individual level, with the added difficulty of addressing dual group sparsity.
Note that the model above is a high-dimensional linear regression model with standard scalar response, for which a rich menu of group priors has been proposed, such as <cit.>. We propose to use the multivariate spike-and-slab prior for group sparsity in the outer layer of our model:
B^ij = diag(τ^ij_1, ..., τ^ij_q ) b^ij, 1 ≤ i ≠ j ≤ p,
b^ij | δ^ij∼δ^ij MVN( 0_q, I_q ) + ( 1 - δ^ij) δ_0_q ,
δ^ij | π^i∼Bernoulli( π^i), π^i∼Beta( a^i, b^i),
where b^ij = (b^ij_1, ..., b^ij_q )^T. Conditional on τ^ij_k for k = 1, …, q, Eqs. (<ref>) and (<ref>) provide node-level selection for the paired indices (i,j) for all k. The node-level indicator δ^ij models the group effect of node j on node i through all covariates. For every j, if δ^ij = 0 then the effects b^ij will be excluded from the model, implying that node j does not affect node i through any of the covariates, i.e., B^ij = 0. The parameter π^i can be interpreted as the prior probability that y^i is affected by the other nodes.
§.§.§ Covariate-level group and local sparsity: inner-layer structured prior for multivariate response
For a given covariate x^k, let the coefficient matrix B_k = (β^ij_k)^1≤ i,j ≤ p collect the coefficients grouped based on the index k. One challenge in modeling the sparsity of the matrix B_k is
to achieve simultaneous group sparsity and individual sparsity in an interpretable manner. This calls for a strategy different from the treatment of node-level sparsity. To see this, note that the independent regression system from Eq. (<ref>) has the representation
y_n = ∑^q_k=1 ( B_k y_n ) x^k_n + ε_n,
which, unlike the preceding node-level representation, is a high-dimensional vector-on-scalar regression model with multivariate response, where B_k = 0 implies that covariate x^k has no influence on the precision matrix.
We propose to
jointly model (δ_k, γ_k^ij) by
δ_k = I(π_k ≥ d_k), γ^ij_k | π_k ∼Bernoulli( π_k ), π_k ∼Beta( a_k, b_k).
This global-local structure has been recently advocated by <cit.> in a different setting when studying image-on-scalar regression, which is the inner layer of our prior.
The global-level indicator δ_k represents the covariate-level selection, as δ_k = 0 zeros out τ^ij_k for any 1 ≤ i ≠ j ≤ p, eliminating the covariate from the model. That is, δ_k = 0 implies that covariate x^k has no influence on any of the edges, hence on the whole graph. At the local level, γ^ij_k refers to the influence of the covariate on the pair of nodes (i,j). The importance of a covariate is characterized by the total number of pairs influenced by that covariate. The parameter π_k, which can be interpreted as the probability that x^k has an influence on the graph, is called the participation rate in <cit.> and is estimated under the assumption that the covariate affects all pairs independently. The participation rate π_k also informs the selection by excluding those covariates expected to affect fewer than d_k × 100% of the pairs, that is, setting τ_k = 0 if π_k < d_k, leading to B_k = 0. This hard threshold provides a probability-based selection rule and uses the local-level selection to inform the global-level selection. Without prior domain knowledge, <cit.> recommend d_k = 0.05 (i.e., 5%) as a conventional probability threshold for sparse models.
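To fix ideas, the following Python sketch (ours; the hyper-parameter values and array layout are illustrative assumptions, not taken from the paper) draws one realization of the regression coefficients from the dual-group prior by combining the covariate-level indicators (δ_k, γ^ij_k) with the node-level indicators δ^ij and the slab draws.

import numpy as np

rng = np.random.default_rng(1)
p, q = 6, 4                                   # nodes and covariates (toy sizes)
a_k = b_k = a_i = b_i = 1.0                   # non-informative Beta hyper-parameters
d = 0.05                                      # participation-rate threshold d_k
s2 = np.ones(q)                               # slab variances s_k^2

# covariate level: global indicator delta_k = I(pi_k >= d) and local indicators gamma^{ij}_k
pi_k = rng.beta(a_k, b_k, size=q)
delta_k = (pi_k >= d).astype(float)
gamma = rng.binomial(1, pi_k, size=(p, p, q))
tau_tilde = np.abs(rng.normal(scale=np.sqrt(s2), size=(p, p, q)))  # N^+(0, s_k^2) slab (half-normal)
tau = delta_k * gamma * tau_tilde             # tau^{ij}_k, zero when the covariate or the pair is excluded

# node level: group indicator delta^{ij} and slab vector b^{ij}
pi_i = rng.beta(a_i, b_i, size=p)
delta_ij = rng.binomial(1, pi_i[:, None], size=(p, p))
b = rng.normal(size=(p, p, q))

beta = delta_ij[:, :, None] * tau * b         # beta^{ij}_k = tau^{ij}_k b^{ij}_k
for i in range(p):
    beta[i, i, :] = 0.0                       # diagonal entries are excluded by construction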
Eqs. (<ref>), (<ref>), and (<ref>) lead to a dual-group spike-and-slab prior. The prior sparsity encoded therein can be obtained by calculating the expectation of
the selection indicator δ^ij_k = [δ_k ×γ^ij_k] ×δ^ij = [I(π_k ≥ d_k)×γ^ij_k] ×[ δ^ij]:
E[δ^ij_k] = E[ I(π_k ≥ d_k) γ^ij_k δ^ij ] = E_π_k[ E[ I(π_k ≥ d_k) γ^ij_k | π_k ] ] E_π^i[ E[ δ^ij | π^i ] ]
= E_π_k[ I(π_k ≥ d_k) π_k ] E_π^i[ π^i ] = ∫^1_d_k π_k 1/B(a_k, b_k) π_k^a_k -1 (1-π_k)^b_k -1 d π_k · a^i/(a^i + b^i)
= E[π_k ] [ 1 - F_Beta_k(d_k)] E[ π^i] = a^i/(a^i + b^i) · a_k/(a_k+b_k) · [ 1 - F_Beta_k(d_k)] ,
where F_Beta_k(·) is the cumulative distribution function of Beta(a_k+1, b_k). Eq. (<ref>) offers a flexible way to incorporate prior beliefs on the graphical structure. For example, a possible belief is that the graph consists of a dense population level and a sparse covariate level, which corresponds to an always-included intercept term with imbalanced penalization for the intercept and covariates, as seen in <cit.>. Although our model does not intentionally incorporate an intercept, it can adapt to this belief by incorporating x^1_n = 1 for all n and adjusting the prior parameters d_k, a_k and b_k, i.e., d_1 = 0 and a_1/(a_1+b_1) > a_k/(a_k+b_k) for k ≠ 1. Similarly, prior parameters a^i and b^i can be adjusted to adapt to beliefs on node sparsity. Without any prior information, a non-informative prior can be used, allowing the model to learn from the data.
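As a quick numerical check of this expression, the small Python snippet below (ours, purely illustrative) evaluates the prior inclusion probability; with the non-informative choice a_k = b_k = a^i = b^i = 1 and the conventional threshold d_k = 0.05 it returns approximately 0.249.

from scipy.stats import beta

def prior_inclusion(a_k, b_k, d_k, a_i, b_i):
    # E[delta^{ij}_k] = E[pi^i] * E[pi_k] * (1 - F_{Beta(a_k+1, b_k)}(d_k))
    tail = 1.0 - beta.cdf(d_k, a_k + 1.0, b_k)
    return (a_i / (a_i + b_i)) * (a_k / (a_k + b_k)) * tail

print(prior_inclusion(1.0, 1.0, 0.05, 1.0, 1.0))   # approx 0.249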
§.§.§ Complete tuning-free prior specification
We place a prior ℱ^ij_k with positive support on σ^ij_k. In particular, we specify ℱ^ij_k = N^+(0, s^2_k), i.e., a truncated normal on the positive half-line. Combining this with Eq. (<ref>), our prior on τ^ij_k is
τ^ij_k = τ̃^ij_k I( π_k ≥ d_k )
τ̃^ij_k | γ^ij_k ∼γ^ij_k N^+( 0, s^2_k) + ( 1 - γ^ij_k) δ_0
γ^ij_k | π_k ∼Bernoulli( π_k ), π_k ∼Beta( a_k, b_k).
Finally, we complete the model by assigning conjugate priors s^2_k ∼InvGamma(1,t), with t ∼Gamma(a_t, b_t), for k=1…, q, and inverse gamma priors on the error variances, σ^2_i ∼InvGamma( a^i_σ, b^i_σ), for i=1, …, p. We set a_t = b_t = 0, which leads to a commonly used flat and improper prior on t, although other values can be used. These specifications, along with Eqs. (<ref>), (<ref>) and (<ref>), complete our prior model.
We conclude this section by noting that in the presentation of our proposed dual-group spike-and-slab prior we have used the sequential order of nesting a vector-on-scalar regression into a scalar-on-scalar regression. In practice, one can also vary this order and substitute particular choices for each module with alternative structures, following the same nesting strategy to address multi-level sparsity.
§.§ Nested Global-Local Spike-and-Slab prior
We now define a prior on the regression coefficients β^ij_k in Eq. (<ref>) that achieves selection at multiple levels. In order to better understand the role of the regression coefficients in (<ref>) and the actions of the proposed selection prior, it is helpful to view the set of coefficients as a p-by-p-by-q tensor ℬ = ( β^ij_k), i, j = 1, …, p, and k = 1, …, q, with diagonal elements of each slice set to 0, i.e., β^ii_k = 0 ∀ i,k , as they are covariate-independent. Plot (a) in Figure <ref> shows an illustration of the tensor ℬ when the number of nodes is p = 4 and the number of covariates is q = 3, with points in red indicating zeros and points in blue indicating those to be estimated.
We make the following considerations:
* Covariate-level. For a given covariate x^k, let the coefficient matrix B_k = (β^ij_k)^1≤ i,j ≤ p indicate the k-th vertical slice of the tensor ℬ. If β^ij_k = 0 then the influence of x^k on edge (i,j) is zero. This implies that covariate x^k has no influence on the precision matrix, i.e., Ω(X) ⊥ x^k, if B_k = 0.
* Node-level. For a given node i, let the coefficient matrix B^i = (β^ij_k)^1≤ j≤ p_1 ≤ k ≤ q indicate the i-th horizontal slice of the tensor ℬ. If the j-th line in the slice, b^ij = (β^ij_k)_1 ≤ k ≤ q, is all zeros then there is no effect of node j on node i through any of the covariates. Notice that, because of the `OR' rule we adopt to determine the existence of an edge, edge (i,j) will be excluded when both b^ij = b^ji = 0.
Plots (b) and (c) in Figure <ref> exemplify these two levels, for covariate 1 and node 1, respectively.
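In code, the two grouping directions correspond simply to two ways of slicing the same array; the short numpy sketch below (ours, with placeholder values) makes the correspondence explicit.

import numpy as np

p, q = 4, 3
B = np.arange(p * p * q, dtype=float).reshape(p, p, q)  # stand-in for the coefficient tensor

B_k  = B[:, :, 0]    # covariate-level slice B_1: effect of covariate x^1 on all edges (plot (b))
B_i  = B[0, :, :]    # node-level slice B^1: effects of all nodes on node 1 across covariates (plot (c))
b_1j = B[0, 1, :]    # the vector b^{12}; b^{12} = 0 removes the effect of node 2 on node 1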
In order to define our selection prior, we first write the individual coefficient β_k^ij as the product of a covariate-level parameter, (τ^ij_k), and a node-level parameter, ( b^ij_k) as
β^ij_k = τ^ij_k b^ij_k, 1 ≤ i ≠ j ≤ p, 1 ≤ k ≤ q.
Then, at covariate level we follow <cit.> and specify a Global-Local Spike-and-Slab prior as
τ^ij_k = τ̃^ij_k I( π_k ≥ d_k )
τ̃^ij_k | γ^ij_k ∼γ^ij_k N^+( 0, s^2_k) + ( 1 - γ^ij_k) δ_0
γ^ij_k | π_k ∼Bernoulli( π_k ), π_k ∼Beta( a_k, b_k),
and at the node-level we adopt the sparse Group Selection Spike-and-Slab prior of <cit.> on b^ij = ( b^ij_1, …, b^ij_q) as
b^ij | δ^ij∼δ^ij N_q( 0, I_q ) + ( 1 - δ^ij) δ_0,
δ^ij | π^i∼Bernoulli( π^i), π^i∼Beta( a^i, b^i).
In Eq. (<ref>), the global level represents the covariate selection, as zeroing out the entire matrix τ̃_k = {τ̃^ij_k, 1 ≤ i ≠ j ≤ p} implies that covariate x^k has no influence on any of the edges, i.e., the whole graph, therefore eliminating the covariate from the model, while the local level, τ̃^ij_k, refers to the influence of the covariate on the edge (i,j).
At node-level, Eq. (<ref>) models the group effect of node j on node i through all covariates. For every j, if δ^ij = 0 then effects b^ij will be excluded from the model, implying that node j does not affect node i through any of the covariates.
At the covariate level, in the prior of Eq. (<ref>) the importance of a covariate is characterised by the total number of edges influenced by that covariate. The parameter π_k, which can be interpreted as the probability that x_k has an influence on the graph, is called the participation rate in <cit.> and is estimated under the assumption that the covariate affects all edges independently. The participation rate π_k also informs the selection by excluding those covariates expected to affect fewer than d_k × 100% of the pairs, that is, setting B_k = 0 if π_k < d_k. This hard threshold provides a probability-based selection rule. Without prior domain knowledge, <cit.> recommend d_k = 0.05 (i.e., 5%) as a conventional probability threshold for a rare event. In Figure <ref> (b) all points share the same participation rate, π_1.
At the node level, according to the prior in Eq. (<ref>), a given node i may be influenced by any node j through the covariates. Here, the parameter π^i can be interpreted as the probability that y^i is affected by the other nodes, expressing the sparsity in the context of Bayesian linear regression <cit.>. Figure <ref> (c) illustrates the prior for node 1, with lines representing the group selection indicators δ^1j, j = 1,2,3, sharing the same prior effect π^1.
Combining Eq. (<ref>) and Eq. (<ref>) via Eq. (<ref>), the prior on β^ij_k can be written as the following spike-and-slab prior,
π(β^ij_k | - ) ∼[ δ^ij] τ^ij_k N( 0, 1 ) + [ 1 - δ^ij] δ_0
⇒ π( β^ij_k | τ̃^ij_k, - )
= [I(π_k ≥ d_k)γ^ij_k] [ δ^ij] N( 0, ( τ̃^ij_k)^2 ) + [ 1 - I(π_k ≥ d_k)γ^ij_k δ^ij] δ_0,
where I(π_k ≥ d_k)γ^ij_k is the global-local indicator from Eq. (<ref>), δ^ij is the group indicator from Eq. (<ref>),
and ( τ̃^ij_k)^2 is the prior variance, which can be regarded as the parameter that would require tuning in penalized regression. Through this conditional distribution, Eq. (<ref>) implies that the prior in Eq. (<ref>) can be nested inside Eq. (<ref>), playing the role of a hyper-prior (tuning parameters). Additionally, Eq. (<ref>) links the priors of Eq. (<ref>) from the different conditionally independent regressions by accounting for the covariates' effects. Because of this hierarchical structure and linkage, we call our proposed construction the Nested Global-Local Spike-and-Slab (NGLSS) prior.
Figure <ref> (d) depicts the overall structure of the NGLSS prior, in which points are shaded according to the different covariate sparsities, namely π_1, ..., π_q, and lines are shaded in different blues according to the different node sparsities, namely π^1, ..., π^p. The sparsities derived from the two tensor slicing directions collectively form a penalty nest for the entire tensor, represented by I(π_k ≥ d_k)γ^ij_k δ^ij.
Parameters π_k and π^i, along with the hard-thresholding structure, introduce prior sparsity, as can be seen by calculating the expectation of the selection indicator of the NGLSS prior in Eq. (<ref>):
E[I(π_k ≥ d_k)γ^ij_k δ^ij] = E_π_k[ E[ I(π_k ≥ d_k)γ^ij_k | π_k ] ] E_π^i[ E[ δ^ij | π^i ] ]
= E_π_k[ I(π_k ≥ d_k) π_k ] E_π^i[ π^i ]
= ∫^1_d_k π_k 1/B(a_k, b_k) π_k^a_k -1 (1-π_k)^b_k -1 d π_k · a^i/(a^i + b^i)
= E[π_k ] [ 1 - F_Beta_k(d_k)] E[ π^i] ,
where F_Beta_k(·) is the cumulative distribution function of Beta(a_k+1, b_k), and where E[ π_k] and E[ π^i ] capture the sparsity at covariate- and node-level, respectively. The factor [ 1 - F_Beta(d_k)] ≤ 1 is strictly decreasing in d_k, therefore introducing additional sparsity at the covariate-level.
Finally, we complete the model by assigning conjugate priors s^2_k ∼ InvGamma(1,t), with a hyperprior t ∼ Gamma(a_t, b_t), for k = 1, …, q, and inverse gamma priors on the error variances, σ^2_i ∼ InvGamma( a_σ, b_σ), for i = 1, …, p, allowing an efficient tuning-free full Gibbs sampler.
§.§ Posterior Inference
We derive a tuning-free full Gibbs sampler for inference in the proposed model, which combines blocked Gibbs strategies based on the samplers used in <cit.> and <cit.>. We describe the updates of the parameters below and provide detailed derivations in the supplementary materials.
* Update the covariate-level selection parameters {τ^ij_k, τ̃^ij_k, γ^ij_k, π_k}
We rewrite the distribution of response node i in Eq. (<ref>) by separating the parameters to be sampled as follows:
y^i_n | - ∼ N ( c^1, ijk_n + c^2, ijk_n + β^ij_k y^j_n x^k_n, σ^2_i ),
where c^1, ijk_n = ∑_s ≠ k ∑_l ≠ i β^il_s y^l_n x^s_n collects the terms with s ≠ k, and c^2, ijk_n = ∑_j' ∉{i, j} β^ij'_k y^j'_n x^k_n collects the terms with j' ∉ {i,j}.
Denoting y^ijk_n = y^i_n - c^1,ijk_n - c^2,ijk_n, Eqs. (<ref>) and (<ref>) lead to the following conditional probabilities for the latent coefficients and indicators,
y^ijk_n | τ̃^ij_k, γ^ij_k = 1, - ∼ N( τ̃^ij_k b^ij_k y^j_n x^k_n , σ^2_i )
τ̃^ij_k | γ^ij_k = 1 ∼ N^+( 0, s^2_k)
y^ijk_n | γ^ij_k = 0, - ∼ N( 0, σ^2_i )
with corresponding Bayes factor, integrating out τ̃^ij_k, given as
θ^ij_k = [ p( y^ijk_· | γ^ij_k = 0, τ̃^ij_k = 0 ) × ( 1 - π_k ) ] / [ ∫ p( y^ijk_· | γ^ij_k = 1, τ̃^ij_k ) p( τ̃^ij_k ) d τ̃^ij_k × π_k ]
= ( 1 - π_k ) / [ 2 ( s^2_k )^-1/2 ( ν̃^2_ijk)^1/2 exp{ (1/2) m̃^2_ijk/ν̃^2_ijk } Φ( m̃_ijk/ν̃_ijk) × π_k ],
where y^ijk_· denotes { y^ijk_n}^N_n=1, Φ(·) denotes the cumulative distribution function of the standard Normal distribution, and
ν̃^2_ijk = ( ∑^N_n=1(y^j_n x^k_n )^2 (b^ij_k)^2 / σ^2_i + 1/ s^2_k )^-1 and m̃_ijk = ν̃^2_ijk b^ij_k ∑^N_n=1 y^j_n x^k_n y^ijk_n / σ^2_i.
This Bayes factor allows to sample the local indicators γ^ij_k from
γ^ij_k | - ∼Bernoulli( 1/1+ θ^ij_k).
If γ^ij_k = 1, Eq. (<ref>) leads to the update τ̃^ij_k | - ∼ N^+( m̃_ijk, ν̃^2_ijk); else if γ^ij_k = 0, we set τ̃^ij_k = 0.
After updating all indicators for covariate x^k, we update
π_k| - ∼ Beta( a_k + ∑_1 ≤ i ≠ j ≤ p γ^ij_k, b_k + p(p-1) - ∑_1 ≤ i ≠ j ≤ p γ^ij_k),
leading to τ^ij_k = τ̃^ij_k δ_k = τ̃^ij_k I( π_k ≥ d). This joint update of parameters and selection indicators avoids reversible jump <cit.>.
* Update the node-level selection parameters {b^ij, δ^ij, π^i} together with {β^ij_k}
We rewrite the distribution of response node i in Eq. (<ref>) by separating the parameters to be sampled as follows:
y^i_n | - ∼ N( ∑_ j' ∉{ i,j } ∑_k = 1^q β^ij'_k y^j'_n x^k_n + ∑_k = 1^q β^ij_k y^j_n x^k_n, σ^2_i ).
Denoting z^ij_n = y^i_n - ∑_ j' ∉{ i,j }∑_k=1^q β^ij'_k y^j'_n x^k_n,
eqs. (<ref>) and (<ref>) lead to the following conditional probability distributions for the latent coefficients and indicators
z^ij_n | δ^ij = 1, - ∼ N( (X^ij_n)^T V^ijb^ij, σ^2_i )
b^ij | δ^ij = 1 ∼ MVN( 0_q, I_q )
z^ij_n | δ^ij = 0, - ∼ N( 0, σ^2_i )
where X^ij_n = ( y^j_n x^1_n , ..., y^j_n x^q_n)^T and V^ij = diag( τ^ij_1, …, τ^ij_q). In addition, we denote Z^ij = ( z^ij_1, …, z^ij_N)^T and X^ij = ( X^ij_1, ..., X^ij_N )^T. The Bayes factor of edge (i,j) can be obtained by integrating out b^ij,
θ^ij = [ p( Z^ij | δ^ij = 0, b^ij = 0) × ( 1 - π^i) ] / [ ∫ p( Z^ij | δ^ij = 1, b^ij) p( b^ij) d b^ij × π^i ]
= ( 1 - π^i ) / [ | Σ̃^ij|^1/2 exp{ (1/2)( μ̃^ij)^T( Σ̃^ij)^-1 μ̃^ij } × π^i ],
where
Σ̃^ij = ( 1/σ^2_i( X^ijV^ij)^T(X^ijV^ij) + I_q )^-1 and μ̃^ij = ( 1/σ^2_i(Z^ij)^T X^ijV^ijΣ̃^ij)^T.
This Bayes factor allows to sample the indicators δ^ij as
δ^ij | - ∼Bernoulli( 1/1+ θ^ij).
Then, if δ^ij = 1, we update b^ij | - ∼ MVN( μ̃^ij, Σ̃^ij); otherwise, if δ^ij = 0, we set b^ij = 0. The update of (β^ij_k)_1≤ k ≤ q then follows from B^ij = V^ij b^ij.
After all indicators for node i are updated, the probability π^i can be updated by
π^i| - ∼ Beta( a^i + ∑_j ≠ i δ^ij, b^i + (p-1) - ∑_j ≠ i δ^ij).
* Update the variances {σ^2_i }
This is a conjugate update
σ^2_i | - ∼ InvGamma( N/2 + a_σ, 1/2 ∑^N_n=1( y^i_n - ∑_j ≠ i ∑_k=1^q β^ij_k y^j_n x^k_n)^2 + b_σ).
* Update { s^2_k } and t:
These are also conjugate updates. For each k ∈ [q], we sample
s^2_k | - ∼ InvGamma( 1 + 1/2 ∑_1 ≤ i ≠ j ≤ p γ^ij_k, t + 1/2 ∑_1 ≤ i ≠ j ≤ p ( τ̃^ij_k )^2 ).
After all s_k are updated, we sample
t | - ∼Gamma( q + 1, ∑_k=1^q 1/s^2_k).
Given the MCMC samples, we perform posterior inference by calculating the marginal posterior probabilities of inclusion (MPPIs) of the indicators δ^ij_k. Following the median probability model <cit.>, we define the inclusion indicator κ^ij_k = 1 if the MPPI of δ^ij_k is greater than 0.5. To infer the edges in the undirected graph, we use the “OR" rule <cit.>, which concludes that the edge (i,j) is affected by covariate x_k if either κ^ij_k = 1 OR κ^ji_k = 1. At the two group levels, we conclude that the edge (i,j) exists if either ∑_k κ^ij_k ≠ 0 OR ∑_k κ^ji_k ≠ 0, and that the covariate x^k is influential if ∑_ij κ^ij_k ≠ 0. Bayesian false discovery rate control methods can also be utilized in determining the κ^ij_k's <cit.>.
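The selection rules above translate directly into a few lines of code; the following Python sketch (ours; the array layout of the stored indicator samples is an assumption) computes the MPPIs, applies the 0.5 threshold of the median probability model, and then the "OR" rule.

import numpy as np

def select(delta_samples, threshold=0.5):
    # delta_samples: 0/1 array of shape (n_mcmc, p, p, q) with the post burn-in draws of delta^{ij}_k
    mppi = delta_samples.mean(axis=0)              # marginal posterior probabilities of inclusion
    kappa = mppi > threshold                       # median probability model
    kappa = kappa | kappa.transpose(1, 0, 2)       # "OR" rule over the two directed versions of each edge
    edge = kappa.any(axis=2)                       # edge (i,j) exists if affected by some covariate
    covariate = kappa.any(axis=(0, 1))             # covariate x^k is influential if it touches any edge
    return kappa, edge, covariate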
§ SIMULATION STUDY
In this section, we conduct simulations and compare the proposed approach with selected covariate-dependent Gaussian graphical regression approaches.
§.§ Data Generation
We generate data from Eq. (<ref>) using Ω(x_n) = ( ω^ij( x_n) )_i,j=1^p = ( ∑^q_k=1 β^ij_k x^k_n )_i,j=1^p. We set the number of nodes to p = 25 and the number of covariates to q = 10, and introduce sparsity as follows. First, we generate a 25-by-25 random graph with a sparsity level of 0.4 and randomly divide the edges into four parts for B_1-B_4. We set B_5-B_10 = 0 for the empty covariates. The values of the non-zero entries β^ij_k are sampled from uniform distributions supported on the intervals [-0.5,-0.35] ∪ [0.35,0.5]. To generate a valid precision matrix, we follow <cit.> by first rescaling each row i by dividing by (1/2) ∑_j ∑_k |β^ij_k| and then averaging β^ij_k and β^ji_k to fill each entry of (i,j,k). We set X_1 = 1 as the intercept and sample X_2-X_10 from Uniform[0,1]. We use two different sample sizes, n = 200 and n = 500, and repeatedly generate data 50 times for each size to evaluate model performance.
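The generation recipe can be paraphrased in a few lines of Python (ours; details such as the diagonal of the precision matrix and the subsequent draw of y_n are omitted, and the random edge-to-covariate assignment is our reading of the text).

import numpy as np

rng = np.random.default_rng(2024)
p, q, n = 25, 10, 200

# random graph with edge density 0.4; each selected edge is assigned to one of covariates 1-4
B = np.zeros((p, p, q))
for i in range(p):
    for j in range(i + 1, p):
        if rng.random() < 0.4:
            k = rng.integers(0, 4)                       # covariates 5-10 remain empty
            val = rng.choice([-1.0, 1.0]) * rng.uniform(0.35, 0.5)
            B[i, j, k] = B[j, i, k] = val

# rescale each row i by (1/2) sum_j sum_k |beta^{ij}_k|, then average the (i,j) and (j,i) entries
row_scale = 0.5 * np.abs(B).sum(axis=(1, 2))
B = B / np.maximum(row_scale, 1e-12)[:, None, None]      # guard against rows with no edges
B = 0.5 * (B + B.transpose(1, 0, 2))

X = np.column_stack([np.ones(n), rng.uniform(size=(n, q - 1))])  # X_1 = 1 intercept, X_2-10 ~ Uniform[0,1]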
§.§ Comparison Study
We compare performances of the proposed method, for which we use the acronym DGSS, to Lasso regression <cit.>, the GMMReg method of <cit.> and the Bayesian sparse group selection method with spike and slab prior of <cit.>. We implement Lasso by the R-package and GMMReg by the
Matlab code from the authors' page. For both Lasso and GMMReg, we tune the regularization parameters by cross-validation. We denote the Bayesian sparse group selection with spike-and-slab prior as BSGSSS, implemented by the R-package . Lasso only considers local sparsity, whereas the other methods also take group sparsity into account. Specifically, GMMReg uses node-level sparsity to induce covariate-level sparsity, BSGSSS accounts for node-level sparsity only, and the proposed DGSS decouples node-level and covariate-level sparsity, modeling them with the dual-group spike-and-slab prior. For DGSS, we run 20,000 MCMC iterations with a burn-in of 10,000. We specify non-informative priors as π_k ∼ Beta(1,1) and π^i ∼ Beta(1,1), and use the conventional sparsity threshold d_k = 0.05 for all k = 1, …, q. For BSGSSS, we increase the MCMC iterations from the default 10,000 with a burn-in of 5,000 to 20,000 with a burn-in of 10,000, and keep all other parameters set to their default values.
We consider the covariate-dependent edge detection as a classification task with the presence of an edge within each precision coefficient being treated as a positive signal. The total number of parameters is p(p-1)q = 6,000, and on average, p(p-1)× 0.4 = 240 of them are signals, which may vary due to random graph generation. The covariate-dependent edge further provides inference to the node-level and covariate-level selections, as described in Section <ref>. We report the following four metrics for comparison: True Positive Rate (TPR), False Positive Rate (FPR), F1 score (F1), and Matthews correlation coefficient (MCC). These metrics are defined by:
TPR = TP/(TP + FN), FPR = FP/(FP + TN), F1 = TP/(TP + (FP + FN)/2),
MCC = (TP×TN - FP×FN) / ( √(TP+FP) × √(TP+FN) × √(TN+FP) × √(TN+FN) ),
where TP, FP, TN and FN represent numbers of True Positive, False Positive, True Negative and False Negative, respectively.
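For completeness, these metrics can be computed as follows (a small Python helper of ours; it assumes the boolean arrays of true and selected signals have the same shape and that no denominator is zero).

import numpy as np

def metrics(true_signal, selected):
    tp = float(np.sum(selected & true_signal))
    fp = float(np.sum(selected & ~true_signal))
    tn = float(np.sum(~selected & ~true_signal))
    fn = float(np.sum(~selected & true_signal))
    tpr = tp / (tp + fn)
    fpr = fp / (fp + tn)
    f1 = tp / (tp + 0.5 * (fp + fn))
    mcc = (tp * tn - fp * fn) / np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return tpr, fpr, f1, mcc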
§.§ Results
Figure <ref> shows examples of the adjacency matrices corresponding to the precision coefficients B_k, k=1,…,q, for one simulated dataset with n = 200 and n = 500. For both sample sizes, we observe that Lasso and BSGSSS do not penalize the coefficients sufficiently, failing to eliminate covariates X_5-X_10. On the contrary, GMMReg penalizes the coefficients excessively, selecting too few edges. This phenomenon may be caused by their sparsity assumptions. Lasso and BSGSSS entirely ignore the covariate-level sparsity by treating the problem as a high-dimensional linear regression at each node, ultimately failing to exclude those covariates with no impact on the edges. On the other hand, GMMReg assumes a dense intercept and sparse covariates, and tunes the parameters via cross-validation, which fails to find a penalty suitable for both the dense intercept and the covariates under their assumption, ultimately limiting the number of edges selected.
Compared to the three existing methods, the proposed DGSS achieves relatively good sparse estimates, identifying the important covariates with a reasonable sparsity level.
We now proceed to assess each method using the edge and covariate selection metrics introduced above. All results are reported in Table <ref>, averaged across 50 replicates.
For covariate-dependent edge detection, performance metrics tend to be low for all methods, even with the relatively large sample size n = 500. In this scenario, Lasso only considers the local-level penalty, while the other methods, GMMReg, BSGSSS and DGSS, include at least two levels of selection/penalty. With sample size n = 200, GMMReg and BSGSSS outperform Lasso in terms of F1 and MCC, because their second level of selection/penalty can efficiently exclude the empty coefficients. On the contrary, with the relatively large sample size n = 500, their performance becomes comparable to that of Lasso in terms of those two scores, as their second level of selection/penalty does not fit the sparsity pattern of the data-generating process. Meanwhile, the proposed DGSS method takes advantage of the multi-level selection and outperforms the other methods in terms of F1 and MCC.
Next, we evaluate the models' performance in edge detection for the overall graph, which we define as the graph where an individual edge is present if affected by any of the covariates x^k. This corresponds to group edge selection at the node level. There are p-1 = 24 edges at the node group level, and on average (p-1)× 0.4 = 9.6 of them are signals.
Results in Table <ref> show comparable performance across all methods. Although GMMReg seems to have a higher F1 score than the other methods when the sample size is n = 200, this appears to be due to a higher TPR at the cost of a higher FPR. As evidence, its MCC scores are close to those of the other methods, and hence we do not conclude that it significantly outperforms them.
Finally, we look at the task of covariate selection based on the precision coefficient estimates B_k. Four out of ten covariates are signals. We select X_k if the estimate of B_k is non-zero. From Table <ref> we observe the failure of the Lasso and BSGSSS methods in this task. Without sufficient selection/penalties, these two methods include all covariates in almost all replicates, as also illustrated in Figures <ref> and <ref>, with the exception of 2 out of 100 cases. Although GMMReg tends to favor a dense precision coefficient for the intercept and sparse precision coefficients for the other covariates, its performance is similar to that of DGSS. The proposed DGSS, on the other hand, outperforms GMMReg in terms of all averaged metrics.
§ APPLICATION TO MICROBIOME DATA
We demonstrate the proposed method with data from the Multi-Omic Microbiome Study: Pregnancy Initiative (MOMS-PI), a study funded by the NIH Roadmap Human Microbiome Project to understand the impact of the vaginal microbiome on pregnancy and the fetal microbiome. This study contains samples from multiple body sites, including mouth, skin, vagina and rectum, of 596 subjects throughout pregnancy and for a short term after childbirth. Previous research found the vaginal microbiome can change early in pregnancy and be predictive of pregnancy outcomes <cit.>.
§.§ Data
Data from the MOMS-PI study is publicly available and can be found in the R package .
Following <cit.>, we focus on the interplay between microbial abundances and vaginal cytokines, a mechanism by which the host regulates the composition of the vaginal microbiome, and use the first baseline visit data of the n=225 subjects whose microbiome and cytokine profiling of the vagina are available among the 596 subjects enrolled in the study. Furthermore, we consider p=90 OTUs whose absolute abundance is greater than 1 in at least 10% of the subjects and use all the 29 available cytokines as covariates, adding an intercept term, which implies q = 30. We apply the centered log ratio transformation to normalize the abundance counts <cit.>, as commonly done in Gaussian graphical modeling for microbiome data to satisfy the Gaussian assumption <cit.>. After transformation, we center the data such that each OTU has zero mean. For the covariates, we transform the data to the log scale and use the min-max normalization, so that values fall within the [0,1] interval.
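The preprocessing steps can be sketched as follows (Python, ours; the pseudo-count added before taking logs is an assumption, since the paper does not state how zero counts are handled).

import numpy as np

def clr(counts, pseudo=0.5):
    # centered log-ratio transform of an n x p matrix of abundance counts
    logx = np.log(counts + pseudo)
    return logx - logx.mean(axis=1, keepdims=True)

def preprocess(otu_counts, cytokines):
    Y = clr(otu_counts)
    Y = Y - Y.mean(axis=0)                                     # centre each OTU at zero mean
    Z = np.log(cytokines + 1.0)                                # log scale (offset is an assumption)
    Z = (Z - Z.min(axis=0)) / (Z.max(axis=0) - Z.min(axis=0))  # min-max normalization to [0,1]
    X = np.column_stack([np.ones(len(Z)), Z])                  # add the intercept term
    return Y, X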
§.§ Results
We use the same non-informative prior specifications and MCMC settings as in the simulation study.
Given the results obtained in the simulation study, we restrict comparisons to the GMMReg and the proposed DGSS methods.
On a server, with two 20-core 2.4 GHz Intel(R) Xeon CPUs, running the MCMC algorithm of our DGSS method, coded in Rcpp, took about 15 seconds per iteration.
Table <ref> reports the number of covariate-dependent edges selected by DGSS and GMMReg, for each covariate and for the overall graph. Similar to the simulation study, we observe that GMMReg tends to select zero edges for almost all covariates, and a few more for the intercept (baseline). On the other hand, DGSS selects significantly more edges than GMMReg, not only for each covariate but also in the overall graph. Figure <ref> shows the adjacency matrices of the graphs corresponding to the precision coefficients B_k of the four covariates with the most covariate-dependent edges and the four covariates with the fewest covariate-dependent edges. We observe that a large number of edges are simultaneously influenced by multiple cytokines. Additionally, it appears that some edges remain within a block formed by OTUs 1-26 across covariates, which aligns with a finding from <cit.>, as discussed next.
Figure <ref> shows the adjacency matrix of the overall graph selected by DGSS, together with a plot showing the commonly selected edges with the graph selected in <cit.>. OTUs are grouped based on their phylum (Firmicutes, Actinobacteria, Bacteroidetes, Proteobacteria, Fusobacteria, and TM7). Interestingly, even though <cit.> used a different Bayesian approach from our method, based on a latent Gaussian graphical model with separate variable selection priors for both covariate-dependent mean and covariate-independent precision, we found that a large number of edges in the overall graph selected by DGSS were also detected by their method (98 out of 271 edges). In particular, the common edges highlight the subnetwork formed by OTUs 1 - 26 within Firmicutes, which appears to be the area consistently detected as having covariate-dependent edges for various cytokines. Jointly, these findings may imply the existence of a subnetwork within the Firmicutes that is widely affected by cytokines. Additionally, the proposed method selects more inter-phylum edges compared to the model from <cit.>, which capture correlations among different OTUs across phyla, suggesting more complex latent effects of microbiome during pregnancy.
§ CONCLUDING REMARKS
We have considered the framework of covariate-dependent Gaussian graphical modeling for learning heterogeneous graphs and proposed a dual group spike-and-slab prior that achieves simultaneous local sparsity and bi-directional group sparsity. The proposed prior accomplishes covariate-level selection, inferred by the local-level selection, on grouped precision coefficients sliced in one direction and the node-level selection on grouped coefficients sliced in another direction.
Our approach has led to a parsimonious model for covariate-dependent precision matrices with improved interpretability. For posterior inference, we have designed a Gibbs sampler that automatically tunes the hyper-parameters while incorporating their uncertainty, which leads to interpretable and flexible selection results. Through simulation studies, we have demonstrated that the proposed model outperforms existing methods in the accuracy of graph recovery. We have applied our model to microbiome data to estimate the interactions between microbes in the vagina, as well as the interplay between vaginal cytokines and microbial abundances, providing insight into mechanisms of host-microbial interaction during pregnancy.
There are several interesting future directions to extend our model.
First, the model can be expanded to incorporate a covariate-adjusted mean. A potential challenge here is the increased computational complexity due to a larger parameter space. Second, although our focus is on Gaussian graphical models, the structured sparsity we consider can be useful for other models with ultra-high-dimensional parameter spaces, such as arrays, that exhibit various grouping directions. Finally, approximation methods such as variational expectation maximization may be worth investigating, as they would improve the scalability of the method and allow for its application to larger datasets.
§ SUPPLEMENTARY MATERIAL
The supplementary material includes detailed derivations of the Gibbs sampler in Section <ref>. R code and scripts to reproduce the results from the simulation study and the real data applications, with main functions coded in Rcpp, will be made available on Github upon acceptance of the paper.
§ SUPPLEMENTARY MATERIALS
§.§ S1. Markov Chain Monte Carlo Sampling (MCMC)
In this section, we provide the detailed derivations for the Gibbs sampler used in the main paper.
* Update the covariate-level selection parameters {τ^ij_k, τ̃^ij_k, γ^ij_k, π_k}
Rewriting the likelihood of y^i_n from the covariate-level group perspective, we have that the mean part is
∑_s ≠ k ∑_l ≠ i β^il_s y^l_n x^s_n + ∑_j ≠ i β^ij_k y^j_n x^k_n = c^1, ijk_n + ∑_j ≠ i β^ij_k y^j_n x^k_n, where c^1, ijk_n denotes the first sum (the terms with s ≠ k),
leading to the conditional distribution
(y^i_n - c^1, ijk_n) | - ∼ N ( ∑_j ≠ i β^ij_k y^j_n x^k_n, σ^2_i ).
Similarly, with β^ij_k = τ^ij_kb^ij_k, we have
(y^i_n - c^1, ijk_n) | - ∼ N ( c^2,ijk_n + τ^ij_k b^ij_k y^j_n x^k_n, σ^2_i ), where c^2,ijk_n = ∑_j' ∉{i, j} β^ij'_k y^j'_n x^k_n collects the terms with j' ∉ {i,j}.
Denoting y^ijk_n = y^i_n - c^1,ijk_n - c^2,ijk_n,
we have that the distribution of the latent coefficients conditional upon the indicators is
y^ijk_n | τ̃^ij_k, γ^ij_k = 1, - ∼ N( τ̃^ij_k b^ij_k y^j_n x^k_n , σ^2_i )
y^ijk_n | γ^ij_k = 0, - ∼ N( 0, σ^2_i ).
Following <cit.>, we integrate out the latent coefficients, obtaining
p( γ^ij_k = 1 | - )
= ∫ p( γ^ij_k = 1, τ̃^ij_k | - ) dτ̃^ij_k / [ p( γ^ij_k = 0, τ̃^ij_k = 0 | - ) + ∫ p( γ^ij_k = 1, τ̃^ij_k | - ) dτ̃^ij_k ]
= 1/(1 + θ^ij_k) ,
where the Bayes factor is
θ^ij_k = p( γ^ij_k = 0, τ̃^ij_k =0 |- ) /∫ p( γ^ij_k = 1, τ̃^ij_k | - )dτ̃^ij_k
= 1/p( y^ijk_·) p( y^ijk_· | γ^ij_k = 0, τ̃^ij_k = 0 ) ×( 1 - π_k ) /1/ p( y^ijk_·) ∫ p( y^ijk_· | γ^ij_k = 1, τ̃^ij_k )p( τ̃^ij_k )d τ̃^ij_k ×π_k
= p( y^ijk_· | γ^ij_k = 0, τ̃^ij_k = 0 ) ×( 1 - π_k ) /∫ p( y^ijk_· | γ^ij_k = 1, τ̃^ij_k )p( τ̃^ij_k )d τ̃^ij_k ×π_k
= ( ∫ p( y^ijk_· | γ^ij_k = 1, τ̃^ij_k )p( τ̃^ij_k )d τ̃^ij_k / p( y^ijk_· | γ^ij_k = 0, τ̃^ij_k = 0 ) )^-1×1-π_k/π_k.
Next, we estimate θ^ij_k. First we note that
p( y^ijk_· | γ^ij_k = 0, τ̃^ij_k = 0) = ( 2πσ^2_i)^-N/2exp{ -1/2∑^N_n=1( y^ijk_n)^2 / σ^2_i} .
Therefore,
∫ p( y^ijk_· | γ^ij_k = 1, τ̃^ij_k )p( τ̃^ij_k )d τ̃^ij_k
= ∫( 2πσ^2_i )^-N/2exp{ - 1/2∑^N_n=1( y^ijk_n - y^j_n x^k_nb^ij_kτ̃^ij_k )^2 / σ^2_i }
× 2( 2 π s^2_k )^-1/2exp{ - 1/2( τ̃^ij_k )^2 / s^2_k}1( τ̃^ij_k ≥ 0 ) d τ̃^ij_k
= 2 ( 2 π s^2_k )^-1/2( 2 πσ^2_i )^-N/2exp{ -1/2∑^N_n=1( y^ijk_n )^2/ σ^2_i}_ = p( y^ijk_· | γ^ij_k = 0, τ̃^ij_k = 0)
×∫exp{ -1/2[ ( ∑^N_n=1(y^j_n x^k_n )^2 (b^ij_k)^2 / σ^2_i + 1/ s^2_k) ( τ̃^ij_k )^2-2( b^ij_k ∑^N_n=1 y^j_n x^k_n y^ijk_n / σ^2_i ) τ̃^ij_k ] }
1(τ̃^ij_k ≥ 0) d τ̃^ij_k.
Letting ν̃^2_ijk = ( ∑^N_n=1(y^j_n x^k_n )^2 (b^ij_k)^2 / σ^2_i + 1/ s^2_k)^-1 and m̃_ijk = ν̃^2_ijk b^ij_k ∑^N_n=1 y^j_n x^k_n y^ijk_n / σ^2_i, we obtain the ratio
∫ p( y^ijk_· | γ^ij_k = 1, τ̃^ij_k )p( τ̃^ij_k )d τ̃^ij_k / p( y^ijk_· | γ^ij_k = 0, τ̃^ij_k = 0 )
= 2 ( 2 π s^2_k )^-1/2( 2πν̃^2_ijk)^1/2exp( 1/2m̃^2_ijk/ν̃^2_ijk)
×∫( 2πν̃^2_ijk)^-1/2exp{ - 1/2[ ( τ̃^ij_k )^2 - 2 τ̃^ij_k m̃_ijk + ( m̃_ijk)^2 ] / ν̃^2_ijk}1(τ̃^ij_k ≥ 0 ) dτ̃^ij_k
= 2 ( s^2_k )^-1/2×( ν̃^2_ijk)^1/2exp{1/2m̃^2_ijk/ν̃^2_ijk}×Φ( m̃_ijk/ν̃_ijk),
where the last line follows from the fact that the integral to be evaluated is associated with the truncated normal kernel, N^+(m̃_ijk, ν̃^2_ijk), which leads to the result Φ( m̃_ijk/ν̃_ijk).
Substituting the ratio above yields
θ^ij_k = ( ∫ p( y^ijk_· | γ^ij_k = 1, τ̃^ij_k ) p( τ̃^ij_k ) d τ̃^ij_k / p( y^ijk_· | γ^ij_k = 0, τ̃^ij_k = 0 ) )^-1 × (1-π_k)/π_k
= ( 1 - π_k ) / [ 2 ( s^2_k )^-1/2 ( ν̃^2_ijk)^1/2 exp{ (1/2) m̃^2_ijk/ν̃^2_ijk } Φ( m̃_ijk/ν̃_ijk) × π_k ].
Hence, we sample each γ^ij_k as
γ^ij_k | - ∼Bernoulli( 1/1+ θ^ij_k).
Then, if γ^ij_k = 1, we update τ̃^ij_k | - ∼ N^+( m̃_ijk, ν̃^2_ijk); else, if γ^ij_k = 0, we set τ̃^ij_k = 0.
After updating all indicators for covariate x^k, we update
π_k| - ∼ Beta( a_k + ∑_1 ≤ i ≠ j ≤ p γ^ij_k, b_k + p(p-1) - ∑_1 ≤ i ≠ j ≤ p γ^ij_k),
leading to τ^ij_k = τ̃^ij_k δ_k = τ̃^ij_k I( π_k ≥ d). A small illustrative code sketch of this covariate-level update is given at the end of this supplement.
* Update the node-level selection parameters {b^ij, δ^ij, π^i} together with {β^ij_k}
Rewriting the likelihood of y^i_n from the node-level group perspective, we have the mean part
∑_ j' ∉{ i,j }∑_kβ^ij'_k y^j'_n x^k_n + ∑_kβ^ij_k y^j_n x^k_n.
With β^ij_k = τ^ij_k b^ij_k, we denote z^ij_n = y^i_n - ∑_ j' ∉{ i,j }∑_kβ^ij'_k y^j'_n x^k_n, leading to
z^ij_n| - ∼ N( ∑_k b^ij_k τ^ij_k y^j_n x^k_n, σ^2_i )
and
z^ij_n | δ^ij = 1, - ∼ N( (X^ij_n)^T V^ijb^ij, σ^2_i )
z^ij_n | δ^ij = 0, - ∼ N( 0, σ^2_i ),
where X^ij_n = ( y^j_n x^1_n , ..., y^j_n x^q_n)^T and V^ij = diag( τ^ij_1, …, τ^ij_q). In addition, we denote Z^ij = ( z^ij_1, …, z^ij_n)^T and ^ij =( ^ij_1,..., ^ij_n )^T, leading to the vector form formulations
Z^ij | δ^ij = 1, - ∼ MVN( X^ijV^ijb^ij, σ^2_i I_n )
Z^ij | δ^ij = 0, - ∼ MVN( _n, σ^2_i I_n ) .
Similarly, we integrate out ^ij
p( δ^ij = 1 | - )
= ∫ p( δ^ij = 1, b^ij | - ) db^ij / [ p( δ^ij = 0, b^ij = 0 | - ) + ∫ p( δ^ij = 1, b^ij | - ) db^ij ]
= 1/(1 + θ^ij),
where the Bayes factor is
θ^ij = p( δ^ij = 0, b^ij = 0 |- ) /∫ p( δ^ij = 1, b^ij | - )db^ij
= 1/p( Z^ij) p( Z^ij | δ^ij = 0, b^ij = 0) ×( 1 - π^i) /1/ p( Z^ij) ∫ p( Z^ij| δ^ij = 1, b^ij)p( b^ij)d b^ij×π^i
= p( Z^ij | δ^ij = 0, b^ij = 0) ×( 1 - π^i) /∫ p( Z^ij| δ^ij = 1, b^ij)p( b^ij)d b^ij×π^i
= ( ∫ p( Z^ij| δ^ij = 1, b^ij)p( b^ij)d b^ij/ p( Z^ij | δ^ij = 0, b^ij = 0) )^-1×( 1 - π^i) /π^i.
We next estimate θ^ij. First, we note that
p( Z^ij | δ^ij = 0, b^ij = 0) = ( 2πσ^2)^-N/2exp{ -1/2σ^2_i(Z^ij)^T Z^ij} .
Therefore,
∫ p( Z^ij | δ^ij = 1, b^ij)p( b^ij)d b^ij
= ∫( 2πσ^2_i )^-N/2exp{ - 1/2σ^2_i( Z^ij - X^ijV^ijb^ij)^T( Z^ij - X^ijV^ijb^ij)}
×( 2π)^-q/2exp{ - 1/2(b^ij)^T b^ij} d b^ij
= ( 2πσ^2_i)^ -N/2exp{ - 1/2σ^2_i(Z^ij)^T Z^ij}_= p( Z^ij | δ^ij = 0, b^ij = 0) ( 2 π)^-q/2( 2 π)^q/2| Σ̃^ij|^1/2exp{1/2( μ̃^ij)^T( Σ̃^ij)^-1μ̃^ij}
×∫( 2 π)^-q/2| Σ̃^ij|^-1/2exp{ -1/2[ ( b^ij)^T (1/σ^2_i( X^ijV^ij)^T(X^ijV^ij) + I_q) b^ij. .
. . - 2 1/σ^2_i(Z^ij)^T X^ijV^ijΣ̃^ij( Σ̃^ij)^-1b^ij + ( μ̃^ij)^T( Σ̃^ij)^-1μ̃^ij] }.
Letting Σ̃^ij = ( 1/σ^2_i( X^ijV^ij)^T(X^ijV^ij) + I_q )^-1 and μ̃^ij = ( 1/σ^2_i(Z^ij)^T X^ijV^ijΣ̃^ij)^T, we obtain the ratio
∫ p( Z^ij| δ^ij = 1, b^ij)p( b^ij)d b^ij/ p( Z^ij | δ^ij = 0, b^ij = 0)
= | Σ̃^ij|^1/2exp{1/2( μ̃^ij)^T( Σ̃^ij)^-1μ̃^ij},
where the last line follows from the fact that the integral to be evaluated is the probability of a MVN(μ̃^ij, Σ̃^ij) random vector, i.e., 1.
Substituting the ratio above yields
θ^ij = ( ∫ p( Z^ij| δ^ij = 1, b^ij) p( b^ij) d b^ij / p( Z^ij | δ^ij = 0, b^ij = 0) )^-1 × ( 1 - π^i ) / π^i
= ( 1 - π^i ) / [ | Σ̃^ij|^1/2 exp{ (1/2)( μ̃^ij)^T( Σ̃^ij)^-1 μ̃^ij } × π^i ].
Hence, we first sample each δ^ij by
δ^ij | - ∼Bernoulli( 1/1+ θ^ij).
Then, if δ^ij = 1, we update b^ij | - ∼ N( μ̃^ij, Σ̃^ij); else, if δ^ij = 0, we set b^ij = 0_q.
After updating all indicators for node i, we update
π^i| - ∼Beta( a^i + ∑_ j i δ^ij, b^i + (p-1) - ∑_ j i δ^ij),
and
(β^ij_k)_1≤ k ≤ q = B^ij = V^ij^ij.
* Update the variances {σ^2_i }
The posterior of σ^2_i is:
p( σ^2_i | - ) ∝ ( σ^2_i )^-N/2exp{ - 1/21/σ^2_i∑^N_n=1( y^i_n - ∑_j i ∑_k = 1^q β^ij_k y^j_n x^k_n)^2 }
×( σ^-2_i )^a_σ + 1exp{ -σ^-2_i b_σ}
∝ ( σ^-2_i )^N/2 + a_σ + 1 exp{ - σ^-2_i [ 1/2∑^N_n=1( y^i_n - ∑_ j i ∑_k = 1^q β^ij_k y^j_n x^k_n)^2 + b_σ]},
which is an Inverse-Gamma distribution
σ^2_i | - ∼InvGamma( N/2 + a_σ, 1/2∑^N_n=1( y^i_n - ∑_ j i∑_k = 1^q β^ij_k y^j_n x^k_n)^2 + b_σ).
* Update { s^2_k } and t:
We have conjugate updates:
p( s^2_k | - ) = ∏_1≤ i j≤ p
γ^ij_k = 1 2( 2π s^2_k )^-1/2exp{ - 1/2( τ̃^ij_k )^2/s^2_k}1( τ̃^ij_k ≥ 0 )
×t^1/Γ(1)( s^2_k)^- 2exp{ -t/s^2_k}
∝ ( s^2_k )^ - (1/2∑_ 1≤ i j≤ p γ^ij_k +1 + 1 )exp{ - s^-2_k [ 1/2∑_ 1≤ i j≤ p ( τ̃^ij_k )^2 + t ]},
which is an Inverse-Gamma distribution
s^2_k | - ∼ InvGamma( 1 + 1/2∑_ 1≤ i j≤ p γ^ij_k, t + 1/2∑_ 1≤ i j≤ p ( τ̃^ij_k )^2 ).
The posterior of t is:
p( t | - ) = ∏_k = 1^q t^1/Γ(1)( s^2_k)^- 2exp{ -t/s^2_k}
∝ t^q exp( - t ∑^q_k=11/s^2_k),
which is a Gamma distribution
t | - ∼ Gamma( q + 1, ∑^q_k=11/s^2_k).
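To summarise the covariate-level move derived in the first step above, here is a minimal Python sketch of the single-coefficient update (our own illustration, not the authors' Rcpp implementation; it assumes 0 < π_k < 1 and takes the residual vector y^{ijk}, the products w_n = y^j_n x^k_n and the current b^{ij}_k as inputs).

import numpy as np
from scipy.stats import norm, truncnorm
from scipy.special import expit

def update_gamma_tau(y_res, w, b, sigma2, s2, pi_k, rng):
    nu2 = 1.0 / (np.sum(w**2) * b**2 / sigma2 + 1.0 / s2)        # tilde-nu^2_{ijk}
    m = nu2 * b * np.sum(w * y_res) / sigma2                      # tilde-m_{ijk}
    sd = np.sqrt(nu2)
    # log of the Bayes factor theta^{ij}_k in favour of exclusion (gamma^{ij}_k = 0)
    log_theta = (np.log1p(-pi_k) - np.log(pi_k) - np.log(2.0)
                 + 0.5 * np.log(s2) - 0.5 * np.log(nu2)
                 - 0.5 * m**2 / nu2 - norm.logcdf(m / sd))
    gamma = rng.binomial(1, expit(-log_theta))                    # P(gamma = 1 | -) = 1/(1 + theta)
    if gamma == 1:
        a = (0.0 - m) / sd                                        # standardised lower bound of N^+(m, nu2)
        tau_tilde = truncnorm.rvs(a, np.inf, loc=m, scale=sd, random_state=rng)
    else:
        tau_tilde = 0.0
    return gamma, tau_tilde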
|
Gaussian graphical models have been applied in a wide variety of fields to recover dependency and network structure among data <cit.>. Among these models, one class that has proven useful is the Gaussian graphical regression model. The idea dates back to 1972, when Dempster proposed covariance selection <cit.>, aiming to discover conditional independence restrictions using the inverse covariance matrix (a.k.a. the precision or concentration matrix). This links the absence of an edge in the undirected Gaussian graph with a zero entry in the precision matrix. Expanding upon this idea, <cit.> showed that neighborhood selection for the conditional independence restrictions of each node in the graph is equivalent to variable selection in Gaussian linear models, turning edge detection in a graph into variable selection in independent regressions. This approach has inspired numerous studies that use different selection methods to study the connectivity within a graph <cit.>. However, unlike mean regression models, which are routinely developed with covariate effects, previous research on Gaussian graphical regression models mainly regresses one node on the other nodes without incorporating covariates into the precision matrix.
Recently, <cit.> and <cit.> have studied covariate-dependent precision regression, from both applied and methodological perspectives, for Gaussian graphical regression models.
<cit.> propose a Bayesian edge regression to study covariate-dependent edges in a Gaussian graph using hepatocellular carcinoma data.
They shrink a large number of correlation coefficients toward small values for all edges via a shrinkage prior, and then use a separate posterior thresholding step for the final edge selection.
<cit.> study Gaussian graphical regression models with covariates under bi-level sparsity, where element-wise and group-wise sparsity are encouraged by the lasso and group lasso penalties, respectively. <cit.> establish variable selection consistency and the ℓ_2 convergence rate for this combined sparsity penalty, a.k.a. the sparse group lasso penalty, and show that it outperforms using either the lasso or the group lasso penalty alone in highly sparse cases in simulation. The two works, although with different focuses, demonstrate the value of incorporating covariates in Gaussian graphical regression models, and reveal two main challenges. Firstly, the number of parameters is usually large relative to the sample size.
Secondly, the tuning parameters can be hard to specify.
Inspired by the recent work of <cit.>, we propose a Bayesian Gaussian graphical regression model with a Nested Global-Local Spike-and-Slab prior, which performs selection at multiple levels. This prior achieves global-level selection, excluding a covariate, by measuring the probability that the covariate influences the whole graph via Beta-Binomial conjugacy in the local-level spike-and-slab indicators. This step takes the perspective of the covariates on the graph and sets a penalty parameter for subsequent steps. We then nest this prior within the sparse group selection spike-and-slab prior <cit.> to perform node-level selection in the manner of conventional Gaussian graphical regression models. This step handles node-level selection from the perspective of each independent node and takes advantage of the covariate selection results, conditional on the outcome of the global-local spike-and-slab prior, as penalty parameters. We propose a tensor interpretation for this selection prior.
As a result, a single edge affected by a covariate is subject to three levels of selection: covariate-level, node-level and local-level. We design a Gibbs sampler in a full Markov chain Monte Carlo (MCMC) manner to automatically tune the hyper-parameters while incorporating their uncertainty. This Gibbs sampler provides the marginal posterior probability of inclusion for each pair of edge and covariate, allowing for uncertainty quantification and posterior inference (e.g., Bayesian false discovery rate control), which leads to interpretable and flexible selection results. We provide an Rcpp implementation of this sampler, available on a GitHub page.
Compared to conventional methods without covariates, the proposed method can take advantage of group label information and jointly estimate the graphs of all groups by using the group label encoding from <cit.>. We demonstrate this common advantage, shared by all Gaussian graphical regression models with covariates, in simulation.
We also compare to <cit.>, who proposed a Bayesian bi-level selection prior for Gaussian linear regression, and to <cit.>, who studied a frequentist sparse group lasso penalty for the Gaussian graphical regression model. Both methods consider sparsity only with respect to a given node. This arises naturally from viewing the problem through the lens of conditionally independent regressions <cit.>. Meanwhile, as we point out, if one views all regression coefficients as a precision coefficient tensor, the penalties considered by <cit.> and <cit.> correspond to only one of the possible directions along which this tensor can be sliced. The proposed method not only considers this slicing direction but also adds covariate-level selection as a second direction along which the tensor is sliced. We demonstrate in simulations that, when sparsity is introduced from the covariates' perspective, the proposed method takes advantage of this modelling idea.
The rest of the paper is organized as follows. In Section <ref>, we introduce the proposed method, the prior construction with its tensor interpretation, and the sampling procedure. In Section <ref>, we conduct simulations and compare the proposed approach with Gaussian graphical regressions with and without covariates. In Section <ref>, we apply the method to human gene expression data and microbiome data.
http://arxiv.org/abs/2409.18001v1 | 20240926160939 | On the connection between coordinate and diagonal arrangement complements | [
"Vsevolod Tril"
] | math.AT | [
"math.AT",
"math.CO",
"13F55, 14N20, 55P10, 55P40, 57S12"
] |
Connection between coordinate and diagonal arrangements]On the connection between coordinate and diagonal arrangement complements
The author is supported by a stipend from the Theoretical Physics and Mathematics Advancement Foundation “BASIS”
Department of Mathematics and Mechanics, Moscow State University, Russia
mailto:[email protected]@math.msu.ru
[2020]13F55, 14N20, 55P10, 55P40, 57S12
§ ABSTRACT
We study diagonal arrangement complements D(𝒦) in ^m. We consider the class of simplicial complexes 𝒦 in which any two missing faces
have a common vertex, and prove that the coordinate arrangement complement U(𝒦) is the double suspension of the diagonal arrangement complement D(𝒦).
In the case of subspace arrangements in ^m, the coordinate arrangement complement U_ℝ(𝒦) is the single suspension of D_ℝ(𝒦).
[
Vsevolod Tril
September 28, 2024
======================
§ INTRODUCTION
Subspace arrangements and their complements play an important role in many constructions of combinatorics,
algebraic and symplectic geometry, and in mechanical systems.
In the work of Arnold <cit.>, the complement to the arrangement of complex hyperplanes { z_i = z_j } was introduced
and showed to be the Eilenberg–Mac Lane space of the pure braid group.
The cohomology ring of this space was also described in Arnold's work.
The following two classes of arrangements are of particular interest.
The first class is the coordinate subspace arrangements, which is thoroughly studied in toric topology.
The complements of coordinate subspace arrangements in ^m bijectively correspond to simplicial complexes on the set [m].
The complement of the coordinate subspace arrangement corresponding to a simplicial complex 𝒦 is denoted by U(𝒦).
In the work <cit.> it was proved that U(𝒦) is homotopy equivalent to a moment-angle complex 𝒵_𝒦
and using this fact the cohomology ring of coordinate arrangement complements was described.
The homotopy type of these spaces was explicitly described for some classes of simplicial complexes,
such as stacked polytopes and shifted complexes, see <cit.>.
Another interesting class is the diagonal arrangements. Their complements were studied in <cit.>,<cit.>, <cit.>.
In the work <cit.> cohomology groups of real diagonal arrangement
complements were computed via bar-construction of the Stanley–Reisner ring.
Based on these results a connection between cohomology groups of diagonal arrangements and loop spaces on
the polyhedral products was found in <cit.> and developed in <cit.>.
We study diagonal arrangements in ^m. The complement of such an arrangement corresponds to a simplicial complex on [m] and is denoted by D(),
see details in Section 2. We prove the following.
Suppose that any two missing faces of a simplicial complex have a common vertex. Then there is a homotopy equivalence
U() ≃Σ^2 D().
In the case of real arrangement complements, there is a homotopy equivalence
U_() ≃Σ D_().
From this statement and the ring structure of H^*(U()) given in <cit.> we obtain
Suppose that any two missing faces of a simplicial complex 𝒦 have a common vertex. Then 𝒦 is a Golod simplicial complex over any ring k, that is,
all products and higher Massey products in Tor^*_{k[v_1, …, v_m]}(k[𝒦], k) are trivial.
Using the results of <cit.> we obtain a decomposition of the double suspension of D() for satisfying the condition of Theorem <ref>:
Suppose that any two missing faces of a simplicial complex have a common vertex.
Then there is a homotopy equivalence
Σ^2 D() ≃⋁_∅≠ I ⊂ [m]Σ^|I| + 1 |_I|.
The work is organized as follows. Section 2 contains the preliminary definitions and constructions.
In Section 3 we apply the results of <cit.> to calculate the cohomology ring for a class of diagonal arrangement complements.
In Section 4 we prove that every coordinate arrangement complement is homotopy equivalent to some diagonal arrangement complement and prove the main
theorem. In Section 5 we combine our results with the results of <cit.> to obtain Corollary <ref> and Corollary <ref>.
We also give examples of simplicial complexes which satisfy the condition of Theorem <ref> but are not contained in the classes of simplicial complexes considered in <cit.>.
§.§ Acknowledgements
I wish to express gratitude to my advisor Taras Panov for stating the problem, help, support and valuable advice.
§ BASIC DEFINITIONS
An abstract simplicial complex 𝒦 on the set V is a collection of subsets I ⊂ V, called simplices,
that satisfies the following conditions:
∙ ∅ ∈ 𝒦;
∙ if I ∈ 𝒦 and J ⊂ I, then J ∈ 𝒦.
One-element simplices {i} ∈ 𝒦 are called vertices. One-element subsets {i} ⊂ V such that {i} ∉ 𝒦 are called ghost vertices.
A missing face of a simplicial complex 𝒦 is a subset I ⊂ V such that I ∉ 𝒦, but every
proper subset of I is a simplex of 𝒦.
The set of all missing faces of 𝒦 is denoted by MF(𝒦).
Usually, V is the set [m] = {1, 2, …, m}.
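The hypothesis used throughout this paper, that any two missing faces have a common vertex, is straightforward to check computationally. The following sketch is added only for illustration (representing the complex by a list of its facets is our assumption, not notation from the text):

```python
from itertools import combinations

def faces(facets):
    """All faces of the simplicial complex generated by the given facets."""
    out = {frozenset()}
    for f in facets:
        for k in range(1, len(f) + 1):
            out.update(frozenset(c) for c in combinations(f, k))
    return out

def missing_faces(vertices, facets):
    """Minimal non-faces: subsets I not in K all of whose proper subsets lie in K."""
    fs = faces(facets)
    mf = []
    for k in range(1, len(vertices) + 1):
        for I in combinations(sorted(vertices), k):
            I = frozenset(I)
            if I not in fs and all(I - {v} in fs for v in I):
                mf.append(I)
    return mf

def any_two_missing_faces_meet(vertices, facets):
    """The hypothesis of the main theorem: every two missing faces share a vertex."""
    mf = missing_faces(vertices, facets)
    return all(a & b for a, b in combinations(mf, 2))

# The boundary of a square (edges {1,2}, {2,3}, {3,4}, {1,4}) has the two
# disjoint missing faces {1,3} and {2,4}, so it fails the hypothesis.
square = [{1, 2}, {2, 3}, {3, 4}, {1, 4}]
print(missing_faces({1, 2, 3, 4}, square))               # the diagonals {1,3} and {2,4}
print(any_two_missing_faces_meet({1, 2, 3, 4}, square))  # False
```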
An arrangement is a finite set 𝒜 = {L_1, …, L_r } of affine subspaces in some affine space (either real or complex).
Given an arrangement 𝒜 = {L_1, …, L_r} in ^m, define its union |𝒜| as
|𝒜| = ⋃_i = 1^r L_i ⊂^m,
and its complement M(𝒜) as
M(𝒜) = ^m \ |𝒜|,
and similarly for arrangements in ^m.
Suppose that 𝒜 = {L_1, …, L_r } is an arrangement in ^m.
The intersections
v = L_i_1∩…∩ L_i_k
form a poset (P, <) with respect to the inverse inclusion:
u < v if and only if v is a proper subspace in u.
The minimal element ⊥ of this poset is ^m,
and the maximal element ⊤ is ⋂_i = 1^r L_i.
This poset is called the intersection poset of arrangement.
The intersection poset is a lattice: the join and meet operations are defined by
u ∨ v = u ∩ v, u ∧ v = ⋂_L: u ∈ L, v ∈ L L.
The rank function d on P is defined by
d(u) = (u).
An arrangement
𝒜 = {L_1, …, L_r } is called coordinate
if every subspace
L_i, i = 1, …, r, is a coordinate subspace.
Every coordinate subspace in ^m can be written as
C_I = {(z_1, …, z_m) ∈^m : z_i_1 = … = z_i_k = 0 },
where I = {i_1, …, i_k } is a subset in [m].
For each simplicial complex on the set [m] define the complex coordinate arrangement
𝒞𝒜() by
𝒞𝒜() = {C_I : I ∉}.
Denote the complement of this arrangement by U(), that is
U() = ^m \⋃_I ∉ C_I.
The real coordinate arrangement and its complement U_() is defined in the same way.
The assignment
↦ U() defines a one-to-one order preserving correspondence between the set of simplicial complexes on [m]
and the set of coordinate arrangement complements in ^m (or ^m).
Coordinate subspace arrangement complements are the examples of polyhedral product spaces.
Let be a simplicial complex on [m] and let
(X, A) = { (X_i, A_i), i = 1, …, m }
be a collection of m pairs of spaces.
The polyhedral product of (X, A) corresponding to
is the topological space in X_1 ×…× X_m given by
(X, A)^ = ⋃_I ∈ Y_1 ×…× Y_m,
where Y_i = X_i if i ∈ I, and Y_i = A_i if i ∉ I.
We can see that U() = (, ^×)^.
Other examples of polyhedral products are the moment-angle complex
_ = (D^2, S^1)^,
and the real moment-angle complex
_ = (D^1, S^0)^.
The moment-angle complex _ is a T^m-invariant subspace in U(), and there is a T^m-equivariant
deformation retraction
_↪ U() ≃→_.
The same is true in the real case.
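For illustration (a standard example added here, not part of the original text), let 𝒦 be the complex consisting of two disjoint points on [2], so that its only non-face is {1, 2}. Then

```latex
% K = two disjoint points on [2]; its only non-face is {1,2}.
\[
  U(\mathcal{K}) \;=\; \mathbb{C}^2 \setminus \{z_1 = z_2 = 0\} \;=\; \mathbb{C}^2 \setminus \{0\},
  \qquad
  \mathcal{Z}_{\mathcal{K}} \;=\; (D^2 \times S^1) \cup (S^1 \times D^2) \;\cong\; S^3,
\]
```

and the deformation retraction above is, up to homeomorphism, the radial retraction of ℂ^2 ∖ {0} onto the boundary of the polydisc.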
For each subset I = { i_1, …, i_k}⊂ [m] define the diagonal subspace D_I in ^m as
D_I = {(z_1, …, z_m) ∈^m : z_i_1 = … = z_i_k}.
A configuration 𝒜 = {L_1, …, L_r } is called diagonal if every subspace L_i, i = 1, …, r, is a diagonal subspace.
Given a simplicial complex on the vertex set [m] (without ghost vertices), the diagonal subspace arrangment 𝒟𝒜()
is the set of subspaces D_I such that I is not a simplex of :
𝒟𝒜() = {D_I : I ∉}.
Denote the complement of the arrangement 𝒟𝒜() by D().
The assignment ↦ D() defines a one-to-one order preserving correspondence between the set of simplicial complexes on
the vertex set [m] and the set of diagonal subspace arrangement complements in ^m (or ^m).
§ THE COHOMOLOGY RING OF THE COMPLEMENTS OF ARRANGEMENTS
The cohomology groups of a diagonal arrangement complements can be computed via its intersection lattice using the Goresky–MacPherson formula <cit.>.
Given a poset Q, denote its order complex by Δ(Q).
For elements a, b in the poset Q we denote by [a, b], (a, b] and [a, b)
the respective intervals in Q. The corresponding order complexes are denoted by Δ[a, b], Δ(a, b], Δ[a, b).
Let 𝒜 be an arrangement in ^N with intersection lattice P.
Denote by [⊥, u] the pair of spaces
[⊥, u] = (Δ[⊥, u], Δ(⊥, u] ∪Δ[⊥, u)).
Then the cohomology groups of the complement are given by
H^q(M(𝒜)) ≅⊕_u ∈ P H_N - d(u) - q([⊥, u]).
For a particular family of diagonal arrangements the cohomology groups can be calculated from the links or full subcomplexes in .
Recall that the link of a simplex I ∈ is the subcomplex
_ I = {J ∈ : I ∪ J ∈, I ∩ J = ∅},
and for J ⊂ [m] the full subcomplex of on the set J is
_J = {I ∈ : I ⊂ J }.
Suppose that any two missing faces of have a common vertex. Then there is an additive isomorphism
H^q(D()) ≅⊕_I∈ H_2m - 2|I| - q - 4(_I),
H^q(D()) ≅⊕_I ⊂ [m]H^q - |I| + 1(_I).
Note that if J_1 ∩ J_2 ≠∅, then the intersection of the diagonal subspaces D_J_1 and D_J_2 is the diagonal subspace D_J_1 ∪ J_2.
It follows that if any two missing faces of have a common vertex, then every element of the intersection lattice of the arrangement 𝒟𝒜()
is D_I = {z_i_1 = … = z_i_k} for some I = {i_1, …, i_k }∉.
Therefore, the intersection lattice is isomorfic to the poset of non-faces of ordered by inclusion.
In other words, the intersection lattice of 𝒟𝒜() is isomorphic to the set of faces of the Alexander dual complex
ordered by reverse inclusion with maximal element ⊤ = ∅ and added minimal element ⊥.
In <cit.> the following isomorphism was proved:
H_k([⊥, u]) ≅ H_k-2(Δ(⊥, u)) for u > ⊥, and H_k([⊥, u]) ≅ H_k(pt) for u = ⊥,
where we assume that H_-1(∅) ≅ ℤ.
Since the intersection lattice of 𝒟𝒜() is isomorphic to , the order complex
Δ(⊥, u) for u = {z_i_1 = … = z_i_k} is isomorphic to the barycentric subdivision of the simplicial complex
_I, where I = [m] \ I = {i_1, …, i_k }.
Thus, we have
H_k([⊥, u]) ≅ H_k-2(_I)
and from Theorem <ref> we obtain the formula (<ref>).
To prove (<ref>) we use the combinatorial Alexander duality <cit.>.
We have
H_2m-2|I| - q - 4(_I) = H_2|I| - q - 4 (_I)
≅H^q - |I| + 1 (_I).
A similar formula for coordinate arrangement complements was obtained in <cit.>.
The connection between these two results will be described below in Theorem <ref>.
The ring structure in cohomology of the so-called (≥ 2)-arrangement complements was described in <cit.>.
We need only the case of complex arrangements in ^m,
which are always (≥ 2)-arrangements.
The work <cit.> uses the following combinatorial and algebraic constructions to describe the multiplication in the Goresky–MacPherson formula.
In the notation of Theorem <ref>, we have a map
× : H_k([⊥, u]) ⊗ H_l([⊥, v]) → H_k+l(([⊥, u] × [⊥, v]) )
defined on the simplicial chains by
× : C_k(Δ(P)) ⊗ C_l(Δ(Q)) → C_k+l(Δ(P × Q)),
⟨ u_0, …, u_k ⟩ ⊗ ⟨ v_0, …, v_l ⟩ ↦ ∑ σ_i, j ⟨ (u_i_0, v_j_0), …, (u_i_k+l, v_j_k+l) ⟩,
where the sum is taken over all monotone sequences of indices 0 = i_0 ≤ … ≤ i_k+l = k and 0 = j_0 ≤ … ≤ j_k+l = l such that (i_r, j_r) ≠ (i_r+1, j_r+1) for every r.
The signs σ_i, j are defined so that σ_i, j = 1 if k = 0 or l = 0
and the Leibniz rule ∂(s × t) = ∂ s × t + (-1)^k s ×∂ t is satisfied.
Second, the join operation
∨: [⊥, u] × [⊥, v] → [⊥, u ∩ v].
(z, w) ↦ z ∩ w
preserves the ordering and therefore induces a simplicial map of the associated order complexes.
If subspaces u and v satisfy the so-called codimension condition
d(u) + d(v) - d(u ∩ v) = 2m ⇔ u + v = ^m
then ∨ is an injective map and induces a simplicial map of pairs
∨_*: [⊥, u] ×[⊥, v] →[⊥, u ∩ v].
Note that for any two lattices P and Q we have a homeomorphism Δ(P) ×Δ(Q) ≅Δ(P × Q).
Let 𝒜 be a complex arrangement. Then the product of cohomology classes in Theorem <ref> is given by:
H_k([⊥, u]) ⊗ H_l([⊥, v]) → H_k+l([⊥, u ∩ v]),
a ⊗ b ↦
∨_*(a × b), if d(u) + d(v) - d(u ∩ v) = 2m,
0, otherwise.
Let be the 6-vertex triangulation of projective plane shown in the Figure <ref>.
We can find non-zero cohomology groups of D() using Proposition <ref>:
H^0(D()) = ℤ, H^3(D()) = ℤ^10, H^4(D()) = ℤ^15,
H^5(D()) = ℤ^6, H^7(D()) = ℤ_2.
The only possible non-trivial product is
H^3(D()) ⊗ H^4(D()) → H^7(D()).
By Theorem <ref> we have
H^3(D()) ≅⊕_I ∉, |I| = 3 H_1([⊥, D_I]), H^4(D()) ≅⊕_J ⊂ [6], |J| = 4 H_2([⊥, D_J]),
so we are interested in product
H_1([⊥, D_I]) ⊗ H_2([⊥, D_J]) →
H_3([⊥, D_I ∪ J]).
For any I ∉, |I| = 3 the order complex Δ[⊥, D_I]
is the 1-simplex ⟨⊥, D_I ⟩.
So the generator of
H_1([⊥, D_I]) is the homology class
a = [⟨⊥, D_I ⟩].
For J such that |J| = 4 there are only two subsets I_1, I_2 ⊂ J, |I_1| = |I_2| = 3 that do not belong to .
Hence, the 2-simplices of Δ[⊥, D_J] are
{⟨⊥, D_I_1, D_J ⟩, ⟨⊥, D_I_2, D_J ⟩},
and H_2([⊥, D_J]) is generated by the homology class
b = [⟨⊥, D_I_1, D_J ⟩ - ⟨⊥, D_I_2, D_J ⟩ ].
The codimension condition in Theorem <ref> implies that I ∪ J = [6],
so we have the following isomorphism
H_3([⊥, D_I ∪ J]) ≅ H_1(Δ(⊥, D_[6]))
≅ H_1(') ≅_2,
where ' is the barycentric subdivision of the Alexander dual complex .
From the definition of the map × we have:
a × b =
[
⟨ (⊥, ⊥), (⊥, D_I_1), (⊥, D_J), (D_I, D_J) ⟩ -
⟨ (⊥, ⊥), (⊥, D_I_1), (D_I, D_I_1), (D_I, D_J) ⟩ +
⟨ (⊥, ⊥), (D_I, ⊥), (D_I, D_I_1), (D_I, D_J) ⟩ -
⟨ (⊥, ⊥), (⊥, D_I_2), (⊥, D_J), (D_I, D_J) ⟩ +
⟨ (⊥, ⊥), (⊥, D_I_2), (D_I, D_I_2), (D_I, D_J) ⟩ -
⟨ (⊥, ⊥), (D_I, ⊥), (D_I, D_I_2), (D_I, D_J) ⟩
].
Since | I ∩ J | = 1, we have I_k ⊄I for k = 1,2.
Then ∨_* maps the element a × b to
[
⟨⊥, D_I_1, D_J, D_[6]⟩ - ⟨⊥, D_I_1, D_I ∪ I_1, D_[6]⟩ +
⟨⊥, D_I, D_I ∪ I_1, D_[6]⟩ -
⟨⊥, D_I_2, D_J, D_[6]⟩ +
⟨⊥, D_I_2, D_I ∪ I_2, D_[6]⟩ -
⟨⊥, D_I, D_I ∪ I_2, D_[6]⟩ ].
This element is the generator of the group H_3([⊥, D_[6]]). For I = {1,2,3},
J = {3,4,5,6} its image under the isomorphism (<ref>)
is shown in the Figure <ref>.
Thus, we conclude that product (<ref>)
is non-trivial.
§ THE CONNECTION BETWEEN COORDINATE AND DIAGONAL ARRANGEMENTS
First, we show that any coordinate arrangement complement can be realized as a diagonal arrangement complement up to homotopy.
Given a simplicial complex 𝒦 on [m], define a new simplicial complex ℒ on the vertex set [m+1] as
ℒ = {{i_1, …, i_k }: {i_1, …, i_k }⊂ [m] }
∪ {{i_1, …, i_k, m+1} ⊂ [m+1]: { i_1, …, i_k } ∈ 𝒦}.
Geometrically, ℒ is the union of an (m-1)-dimensional simplex on [m] and the cone over 𝒦 with apex {m+1}.
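In terms of facets this construction is easy to carry out explicitly. The sketch below is added for illustration (the representation of 𝒦 by a list of facets is our assumption):

```python
def build_L(m, facets_K):
    """Facets of L on [m+1]: the full simplex on [m] together with the cone
    over K with apex m+1 (each facet F of K becomes F union {m+1})."""
    full_simplex = frozenset(range(1, m + 1))
    cone_facets = [frozenset(F) | {m + 1} for F in facets_K]
    return [full_simplex] + cone_facets

# For K = two disjoint points on [2], L is the boundary of a triangle with
# facets {1,2}, {1,3}, {2,3}; its only non-face is {1,2,3}, so
# D(L) = C^3 \ {z_1 = z_2 = z_3}, which is homotopy equivalent to U(K) = C^2 \ {0}.
print(build_L(2, [{1}, {2}]))
```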
Consider the corresponding diagonal arrangement complement D(ℒ) ⊂^m+1.
There is a homotopy equivalence
U() ≃ D(ℒ).
The same is true for real arrangement complements.
The map
F: ^m+1× [0,1] →^m+1,
F(z_1, …, z_m+1, t) = (z_1, …, z_m+1) - t
z_1 + … + z_m+1/m+1 (1, …, 1)
defines a homotopy between the identity map id: ^m+1→^m+1 and the orthogonal projection
P: ^m+1→^m+1 onto the hyperplane π = {z_1 + … + z_m = 0}.
Observe that F(D(ℒ), t) ⊂ D(ℒ) for every t ∈ [0, 1].
Indeed, the equality z_i_1 = … = z_i_k holds if and only if
z_i_1 - t z_1 + … + z_m+1/m+1 = … = z_i_k - t z_1 + … + z_m+1/m+1.
Hence z ∉ D(ℒ) implies z ∉ F(D(ℒ), t), that is F(D(ℒ), t) ⊂ D(ℒ).
We also have that F(z, t) = z for any z ∈π, t ∈ [0, 1],
so we obtain F(D(ℒ), 1) = D(ℒ) ∩π,
and therefore D(ℒ) ≃ D(ℒ) ∩π.
Consider the linear isomorphism A: ^m+1→^m+1 defined by the formula
y_1 = z_1 - z_m+1, …, y_m = z_m - z_m+1, y_m+1 = z_1 + … + z_m + z_m+1.
We claim that A(D(ℒ) ∩π) = U(), where U() ⊂{y_m+1 = 0}⊂^m+1.
Indeed,
the non-faces of ℒ are of the form {i_1, …, i_k, m+1},
where {i_1, …, i_k}∉. Therefore,
A(D(ℒ) ∩π) = A( ^m+1\⋃_{i_1, …, i_k}∉{z_i_1 = … = z_i_k = z_m+1}) ∩ A ({z_1 + … + z_m+1 = 0})
= (^m+1\⋃_{i_1, …, i_k}∉{y_i_1 = … = y_i_k = 0 }) ∩{y_m+1 = 0 } = U(),
where the last equality holds since U() is considered as a subspace in the hyperplane {y_m+1 = 0}.
It follows that A^m+1→^m+1 induces a homeomorphism between D(ℒ) ∩π and U(), so we obtain a homotopy equivalence D(ℒ) ≃ D(ℒ)∩π≅ U().
Conversely, diagonal arrangement complements corresponding to a particular class of simplicial complexes can be realized as coordinate arrangement complements:
Suppose that ℒ is a simplicial complex on the vertex set [m] that has a face with m-1 vertices.
Then there is a simplicial complex such that D(ℒ) ≃ U().
Assume that {1, 2, …, m-1 }∈ℒ.
Consider the simplicial complex 𝒦 = _ℒ{m} on the set [m-1].
Since ℒ can be obtained from 𝒦 using operation (<ref>), we have that U(𝒦) ≃ D(ℒ).
The class of simplicial complexes ℒ for which D(ℒ)≃ U(𝒦) can be extended by taking joins. Recall that the join of
simplicial complexes _1 and _2 on the sets V_1 and V_2 is the following simplicial complex on the set V_1⊔ V_2:
_1 * _2 = { I_1 ⊔ I_2: I_1 ∈_1, I_2 ∈_2 }.
There is a homeomorphism
D(_1 * _2) ≅ D(_1) × D(_2).
Note that D() = ^m \⋃_ I ∉ D_I= ^m \⋃_ I ∈ MF() D_I, where the latter union is taken over the missing faces of . Since MF(_1 * _2) = MF(_1) ⊔ MF(_2), we get
D(_1 * _2) = ^m \(⋃_I ∈ MF(_1) D_I ∪⋃_J ∈ MF(_2) D_J )
= (^m_1\⋃_I ∈ MF(_1) D_I ) ×(^m_2\⋃_J ∈ MF(_2) D_J ) = D(_1) × D(_2).
Consider the simplicial complex 𝒦 that is the boundary of a 4-gon, i.e. the complex on the vertex set {1, 2, 3, 4} whose maximal simplices are the edges {1,2}, {2,3}, {3,4} and {1,4} of a square.
The corresponding diagonal arrangement complement has the form
D() = ^4 \({z_1 = z_3}∪{z_2 = z_4}).
Obviously
= _1 * _2, where each of _1 and _2 is the pair of disjoint points.
Each of these simplicial complexes satisfy the condition of Corollary <ref>, so we can determine the homotopy type of D():
D() ≅ D(_1) × D(_2) ≃ U(_0) × U(_0) ≃ S^1 × S^1 = T^2,
where _0 is the empty simplicial complex on the set [2].
Let X be a finite simplicial complex, and let Y be a subcomplex in X. Then Σ(X \ Y) ≃ (Σ X) \ Y.
Consider the barycentric subdivision X' of X and the full subcomplex Z in X' on the vertices that do not belong to Y.
Define a neighbourhood U_X'(Z) of Z as the union of relative interiors of simplices of X' having some simplex of Z as a face.
Obviously, Z is a deformation retract of U_X'(Z).
We claim that X' \ U_X'(Z) = Y', where Y' is the barycentric subdivision of Y.
Indeed, suppose that there is a point x ∈ X' such that x ∉ Y' and consider the minimal face F that contains x.
Since we work with the barycentric subdivision of X, F intersect Y' by a single face.
Then there is a vertex v ∈ F such that v ∉ Y', so v ∈ Z and hence the relative interior of F is a subset of U_X'(Z).
The point x belongs to the relative interior of F since we took the minimal face F.
Thus we obtain a homotopy equivalence X \ Y ≃ Z, so Σ(X \ Y) ≃Σ Z.
Applying the same arguments to the simplicial complex Σ X and its subcomplex Y, we obtain that (Σ X) \ Y ≃ W,
where W is the full subcomplex in the barycentric subdivision (Σ X)' of Σ X on the vertices that do not belong to Y. It remains to note that
W = Σ Z as subcomplexes in Σ X' = (Σ X)' to obtain the desired statement.
Suppose that any two missing faces of simplicial complex have a common vertex. Then there is a homotopy equivalence
U() ≃Σ^2 D().
In the case of real arrangement complements there is a homotopy equivalence
U_() ≃Σ D_().
Let D() be the complement of the diagonal arrangement 𝒟𝒜().
Denote by π the hyperplane {z_1 + … + z_m = 0 }. The map
F(z, t) = z - t z_1 + … + z_m/m (1, …, 1)
defines a homotopy between the identity map on D() and the orthogonal projection of D() to D() ∩π.
The map
G(z, t) = (1-t) z + t z/|z|
defines a homotopy between the identity map on D() ∩π and
the central projection from D() ∩π to D() ∩ S^2m-3, where S^2m-3 is the unit sphere in π.
Hence, we obtain a homotopy equivalence
D() ≃ D() ∩ S^2m-3 = S^2m-3\ (𝒟𝒜() ∩π)
We define a continuous function f on the union |𝒟𝒜()| by
f_i_1, …, i_k(z) = z_1 + … + z_m /m - z_i_1 for z∈{z_i_1 = … = z_i_k}.
These functions coincide on the intersections of the diagonal subspaces in 𝒟𝒜(). Indeed,
for any two non-faces I = {i_1, …, i_k} and J = {j_1, …, j_l } of
the intersection of subspaces {z_i_1 = … = z_i_k}
and {z_j_1 = … = z_j_l} has the form {z_i_1 = … = z_i_k = z_j_1 = … = z_j_l},
because I and J have a common vertex. So we obtain
f_i_1, …, i_k(z) = z_1 + … + z_m /m - z_i_1 = z_1 + … + z_m /m - z_j_1 = f_j_1, …, j_l(z)
for z ∈{z_i_1 = … = z_i_k}∩{z_j_1 = … = z_j_l}.
Hence, f is well defined on |𝒟𝒜()|.
The intersection |𝒟𝒜()| ∩ π is closed in π, so the Tietze extension theorem implies that the function f extends to a continuous function on the whole hyperplane π (still denoted by f), and then to the whole of ℂ^m
by the rule
f(z) = f(pr_π(z)),
where pr_π(z) is the orthogonal projection onto the hyperplane π.
Then we define a map Φ^m →^m by the formula
Φ (z) = z - f(z) (1, …, 1).
This map is a homeomorphism with the inverse map given by
Φ^-1(z) = z + f(z) (1, …, 1).
Consider the coordinate arrangement 𝒞𝒜(K) and its complement U(). We claim that
Φ(U()) = ^m \(𝒟𝒜() ∩π).
Indeed, the restriction of f to a coordinate subspace {z_i_1 = … = z_i_k = 0 }∈𝒞𝒜() is equal to z_1 + … + z_m/m. Hence,
Φ(z) = z - z_1 + … + z_m/m (1, …, 1)
for z ∈𝒞𝒜(),
that is, Φ(z) ∈π. Also we have that Φ(𝒟𝒜()) ⊂𝒟𝒜(), because Φ translates every point z by a vector collinear to (1, …, 1).
Hence, Φ(𝒞𝒜() ) ⊂𝒟𝒜() ∩π.
The inverse inclusion follows by applying the same arguments to the map Φ^-1.
Since Φ is a homeomorphism and
Φ(𝒞𝒜()) = 𝒟𝒜() ∩π, we obtain
Φ(U()) = ^m \(𝒟𝒜() ∩π).
The map G(z, t) defines a homotopy between the identity map on Φ(U()) and the central projection
from U() to Φ(U()) ∩ S^2m-1, where S^2m-1 is the unit sphere in ^m.
Thus, we obtain homotopy equivalences:
Σ^2 D() ≃Σ^2(S^2m-3\
(𝒟𝒜() ∩π))
≃ S^2m-1\(𝒟𝒜() ∩π)
≃^m \ (𝒟𝒜() ∩π) ≅ U().
Here the first homotopy equivalence follows from (<ref>). The second equivalence follows from Lemma <ref> (by the result of <cit.>, the intersection
of a subspace arrangement with a sphere can be viewed as a simplicial subcomplex). The last homeomorphism above is (<ref>).
For real arrangement complements the argument is the same.
Using Theorem <ref> we can reproduce some calculations of <cit.> for the cohomology groups of “k-equal arrangement” complements:
Suppose that = ^k-2Δ^m, k < m < 2k. Then D_() and D() are the real and complex “k-equal arrangement” complements, respectively.
Their cohomology groups are given by
H^q(D_()) =
ℤ, if q = 0,
ℤ^s, if q = k-2, where s = ∑_{l = k}^{m} \binom{m}{l} \binom{l-1}{k-1},
0, otherwise,
H^q(D()) =
ℤ, if q = 0,
ℤ^{t(q)}, if 2k - 3 ≤ q ≤ m + k - 3, where
t(q) = \binom{m}{q-k+1} \binom{q-k}{k-1},
0, otherwise,
In <cit.>, the following homotopy equivalences were proven:
U_() ≃ ⋁_{l = k}^{m} (S^{k-1})^{∨ \binom{m}{l} \binom{l-1}{k-1}},
U() ≃ ⋁_{l = k}^{m} (S^{k + l - 1})^{∨ \binom{m}{l} \binom{l-1}{k-1}}.
Now the statement follows from Theorem <ref> and the suspension isomorphism.
§ BBCG-DECOMPOSITION AND GOLOD COMPLEXES
We refer to the stable decomposition of the polyhedral product described in the next theorem as the BBCG-decomposition:
Let be a simplicial complex on [m], and the pairs of spaces (X, A) have the property
that X_i is contractible for all i = 1, 2, …, m.
Then there is a homotopy equivalence.
Σ (X, A)^≃Σ⋁_∅≠ I ⊂ [m] |_I| * A^I.
In the particular case of _=(D^2,S^1)^ we obtain
Σ_≃⋁_∅≠ I ⊂ [m]Σ^|I| + 2|_I|.
Necessary and sufficient conditions for desuspending the BBCG-decomposition were studied in <cit.>. One of the main results is the following:
The following conditions are equivalent:
* the fat wedge filtration of _ is trivial;
* _ is a co-H-space;
* there is a homotopy equivalence
_≃⋁_∅≠ I ⊂ [m]Σ^|I| + 1|_I|.
We use this statement together with Theorem <ref> to obtain the decomposition of Σ^2 D().
Suppose that any two missing faces of simplicial complex on [m] have a common vertex. Then the fat wedge filtration
of _ is trivial and there is a homotopy equivalence
Σ^2 D() ≃⋁_∅≠ I ⊂ [m]Σ^|I| + 1 |_I|.
Theorem <ref> implies that _≃Σ^2D() is a co-H-space. Then Theorem <ref> implies that the fat wedge filtration of _ is trivial
and the BBCG-decomposition of the moment-angle complex can be desuspended.
In the general case it is impossible to desuspend the above decomposition and describe the homotopy type of D(). The following example demonstrates such a case.
Let be the 6-vertex triangulation of P^2 from Example <ref>. By the result of <cit.>, the BBCG-decomposition
can be desuspended in this case and we obtain:
_≃⋁_∅≠ I ⊂ [6]Σ^|I| + 1|_I|
≃ (S^5)^∨ 10∨ (S^6)^∨ 15∨ (S^7)^∨ 6∨Σ^7( P^2).
Since any two missing faces in have a common vertex, Proposition <ref> implies that Σ^2 D()≃ U()≃_ is a wedge of suspensions. However, this decomposition cannot be desuspended, because the multiplication in the ring H^*(D()) is nontrivial as shown by Example <ref>.
Recall that a simplicial complex 𝒦 on the set [m] is called Golod over a ring k if all products and (higher) Massey products in Tor^*_{k[v_1, …, v_m]}(k[𝒦], k) vanish.
Let 𝒦 be a simplicial complex such that any two of its missing faces have a common vertex. Then
𝒦 is a Golod complex.
By Theorem <ref>, the moment-angle complex 𝒵_𝒦 is the double suspension of D(𝒦).
Therefore, all products and higher Massey products in H^*(𝒵_𝒦) ≅ Tor^*_{k[v_1, …, v_m]}(k[𝒦], k) are trivial, so 𝒦 is Golod.
This class of simplicial complexes includes the simplicial complexes described in Corollary <ref>.
In <cit.>, two classes of simplicial complexes were constructed for which the fat wedge filtration
of the moment-angle complex is trivial. These are the homology fillable and ⌈/2⌉-neighbourly complexes. However, neither of this classes contains the class of complexes in which any two missing faces have a common vertex. This is illustrated by the next example.
Let be the simplicial complex on seven vertices with the missing faces given by
MF() = {{1, 2, 3, 7}, {1, 2, 4, 7}, {1, 3, 5, 7}, {1, 4, 6, 7}, {1, 5, 6, 7 },
{2, 3, 6, 7 }, {2, 4, 5, 7}, {2, 5, 6, 7}, {3, 4, 5, 7}, {3, 4, 6, 7}}
This simplicial complex can be obtained from the complex of Example <ref> using the operation (<ref>).
Then is homotopy equivalent to the suspension of P^2,
so Σ is not a wedge of spheres.
Hence, is not homology fillable. We also have that = 5, but is not 3-neighbourly. Hence, does not belong to either of the classes described in <cit.>.
Nevertheless, Proposition <ref> and Proposition <ref> imply that is a Golod complex and that fat wedge filtration of _ is trivial.
As we can see, the BBCG-decomposition of moment-angle complex has the form
_≃ (S^7)^∨ 10∨ (S^8)^∨ 15∨ (S^9)^∨ 6∨Σ^9( P^2).
Theorem <ref> implies that D() ≃ U(ℒ), where ℒ is the 6-vertex triangulation of projective plane.
Thus, we can desuspend (<ref>) and obtain the following decomposition for the diagonal arrangement complement:
D() ≃ (S^5)^∨ 10∨ (S^6)^∨ 15∨ (S^7)^∨ 6∨Σ^7( P^2).
http://arxiv.org/abs/2409.17456v1 | 20240926011829 | Long or Short or Both? An Exploration on Lookback Time Windows of Behavioral Features in Product Search Ranking | [
"Qi Liu",
"Atul Singh",
"Jingbo Liu",
"Cun Mu",
"Zheng Yan",
"Jan Pedersen"
] | cs.IR | [
"cs.IR"
] |
2024
Copyright for this paper by its authors.
Use permitted under Creative Commons License Attribution 4.0
International (CC BY 4.0).
eCom’24: ACM SIGIR Workshop on eCommerce, July 18, 2024, Washington, DC, USA
Walmart Global Tech, Hoboken, NJ, U.S.A.
§ ABSTRACT
Customer shopping behavioral features are core to product search ranking models in eCommerce. In this paper, we investigate the effect of lookback time windows when aggregating these features at the (query, product) level over history. By studying the pros and cons of using long and short time windows, we propose a novel approach to integrating these historical behavioral features of different time windows. In particular, we address the criticality of using query-level vertical signals in ranking models to effectively aggregate all information from different behavioral features. Anecdotal evidence for the proposed approach is also provided using live product search traffic on Walmart.com.
Keywords: online shopping, product search ranking, learning to rank, feature engineering, behavioral features
Long or Short or Both? An Exploration on Lookback Time Windows of Behavioral Features in Product Search Ranking
Jan Pedersen
September 28, 2024
===============================================================================================================
§ INTRODUCTION
Online shopping has become an indispensable part of people's daily lives due to its convenience, wide selection, cost-effectiveness, and mobile accessibility. With an ever increasing catalog size, product search ranking system <cit.> has been playing a pivotal role in serving customers by ranking relevant products at the top of their search results.
At the heart of every modern eCommerce product search ranking system lies a machine-learned ranking model. For example, LambdaRank/MART <cit.> leverages gradient boosting machines <cit.>, and neural ranker <cit.> employs deep learning techniques. These models evaluate and assign scores to each product based on a wide range of input signals derived from diverse sources, including user behaviors, query intents, product attributes, seller reputations, and sophisticated interactions among them.
Out of these many hundreds or even thousands of signals, behavioral features hold significant importance as they are generated through direct interactions between customers and products, encompassing actions like impressions, clicks, add-to-carts (ATCs), purchases, and others. Several studies <cit.> have emphasized the pivotal role of such implicit relevance feedback <cit.> in product ranking. In the eCommerce context, customers are the ultimate authorities in determining the relevance of products for a given query, particularly when their judgment is backed by their purchasing decisions. Moreover, such logged customer feedback is abundant and cheap to obtain in today's operational systems. Hence, it is very natural that (query, product)-level behavioral features are the ones that ranking models rely on the most when ranking products.
In spite of the rich and growing literature on leveraging (query, product)-level behavioral features and their variants in product search ranking, one largely unaddressed problem is what lookback time window should be used to aggregate customer engagement at the (query, product) level. This is a critical design question for all practitioners when applying these essential behavioral features in their ranking systems. In this paper, we will share our empirical insights from our first-hand industrial experience. In particular, we explore behavioral features aggregated over different lookback time windows and study their respective effects on product search ranking. Based upon the pros and cons of using long and short time windows, we propose a principled approach to integrating both sets of behavioral features into the model. The effectiveness of this hybrid model is justified on real product search traffic at Walmart.com through online A/B tests.
The remainder of the paper is organized as follows.
Section <ref> discusses the pros and cons of using behavioral features with long and/or short windows. Section <ref> details the proposed enhancement to achieve the best integration of both long and short windows. Section <ref> describes the comprehensive online A/B experiment conducted to evaluate the proposed ranking model. Finally, Section <ref> summarizes our findings and draws conclusions.
§ LONG OR SHORT OR BOTH?
In this section, we will explore the effect of different lookback time window lengths when leveraging behavioral features in product search ranking models. Three types of (query, product)-level user engagement are considered: click rate, add-to-cart (ATC) rate, and order rate. To compute these rates, for a given query (q) - product (p) pair (q, p), we employ the Beta-Binomial Bayesian model and derive behavioral feature values br_q, p as the posterior mean of the following Beta distribution,
Beta(∑_t ∈ T b_q,p^(t) + α, ∑_t ∈ T e_q,p^(t) - ∑_t ∈ T b_q,p^(t) + β),
where α and β specify the prior Beta distribution, b_q,p^(t) is the raw count of the behavior (clicks, ATCs, or orders) frequency for (q, p) on day t, e_q,p^(t) is the raw count of customer examines for (q, p) on day t, and T is the collection of lookback dates we use to aggregate the engagement data. In particular, the following behavioral features are output to our ranking model,
br_q, p = ∑_t ∈ T b_q,p^(t) + α/∑_t ∈ T e_q,p^(t) + α + β,
which is quite similar to the behavioral features defined in <cit.> but smoothed with a prior in order to better address the cold start problem <cit.>.
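In code, equation (<ref>) amounts to the following minimal sketch (the prior parameters and the synthetic daily counts are illustrative assumptions, not values from the paper):

```python
import numpy as np

def smoothed_rate(behavior_counts, examine_counts, alpha=1.0, beta=9.0):
    """Posterior-mean behavioral rate br_{q,p} for one (query, product) pair.

    behavior_counts[t] and examine_counts[t] are the raw daily counts b_{q,p}^(t)
    and e_{q,p}^(t) over the lookback window T; alpha and beta specify the Beta
    prior (illustrative values only).
    """
    b = float(np.sum(behavior_counts))
    e = float(np.sum(examine_counts))
    return (b + alpha) / (e + alpha + beta)

# The same (query, product) pair aggregated over the short and the long window:
rng = np.random.default_rng(0)
examines = rng.poisson(50, size=730)      # two years of daily examinations
clicks = rng.binomial(examines, 0.12)     # daily clicks
print("1-month click rate:", round(smoothed_rate(clicks[-30:], examines[-30:]), 3))
print("2-year click rate: ", round(smoothed_rate(clicks, examines), 3))
```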
As shown in equation <ref>, one critical factor influencing the values and interpretation of behavioral features is the lookback time window length |T| used to aggregate engagements. Utilizing a longer time window captures long-term customer engagement patterns but may overlook recent trends. Conversely, a short time window highlights more short-term behaviors but may not accurately reflect enduring customer interests. Both long and short time windows for aggregating behavioral features present distinct advantages and disadvantages outlined in Table <ref>.
To investigate the impacts of long- and short-term behavioral features, we define 2 years (|T| = 730) as the long lookback time window and 1 month (|T| = 30) as the short one, and we specify the ranking model with only 2-year behavioral features as the baseline. Three distinct ranking models with different designs on the window lengths are proposed below.
* Baseline Model: model with only 2-year behavioral features.
* Model A: model with only 1-month behavioral features.
* Model B: model with both 2-year and 1-month behavioral features.
Our search ranking models are trained using XGBoost <cit.> with the Learning-to-Rank (LTR) framework <cit.>, in a setup very similar to <cit.>, utilizing data from a truncated historical period of online customer search traffic on Walmart.com.
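A minimal training sketch consistent with this description is given below; the feature names, hyperparameters, and the use of the XGBRanker interface are our assumptions, since the text only states that XGBoost is used within an LTR framework:

```python
import numpy as np
import xgboost as xgb

# One row per (query, product) candidate; columns are the behavioral features.
feature_names = [
    "click_rate_2y", "atc_rate_2y", "order_rate_2y",   # long window (Baseline)
    "click_rate_1m", "atc_rate_1m", "order_rate_1m",   # short window (Models A/B)
]
rng = np.random.default_rng(0)
X = rng.random((1000, len(feature_names)))
y = rng.integers(0, 3, size=1000)          # graded engagement label per candidate
qid = np.repeat(np.arange(100), 10)        # 100 queries with 10 candidates each

ranker = xgb.XGBRanker(objective="rank:ndcg", n_estimators=200,
                       max_depth=6, learning_rate=0.1)
ranker.fit(X, y, qid=qid)
scores = ranker.predict(X[:10])            # ranking scores for one query's candidates
```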
To explore the best usage of lookback time windows, we conducted multiple interleaving tests <cit.>, each comparing one proposed model against the baseline model. These tests were performed on a substantial volume of online customer traffic to compare their reactions to different ranking models. Specifically, for each test, we compare the percentage of searches that result in customer engagements between the Control and Variant groups using their respective ranking models. The results are further segmented by different verticals—specific business niches tailored to particular shopping needs. We currently categorize our search queries into six verticals: Food, Consumables, Home, Hardlines, Fashion, and ETS (Electronics, Toys, and Seasonal), with the latter four collectively categorized as General Merchandise (GM).
§.§ Only Long / Short Window
The first interleaving test is configured as follows:
* Control: Baseline Model (2-year behavioral features only),
* Variant: Model A (1-month behavioral features only),
with the purpose to separately examine and compare the individual impact of 2-year and 1-month behavioral features on search ranking models.
The test result is presented in Table <ref>. Although Model A demonstrates an overall insignificant decline compared to the baseline, when zooming into each business vertical, the results tell a more nuanced story. Model A exhibits a significant decline in Food and a trending decline in Consumables. Conversely, it demonstrates a significant lift in ETS and, more generally, positive changes across most General Merchandise verticals. This corroborates that short-term behavioral features are more informative in an environment that is more dynamic in terms of both inventory assortment and customers' shopping behaviors. In contrast, long-term features are more advantageous for the Food and Consumables business units, which typically display more stable and enduring shopping patterns. Therefore, it is very tempting to employ both types of features in the ranking model to leverage their combined strengths. Similar ideas of combining session and historical customer search behaviors for each customer are also investigated in web search personalization <cit.>, but to the best of our knowledge, our work is the first to explore combining (query, product)-level historical behavior features over different lookback time windows in product search ranking.
§.§ Both Long & Short Windows
With the insight from the previous subsection, we set up the second interleaving test as follows:
* Control: Baseline Model (2-year behavioral features only),
* Variant: Model B (2-year and 1-month behavioral features),
with the purpose to examine the combined impact of using both 2-year and 1-month behavioral features in ranking.
The test result is presented in Table <ref>. To our surprise, Model B performs quite sub-optimally overall with the degradation in Food, Consumables, and ETS verticals. This suggests that combining both long- and short-term behavioral features in this vanilla manner not only fails to provide gains in ranking performance but also leads to further declines. One possible reason for the negativity is the lack of flexibility in our ranking model to leverage different behavioral features accordingly. For instance, the Food vertical should ideally leverage the 2-year behavioral features as extensively as possible. However, adding 1-month features dilutes the positive effect of the 2-year features, negatively interfering with the overall contribution of behavioral features. Conversely, in verticals such as ETS and Hardlines, where 1-month features are more advantageous, the inclusion of 2-year features can similarly impair performance.
§ HOW TO INTEGRATE BOTH?
Different verticals exhibit different patterns of trending effects in customer behaviors. For instance, Fashion such as “clothes” may be significantly influenced by recent trends affecting their popularity and customer interactions. In contrast, Food and Consumables such as “milk” and “toilet paper” tend to show more stability over time and are predominantly shaped by long-term engagement patterns.
Based on observations from tests in Sections <ref> and <ref>, to improve the model performance with combined behavioral features of both long and short windows, we consider making our ranking model more query context-aware by incorporating one-hot encoded query vertical signals (predicted from the upstream query understanding model) into the model. These query-level vertical signals would better guide our ranking model to leverage behavioral features of different time windows according to different queries. Thus, we propose the fourth ranking model below.
* Model C: model with both 2-year and 1-month features, and query-level vertical features.
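Concretely, the vertical signal can be appended to the behavioral features as a one-hot vector over the six verticals listed above (a sketch; the encoding details are our assumption):

```python
import numpy as np

VERTICALS = ["Food", "Consumables", "Home", "Hardlines", "Fashion", "ETS"]

def one_hot_vertical(vertical):
    """One-hot encoding of the predicted query vertical."""
    vec = np.zeros(len(VERTICALS))
    vec[VERTICALS.index(vertical)] = 1.0
    return vec

def model_c_features(behavioral_2y, behavioral_1m, vertical):
    """Model C input: long- and short-window behavioral features plus the vertical one-hot."""
    return np.concatenate([behavioral_2y, behavioral_1m, one_hot_vertical(vertical)])
```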
§.§ Both Long & Short Windows with Verticals
The third interleaving test is configured as follows.
* Control: Baseline Model (2-year behavioral features only),
* Variant: Model C (2-year and 1-month behavioral features with the vertical features),
with the purpose to examine whether adding query-level vertical features helps better integrate 2-year and 1-month behavioral features in ranking.
The test result, detailed in Table <ref>, shows that guided by vertical information, behavioral features are more effectively utilized by the ranking model, leading to significant uplifts across all General Merchandise verticals while rectifying the previous degradation in the Food and Consumables. Model C also shows an overall significant increase of 0.22% in customer engagement and proves to be the best candidate ranking model among all tested. This demonstrates that incorporating vertical features can indeed enhance the integration of multi-window behavioral features, allowing each to play to its strengths and mitigate its weaknesses.
§.§ Why Does Long & Short & Verticals Work?
The guiding effect that vertical information has on the ranking model in using different behavioral features can also be validated in the model structure. Our ranking model inherently employs a tree structure, where adjacent tree nodes tend to be functionally related. More precisely, if the nodes corresponding to one feature frequently precede those of another specific feature, it suggests that the former, i.e., the upper-level feature, exerts a certain degree of influence/control over the latter, i.e., the lower-level feature, determining when it will activate to impact the model's predictions.
Across all splitting nodes from the trees in Model C, we summarize the distribution of different behavioral nodes under the vertical nodes in Figure <ref>, taking Fashion and Consumables as examples. The results show that 1-month behavioral features are more influential for Fashion queries since they more prevalently occupy the place of the immediate lower level when the current vertical node is Fashion, whereas 2-year behavioral features are more prevalent for Consumables queries. This observation is aligned with our interpretation of the test result in Section <ref> given the characteristics of different verticals, and it evinces that introducing query-level vertical signals can help guide our ranking model to better ensemble long- and short-term behavioral features in the sense that different behavioral features can contribute accordingly with respect to different search queries.
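Such a summary can be produced, for example, by walking the fitted booster's trees and counting which features split immediately below the vertical split nodes. The sketch below is an illustration only; the availability of trees_to_dataframe and feature names such as "vertical_Fashion" are assumptions:

```python
from collections import Counter

def child_feature_counts(booster, parent_feature):
    """Count split features appearing directly below splits on `parent_feature`."""
    df = booster.trees_to_dataframe()       # one row per node of every tree
    by_id = df.set_index("ID")
    counts = Counter()
    for _, node in df[df["Feature"] == parent_feature].iterrows():
        for child_id in (node["Yes"], node["No"]):
            if child_id in by_id.index:
                child_feature = by_id.loc[child_id, "Feature"]
                if child_feature != "Leaf":
                    counts[child_feature] += 1
    return counts

# e.g. which behavioral features sit right below the Fashion vertical indicator:
# child_feature_counts(ranker.get_booster(), "vertical_Fashion")
```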
§ A/B TEST
After the series of interleaving tests in Sections <ref> and <ref>, we decided to move forward to A/B test with the most promising candidate, Model C, which incorporates both long- and short-term behavioral features along with the query-level vertical signals. Specifically, we conducted a comprehensive A/B test on Walmart.com for two weeks to compare Model C against the baseline production model.
The result, detailed in Table <ref>, highlights substantial improvements in key search related metrics. This A/B test observation confirms our hypothesis that a vertical-aware ranking model incorporating a hybrid of behavioral features across both long and short time windows can enhance the customer experience for a diverse range of online shopping needs. In addition, the positivity in marketplace GMV clearly indicates that we are also able to better address cold-start problems when introducing short-term behavioral features into the system.
We also present a qualitative example in Figure <ref> illustrating the comparison of Model C versus the baseline in terms of user experience from the search ranking. It is clearly demonstrated that utilizing behavioral features from both long and short time windows, along with vertical information, results in a ranking model that prioritizes products with high recent popularity, especially in the General Merchandise categories. This approach ensures that customers are provided with options that are more closely aligned with their current shopping needs.
§ CONCLUSION
In this paper, we propose a novel product search ranking model that incorporates a hybrid of behavioral features over both long and short lookback time windows with vertical-specific insights. The multi-window design aims to capture customer engagement patterns over varying durations, and the vertical features are purposed to tailor behavioral features more effectively to different online shopping contexts. This approach allows long-term behavioral features to reflect enduring patterns, supporting routine customer journeys, while short-term features capture immediate, trending patterns to enhance discovery customer experiences.
Through comprehensive online testing, we demonstrate that the proposed model significantly outperforms the baseline, which solely utilizes singular time-window behavioral features, by achieving substantial improvements in key evaluation metrics across various verticals, catering to distinct online shopping needs. As a result, the integration of multi-window behavioral features and search context awareness adeptly navigates the complex dynamics of different shopping categories, thereby enhancing customer engagement across all verticals. Consequently, the proposed model not only fulfills the diverse needs of contemporary eCommerce online shopping but also lays a scalable foundation for future enhancements in search ranking systems.
For future work, we intend to expand the feature scope of the search ranking model by incorporating behavioral features from additional time windows, such as 1 week and 1 year. This extension will enable the model to capture a broader spectrum of trending effects, further enhancing its predictive accuracy. Additionally, we plan to introduce more granular query-level signals–e.g., categorical signals, product type signals, etc.–to allow for more nuanced guidance of behavioral features, improving ranking's contextualized capability and enriching the online shopping experience for customers.
http://arxiv.org/abs/2409.17421v1 | 20240925230946 | Solar Active Regions Emergence Prediction Using Long Short-Term Memory Networks | [
"Spiridon Kasapis",
"Irina N. Kitiashvili",
"Alexander G. Kosovichev",
"John T. Stefan"
] | astro-ph.SR | [
"astro-ph.SR",
"cs.AI",
"cs.LG"
] |
Spiridon Kasapis (ORCID 0000-0002-0786-7307)
NASA Ames Research Center, Moffett Field, CA, USA
Irina N. Kitiashvili (ORCID 0000-0003-4144-2270)
NASA Ames Research Center, Moffett Field, CA, USA
Alexander G. Kosovichev (ORCID 0000-0003-0364-4883)
New Jersey Institute of Technology, Newark, NJ 07103, USA
NASA Ames Research Center, Moffett Field, CA, USA
New Jersey Institute of Technology, Newark, NJ 07103, USA
§ ABSTRACT
We developed Long Short-Term Memory (LSTM) models to predict the formation of active regions (ARs) on the solar surface. Using the Doppler shift velocity, the continuum intensity, and the magnetic field observations from the Solar Dynamics Observatory (SDO) Helioseismic and Magnetic Imager (HMI), we have created time-series datasets of acoustic power and magnetic flux, which are used to train LSTM models on predicting continuum intensity 12 hours in advance. These novel machine learning (ML) models are able to capture variations of the acoustic power density associated with upcoming magnetic flux emergence and continuum intensity decrease. Testing of the models' performance was done on data for 5 ARs, unseen by the models during training. Model 8, the best performing model trained, was able to make a successful prediction of emergence for all testing active regions in an experimental setting and for three of them in an operational setting. The model predicted the emergence of AR11726, AR13165, and AR13179 10, 29, and 5 hours in advance, respectively, and variations of this model achieved average RMSE values of 0.11 for both active and quiet areas on the solar disc. This work sets the foundations for ML-aided prediction of solar ARs.
§ INTRODUCTION
The growing interest in manned spaceflight and the increased reliance on space-based infrastructure has made reliable space weather forecasting tools very important. Solar activity is the primary influencing factor of space weather and the geospace environment; therefore, understanding of its origin should form the foundation of these forecasting tools. Lately, significant efforts have been made towards the prediction of certain aspects of space weather, for example, flares <cit.>, coronal mass ejections <cit.>, and solar energetic particles <cit.>, but forecasting the progenitor of these events, namely the solar active regions (ARs), has been a rather unexplored area of study. The main difference between forecasting space weather events and the emergence of active regions lies in their observational constraints: there are strong connections between the strength and configuration of the photospheric magnetic field and the likelihood of flares <cit.> and CMEs, so prediction of these events benefits greatly from the high-resolution magnetograms produced by the Helioseismic and Magnetic Imager (HMI) onboard the Solar Dynamics Observatory (SDO). Forecasting the emergence of ARs, meanwhile, requires probing of the solar interior that is only possible indirectly via helioseismic methods <cit.>.
To detect emergence, it is important to capture those subsurface solar activity signatures that precede the abrupt decrease in continuum intensity and increase in magnetic flux during the formation of an AR. By using time-distance helioseismology to image sound-speed variations prior to the appearance of the active region on the solar surface, <cit.> made the very first attempt to observe an emerging active region in the solar interior. This research focused on increases in magnetic flux associated with the AR emergence, showing that the flux is fragmented and travels very quickly through the top 18 Mm layer of the convection zone, with a speed of about 1 km/s. Similarly, <cit.> explored the feasibility of using variations in acoustic power as a precursor to detect signatures of emerging magnetic flux for AR10488. More specifically, they demonstrated that subsurface structures influence the acoustic power at the overlying photosphere, suggesting that changes in acoustic power could be indicative of active regions before they become visible on the solar surface.
There is a deep body of past work aimed at using these helioseismic methods to attempt the detection of magnetic flux before it reaches the surface, with the majority of research focusing on the near-surface layers (z ≥ -20 Mm) where the signal-to-noise ratio (SNR) is greater. Perhaps the most comprehensive of these works is the analysis of helioseismic signatures preceding the emergence of over 100 ARs <cit.>, where statistically significant differences in subsurface flows and wave speeds preceding active region formation were identified. In the series of works by <cit.>, <cit.> and <cit.> the authors tested whether pre-appearance signatures of solar magnetic active regions were detectable using various tools of local helioseismology. Using data from the Global Oscillations Network Group <cit.> and SOHO/MDI <cit.>, this series of studies a) ruled out spatially extended flows above 15 m/s in the top 20 Mm of the photosphere before emergence, thereby setting constraints on theoretical models of active region development and b) presented evidence of a helioseismic precursor to AR emergence that is present for at least a day prior to emergence.
The possibility of detection of large active regions before they become visible using synoptic imaging of subsurface magnetic activity was also demonstrated by <cit.>, who detected strong acoustic travel-time anomalies of an order of 12 to 16 seconds as deep as 65 Mm. <cit.> proved that originating from deeper layers, these acoustic anomalies rise to shallower regions at velocities up to 1 km/s, suggesting their association with acoustic power variations rather than just subsurface flows or wave-speed perturbations. The deviations in the mean phase travel time of acoustic waves before the emergence of 46 large active regions was investigated by <cit.>, showing the relationship between subsurface acoustic signals and surface magnetic flux. In a similar vein, <cit.> observed disruption of the moat flow near active region AR12673, several hours before the onset of strong flux emergence. This disruption occurred exactly where magnetic flux would later emerge, identifying horizontal divergent flows at the solar surface as potential precursors to this flux.
These divergent and convergent flows that serve as precursors to the emergence of ARs have been extensively studied <cit.>. Therefore, to explore the complex subsurface processes preceding AR emergence from different vantage points, more recent studies have concentrated on the strengthening of the solar f-mode prior to emergence <cit.>. New calibration techniques used by <cit.> did not reveal a significant enhancement of the f-mode prior to AR emergence. This absence of f-mode variations, especially on smaller scales, could be attributed to the limited magnetic sensitivity and spatial resolution of HMI. While significant research has been devoted to understanding the complex behavior of subsurface activity preceding AR emergence, and despite the availability of relevant datasets <cit.>, no study has yet directly addressed the challenge of predicting the emergence itself.
To address this gap, and given the evidence that subsurface variations precede the emergence of solar ARs, in this research, we use the mean acoustic power and the mean magnetic flux derived from the SDO/HMI data to train ML models that predict the decrease in continuum intensity, associated with the emergence of active regions, before it becomes visible on the solar surface. To train the models, a dataset of 61 emerging ARs was created as described in Section <ref>. In Section <ref> we provide information about the Long Short-Term Memory (LSTM) models used in this research, such as their architecture and the training and testing methods employed. The AR prediction results are discussed in Sections <ref> and <ref>, while in Section <ref> we offer some discussion on the ML model predictions along with some recommendations for future work.
§ DATASET
To train and test LSTM models to forecast the emergence of ARs, a novel ML-ready dataset was created by tracking 61 ARs before, during, and after their emergence, using data from the Helioseismic and Magnetic Imager <cit.> onboard the Solar Dynamics Observatory <cit.>. Each selected AR appeared on the solar surface within 30 degrees longitude from the central meridian between March 1st, 2010 and June 1st, 2023, persisted for more than 4 days, and reached a total area of at least 200 millionths of the solar hemisphere. This longitude range was chosen to minimize significant distortion due to projection and center-to-limb effects.
The HMI instrument provides high-resolution maps of three different physical quantities: a) the Doppler velocity V_D, b) the line-of-sight magnetic flux Φ_m and c) the continuum intensity I_c on the solar surface. For all three data products, we tracked the corresponding 30.66×30.66 degrees (heliographic coordinates) area around the 61 ARs (Figure <ref>, top left) included in the dataset, taking into account the local rotation speed. These solar disc patches represent a 512×512-pixel square centered around the target AR and are tracked through time by dividing them into overlapping 8-hour time series. Each one of these overlapping 8-hour series is comprised of 640 frames, with a cadence of 45 seconds. Although the Φ_m and I_c maps are used without further processing, the Dopplergram (V_D maps) tracked regions are used to generate acoustic power (P_a) maps for four frequency ranges: 2–3, 3–4, 4–5, and 5–6 mHz.
The resulting P_a maps, along with the corresponding Φ_m and I_c tracked regions, were split into a grid of 9 by 9 tiles (Figure <ref>, left). By splitting the tracked regions, we focus the AR evolution tracing on a local 57 by 57-pixel area (corresponding to 3.36 degrees of heliographic longitude and latitude) which is then reduced to structured timelines conducive to machine learning analysis by calculating, for each frame, the mean of each individual tile (Figure <ref>, Input Box). This ensemble of mean P_a, Φ_m and I_c time-series is further processed by removing the solar sphere geometric effect and by normalizing them. Further details on the process used to create time-series data can be found in the Data Preparation Section of <cit.>.
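To make the tiling step concrete, the following Python sketch (with hypothetical array and file names; the exact pipeline follows <cit.>) reduces one tracked 512×512 map sequence to per-tile mean time series on the 9-by-9 grid:

```python
import numpy as np

def tile_means(frames, grid=9):
    """Reduce a tracked map sequence of shape (T, 512, 512) to per-tile mean
    time series of shape (grid*grid, T), using a 9-by-9 grid of roughly
    57x57-pixel tiles (512 // 9 = 56 here; any remainder pixels are ignored)."""
    T, H, W = frames.shape
    th, tw = H // grid, W // grid              # tile height/width in pixels
    series = np.empty((grid * grid, T))
    for r in range(grid):
        for c in range(grid):
            tile = frames[:, r * th:(r + 1) * th, c * tw:(c + 1) * tw]
            series[r * grid + c] = tile.mean(axis=(1, 2))   # mean per frame
    return series

# Example (hypothetical file): mean continuum intensity per tile for one AR patch
# ic_frames = np.load("ar_ic_tracked.npy")   # (T, 512, 512)
# ic_tiles = tile_means(ic_frames)           # (81, T)
```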
Out of the 61 tracked solar ARs, 46 emerged during Solar Cycle 24 and 15 during Solar Cycle 25. Data gaps and quality issues are present in 16 of the tracked ARs, which prohibit us from using them for ML training and testing without further investigation and processing (such as interpolation). Therefore, for the research presented in the current paper, only a total of 45 ARs are used, 40 for training and 5 for testing – almost a 90-10 train-test data ratio. Table <ref> outlines the mean values of the ARs latitude (Mean ϕ), traveled longitude (Mean Traveled λ), maximum area (Mean A_max), and the time of the AR's life (NOAA Record Span) on the observable disc for the training and testing datasets but also for the entire dataset (training and testing combined). The ARs in our dataset were observed on the solar disk for a period of 6.5 days on average, which corresponds to 86.7 degrees in heliographic coordinates, while the mean maximum area was 329.8 millionths of a hemisphere (mh). As mentioned, in this research, only ARs that were bigger than 200 mh were analyzed; therefore, the average AR in our dataset is small in size compared to larger ARs (such as AR11726) which can reach areas of up to 1000 mh. The mean latitude of the dataset is 7.6 heliographic degrees with a standard deviation of 17.17 degrees, showing that there is a diversity of ARs in terms of latitude, within the range of latitudes that ARs usually emerge.
Table <ref> presents the 5 ARs that the testing dataset includes and their characteristics. The first two emerged during Solar Cycle 24 (2013), and the other three during Solar Cycle 25 (2022-2023). The selection of ARs for the training and testing datasets was balanced to ensure equal representation of the two Solar Cycles in the testing group while also maintaining similar characteristics between the two groups. Another reason why the three most recent ARs in our dataset were chosen for testing is that we would like to simulate, as much as possible, an operational setting where we are trying to predict future events by training an ML model with data from the past. Similarly, Table <ref> in the Appendix presents information for the 40 ARs that are used for training. Both Tables <ref> and <ref> include information about the time of emergence and the time of disappearance behind the West limb, the latitude and longitude, the size of the ARs, and their classification at the time of maximum area. The coordinate units are degrees, and the size is measured in millionths of a hemisphere (Table <ref>). For classification of the ARs in our dataset, we use the three-component McIntosh classification <cit.> and the Mount Wilson (or Hale) classification <cit.>. The McIntosh classification follows the format 'Zpc', where 'Z' denotes the modified Zurich Class, 'p' characterizes the penumbra of the principal spot, and 'c' describes the distribution of spots within the group's interior.
ARs travel at an almost constant latitude ϕ. The latitude standard deviation (16.58 and 17.53 degrees) values are pretty high compared to the mean values (9.3 and -5.9 degrees) for the training and testing datasets; therefore, both groups have samples from a variety of different latitudes. More specifically, as seen in Table <ref>, the latitudes of the testing ARs vary from -20 to 13.5, while the mean traveled longitude in both datasets is very similar (87.6 and 79.8 degrees), with the testing dataset presenting slightly lower values, a difference reflected in the NOAA Record span too (6.6 versus 5.8 days). A shorter traveled longitude range and, therefore, shorter observed AR time on the disc for the testing dataset means that the testing ARs emerged further away from the East limb. This is important as during testing, we want an adequate amount of data (at least 110 hours, as will be discussed in Section <ref>) before the emergence of an AR in order to perform reliable prediction. Both datasets have similar size distributions (maximum AR area A_max), with the testing ARs presenting a higher mean only because of the inclusion in the testing dataset of AR11726, the biggest AR tracked in this research. AR11726 was specifically reserved for testing because not only did it grow rapidly by reaching 1000 mh within 7 days, but it is also one AR that caused a lot of eruptive activity. Predicting impactful space weather ARs is crucial for this research. All differences between training and testing data in Table <ref> are within the prescribed standard deviations; therefore, the two groups, despite some inevitable differences, are quite similar, ensuring the reliable testing of the trained LSTM models discussed in the next Section.
§ LSTM MODELS TRAINING, TESTING AND ARCHITECTURE
Understanding and capturing those changes in P_a and Φ_m that precede the emergence of an AR and thereby enable their prediction is a complex and challenging task. Long Short-Term Memory <cit.> networks have been for many years a popular machine learning method used successfully for various tasks, such as flood forecasting <cit.>, sea surface temperature <cit.> and even financial market <cit.> predictions. In this section, we discuss three key aspects of the LSTM models used for forecasting AR emergence: the training and testing methodologies employed and the architecture of the various networks.
We train the LSTM models using the P_a, Φ_m, and I_c time series that describe the evolution of the ARs in Tables <ref> and <ref>. Algorithm <ref> outlines the training process we follow. The first step is to load the ML-ready data, the data that have gone through the processing discussed in Section <ref> and are in the form of 240-hour-long time-series (one-hour cadence). Time-series for the four mean P_a frequency intervals, the Φ_m and I_c values (6 physical quantities), for each one of the 63 tiles of the 9-by-9 grid in Figure <ref> are used for training. The 9-by-9 grid has 81 tiles. The top and bottom rows are not used due to the geometric effect removal we apply to all data, bringing the total number of tiles to 63. This geometric effect normalization assumes that the top and bottom rows are parts of the quiet Sun and is further discussed by <cit.>. The processed ML-ready dataset has 17,010 timelines available (63 tiles × 45 ARs × 6 physical quantities), and it occupies no more than 150 MB of data storage per AR.
Similar to AR11158 (Figure <ref>, left), none of the 45 tracked ARs occupies more than 10 tiles out of the 81 present in the grid. Imbalanced datasets in machine learning are problematic because they can lead to biased models that are overfitted to the majority class, resulting in poor performance on the minority class, which, like in our application, is often the most critical to predict <cit.>. The imbalance between tiles that exhibit activity over time (`Emerging' tiles) and tiles that do not (`Quiet' or `Non-emerging' tiles) is dealt with by omitting the top and bottom three (predominantly non-active) rows of data during training. By not using part of the grid, we keep a balance between the number of active and non-active tiles. It was observed that without adding a weighting parameter, a model trained on the entire tracked patch tends to over-fit on the quiet Sun data.
After removing the dataset's imbalance, the remaining 2430 timelines (9 tiles × 45 ARs × 6 physical quantities) are split into training and testing datasets as discussed in Section <ref>. The training of the models is performed by iterating through the different ARs and the tiles within them, updating the model's weights separately for each tile. This approach ensures that the model recognizes each tile sample as a distinct region of the Sun rather than interpreting tile batches as different feature types within the same area. As seen in Algorithm <ref>, before the AR and Tile iterations begin, we initialize the optimizer and set the learning rate (LR) of our choice. In this work, we use the optimizer provided by the library[https://pytorch.org/]. For every AR (Algorithm <ref>, line 6) and every Tile within the AR (line 7), the training data is split into inputs and targets (line 8). Here, the four different P_a time series are grouped together with the equivalent Φ_m observation to create the input that will be used to predict the target. In this first part of our research, the target is set to be the I_c timeline.
The timelines are also processed using a sliding window approach. The number of input (In) and output (Out) data points, assuming that In+Out<240 (240 total observations for every AR), determines the size of the sliding window that moves through the 240-long timelines, creating multiple overlapping input-output data pairs. The number of input and output points are hyper-parameters set before the training process begins (Algorithm <ref>, line 1). Other hyper-parameters include the learning rate (LR), the number of layers (LN), the number of units (U), and the number of epochs (E), as presented in Table <ref>. This table lists the hyper-parameter values for the ten best-performing ML models trained within the parameter space explored in this research. For example, the best performing model (M8, marked in bold on Table <ref>) has an input of 110 observations, corresponding to 110 hours (1 hour cadence), and an output of 12 observations, therefore a 122 hours wide window. This window is slid through the 240-hour timeline 118 times (240-(In+Out)), generating 118 distinct input-output pairs.
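A minimal sketch of this sliding-window pairing for one tile, assuming hourly series of length 240 and the hypothetical variable names below, could look as follows:

```python
import numpy as np

def make_windows(pa, phi, ic, n_in=110, n_out=12):
    """Build overlapping input/target pairs from one tile's 240-hour series.
    pa  : (4, 240) mean acoustic power in the four frequency bands
    phi : (240,)   mean unsigned magnetic flux
    ic  : (240,)   mean continuum intensity (prediction target)
    Returns inputs of shape (N, n_in, 5) and targets of shape (N, n_out),
    where N = 240 - (n_in + n_out) is the number of window positions."""
    feats = np.vstack([pa, phi[None, :]]).T                  # (240, 5)
    T = feats.shape[0]
    N = T - (n_in + n_out)
    X = np.stack([feats[s:s + n_in] for s in range(N)])                   # (N, n_in, 5)
    y = np.stack([ic[s + n_in:s + n_in + n_out] for s in range(N)])       # (N, n_out)
    return X, y
```

With n_in = 110 and n_out = 12, this produces the 118 overlapping input-output pairs mentioned above for Model 8.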
Before the training iterations begin (Algorithm <ref>, line 12) and the model's weights start being updated, the scheduler is initialized using the PyTorch function. Subsequently, the training data is transferred to the Graphics Processing Unit (GPU) to enable the network’s forward pass and weight update computations to be executed there. We use the CUDA parallel computing platform available through PyTorch to train models on the NASA High-End Computing (HECC[https://www.nas.nasa.gov/hecc/]) GPUs and make the training process more time-efficient. Models with layers, units, and epoch numbers similar to those of Model 8 (M8) take less than 2.5 hours to train. The scheduler for the models in Table <ref> is configured to decrease the learning rate by 10% with each new epoch.
Within each epoch, a sequence of processes, common in ML, takes place. The input data (observed P_a and Φ_m) are passed through the LSTM model (Algorithm <ref>, line 13), and an output I_c prediction vector, matching in size (Out) the target vector, is produced. The optimizer gradients are zeroed (line 14) for the new ones to be calculated during backpropagation (line 16) based on the new loss calculated in line 15. In this research, the loss is determined using the Mean Square Error (MSE) formula provided by the PyTorch function by comparing the output of each forward pass to the true observations. The model weights get updated using the calculated gradients and the learning rate LR (line 17), while validation is also performed during every iteration to ensure the proper loss reduction.
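The per-tile update loop described above can be sketched as follows; the optimizer and scheduler classes are assumptions (the text does not name them), and validation and logging are omitted:

```python
import torch

def train_tile(model, X, y, epochs=1000, lr=0.01, device="cuda"):
    """Sketch of the per-tile weight update.
    X: (N, n_in, 5) float tensor of P_a (4 bands) + Phi_m inputs,
    y: (N, n_out)   float tensor of target I_c values."""
    model.to(device)
    X, y = X.to(device), y.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=lr)                 # optimizer choice is an assumption
    sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.9)    # -10% learning rate per epoch
    loss_fn = torch.nn.MSELoss()
    for _ in range(epochs):
        pred = model(X)          # forward pass -> (N, n_out) predicted I_c
        opt.zero_grad()          # zero the gradients
        loss = loss_fn(pred, y)  # MSE against the observed I_c targets
        loss.backward()          # backpropagation
        opt.step()               # weight update
        sched.step()             # learning-rate decay
    return model
```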
In this research, an LSTM Encoder-Decoder architecture is selected as demonstrated in the top right diagram of Figure <ref>. The network architecture depends on two hyperparameters: the number of layers LN and the number of units within each layer U. More specifically, the models presented in Table <ref> consist of an LSTM Encoder, which encodes the input and forwards the cell state (c) and the hidden state (h) to an LSTM Decoder, which decodes the information and, with the help of a fully connected (FC) layer, produces a prediction y. For each model, both the encoder and the decoder contain LN LSTM layers (implemented with the LSTM class in PyTorch) and each layer contains U LSTM units. More than 200 models were trained, exploring parameter spaces for different combinations of hyperparameters. The models that, when tested, produced average RMSE values ≤ 0.133 across all five testing ARs are presented in Table <ref>.
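A minimal PyTorch sketch of such an encoder-decoder network is given below; the exact way the decoder is fed is an assumption, while the layer and unit counts follow the LN and U hyperparameters (defaults correspond to Model 8):

```python
import torch
import torch.nn as nn

class LSTMEncoderDecoder(nn.Module):
    """Encoder-decoder LSTM with a fully connected output head."""
    def __init__(self, n_features=5, n_out=12, num_layers=3, units=64):
        super().__init__()
        self.encoder = nn.LSTM(n_features, units, num_layers, batch_first=True)
        self.decoder = nn.LSTM(units, units, num_layers, batch_first=True)
        self.fc = nn.Linear(units, 1)
        self.n_out = n_out

    def forward(self, x):                        # x: (N, n_in, n_features)
        enc_out, (h, c) = self.encoder(x)        # hidden/cell states passed on
        # feed the last encoder output, repeated n_out times, to the decoder
        dec_in = enc_out[:, -1:, :].repeat(1, self.n_out, 1)
        dec_out, _ = self.decoder(dec_in, (h, c))
        return self.fc(dec_out).squeeze(-1)      # (N, n_out) predicted I_c
```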
As soon as training is over, testing takes place by freezing the weights of the model (preventing them from changing) and forward-passing the time-series data from the 5 testing ARs. Here, the I_c time series, rather than being used as targets for training, are used as the observed true values to which we compare the prediction outputs and evaluate the models. The E and LR hyperparameters do not matter during testing, and LN and U remain the same because they define the structure of the model; therefore, In and Out are the only variables that can differ between training and testing. For instance, even if a model was trained with Out_train=5, it can be tasked during testing to predict 24 observations ahead (Out_test=24), instead of 5. Similarly, although a model might be trained by inputting 120 observations (In_train=120), we can test it by inputting a smaller number of observations. This is useful in cases like AR11726 (Table <ref>), where the emergence of the AR (the stage of the AR evolution that we care the most about) happens closer to the East limb. In such cases, fewer observations before emergence are available, and therefore, prediction needs to happen with smaller In values. To avoid any confusion, in this research, the output vector is always 12 predictions long (Out_test = 12), from which we use the very last element, as it represents the 12-hour-ahead prediction. Similarly, the input vector is always 96 hours (In_test = 96), except for AR11726, for which it is 72 hours for the reasons discussed above.
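Testing then reduces to a forward pass with frozen weights, keeping only the last of the 12 predicted values; a minimal sketch using the model from the previous block:

```python
import torch

@torch.no_grad()
def predict_12h_ahead(model, x_recent):
    """Forecast with frozen weights.
    x_recent: (1, In_test, 5) tensor with the most recent In_test hourly
    observations; only the last of the 12 predicted hourly values is used
    as the 12-hour-ahead prediction."""
    model.eval()
    pred = model(x_recent)           # (1, 12) forecast
    return pred[0, -1].item()
```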
Testing of the model performance is done in two ways: a) by calculating the Root Mean Squared Error (RMSE) between the predicted and the observed I_c time series and b) by comparing the timestamps of true and predicted emergence (the entirety of Section <ref> is dedicated to this method). For the 12-hour prediction problem discussed here, and given that we input observations taken over 96 hours from a total of 240 observations, we are able to do predictions for n = 132 hours. This 132-entry-long prediction vector y_pred is compared with the corresponding vector of observed values y_obs, and the RMSE is calculated using Equation <ref>:
RMSE = √( (1/n) ∑_{i=1}^{n} ( y_{obs,i} - y_{pred,i} )^2 ),
where n is the number of observations, and y_obs and y_pred are the observed and predicted I_c values, respectively.
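The metric can be computed directly from the two vectors, for example:

```python
import numpy as np

def rmse(y_obs, y_pred):
    """RMSE between observed and predicted continuum intensity (equation above)."""
    y_obs, y_pred = np.asarray(y_obs), np.asarray(y_pred)
    return np.sqrt(np.mean((y_obs - y_pred) ** 2))
```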
Table <ref> presents the average RMSE across all tiles that were tested (four emerging and three non-emerging tiles) using the five testing ARs. The average RMSE value across all models for AR11726 is 0.101, compared to 0.153 for AR13179. Although this difference shows that it is more difficult for all models to accurately predict the variations of I_c in AR13179 compared to AR11726, it does not necessarily mean that predicting the emergence of AR13179 is more difficult than predicting that of AR11726. This is because the RMSE measures the agreement between predicted and observed time series, but it does not include any information about the success of the AR emergence prediction. Regardless, the RMSE is a good first metric for evaluation of model performance during the phase of hyperparameter space exploration. The regions of the hyperparameter space that produced low RMSE were explored again by varying one hyperparameter at a time. An example of such an explored region is that of Model 8 (M8; Figure <ref>).
Model 8 has 3 layers, each one comprised of 64 units, in its encoder and decoder, and was trained using In = 110 hours, Out = 12 hours, E = 1000 iterations, and LR = 0.01. In panel (a) of Figure <ref> we present the RMSE results of the four Model 8 variations that have In = 80, 90, 100, 120 instead of the 110 that Model 8 originally has. Panel (b) presents results for another four Model 8 variations trained using the original In = 110, but with LN = 1, 2, 4, 5 compared to the 3 layers that Model 8 originally has. Similarly, panels (c), (d), (e), and (f) present variations of Model 8 with variable numbers of U, E, LR, and Out, respectively. In all six plots of Figure <ref>, the RMSE results of Model 8 are marked in bold. All 25 models (24 variations and Model 8) presented, although trained using different hyperparameters, are tested using 96-hour inputs to predict the I_c 12 hours ahead. This is the testing setup used for all results presented in this paper.
As mentioned, within the tracked solar regions included in the dataset, there are some tiles that experience emergence of magnetic flux (emerging tiles, Figure <ref>) and some that never experience activity (non-emerging tiles) during the 240 hours of tracked observations. While Table <ref> reports the average RMSE values for all tiles, Figure <ref> distinguishes between the average RMSE for non-emerging tiles (red) and emerging tiles (green), presenting them separately. In all plots, the RMSE values of the non-emerging tiles are always smaller than those of the emerging tiles, which shows that Model 8 and its variations are very good at predicting that non-emerging regions will remain quiet. The shadow behind each RMSE line indicates the standard deviation between the scores of the five ARs used for the model testing. A more extended shadow behind a model's RMSE value means that the model had difficulty predicting some ARs compared to others. Models with smaller RMSE standard deviations (e.g., Model 8) tend to be more reliable in reconstructing the evolution of continuum intensity.
In the search for the model that performs the best, when only taking into consideration the RMSE, one could conclude by observing Figure <ref>, that Model 8 uses the number of layers that produces the least erroneous results but could benefit from a slight increase in the number of epochs and units. There are two major drawbacks we need to consider when increasing the number of epochs, E, and units, U. The most obvious is the training time. A 128-unit model takes almost double the time to train compared to a 64-unit model, in the same way that a 1500-epoch training would increase the training time by 50%. The second and most important reason for picking Model 8 over the seemingly better (in terms of RMSE) variants is that it provides more accurate alarms for the emergence prediction problem than the other models. The way we define the emergence of an active region and the way we evaluate the models based on such a definition will be the main topic of discussion in the next Section.
§ ACTIVE REGION EMERGENCE PREDICTION
There is currently no community-wide accepted definition of active region emergence. For instance, NOAA effectively defines emergence only at a 24-hour granularity, as it issues a Solar Region Summary[https://www.swpc.noaa.gov/products/solar-region-summary] report at the end of each UTC day. Sometimes, the time of AR emergence is defined as the time when the emerging magnetic flux is associated with strong diverging flows and decreased continuum intensity following the formation of the active region. To be able to evaluate the performance of the developed ML models, we define AR emergence as the time when the continuum intensity decreases for more than 3 hours at a rate exceeding 0.01 (in relative units):
dI_c/dt < -0.01 for t_sus > 3 hours,
where I_c is continuum intensity (predicted or observed), and t_sus is the sustained time. This definition of a moment of AR emergence reflects what an observer would define as emergence during visual inspection.
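A direct implementation of this criterion on an hourly I_c series could look as follows (the convention for which hour is reported as the emergence time is an assumption):

```python
import numpy as np

def emergence_time(ic, threshold=-0.01, t_sus=3):
    """First hour at which I_c starts a decrease faster than `threshold`
    that is sustained for more than `t_sus` hours (criterion above).
    `ic` is an hourly continuum-intensity series; returns None if the
    criterion is never met."""
    dic = np.diff(ic)                # dI_c/dt at 1-hour cadence
    run = 0
    for t, d in enumerate(dic):
        run = run + 1 if d < threshold else 0
        if run > t_sus:              # decrease sustained for more than 3 hours
            return t - run + 1       # hour where the sustained decrease began
    return None
```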
For instance, we evaluate Model 8 (Table <ref>) on predicting the emergence of AR13179. Each panel in Figure <ref> represents the observed (orange curves) and predicted (blue curves) evolution of the continuum intensity for the seven tiles (38 - 44) enclosed in red squares in the bottom right I_c maps. The time derivative for observed and predicted continuum intensity, dI_c/dt, is included at the bottom of each panel. Color-coded in green is the non-emergence state, while red is emergence - the parts of the dI_c/dt curves that fulfill Equation <ref>. The LSTM model can distinguish between tiles that remain quiet throughout observations and tiles that exhibit activity. For example, the model predictions show a good agreement with observations in both cases when emergence is apparent (e.g., tiles 40-41; Figure <ref>), and when the analyzed areas remain quiet (e.g., tiles 38-39).
Typically, the model prediction deviates from observations after AR emergence (e.g., tiles 42-43). This discrepancy can be explained by the interaction of emerging magnetic fields and the reorganization of the convection structure at the photosphere, which affects the properties of the power maps. Another reason for the appearance of such discrepancies is that the training dataset is mainly focused on the period during which the AR is emerging, rather than the later AR evolution period. Despite these limitations, we can often see a remarkable qualitative agreement with observations over significant time of the AR evolution (e.g., tiles 40-42 for AR13179; Figure <ref>).
To identify the moment of AR emergence both in predictions and observations, we use the criterion on the rate of decrease of the continuum intensity (Equation <ref>). During the emergence and formation of the active region, the magnetic flux can spread across several tiles. Therefore, we assume that the AR emergence start time (predicted or observed) is the earliest moment when the emergence criterion is satisfied (e.g., tile 41 in Figure <ref>). Thus, the model predicted emergence on 2022-12-28 18:00 (black dotted line in Figure <ref>, First Warning), half a day before the observed emergence at 2022-12-29 06:00 (red dotted line in Figure <ref>, Emergence Start). The satisfaction of this criterion in other tiles (red dI_c/dt) is caused by the expansion of the emerging region into these tiles or by the movement of magnetic flux together with diverging flows during emergence.
While it is logical for the first tile where emergence is observed to also be the one where Model 8 triggers its initial alarm, it is not always the case, as P_a flows move and can potentially trigger the Model 8 emergence alarm in neighboring tiles. One such case is AR11726 (Figure <ref> in Appendix), where although the first activity warning is produced in Tile 41 at 2013-04-19 14:00, the first observed emergence happens on the neighboring Tile 42 at 2013-04-19 19:00. The existence of incorrect local activity predictions (ILAP) can also be attributed to this movement of flows between the arbitrarily chosen tiles of our analysis and they will be further discussed later in this text. Nonetheless, for four out of the five ARs we have included in our testing dataset, our model signals its first activity alarm (moment of the emergence prediction) not only at the same tile it was observed, but also many hours before it was observed, as seen in the AR Emergence Prediction (Experiment) column of Table <ref>.
If we consider tiles as independent instances, we can evaluate the local continuum intensity evolution and the possibility of predicting the magnetic flux increase. This approach can be useful for future studies to capture situations in which emergence takes place in different tiles close to each other. Tile 42 for example (Figure <ref>), based on the observed I_c, presents its first observed activity at 2022-12-29 19:00 while the first activity is predicted 12 hours ahead for 2022-12-30 05:00, therefore Model 8 produces its first activity alarm at 2022-12-29 17:00, 2 hours ahead of the observed activity. Similarly, tile 40 has its first activity observed at 2022-12-30 01:00 while the first activity prediction is at 2022-12-29 17:00, which means that the first activity alarm was produced at 2022-12-29 05:00, 20 hours ahead of the observed. Therefore, if we treat each tile of AR13179 independently, activity on tile 40 is predicted 20 hours ahead, on tile 41, 12 hours ahead, and on tile 42, 2 hours ahead. These independent tile AR emergence forecast times are also tabulated in Table <ref>, under AR13179 and the respective tile number (Activity Prediction for Selected Tiles columns). Following the same analysis for the rest of the active regions used for testing (AR11698, AR11726, AR13165, and AR13183) we obtain the remaining forecast times in Table <ref>. Corresponding plots for these testing ARs can be found in the Appendix (Figures <ref> – <ref>).
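The alarm-time arithmetic used above can be summarized in a short helper; the tile 42 values quoted in the text serve as a check:

```python
from datetime import datetime, timedelta

def alarm_lead_time(pred_emergence, obs_emergence, horizon_h=12):
    """Hours by which the model's first alarm precedes the observed emergence:
    the alarm is issued `horizon_h` hours before the time for which emergence
    is predicted (12-hour forecast window)."""
    alarm = pred_emergence - timedelta(hours=horizon_h)
    return (obs_emergence - alarm).total_seconds() / 3600.0

# Tile 42 of AR13179: predicted for 2022-12-30 05:00, observed 2022-12-29 19:00
# -> alarm at 2022-12-29 17:00, i.e., 2 hours ahead of the observed activity.
lead = alarm_lead_time(datetime(2022, 12, 30, 5), datetime(2022, 12, 29, 19))
assert lead == 2.0
```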
All models presented, during testing, had output vectors of length Out = 12 hours (prediction window), with a data cadence of 1 hour. In reality, there are two reasons for which, in an operational setting, where predictions are produced in real-time, this model would only be able to predict values 5 hours in advance. The first reason stems from the definition of the AR emergence given in Equation <ref>, which requires the decrease of the derivative under a specified threshold for 3 consecutive hours. In an operational setting, to produce an alarm, you would have to wait for 3 hours until the Equation <ref> condition gets fulfilled, and therefore 3 hours after the first alarm noted in Figures <ref> – <ref>. Another four hours are lost due to the process of calculating P_a maps. To calculate the P_a maps we use 8-hour Dopplergram sequences, but here the recorded power map time is not the last time-stamp of the sequence, but the midpoint. For this reason, in an operational setting, the entire analysis would have to be shifted by four hours. Accounting for the 3 hours lost because of the emergence definition and another 4 hours because of the Dopplergram time definition, a total of 7 hours of prediction window is lost when simulating an operational environment.
In Table <ref>, we summarize results for selected tiles of five 30.66×30.66 degrees areas. Some of these tiles exhibit an increase in magnetic flux, and some of them remain quiet. For all testing ARs, presented in Table <ref> are the results for 4 emerging (Active) and 3 non-emerging (Quiet) tiles, except for AR11698 and AR13179, for which only three tiles present activity. The prediction values were produced assuming an experimental setting where the entire 12-hour forecast window is available and the operational constraints (7 hour prediction delay) were not applied. The values in orange are the cases where, although Model 8 predicted the tile activity or AR emergence experimentally, the result would be a late alarm if the operational constraints were to be applied.
The operational predictions for the entire ARs are tabulated at the right-most column in Table <ref>. These values represent the experimental forecast values with the 7-hour delay (due to operational processing) subtracted. Model 8 is able to predict successfully and in an operational setting AR11726, AR13165, and AR13179, 10, 29, and 5 hours ahead respectively (AR Emergence Prediction columns). Note that for AR13179, Model 8 produced its first alarm in a different (neighboring) tile than the one in which emergence was observed. Late operational alarms were produced for AR11698 and AR13183, but only for 3 and 2 hours, respectively. Independent LSTM analysis for selected tiles of the five 30.66×30.66 degrees solar disc areas shows correct prediction for 26 out of the 35 tiles (74%) in an operational setting. The prediction success rate is higher for quiet tiles, with only 2 ILAPs and 15 correct true negatives (88%), whereas for the active tiles, there are 11 operationally correct forecasts and 7 late alarms (61%).
§ INCORRECT LOCAL ACTIVITY PREDICTIONS
Treating each tile independently allows us to explore the sensitivity of the LSTM model to variations in the mean evolution of the convection oscillatory properties, the continuum intensity, and the present changes of magnetic flux. It is important to note that during AR emergence, a strong diverging flow may transport magnetic flux from one tile to another, which may cause strong variations in the acoustic power. Therefore, after the first alarm of AR emergence (tile 41, Figure <ref>), the LSTM model correctly predicts the incoming activity of neighboring tiles (tiles 40 and 42). In this section, we discuss a few cases in which the LSTM model incorrectly predicted such activity.
§.§ AR11698, Tile 50
The LSTM activity prediction in the vicinity of AR11698 (Figure <ref>) shows incorrect local activity predictions (ILAP) in tile 50 (red part of the derivative curves in Figures <ref> and <ref>). The decrease of the continuum intensity predicted by different LSTM models (Figure <ref>, upper left panel) was driven by a significant suppression of the 3-4 mHz mean power that precedes the increase in the unsigned magnetic flux by 4 hours, while the other frequency ranges do not show strong changes with time. Interestingly, Model 8 is more sensitive to changes in the integrated properties of oscillations, while Models 1 and 2 reveal less sensitivity to the background conditions at the photosphere.
The origin of the discrepancies between predicted and observed continuum intensity is tightly connected with the magnetic flux evolution during the emergence and formation of AR11698. In this case, the emergence of the active region was initiated in tile 49. During its emergence and evolution, the two-polarity magnetic flux quickly moved in opposite directions, forming AR11698. Tile 50 is located between opposite-polarity sunspots. Therefore, the mixture of magnetic patches of both polarities on the one hand exhibits an elevated unsigned magnetic flux, which suppresses the power of stochastic oscillations, and on the other hand prevents the magnetic fields from organizing into a coherent structure such as sunspots or pores, keeping the continuum intensity similar to that of a quiet region.
§.§ AR13179, Tile 43
Similarly, Models 1 and 8 predict some magnetic activity on tile 43 in the vicinity of emerging AR13179 (Figures <ref> and <ref>, right upper panels), whereas no significant changes in the observed mean continuum intensity are detected. In this case, even though emergence started in tile 42, the magnetic flux shows a slow, gradual increase. In general, the mean power for the 3-4 mHz and 4-5 mHz frequency ranges shows a significant enhancement and variations in time (Figure <ref>, right bottom plot), which makes interpretation of the results challenging. These strong fluctuations in the power of oscillations were captured by Models 8 and 1 and were interpreted as a sign of upcoming activity. This example illustrates a possible way to improve our LSTM modeling, as, in its current version, the model considers relative variations of the acoustic power, which prevents it from recognizing the background enhancement in the acoustic power. Nevertheless, this existing limitation did not prevent us from obtaining a reliable emergence prediction for AR13179.
§ CONCLUSIONS AND FUTURE WORK
This paper addresses the problem of predicting the emergence of active regions (ARs) by developing a dataset that includes 61 ARs tracked with solar rotation before and after emergence. This dataset was used to generate acoustic power time-series for four different frequency ranges. In this research only 45 ARs were utilized due to the presence of data gaps in the remaining 16 ARs. Using the acoustic power (P_a) and unsigned magnetic flux (Φ_m) time series as input, we developed LSTM models to predict decreases in the continuum intensity (I_c) associated with the emergence of an AR. Despite utilizing four frequency ranges to predict AR emergence, we found that the power maps for the 3-4 and 4-5 mHz frequency ranges carry most of the information related to the upcoming emergence of an AR.
The trained models' performance was tested on 5 ARs, unseen during training, using two evaluation criteria: the RMSE between predicted and observed intensity, and the comparison between true and predicted time of emergence, based on the Equation <ref> criterion. One of the best-performing models trained during this research, Model 8, succeeded in predicting the emergence of all 5 ARs in an experimental setting and 3 of them in an operational setting. The model predicted the emergence of AR11726, AR13165, and AR13179, 10, 29, and 5 hours in advance, respectively, while variations of this model achieved average RMSE values that are as low as 0.06 for quiet (non-emerging) tiles and 0.16 for active (emerging) tiles. This LSTM network application demonstrates the ability of an ML model to capture solar P_a anomalies and predict I_c variations, resulting in the first trained network of its kind capable of forecasting AR emergence. By analyzing the inputs of the ML models during training and comparing them to the output predictions, we identify the necessary improvements in the model's training and data curation that will allow for better AR emergence predictions in the future.
The three models presented in Figure <ref> produce incorrect local activity predictions (ILAPs) because they are spatially agnostic. During training, there is no information provided to Models 1, 2 and 8 regarding which AR each tile belongs to, which part of the AR it belongs to, or which tiles are its neighbors. This training scheme, which assumes the independence of each tile, does not allow the network to understand the relationship between P_a variations in neighboring tiles. Because P_a often flows across the arbitrarily chosen borders of the tiles, a decrease in P_a on one tile can potentially lead to a decrease in I_c, not in the same tile but in a neighboring one, as seen in Figure <ref>. To solve this problem, methods that inform the model about the spatial arrangement of the tiles should be used. Spatially informed models will be able to relate the variations of P_a not only to the I_c of the same tile but also to the P_a and I_c of different tiles. Assigning coordinates to the time series of each tile used as input during training could potentially lead to the reduction of ILAPs.
The analysis of the AR emergence results highlights potential improvements not only for the ML-ready dataset but also for the methods used to train and test the ML models. The 9-by-9 grid setup used here produces 81 tiles, for the majority of which (>90%) no activity can be detected. This active-quiet tile imbalance is addressed by omitting the majority of quiet tile time series during training to create a balance between the two types of data. This training technique, although adequate for training the models presented, discards a large amount of training data, which can potentially carry useful information related to AR emergence. To address the imbalance between positive and negative instances but also train on all the data available, space weather prediction works <cit.> have either used weighting factors or randomly sampled different negative instances in every training epoch, two methods that are also applicable to this research and could potentially improve the prediction capabilities of the models.
The prediction of ARs involves analyzing complex temporal data collected from various parts of the solar surface; therefore, capturing long-range dependencies between P_a, Φ_m and I_c patterns is crucial. Although LSTM models are fit for such tasks, in recent years, Transformers <cit.> have been the most prominent alternative due to their self-attention mechanism, which allows them to simultaneously process different input data (e.g., different P_a frequency ranges) and establish relationships across long sequences more effectively than the sequential processing approach of LSTMs. This ability to maintain global context without losing information over long time spans is particularly beneficial for understanding the intricate and extended temporal patterns of emerging AR data. Furthermore, Transformers' parallel processing capability significantly accelerates the training of large datasets, which is essential if training is performed without discarding any non-emerging tiles in the training dataset, as in such cases, the training time increases by at least a factor of 8.
Throughout this work, two metrics have been used to evaluate the LSTM models' capabilities of predicting the emergence of ARs: the RMSE metric and the emergence criterion. While RMSE is a commonly used criterion in ML time-series prediction, the emergence definition outlined in Equation <ref> was specifically devised for this research due to the absence of a precise AR emergence start time definition in the literature. This definition of the emergence time of an active region is a mathematical description of what a human, observing the I_c timelines, would define as an AR: a sustained and substantial decrease of I_c. It should be understood that although not arbitrary, our definition is chosen to fit specifically the problem of predicting the 5 ARs in our testing dataset. Future works should revisit this definition by not only taking into account more than 5 AR samples, but also by considering more physical parameters (such as Φ_m, V_D etc.) in order to make it more generalized and applicable to different problems.
§ ACKNOWLEDGMENTS
We want to thank the NAS Visualization Team (Nina McCurdy, Timothy Sandstrom, Christopher Henze) for their help with this project’s visualizations. This work is supported by the NASA AI/ML HECC Expansion Program, NASA Heliophysics Supporting Research Program, and the NASA grants 80NSSC19K0630, 80NSSC19K0268, 80NSSC20K1870, and 80NSSC22M0162. The code and data used to produce the results presented in this manuscript can be found at <zenodo.com/tbd>
http://arxiv.org/abs/2409.17953v1 | 20240926152952 | Optimal trace-distance bounds for free-fermionic states: Testing and improved tomography | [
"Lennart Bittel",
"Antonio Anna Mele",
"Jens Eisert",
"Lorenzo Leone"
] | quant-ph | [
"quant-ph",
"math-ph",
"math.MP"
] | null | null | null | null | null | null |
|
http://arxiv.org/abs/2409.17759v1 | 20240926115325 | LGFN: Lightweight Light Field Image Super-Resolution using Local Convolution Modulation and Global Attention Feature Extraction | [
"Zhongxin Yu",
"Liang Chen",
"Zhiyun Zeng",
"Kunping Yang",
"Shaofei Luo",
"Shaorui Chen",
"Cheng Zhong"
] | eess.IV | [
"eess.IV",
"cs.CV"
] |
§ ABSTRACT
Capturing different intensities and directions of light rays from the same scene, a light field (LF) can encode 3D scene cues into a 4D LF image, which has a wide range of applications (e.g., post-capture refocusing and depth sensing). LF image super-resolution (SR) aims to improve the image resolution, which is limited by the performance of the LF camera sensor. Although existing methods have achieved promising results, the practical application of these models is limited because they are not lightweight enough.
In this paper, we propose a lightweight model named LGFN, which integrates the local and global features of different views and the features of different channels for LF image SR. Specifically, since neighboring regions at the same pixel position in different sub-aperture images exhibit similar structural relationships, we design a lightweight CNN-based feature extraction module (namely, DGCE) to better extract local features through feature modulation. Meanwhile, as positions beyond the boundaries in the LF image present large disparities, we propose an efficient spatial attention module (namely, ESAM), which uses decomposable large-kernel convolution to obtain an enlarged receptive field, and an efficient channel attention module (namely, ECAM). Compared with existing LF image SR models with large parameter counts, our model has 0.45M parameters and 19.33G FLOPs, achieving competitive results. Extensive experiments with ablation studies demonstrate the effectiveness of our proposed method, which ranked second in the Track 2 Fidelity & Efficiency of the NTIRE2024 Light Field Super Resolution Challenge and seventh in the Track 1 Fidelity.
§ INTRODUCTION
LF cameras can capture varying intensities and directions of light rays within the same scene, encoding the 3D scene cues into a 4D LF image (comprising spatial and angular dimensions). This technology finds wide applications, including post-capture refocusing<cit.>, depth sensing<cit.>, virtual reality<cit.>, and view rendering<cit.>. However, due to the limitation of sensor performance, there exists a trade-off between the spatial resolution and angular resolution of LF images. How to improve the resolution of LF images is currently a prominent research challenge.
Traditional LF SR methods<cit.> mainly focus on finding sub-pixel information and warping multi-view images based on estimated disparities. However, the performance of these methods heavily depends on accurately estimated disparities, which are difficult to obtain in low-resolution LF images and in complex imaging environments such as occlusions and non-Lambertian reflection<cit.>.
In recent years, deep learning-based methods have been widely used. Yoon et al.<cit.> proposed the first CNN-based LF image SR model (i.e., LFCNN), which used SRCNN to super-resolve each sub-aperture image (SAI). Afterwards, many methods have adopted CNN-based architectures to integrate information from different angular views and improve SR performance<cit.>. Besides directly processing the 4D LF data, some methods extracted two kinds of features by designing spatial and angular feature extractors and let the two interact with each other<cit.>. Apart from the CNN-based LF image SR methods, Transformer-based LF methods have also been proposed. Wang et al.<cit.> proposed a detail-preserving Transformer (DPT) for LF image SR. Liang et al.<cit.> proposed a simple yet efficient Transformer method for LF image SR. Liang et al.<cit.> proposed EPIT for LF image SR by learning non-local spatial-angular cooperation. Jin et al.<cit.> proposed the DistgEPIT model, which learns global and local features of LF images by designing an attention branch and a convolution branch, respectively. While existing methods have achieved promising results, the practical application of these models is limited due to excessive parameters and FLOPs. As shown in Fig.1, the parameter counts of most existing LF image SR methods are above 1M. This limitation prompts our research into lightweight LF image SR.
As shown in Fig.2, adjacent areas at the same pixel position across different SAIs exhibit similar structural relations, which are suitable for processing by a local feature extraction module. On the other hand, positions outside the boundary in the LF image exhibit large parallax, which requires aggregation of contextual features across SAIs.
To consider these two aspects and the requirement of a lightweight model, we propose a lightweight local and global feature learning model named LGFN. By integrating both the local and global features of different views and the features of different channels, our lightweight model achieves competitive results compared with existing models with larger parameter counts. Specifically, our model adopts convolution modules for local representation. Different from existing CNN-based methods<cit.> that use complex network structures, we propose a simple yet efficient convolution module designed to extract local features through feature modulation performed by two parallel convolution branches.
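One plausible form of such a modulation module is sketched below in PyTorch; the branch design and kernel sizes are assumptions for illustration, not the exact DGCE architecture:

```python
import torch
import torch.nn as nn

class ConvModulation(nn.Module):
    """Sketch of convolutional feature modulation: two parallel convolution
    branches whose outputs are multiplied element-wise (assumed design)."""
    def __init__(self, channels):
        super().__init__()
        self.branch_a = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.GELU())
        self.branch_b = nn.Conv2d(channels, channels, 3, padding=1)
        self.proj = nn.Conv2d(channels, channels, 1)

    def forward(self, x):                      # x: (B, C, H, W)
        return x + self.proj(self.branch_a(x) * self.branch_b(x))
```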
In addition, we adopt an attention mechanism to extract contextual features. Different from existing Transformer-based methods <cit.> with quadratic complexity in the number of visual tokens, we propose a simple yet efficient spatial attention module, whose attention weight branch uses decomposable large-kernel convolution to obtain an enlarged receptive field and is multiplied with the identity branch to extract contextual features. Besides, an efficient channel attention module (namely, ECAM) is introduced to enhance the features between channels. In order to further refine the feature extraction, we extract the local and global features along the horizontal and vertical directions, respectively.
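A minimal sketch of one possible decomposed large-kernel spatial attention of this kind is given below (kernel sizes, dilation, and wiring are assumptions, not the exact ESAM design):

```python
import torch
import torch.nn as nn

class DecomposedLargeKernelAttention(nn.Module):
    """Assumed design: a large receptive field obtained by decomposing a
    large kernel into a depth-wise conv, a depth-wise dilated conv and a
    1x1 conv; the resulting map modulates the identity branch."""
    def __init__(self, channels):
        super().__init__()
        self.dw = nn.Conv2d(channels, channels, 5, padding=2, groups=channels)
        self.dw_dilated = nn.Conv2d(channels, channels, 7, padding=9,
                                    dilation=3, groups=channels)
        self.pw = nn.Conv2d(channels, channels, 1)

    def forward(self, x):                      # x: (B, C, H, W)
        attn = self.pw(self.dw_dilated(self.dw(x)))
        return attn * x        # attention weights multiply the identity branch
```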
Our main contributions can be summarized as follows:
* We design a lightweight convolution modulation module named DGCE to extract the local spatial features of LF images. A lightweight spatial attention module named ESAM with an enlarged receptive field is designed to extract global features. In order to further refine the feature extraction, we extract the local and global features along the horizontal and vertical directions, respectively.
* We design an efficient channel attention module named ECAM, which uses channel-wise statistics to model the correlations between different channels.
* We propose a lightweight LF image SR model named LGFN, which has 0.45M parameters and 19.33G FLOPs. Compared with existing LF image SR models with larger parameter counts, it achieves competitive results, winning second place in the Track 2 Fidelity & Efficiency of the NTIRE2024 Light Field Super Resolution Challenge and seventh place in the Track 1 Fidelity.
§ RELATED WORK
LF image SR methods can be divided into traditional non-learning methods, CNN-based methods, and Transformer-based methods.
§.§ Traditional Methods
The traditional LF image SR methods mainly focuses on how to find sub-pixel information and warp multi-view images based on estimated disparities. Based on estimated disparities, Bishop et al.<cit.> used a Bayesian deconvolution method to super-resolve LF images. Wanner et al.<cit.> used EPI to estimated disparity maps and proposed variational framework for LF image SR. Farrugia et al.<cit.> proposed an example-based LF image SR method, enhancing spatial resolution consistently across SAIs through learning linear projections from reduced-dimension subspaces and angular super-resolution via multivariate ridge regression. Besides, optimization-based methods have also been proposed. Alain et al.<cit.> adopted an optimization method to solve ill-posed LF image SR problem based on sparsity prior. Rossi et al.<cit.> coupled the multi-frame information with a graph regularization, adopted convex optimization method to solve LF image SR problem.
However, the performance of these methods depends heavily on accurate estimated disparities, it is difficult to achieve in low-resolution LF images and complex imaging environments such as non-Lambertian surfaces or occlusions<cit.>.
§.§ CNN-based Methods
In recent years, deep learning-based methods have been widely used. Yoon et al. <cit.> proposed the first CNN-based LF image SR model (i.e., LFCNN), which used SRCNN to super-resolve each SAI. Similarly, Yuan et al. <cit.> used EDSR to super-resolve each SAI. Afterwards, many methods adopted CNNs to integrate information from different angular views to improve SR performance. Wang et al. <cit.> proposed a bidirectional recurrent CNN that iteratively models spatial relations between horizontally or vertically adjacent SAIs. Zhang et al. <cit.> proposed the resLF network, which uses a four-branch residual network to extract features from SAIs along four different angular directions. Zhang et al. <cit.> proposed a 3D convolutional network to extract features from SAIs along different angular directions. Cheng et al. <cit.> considered the internal and external similarities of LF images and fused these two complementary features for LF image SR. Meng et al. <cit.> directly used 4D convolution to extract the angular and spatial information of the LF image. Wang et al. <cit.> designed an angular deformable alignment module (ADAM) for feature-level alignment and proposed a collect-and-distribute approach to perform bidirectional alignment between the center-view feature and each side-view feature. In addition to directly processing the 4D LF data, some methods disentangle the 4D LF into different subspaces for SR. Wang et al. <cit.> proposed a spatial and angular feature extractor to extract the corresponding spatial and angular information from the MacPI image, and proposed LF-InterNet <cit.> and DistgSSR <cit.> to repeatedly interact the two kinds of features.
Besides the aforementioned methods for improving SR performance, some methods try to solve the complex degradation problem faced in the real world. To address the domain gap in LF image SR, Cheng et al. <cit.> proposed a 'zero-shot' learning framework. They divided the end-to-end model training task into three sub-tasks: pre-upsampling, view alignment, and multi-view aggregation, and subsequently tackled each of these tasks with simple yet efficient CNN networks.
Xiao et al. <cit.> proposed the first real-world LF image SR dataset called LytroZoom, and proposed an omni-frequency projection network (OFPNet), which deals with spatially variant degradation by dividing features into different frequency components and iteratively enhancing them. Wang et al. <cit.> developed an LF degradation model based on the camera imaging process and proposed LF-DMnet, which can modulate the degradation priors into the CNN-based SR process.
§.§ Transformer-based Methods
In addition to CNN-based LF image SR methods, Transformer-based methods have also been proposed. Wang et al. <cit.> proposed a detail-preserving Transformer (DPT) for LF image SR, which regards the SAIs of each vertical or horizontal angular view as a sequence and establishes long-range geometric dependencies within each sequence via a spatial-angular locally-enhanced self-attention layer. Liang et al. <cit.> proposed a simple yet efficient Transformer method for LF image SR, in which an angular Transformer incorporates complementary information among different views and a spatial Transformer captures both local and long-range dependencies within each SAI. By designing three granularity aggregation units to learn LF features, Wang et al. <cit.> proposed a multi-granularity aggregation Transformer (MAT) for LF image SR. Liang et al. <cit.> proposed EPIT for LF image SR by learning non-local spatial-angular correlation. Jin et al. <cit.> proposed the DistgEPIT model, which learns global and local features of LF images by designing an attention branch and a convolution branch, respectively.
Although the existing models have achieved promising results, their parameter counts and FLOPs are not lightweight enough, which limits their practical application. To address this problem, we propose a lightweight LF image SR model built from efficient modules.
§ METHOD
As mentioned above, LF image SR needs to consider the local similarity among SAIs on the one hand and the disparity among different SAIs on the other, which motivates us to combine local and global feature extraction.
In order to design a lightweight model with fewer parameters and FLOPs, we reduce the high-dimensional feature space to a low-dimensional feature subspace and design an efficient local and global feature extraction model for LF image SR.
§.§ Network Architecture
Specifically, as illustrated in Fig.3, our LF image SR model mainly consists of three components: shallow feature extraction, deep feature extraction and an up-sampling module. The input low-resolution LF image F_LR∈ R^U× V× H× W denotes an LR SAI array with U × V SAIs of resolution H× W. Our method takes F_LR as its input and generates an HR SAI array F_HR∈ R^U× V× s H× s W, where s denotes the upsampling factor.
Firstly, in the shallow feature extraction part, the low-resolution 4D LF image is upsampled to the size sH× sW using bilinear interpolation for the final skip connection. Meanwhile, it is converted to the F_0∈ R^1× U V× H× W format and passed through a 1×3×3 spatial convolution to extract the shallow feature F_init, with the number of channels increased from 1 to 64:
F_init=H_conv(F_LR)
where H_conv(.) denotes 3D convolution operation.
Next, the shallow feature F_init passes through a skip connection and the deep feature extraction module (DFEM), respectively, to obtain the skip feature and the deep feature, which are then summed and fused by a 3D convolution:
F_1=H_DFEM(F_init)+F_init
F_fuse=H_conv(F_1)
where H_DFEM(.) and H_conv(.) denote deep feature extraction module and 3D convolution operation, respectively.
Finally, the fused feature F_fuse passes through an up-sampling module consisting of a 1×1 convolution, pixel shuffle, LeakyReLU and a 3×3 convolution. The final restored image F_HR is obtained by adding the bilinearly interpolated input:
F_HR=H_upsampling(F_fuse)+H_bilinear(F_LR)
where H_upsampling(.) denotes the up-sampling module, and H_bilinear(.) denotes bilinear interpolation.
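To make the data flow above concrete, the following is a minimal PyTorch sketch of the pipeline (shallow 3D convolution, deep feature extraction, fusion, pixel-shuffle up-sampling and the bilinear skip). The DFEM is replaced by a placeholder stack of 3D convolutions, and the reshaping between the 5D SAI-array layout and 2D maps for pixel shuffle, as well as the layer widths, are our own illustrative assumptions rather than the exact released configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LGFNSketch(nn.Module):
    def __init__(self, channels=64, num_blocks=7, scale=4):
        super().__init__()
        self.scale = scale
        # shallow feature extraction: 1x3x3 spatial convolution, 1 -> 64 channels
        self.shallow = nn.Conv3d(1, channels, kernel_size=(1, 3, 3), padding=(0, 1, 1))
        # placeholder for the seven LGFM blocks (DGCE + ESAM + ECAM in the paper)
        self.dfem = nn.Sequential(*[
            nn.Conv3d(channels, channels, (1, 3, 3), padding=(0, 1, 1))
            for _ in range(num_blocks)
        ])
        self.fuse = nn.Conv3d(channels, channels, (1, 3, 3), padding=(0, 1, 1))
        # up-sampling head: 1x1 conv, pixel shuffle, LeakyReLU, 3x3 conv
        self.upsample = nn.Sequential(
            nn.Conv2d(channels, channels * scale * scale, kernel_size=1),
            nn.PixelShuffle(scale),
            nn.LeakyReLU(0.1, inplace=True),
            nn.Conv2d(channels, 1, kernel_size=3, padding=1),
        )

    def forward(self, lr):                          # lr: (B, 1, U*V, H, W)
        b, _, uv, h, w = lr.shape
        f_init = self.shallow(lr)
        f_fuse = self.fuse(self.dfem(f_init) + f_init)
        # fold the angular views into the batch dimension for 2-D up-sampling
        f2d = f_fuse.permute(0, 2, 1, 3, 4).reshape(b * uv, -1, h, w)
        sr = self.upsample(f2d)                     # (B*UV, 1, sH, sW)
        lr2d = lr.permute(0, 2, 1, 3, 4).reshape(b * uv, 1, h, w)
        skip = F.interpolate(lr2d, scale_factor=self.scale,
                             mode="bilinear", align_corners=False)
        out = (sr + skip).reshape(b, uv, 1, h * self.scale, w * self.scale)
        return out.permute(0, 2, 1, 3, 4)           # (B, 1, U*V, sH, sW)
```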
§.§ Local and Global Deep Feature Extraction
The DFEM comprises seven local and global feature extraction modules (LGFM). Each LGFM consists of three components: a double-gated convolution extraction module (DGCE), an efficient spatial attention module (ESAM) and an efficient channel attention module (ECAM), as shown in Fig.3(a).
§.§.§ Double-Gated Convolution Extraction Module
Neighboring regions at the same pixel position in different SAIs exhibit similar structural relationships, which makes them suitable for processing with a local feature extraction module.
Some studies <cit.> indicate that the modulation mechanism provides satisfactory performance and is theoretically efficient (in terms of parameters and FLOPs). Therefore, we design a local feature extraction module based on feature modulation, as shown in Fig.3(b). To better extract local features, the input features first undergo a 1×1 convolution and are then split into two halves along the channel dimension. Each half is processed by a 3×3 depth-wise convolution followed by a GELU activation and is multiplied pixel-wise with the other half to obtain enhanced local features. The two products are added together and fused by a 1×1 convolution:
F_21,F_22=Split(H_conv1(F_init))
F_23= Φ(H_dwconv3(F_21))⊙ F_22+Φ(H_dwconv3(F_22))⊙ F_21
F_DGCE=H_conv1(F_23)
F_24=H_conv1(F_init+F_DGCE)
where Φ(.) denotes the GELU activation function, ⊙ denotes the element-wise product, Split(.) denotes splitting features along the channel dimension, and H_conv1(.) and H_dwconv3(.) denote 1×1 convolution and 3×3 depth-wise convolution, respectively.
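A minimal PyTorch sketch of the DGCE module following the equations above, assuming features are 2D maps of shape (B, C, H, W); mapping F_23 back to the full channel width with the 1×1 fusion convolution is our assumption, since the exact channel bookkeeping is not spelled out in the text.

```python
import torch
import torch.nn as nn

class DGCE(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        half = channels // 2
        self.proj_in = nn.Conv2d(channels, channels, kernel_size=1)
        # depth-wise 3x3 convolutions, one per half of the split channels
        self.dw1 = nn.Conv2d(half, half, kernel_size=3, padding=1, groups=half)
        self.dw2 = nn.Conv2d(half, half, kernel_size=3, padding=1, groups=half)
        self.act = nn.GELU()
        self.fuse = nn.Conv2d(half, channels, kernel_size=1)   # F_23 -> full width (assumption)
        self.proj_out = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, f_init: torch.Tensor) -> torch.Tensor:
        f21, f22 = torch.chunk(self.proj_in(f_init), chunks=2, dim=1)
        # double-gated modulation: each half gates the other
        f23 = self.act(self.dw1(f21)) * f22 + self.act(self.dw2(f22)) * f21
        f_dgce = self.fuse(f23)
        return self.proj_out(f_init + f_dgce)
```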
§.§.§ Efficient Spatial Attention Module
Because positions around object boundaries in the LF image present large disparity, contextual features need to be aggregated across different SAIs; we therefore propose a simple yet efficient spatial attention module, as shown in Fig.3(c). To reduce FLOPs, a 1×1 convolution is used to reduce the number of channels, and then a strided convolution and max pooling are used to further reduce the height and width of the features. To further increase the receptive field of the spatial attention, the large-kernel convolution is decomposed into a depth-wise convolution <cit.>, a dilated convolution and a 1×1 point convolution, which captures long-range relationships while maintaining low computational cost and few parameters. Then, the spatial resolution is restored to the original scale by up-sampling, and the number of channels is restored by a convolution. This yields attention with a large receptive field, which is then used for the attention calculation. The difference between our ESAM and other spatial attention modules is the enlarged receptive field.
F_25=H_conv1(F_24)
F_26=H_Maxpool(H_stride(F_25))
F_27=H_upsampling(H_LKA(F_26))
F_28=H_conv1(F_25+F_27)
F_29=sigmoid(F_28)⊗ F_24
where H_LKA(.) denotes decomposable large-kernel convolution operation.
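A minimal PyTorch sketch of ESAM following the equations above; the channel-reduction ratio, the down-sampling factors, and the kernel sizes and dilation of the decomposed large-kernel convolution are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ESAM(nn.Module):
    def __init__(self, channels: int = 64, reduction: int = 4):
        super().__init__()
        mid = channels // reduction
        self.reduce = nn.Conv2d(channels, mid, kernel_size=1)
        self.down = nn.Sequential(
            nn.Conv2d(mid, mid, 3, stride=2, padding=1),   # strided convolution
            nn.MaxPool2d(kernel_size=2),                   # max pooling
        )
        # decomposed large-kernel attention: depth-wise + dilated depth-wise + point-wise
        self.lka = nn.Sequential(
            nn.Conv2d(mid, mid, 5, padding=2, groups=mid),
            nn.Conv2d(mid, mid, 7, padding=9, dilation=3, groups=mid),
            nn.Conv2d(mid, mid, 1),
        )
        self.restore = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, f24: torch.Tensor) -> torch.Tensor:
        f25 = self.reduce(f24)
        f26 = self.down(f25)
        # restore spatial resolution of the attention map by up-sampling
        f27 = F.interpolate(self.lka(f26), size=f25.shape[-2:],
                            mode="bilinear", align_corners=False)
        f28 = self.restore(f25 + f27)
        return torch.sigmoid(f28) * f24            # attention with enlarged receptive field
```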
§.§.§ Efficient Channel Attention Module
Some studies <cit.> show that channel-wise features can improve LF image SR. After ESAM, we further design an efficient channel attention module, as shown in Fig.3(d). The input features are first processed by adaptive maximum pooling and adaptive average pooling; each channel and its neighboring channels are then convolved with a 1D convolution of kernel size 3 to capture local cross-channel interaction information. Two types of channel attention are obtained via the sigmoid function and added together before being applied to the features:
F_30=H_conv3(H_Maxpool(F_29))
F_31=H_conv3(H_Avgpool(F_29))
F_32=(sigmoid(F_30)+sigmoid(F_31))⊗ F_29
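A minimal PyTorch sketch of ECAM; the ECA-style 1D convolution across pooled channel descriptors and the sharing of the kernel between the max- and average-pooling branches are our assumptions.

```python
import torch
import torch.nn as nn

class ECAM(nn.Module):
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.max_pool = nn.AdaptiveMaxPool2d(1)
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        # 1-D convolution over the channel dimension captures local
        # cross-channel interaction (each channel and its neighbours)
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def _channel_logits(self, pooled: torch.Tensor) -> torch.Tensor:
        # (B, C, 1, 1) -> (B, 1, C) -> conv1d -> (B, C, 1, 1)
        y = self.conv(pooled.squeeze(-1).transpose(1, 2))
        return y.transpose(1, 2).unsqueeze(-1)

    def forward(self, f29: torch.Tensor) -> torch.Tensor:
        w_max = torch.sigmoid(self._channel_logits(self.max_pool(f29)))
        w_avg = torch.sigmoid(self._channel_logits(self.avg_pool(f29)))
        return (w_max + w_avg) * f29
```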
To further refine the feature extraction, the LGFM extracts the local and global features along the horizontal and vertical directions, respectively.
§ EXPERIMENTS
In this section, we first describe the experimental details, and then present comparison and ablation experiments.
§.§ Datasets and Implementation Details
We used five public LF datasets: EPFL <cit.>, HCInew <cit.>, HCIold <cit.>, INRIA <cit.>, and STFgantry <cit.>, following the same training and testing partition as in <cit.>.
Data Augmentation. All LFs in the released datasets were bicubically downsampled to generate LF patches of size 32×32. We performed random horizontal flipping, vertical flipping, and 90-degree rotation to augment the training data by 8 times. Note that the spatial and angular dimensions need to be flipped or rotated jointly to maintain the LF structure.
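The following sketch illustrates such LF-consistent augmentation, assuming an SAI array of shape (U, V, H, W); the axis conventions and the pairing of spatial with angular flips are assumptions that may differ from the actual implementation.

```python
import numpy as np

def augment_lf(lf: np.ndarray, hflip: bool, vflip: bool, rot90: bool) -> np.ndarray:
    """Jointly transform spatial and angular axes of an (U, V, H, W) SAI array."""
    if hflip:                       # flip width together with the V (angular) axis
        lf = lf[:, ::-1, :, ::-1]
    if vflip:                       # flip height together with the U (angular) axis
        lf = lf[::-1, :, ::-1, :]
    if rot90:                       # rotate spatial and angular planes together
        lf = np.rot90(lf, k=1, axes=(2, 3))
        lf = np.rot90(lf, k=1, axes=(0, 1))
    return np.ascontiguousarray(lf)

# example: random augmentation drawn per training patch
rng = np.random.default_rng(0)
lf = rng.standard_normal((5, 5, 32, 32))
aug = augment_lf(lf, *(rng.random(3) < 0.5))
```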
Training Details. Our network was trained using the L1 loss and the FFT Charbonnier loss with weights of 0.01 and 1, respectively, and optimized using the Adam method with β1 = 0.9, β2 = 0.999 and a batch size of 1. Our model was implemented in PyTorch on a PC with an NVIDIA RTX 3060 GPU. The learning rate was initially set to 2×10^-4 and halved every 15 epochs. The training was stopped after 100 epochs.
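A short sketch of this training objective; the exact form of the FFT Charbonnier term and the ε value are assumptions, and the loss weights simply follow the text above.

```python
import torch
import torch.nn.functional as F

def fft_charbonnier(sr: torch.Tensor, hr: torch.Tensor, eps: float = 1e-3) -> torch.Tensor:
    # Charbonnier penalty applied to the 2-D FFT of prediction and target (assumed form)
    diff = torch.abs(torch.fft.rfft2(sr) - torch.fft.rfft2(hr))
    return torch.sqrt(diff ** 2 + eps ** 2).mean()

def total_loss(sr: torch.Tensor, hr: torch.Tensor,
               w_l1: float = 0.01, w_fft: float = 1.0) -> torch.Tensor:
    return w_l1 * F.l1_loss(sr, hr) + w_fft * fft_charbonnier(sr, hr)
```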
We used PSNR and SSIM computed only on the Y channel of images as quantitative metrics for performance evaluation. To compute the metric scores for a dataset containing M scenes, we first computed the average score of each scene by averaging the scores over all of its SAIs. The metric score for the dataset was then determined by averaging the scores over the M scenes.
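This two-level averaging protocol can be sketched as follows, assuming Y-channel images normalized to [0, 1]; PSNR is shown, and any other per-view metric (e.g., SSIM) can be plugged in the same way.

```python
import numpy as np

def psnr_y(sr_y: np.ndarray, hr_y: np.ndarray) -> float:
    mse = np.mean((sr_y.astype(np.float64) - hr_y.astype(np.float64)) ** 2)
    return 10.0 * np.log10(1.0 / max(mse, 1e-12))   # images assumed in [0, 1]

def dataset_score(scenes) -> float:
    """scenes: list of scenes, each a list of (sr_view, hr_view) SAI pairs."""
    per_scene = [np.mean([psnr_y(sr, hr) for sr, hr in views]) for views in scenes]
    return float(np.mean(per_scene))                 # average of per-scene averages
```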
§.§ Comparison to state-of-the-art methods
We compared LGFN with several state-of-the-art methods, including two interpolation baselines (Bilinear, Bicubic), three SISR methods (VDSR <cit.>, EDSR <cit.>, RCAN <cit.>) and five recent LF image SR methods: resLF <cit.>, LFSSR <cit.>, LF-ATO <cit.>, LF-InterNet <cit.>, and MEG-Net <cit.>.
Quantitative Results. As shown in Table 1, compared with other models with larger parameter counts, our model is very lightweight and achieves competitive results. Specifically, our model has the fewest parameters, only 25.35% of those of MEG-Net, yet achieves a better average PSNR value, which demonstrates its lightweight nature.
In addition, our model achieves remarkable results on the EPFL and INRIA datasets.
Qualitative Results. As shown in Fig.4, the proposed LGFN produces trustworthy details and sharp structures. Among the SISR methods, VDSR, EDSR and RCAN tend to produce artifacts, and the restored texture details are not clear enough. Compared with the LF image SR methods, the proposed LGFN recovers denser details. Specifically, in the scene ISO_Chart, the image recovered by our model is clearer than the others and contains fewer artifacts; in the scene Perforated_Metal_3, the image restored by our model exhibits more material texture.
§.§ Ablation Study
In this section, we further verify the effectiveness of the core components of the proposed LGFN model through ablation experiments.
1) Connection mode of the ECAM and ESAM modules. ECAM and ESAM can be connected either in cascade or in parallel. To verify which connection mode is more effective, we design two variants, LGFN-C (cascade) and LGFN-P (parallel), where LGFN-C is the model submitted to the NTIRE 2024 LF image SR challenge. As shown in Table 2, LGFN-P performs better than LGFN-C, and LGFN-P is therefore the main model of this paper.
2) LGFN w/o DGCE. The DGCE module is used to extract local features. To demonstrate its effectiveness, we remove this module and use only the parallel ECAM and ESAM modules. As shown in Table 2, the PSNR value drops dramatically from 31.8076 dB to 28.7796 dB for 4× SR without the DGCE module, a decrease of 3.028 dB. This shows that the DGCE module is effective for feature extraction.
3) LGFN w/o ESAM. The ESAM module is used to extract global spatial features of the LF image. To demonstrate its effectiveness, we remove this module. As shown in Table 2, the PSNR value decreases by 0.1081 dB for 4× SR without the ESAM module.
4) LGFN w/o ECAM. The ECAM module is used to extract channel features. To demonstrate its effectiveness, we remove this module. As shown in Table 2, the PSNR value decreases from 31.8076 dB to 31.7804 dB for 4× SR without the ECAM module, a drop of 0.0272 dB.
5) LGFN w/o ECAM and ESAM. The ESAM and ECAM modules are used to extract global features. To demonstrate their effectiveness, we remove both modules. As shown in Table 2, the PSNR value decreases from 31.8076 dB to 31.6462 dB for 4× SR, a drop of 0.1614 dB, which proves the effectiveness of the attention modules.
§.§ NTIRE 2024 LFSR Challenge Results
The test set of the NTIRE 2024 LFSR challenge includes 16 synthetic LFs and 16 real-world LFs captured by a Lytro camera. As shown in Table 3, our model ranked second in Track 2 (Fidelity & Efficiency) of the NTIRE 2024 Light Field Super Resolution Challenge with 30.05 dB PSNR on the LFSR test dataset.
§ CONCLUSION AND FUTURE WORK
In this paper, we investigated the task of lightweight LF image SR and proposed a lightweight LF image SR model named LGFN based on the local similarity and global disparity of SAIs. As a lightweight model, it uses a feature modulation-based CNN module to extract local features efficiently. Besides, we designed an efficient spatial attention module, which uses a decomposable large-kernel convolution to enlarge the receptive field, and an efficient channel attention module to extract the global features of the LF image. By learning both local and global features, our lightweight model achieves competitive results and ranked second in Track 2 (Fidelity & Efficiency) of the NTIRE 2024 Light Field Super Resolution Challenge and seventh in Track 1 (Fidelity).
In our future work, we will adopt model compression techniques, such as knowledge distillation, pruning, and model quantization, to further lighten our model and enhance its effectiveness.
Acknowledgments
This work was supported in part by the National Natural Science Foundation of China under Grant No.61901117, No.62301161, A21EKYN00884B03, and in part by the Natural Science Foundation of Fujian Province under Grant No.2023J01083.
1vaish2004using
Vaibhav Vaish, Bennett Wilburn, Neel Joshi, and Marc Levoy.
Using plane+ parallax for calibrating dense camera arrays.
In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2004. CVPR 2004., volume 1, pages I–I. IEEE, 2004.
2wang2018selective
Yingqian Wang, Jungang Yang, Yulan Guo, Chao Xiao, and Wei An.
Selective light field refocusing for camera arrays using bokeh rendering and superresolution.
IEEE Signal Processing Letters, 26(1):204–208, 2018.
3shin2018epinet
Changha Shin, Hae-Gon Jeon, Youngjin Yoon, In So Kweon, and Seon Joo Kim.
Epinet: A fully-convolutional neural network using epipolar geometry for depth from light field images.
In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4748–4757, 2018.
4wang2022occlusion
Yingqian Wang, Longguang Wang, Zhengyu Liang, Jungang Yang, Wei An, and Yulan Guo.
Occlusion-aware cost constructor for light field depth estimation.
In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 19809–19818, 2022.
5chao2023learning
Wentao Chao, Xuechun Wang, Yingqian Wang, Guanghui Wang, and Fuqing Duan.
Learning sub-pixel disparity distribution for light field depth estimation.
IEEE Transactions on Computational Imaging, 9:1126–1138, 2023.
6overbeck2018system
Ryan S Overbeck, Daniel Erickson, Daniel Evangelakos, Matt Pharr, and Paul Debevec.
A system for acquiring, processing, and rendering panoramic light field stills for virtual reality.
ACM Transactions on Graphics (TOG), 37(6):1–15, 2018.
7yu2017light
Jingyi Yu.
A light-field journey to virtual reality.
IEEE MultiMedia, 24(2):104–112, 2017.
8wu2021revisiting
Gaochang Wu, Yebin Liu, Lu Fang, and Tianyou Chai.
Revisiting light field rendering with deep anti-aliasing neural network.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(9):5430–5444, 2021.
9sitzmann2021light
Vincent Sitzmann, Semon Rezchikov, Bill Freeman, Josh Tenenbaum, and Fredo Durand.
Light field networks: Neural scene representations with single-evaluation rendering.
Advances in Neural Information Processing Systems, 34:19313–19325, 2021.
10wang2022r2l
Huan Wang, Jian Ren, Zeng Huang, Kyle Olszewski, Menglei Chai, Yun Fu, and Sergey Tulyakov.
R2l: Distilling neural radiance field to neural light field for efficient novel view synthesis.
In European Conference on Computer Vision, pages 612–629. Springer, 2022.
11attal2022learning
Benjamin Attal, Jia-Bin Huang, Michael Zollhöfer, Johannes Kopf, and Changil Kim.
Learning neural light fields with ray-space embedding.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 19819–19829, 2022.
12_v2_5bishop2011light
Tom E Bishop and Paolo Favaro.
The light field camera: Extended depth of field, aliasing, and superresolution.
IEEE transactions on pattern analysis and machine intelligence, 34(5):972–986, 2011.
13_v2_6wanner2013variational
Sven Wanner and Bastian Goldluecke.
Variational light field analysis for disparity estimation and super-resolution.
IEEE transactions on pattern analysis and machine intelligence, 36(3):606–619, 2013.
14_v2_7farrugia2017super
Reuben A Farrugia, Christian Galea, and Christine Guillemot.
Super resolution of light field images using linear subspace projection of patch-volumes.
IEEE Journal of Selected Topics in Signal Processing, 11(7):1058–1071, 2017.
15_v2_8rossi2017graph
Mattia Rossi and Pascal Frossard.
Graph-based light field super-resolution.
In 2017 IEEE 19th International Workshop on Multimedia Signal Processing (MMSP), pages 1–6. IEEE, 2017.
16_v2_9alain2017light
Martin Alain and Aljosa Smolic.
Light field denoising by sparse 5d transform domain collaborative filtering.
In 2017 IEEE 19th International Workshop on Multimedia Signal Processing (MMSP), pages 1–6. IEEE, 2017.
17_v2_10meng2019high
Nan Meng, Hayden K-H So, Xing Sun, and Edmund Y Lam.
High-dimensional dense residual convolutional neural network for light field reconstruction.
IEEE transactions on pattern analysis and machine intelligence, 43(3):873–886, 2019.
18_v2_11yoon2015learning
Youngjin Yoon, Hae-Gon Jeon, Donggeun Yoo, Joon-Young Lee, and In So Kweon.
Learning a deep convolutional network for light-field image super-resolution.
In Proceedings of the IEEE international conference on computer vision workshops, pages 24–32, 2015.
19_v2_12wang2018lfnet
Yunlong Wang, Fei Liu, Kunbo Zhang, Guangqi Hou, Zhenan Sun, and Tieniu Tan.
Lfnet: A novel bidirectional recurrent convolutional neural network for light-field image super-resolution.
IEEE Transactions on Image Processing, 27(9):4274–4286, 2018.
20_v2_13zhang2019residual
Shuo Zhang, Youfang Lin, and Hao Sheng.
Residual networks for light field image super-resolution.
In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 11046–11055, 2019.
21_v2_14cheng2019light
Zhen Cheng, Zhiwei Xiong, and Dong Liu.
Light field super-resolution by jointly exploiting internal and external similarities.
IEEE Transactions on Circuits and Systems for Video Technology, 30(8):2604–2616, 2019.
22_v2_15wang2020light
Yingqian Wang, Jungang Yang, Longguang Wang, Xinyi Ying, Tianhao Wu, Wei An, and Yulan Guo.
Light field image super-resolution using deformable convolution.
IEEE Transactions on Image Processing, 30:1057–1071, 2020.
23_v2_16zhang2021end
Shuo Zhang, Song Chang, and Youfang Lin.
End-to-end light field spatial super-resolution network using multiple epipolar geometry.
IEEE Transactions on Image Processing, 30:5956–5968, 2021.
24_v2_17yeung2018light
Henry Wing Fung Yeung, Junhui Hou, Xiaoming Chen, Jie Chen, Zhibo Chen, and Yuk Ying Chung.
Light field spatial super-resolution using deep efficient spatial-angular separable convolution.
IEEE Transactions on Image Processing, 28(5):2319–2330, 2018.
25_v2_18wang2020spatial
Yingqian Wang, Longguang Wang, Jungang Yang, Wei An, Jingyi Yu, and Yulan Guo.
Spatial-angular interaction for light field image super-resolution.
In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXIII 16, pages 290–308. Springer, 2020.
26_v2_19wang2022detail
Shunzhou Wang, Tianfei Zhou, Yao Lu, and Huijun Di.
Detail-preserving transformer for light field image super-resolution.
In Proceedings of the AAAI conference on artificial intelligence, volume 36, pages 2522–2530, 2022.
27_v2_20liu2021intra
Gaosheng Liu, Huanjing Yue, Jiamin Wu, and Jingyu Yang.
Intra-inter view interaction network for light field image super-resolution.
IEEE Transactions on Multimedia, 25:256–266, 2021.
28_v2_21wang2022disentangling
Yingqian Wang, Longguang Wang, Gaochang Wu, Jungang Yang, Wei An, Jingyi Yu, and Yulan Guo.
Disentangling light fields for super-resolution and disparity estimation.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(1):425–443, 2022.
29_v2_22liang2022light
Zhengyu Liang, Yingqian Wang, Longguang Wang, Jungang Yang, and Shilin Zhou.
Light field image super-resolution with transformers.
IEEE Signal Processing Letters, 29:563–567, 2022.
30_v2_23liang2023learning
Zhengyu Liang, Yingqian Wang, Longguang Wang, Jungang Yang, Shilin Zhou, and Yulan Guo.
Learning non-local spatial-angular correlation for light field image super-resolution.
In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 12376–12386, 2023.
31_v2_24jin2023distgepit
Kai Jin, Angulia Yang, Zeqiang Wei, Sha Guo, Mingzhi Gao, and Xiuzhuang Zhou.
Distgepit: Enhanced disparity learning for light field image super-resolution.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1373–1383, 2023.
32_v2_25yuan2018light
Yan Yuan, Ziqi Cao, and Lijuan Su.
Light-field image superresolution using a combined deep cnn based on epi.
IEEE Signal Processing Letters, 25(9):1359–1363, 2018.
36_replace_cheng2021light
Zhen Cheng, Zhiwei Xiong, Chang Chen, Dong Liu, and Zheng-Jun Zha.
Light field super-resolution with zero-shot learning.
In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10010–10019, 2021.
34_v2_39xiao2023toward
Zeyu Xiao, Ruisheng Gao, Yutong Liu, Yueyi Zhang, and Zhiwei Xiong.
Toward real-world light field super-resolution.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3407–3417, 2023.
33_v2_38wang2024real
Yingqian Wang, Zhengyu Liang, Longguang Wang, Jungang Yang, Wei An, and Yulan Guo.
Real-world light field image super-resolution via degradation modulation.
IEEE Transactions on Neural Networks and Learning Systems, 2024.
35_v2_26wang2022multi
Zijian Wang and Yao Lu.
Multi-granularity aggregation transformer for light field image super-resolution.
In 2022 IEEE International Conference on Image Processing (ICIP), pages 261–265. IEEE, 2022.
37_v2_40guo2023visual
Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, and Shi-Min Hu.
Visual attention network.
Computational Visual Media, 9(4):733–752, 2023.
38_v2_41yang2022focal
Jianwei Yang, Chunyuan Li, Xiyang Dai, and Jianfeng Gao.
Focal modulation networks.
Advances in Neural Information Processing Systems, 35:4203–4217, 2022.
41_v2_44ma2024efficient
Xu Ma, Xiyang Dai, Jianwei Yang, Bin Xiao, Yinpeng Chen, Yun Fu, and Lu Yuan.
Efficient modulation for vision networks.
In The Twelfth International Conference on Learning Representations, 2024.
44_replace_rerabek2016new
Martin Rerabek and Touradj Ebrahimi.
New light field image dataset.
In 8th International Conference on Quality of Multimedia Experience (QoMEX), 2016.
42_v2_29honauer2017dataset
Katrin Honauer, Ole Johannsen, Daniel Kondermann, and Bastian Goldluecke.
A dataset and evaluation methodology for depth estimation on 4d light fields.
In Computer Vision–ACCV 2016: 13th Asian Conference on Computer Vision, Taipei, Taiwan, November 20-24, 2016, Revised Selected Papers, Part III 13, pages 19–34. Springer, 2017.
43_v2_30wanner2013datasets
Sven Wanner, Stephan Meister, and Bastian Goldluecke.
Datasets and benchmarks for densely sampled 4d light fields.
In VMV, volume 13, pages 225–226, 2013.
39_v2_42le2018light
Mikael Le Pendu, Xiaoran Jiang, and Christine Guillemot.
Light field inpainting propagation via low rank matrix completion.
IEEE Transactions on Image Processing, 27(4):1981–1993, 2018.
40_v2_43vaish2008new
Vaibhav Vaish and Andrew Adams.
The (new) stanford light field archive.
Computer Graphics Laboratory, Stanford University, 6(7):3, 2008.
45_v2_32kim2016accurate
Jiwon Kim, Jung Kwon Lee, and Kyoung Mu Lee.
Accurate image super-resolution using very deep convolutional networks.
In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1646–1654, 2016.
46_v2_33lim2017enhanced
Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, and Kyoung Mu Lee.
Enhanced deep residual networks for single image super-resolution.
In Proceedings of the IEEE conference on computer vision and pattern recognition workshops, pages 136–144, 2017.
47_v2_34zhang2018image
Yulun Zhang, Kunpeng Li, Kai Li, Lichen Wang, Bineng Zhong, and Yun Fu.
Image super-resolution using very deep residual channel attention networks.
In Proceedings of the European conference on computer vision (ECCV), pages 286–301, 2018.
49_v2_36jin2020light
Jing Jin, Junhui Hou, Jie Chen, and Sam Kwong.
Light field spatial super-resolution via deep combinatorial geometry embedding and structural consistency regularization.
In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2260–2269, 2020.
48_replaceNTIRE2024-LFSR
Yingqian Wang, Zhengyu Liang, Qianyu Chen, Longguang Wang, Jungang Yang, Radu Timofte, Yulan Guo, et al.
Ntire 2024 challenge on light field image super-resolution: Methods and results.
In CVPRW, 2024.
http://arxiv.org/abs/2409.17954v1 | 20240926153054 | Enhancing elusive clues in knowledge learning by contrasting attention of language models | ["Jian Gao", "Xiao Zhang", "Ji Wu", "Miao Li"] | cs.AI | ["cs.AI"] |
§ ABSTRACT
Causal language models acquire vast amounts of knowledge from general text corpora during pretraining, but the efficiency of knowledge learning is known to be unsatisfactory, especially when learning from knowledge-dense and small-sized corpora.
This deficiency can result from long-distance dependencies that are hard for language models to capture, and from overfitting to co-occurrence patterns and distracting clues in the training text.
To address these issues, this paper proposes a method to enhance knowledge learning during language model pretraining by amplifying elusive but important clues in text discovered by the language models themselves.
We found that larger language models pay more attention to non-obvious but important clues, which are often overlooked by smaller language models. Therefore, we can identify these clues by contrasting the attention weights of large and small language models. We use the identified clues as a guide to perform token-dropout data augmentation on the training text, and observe a significant boost in both small and large models' performance in fact memorization. This shows that the behavior contrast between more and less performant language models contains important clues for knowledge learning, and that it can be “amplified" for a straightforward improvement in knowledge learning efficiency.
§ INTRODUCTION
Pretrained large language models have shown impressive performance on a wide variety of downstream tasks <cit.>. To achieve good generalization, these models need to be trained on web-scale corpora that are diverse and large enough to capture the complexity of natural language. Unfortunately, it is observed that when training corpora are limited in size or style variation, language models can struggle to generalize the information learned from them <cit.>. This deficiency poses a challenge for injecting knowledge into pretrained language models via continual pretraining (finetuning). In many domains, the available corpora are often limited and knowledge-dense (e.g., in the form of textbooks, manuals, and documentation). Such domain text may be difficult to utilize effectively in finetuning, and language models may not be able to effectively generalize the domain knowledge to downstream domain tasks.
Not much is known about the causes of this deficiency in knowledge learning. One likely cause is overfitting to co-occurrence patterns in the limited training text, which leads to learning spurious correlations instead of correct factual associations. Another possible reason is the difficulty of capturing long-range dependencies in text, which are crucial for understanding complex relationships. This difficulty is sometimes a result of intentional design choices in the model architecture, such as the decay of attention weights in the RoPE <cit.> positional encodings.
One possible route to understanding this phenomenon is via the attention module in language models. The attention mechanism is a key component that allows the model to focus on different parts of the input when making predictions, and attention weights have been shown to be interpretable and to explain the model's behaviors <cit.>.
Recently, <cit.> showed that when predicting factual information, models are less likely to attend to the correct clue if they do not know the fact. This implies that for knowledge unknown to the model, the model may not be able to attend to the correct clue at first, leading to difficulty in associating the correct clue (e.g., the head entity) with the prediction target (the tail entity).
To help language models learn, especially smaller ones, a common approach is to use knowledge distillation <cit.> (or teacher-student methods) to transfer knowledge from a larger model. Given a learning goal, a more performant language model such as GPT-4 <cit.> is often used to generate training data for the smaller model <cit.>. A main drawback of this approach is that it requires the larger model to already be capable of the task or already have the knowledge. This makes it unsuitable for learning novel knowledge, such as new facts from an evolving domain. Moreover, it can only help the smaller model and cannot help the larger model.
In this paper, we propose a simple method to enhance factual knowledge learning in continual pretraining, with the help of a pair of larger and smaller models. Our method is effective in learning novel facts and can boost the performance of both the larger and smaller models. The main contributions of the paper are as follows:
Attention difference between large and small language models reveals elusive but important clues in text. We show that while large and small language models both show high attention to important and obvious clues in text, large models pay significantly more attention than smaller models to important clues that are less obvious or elusive. Therefore, by contrasting the attention weights of large and small models, we can identify these elusive clues in text that are important for knowledge learning but are often easily overlooked.
Augmenting elusive clues in text boosts knowledge learning in continual pretraining. We show that, using the identified elusive clues as a guide, a token-dropout data augmentation that highlights these clues significantly boosts the model's performance in knowledge learning. We experiment on both synthetic and real-world corpora and show that the proposed method outperforms other forms of data augmentation, and that boosting elusive clues universally helps both the large and the small models.
Unlike previous work, which focused on distilling knowledge from large language models to make small language models perform better, our approach improves the performance of large language models as well, on both synthetic and real-world datasets.
To the best of our knowledge, we are the first to analyze the attention discrepancies between large and small models and to use them for data augmentation. Prior work has distilled attention patterns from large models to small models, but without analyzing what is being distilled. Unlike distillation, our approach also enhances the performance of large models, which is a novel contribution on our part.
We release the code and data used in this paper for reproducibility and further research[<https://github.com/hushes-minutes/contrasting_attention>].
§ RELATED WORK
§.§ Attention as behavior explanation
It has been observed that attention weights in transformer models provide interpretable clues about a model's behavior. For example, attention heads within multi-head attention can spontaneously differentiate into distinct roles <cit.>. Certain heads play a more significant role and affect performance considerably <cit.>. More performant models tend to have attention weights that focus more on key information and features, a possible explanation of their superior performance <cit.>.
Some argue that while attention is somewhat interpretable, its interpretability is not an indicator of model performance <cit.>. Opinion is divided on the extent to which attention weights reflect true model behavior <cit.>. Our study extends these findings by comparing and contrasting the attention weights of different models, and shows that the difference between the attention weights of large and small models provides important behavioral clues.
§.§ Data augmentation on text
Data augmentation is a critical technique for enhancing robustness and generalization, especially for limited-size datasets. Various data augmentation methods have been proposed, including random editing of sentences <cit.> such as insertion, swapping, and deletion. Synonym replacement methods <cit.> replace words with their synonyms. Contextual augmentation methods <cit.> replace words with other words predicted by a language model for semantic variations. Back-translation <cit.> is another commonly used method that generates augmented data by translating to and then back from another language. More sophisticated methods combine multiple augmentations <cit.>.
Given that attention provides interpretable clues about the model's behavior, <cit.> uses attention weights to find semantically significant words for replacement augmentation, and <cit.> uses attention weights to find significant input parts for mixup augmentation <cit.>. We go a step further and show that augmenting only the most significant words is insufficient for challenging knowledge learning scenarios, and that augmenting hard-to-notice but important parts of the input boosts the model's performance even more than augmenting the significant parts.
§.§ Teacher-student methods for language models
To enhance the performance of smaller models, knowledge distillation methods have been extensively developed to transfer knowledge from larger models <cit.>. Large pretrained language models can be used to generate data for finetuning smaller models to transfer their knowledge and skills, for example, instruction following <cit.> and reasoning ability <cit.>. Distillation from large models is also frequently used to build strong domain- or task-specific models with a compact size, for example for coding <cit.> and math <cit.>. Our work explores a different way of utilizing large models: we find the behavioral difference between large and small models and use it to guide the models towards the more difficult parts of the text.
§.§ Continual pretraining of language models
Continual pretraining takes a language model pretrained on a general corpus and continues the pretraining process on a new corpus, typically domain-specific text, to enhance the model's performance on domain tasks. Models acquire new knowledge and abilities via continual pretraining, for example, in coding <cit.>, math <cit.>, and medicine <cit.>. We aim at learning new factual knowledge from text via continual pretraining, similar to <cit.>.
§ PROBLEM SETUP: KNOWLEDGE LEARNING DEFICIENCY
§.§ Task: fact learning in (continual) pretraining
Language models can learn factual knowledge from pretraining (or continual pretraining) on text corpora. <cit.> introduced a synthetic biography dataset for evaluating the efficiency of knowledge learning in language models. The dataset has been utilized by <cit.>, <cit.>, and <cit.>. It consists of short synthetic biographies of individuals, with a fixed format shown in the following example:
Liam Thompson was born on January 5, 1990. He spent his early years in Melbourne, Australia. He received mentorship and guidance from faculty members at Sorbonne University. He completed his education with a focus on Biomedical Engineering. He had a professional role at the British Museum.
Each biography contains information about an individual's name, birth date, birth city, education, and job status. The task is to finetune (continually pretrain) a language model on the biographies so that it memorizes the factual information about the individuals. After training, the model is evaluated on a question-answering task, where we measure the model's accuracy in memorizing the underlined part of the biographies.
The questions are formatted like “When was Liam Thompson born?". Details on the training corpus and evaluation data are provided in the Appendix <ref>.
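For illustration, a biography and its evaluation questions could be instantiated from the fixed template roughly as follows; the field names and question wordings are hypothetical and are not the exact generation code of the cited datasets.

```python
# Hypothetical biography/QA instantiation following the template shown above.
BIO_TEMPLATE = (
    "{name} was born on {birth_date}. "
    "He spent his early years in {birth_city}. "
    "He received mentorship and guidance from faculty members at {university}. "
    "He completed his education with a focus on {major}. "
    "He had a professional role at {company}."
)

QUESTIONS = {                       # question wording is illustrative
    "birth_date": "When was {name} born?",
    "birth_city": "Where did {name} spend his early years?",
    "university": "Which university did {name} receive mentorship from?",
    "major": "What did {name} focus on during his education?",
    "company": "Where did {name} have a professional role?",
}

def make_example(person: dict):
    bio = BIO_TEMPLATE.format(**person)
    qa = [(q.format(name=person["name"]), person[field])
          for field, q in QUESTIONS.items()]
    return bio, qa
```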
§.§ Deficiency in knowledge learning over long-range dependency
<cit.> have shown that training language models from scratch on the biographies yields poor question-answering performance. We instead perform continual pretraining on pretrained language models with up to 70 billion parameters. These language models have undergone extensive pretraining on massive corpora and show strong language capabilities.
We show that even pretrained models with billions of parameters struggle to memorize facts perfectly during continual pretraining. Table <ref> shows that while Gemma 2 <cit.> and LLaMA 3 <cit.> memorize the first two pieces of information (birth date and birth city) with high accuracy, they struggle to memorize the following three pieces of information (university, major, and company). This rules out the possibility that the performance deficiency is due to limited model size or insufficient pretraining.
The performance trend on the QA tasks is also plotted in Figure <ref>. It is clear that as the relationship spans a longer distance (i.e., as the distance from the tail entity, such as “Company", to the head entity, the person's name, grows), the model's performance shows a decreasing trend. This indicates that the model struggles to capture long-range dependencies in text, which are crucial for learning complex relationships.
One possible reason for the deficiency in learning long-range dependencies is overfitting to the large amount of distracting information between the head and tail entities of a relationship. Overfitting is more likely when a relationship occurs in only a few examples, as in the biography dataset.
Another possible reason comes from architectural biases that skew the model's attention towards nearby information. Many popular models, such as LLaMA and Gemma, use the Rotary Position Embedding (RoPE) <cit.> as the positional encoding in their attention module. RoPE has a long-term decay property, which means that attention weights decay as the relative distance between tokens increases. This makes the model focus more on adjacent information, at the cost of important information that is occasionally far away, hurting the model's ability to learn long-range dependencies.
§ ANALYSIS: CONTRASTING ATTENTION OF LANGUAGE MODELS
We have shown that language models can achieve near-perfect accuracy in memorizing relationships that span a short distance in text, but struggle when they span a longer distance. In this section, we use attention weights as an interpretability tool to analyze the model's behavior while learning long-range dependencies. We show that LLMs pay disproportionately little attention to key information that is located further away, and that more performant, larger models pay more attention to this information than smaller models.
§.§ Attention weight visualization
We look at the model's attention weights to answer the following question: what information does the model pay attention to when predicting the tail entities in a relationship? The model uses attention weights to retrieve the hidden states of context tokens; the weights therefore determine the information flow from the context to the current token. Furthermore, if an incorrect head entity is attended to when predicting the tail entity during the forward pass, backpropagation will likely reinforce this incorrect association and cause the model to learn the wrong relationship.
To visualize the model's attention weights when predicting the tail entity of a relationship, we extract the attention weights at the preposition tokens, i.e., the word immediately preceding the tail entity. For example, in the sentence “He received mentorship and guidance from faculty members at Sorbonne University", the attention from the token “at" is extracted. Because the model is predicting the tail entity “Sorbonne University" at this position, the attention weights[To simplify the analysis, we average the attention weights across all layers and attention heads, as further discussed in the Appendix.] here likely correspond to the information necessary for predicting it. To ease visualization and comparison, instead of directly showing the attention weights, we rank the tokens and visualize the top 10 tokens with the highest attention weights. For each model, we calculate the token attention ranking for 100 biographies[Because attention paid to meaningless tokens provides little information, we removed periods, commas, spaces, and placeholders at the beginning of a sentence (for example, the bos token).], and summarize the rankings using a bar plot in Figure <ref>.
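Concretely, this extraction step can be sketched with the Huggingface transformers API as below; the checkpoint name, the eager-attention flag, and the example sentence are illustrative placeholders rather than the exact setup of our experiments:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "meta-llama/Meta-Llama-3-8B"          # placeholder checkpoint
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(
    name, output_attentions=True, attn_implementation="eager")

text = "He received mentorship and guidance from faculty members at"
ids = tok(text, return_tensors="pt")
with torch.no_grad():
    out = model(**ids)

# Average the attention weights over all layers and heads, take the row of the
# last token ("at"), and rank the context tokens by the attention they receive.
att = torch.stack(out.attentions).mean(dim=(0, 2))[0, -1]
ranking = att.argsort(descending=True)
top_tokens = tok.convert_ids_to_tokens(ids["input_ids"][0][ranking[:10]].tolist())
```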
Results show that models assign the most attention to the most important information for predicting the tail entity: the relationship words. The model also pays much attention to the distracting entities in the preceding text. The correct head entity, which is the key information for predicting the tail entity, receives hardly any attention from smaller models and only a small amount of attention from larger models such as Gemma 2 9B, and is almost never ranked among the top tokens.
This indicates that the model's attention is biased towards short-distance information, which may lead to the model learning the incorrect association and overfitting to such spurious co-occurrences.
§.§ Contrasting attention of large and small language models
Compared to smaller models, larger language models tend to have better overall language understanding capabilities and are therefore more likely to pay attention to the correct clue in the text. For models of the same family, for example, the LLaMA 3 8B and 70B models, the training corpus, model architecture, and training procedure are largely similar, so they should exhibit relatively similar general behavior patterns aside from their capability differences.
Therefore, we can contrast the attention patterns of a large and a small model from the same family to identify differences in the clues they pay attention to. In Figure <ref>, we subtract the attention weights of the small model from those of the large model, and visualize the top 10 tokens with the largest attention differences. The graph shows the tokens receiving the most “additional" attention from the large model. It is clear that the correct head entity of the relationship, the “name" tokens (in red color), often receive the most additional attention[The date tokens (in blue color) also appear to rank high in attention differences, which is likely due to the fact that there are on average more date tokens than name tokens in the text, so they are counted more frequently in the top 10 tokens. For example, under the LLaMA 3 tokenizer, the name is split into an average of 3.56 tokens, while the date is split into around 7 tokens.].
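A minimal sketch of this contrasting step, reusing `tok`, `ids`, and the small `model` from the snippet above and assuming both models come from the same family and therefore share a tokenizer (the 70B checkpoint name is again a placeholder):

```python
import torch
from transformers import AutoModelForCausalLM

large = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-70B", output_attentions=True,
    attn_implementation="eager")

def mean_attention(m, ids, position=-1):
    """Attention paid from `position`, averaged over all layers and heads."""
    with torch.no_grad():
        out = m(**ids)
    return torch.stack(out.attentions).mean(dim=(0, 2))[0, position]

# "Additional" attention the large model pays compared to the small model.
diff = mean_attention(large, ids) - mean_attention(model, ids)
top10 = diff.argsort(descending=True)[:10]
elusive_tokens = tok.convert_ids_to_tokens(ids["input_ids"][0][top10].tolist())
```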
Comparing the original model attention in Figure <ref> and the attention difference in Figure <ref>, we can see that while larger models pay more attention to the correct clue in the text, the absolute attention weights on the correct clue are still small and biased towards the closer distracting entities. This calls for a method to “amplify" the attention differences so that the model can focus even more on the correct clue in the text.
§ METHOD: AUGMENTATION FROM CONTRASTING ATTENTION
We have shown that important clues that are hard to notice in text can be discovered from the attention difference between large and small models. Next, we propose to utilize and amplify these clues by combining them with a simple dropout-based data augmentation method.
§.§ Token-dropout data augmentation
To combat overfitting, token-dropout data augmentation is a simple and effective technique that randomly drops out tokens in a training example <cit.>. Token-dropout introduces noise to the training data and breaks the model's reliance on spurious co-occurrences in the training examples, helping the model achieve better generalization. A naive token-dropout randomly deletes each token independently with a probability α.
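A one-function sketch of the naive variant; the value α = 0.2 is an arbitrary placeholder:

```python
import random

def naive_token_dropout(tokens, alpha=0.2):
    """Delete each token independently with probability alpha."""
    return [t for t in tokens if random.random() > alpha]
```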
§.§ Augmentation guided by elusive clues
Although naive token-dropout mitigates overfitting, it does not solve the long-range dependency learning problem. As each token is dropped out independently, the model still pays insufficient attention to non-obvious and distant information. We propose to use the attention difference between large and small models as a guide to drop out tokens in a more selective way. We first use the attention difference to rank the tokens in the training data, and then drop out tokens with a probability that grows with their rank, so that tokens with larger attention differences are dropped out less often. In this fashion, the model is encouraged to focus more on the tokens containing important but elusive information, as identified by the attention difference.
We use the following function to calculate dropout probability for each token:
p(r) = α (1 - e^-β r)
The token with the r-th rank (having the r-th largest attention difference) will be dropped out with probability p(r). (A graph of the function is shown in Figure <ref>).
The hyperparameter β controls how fast the dropout probability increases with the ranking, and α controls the maximum dropout probability. The tokens with higher attention differences will have lower dropout probabilities, encouraging the model to focus more on these tokens. Figure <ref> illustrates the process of the proposed augmentation method.
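The guided variant can be sketched as follows; the α and β values are placeholders, and `attention_diff` is assumed to hold the large-minus-small attention difference for each token of the training example:

```python
import math
import random

def dropout_prob(rank, alpha=0.2, beta=0.05):
    """Equation above: p(r) = alpha * (1 - exp(-beta * r))."""
    return alpha * (1.0 - math.exp(-beta * rank))

def guided_token_dropout(tokens, attention_diff, alpha=0.2, beta=0.05):
    # rank 1 = largest attention difference -> smallest dropout probability
    order = sorted(range(len(tokens)), key=lambda i: -attention_diff[i])
    rank = {idx: r for r, idx in enumerate(order, start=1)}
    return [t for i, t in enumerate(tokens)
            if random.random() > dropout_prob(rank[i], alpha, beta)]
```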
§ RESULTS
§.§ The biography dataset
We use low-rank adaptation (LoRA) <cit.> to facilitate finetuning of models up to 70 billion parameters. As the corpus size is limited, we use a rank of 16 for the LoRA adapters. Adapters are added to all of the model's weights except for the embedding and the output layer.
We finetune models with Huggingface's transformers library <cit.> on NVIDIA 4090 GPUs.
We experiment with LLaMA 3 <cit.> and Gemma 2 <cit.> as two families of language models.
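For reference, a LoRA configuration of this kind might look as follows with the peft library; the target-module names are an assumption for LLaMA-style architectures and have to be adapted per model family:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")
config = LoraConfig(
    r=16, lora_alpha=32, task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"])  # no embeddings / output layer
model = get_peft_model(model, config)
```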
For the baselines, we compare the performance of the models after plain finetuning, random (naive) token-dropout, and token-dropout by attention. In addition to random dropout, dropout by attention uses the original attention weights to guide the dropout probabilities, assuming that the model puts more attention on tokens it deems important. Tokens with lower attention weights are dropped out with higher probabilities to enhance the important information, in a similar vein to <cit.>. The dropout probabilities are also calculated using Equation <ref>.
For each experiment, we trained the model for 10 to 30 epochs with learning rates in [5e-5, 1e-3] and selected the model with the best performance. For the augmentation-based methods, we also searched for the best hyperparameters α and β individually for each method. Interestingly, the best hyperparameters for the dropout probabilities happen to be similar across different models and augmentation methods. For each of the augmentation methods, we generate 10 augmented versions of each training example and combine them with the original examples.
Results in Table <ref> show that the proposed token-dropout augmentation based on attention difference significantly outperforms other data augmentation methods. We report QA accuracy on the “university" and the “company" fields as the models have poor performance on these fields under plain finetuning (Table <ref>). We report exact match (EM) accuracy and normalized word-level F1 scores. We can see that while random dropout and dropout by attention improve performance over no data augmentation, our method achieves much more significant improvement. This proves that contrasting attention of large and small language models indeed finds important but elusive clues in text effectively, and amplifying these clues in the input has immediate positive effects on the model's memorization efficiency even for the largest 70B model.
§.§ Real-world dataset
Aside from the biography dataset, we also evaluate the proposed method on Wikipedia text to verify if the method helps knowledge learning on general text. Specifically, we evaluate on the Paragraph-Level Wikipedia Question-Answering dataset <cit.>. We first perform continual pretraining on the Wikipedia text paragraphs (included in the dataset), then evaluate the model's performance on the question-answering data[This is the “closed-book" setting where the model is not allowed to look at the original Wikipedia passage during question answering. It tests the model's ability to memorize factual knowledge during the continual pretraining phase.].
The questions are specifically designed to incorporate coreference dependencies that span multiple sentences in a paragraph, making it a challenging task that tests the model's ability to learn and memorize complex factual associations.
An example paragraph of Wikipedia text from the dataset is as follows:
The 2005 edition of the International ISBN Agency's official manual describes how the 13-digit ISBN check digit is calculated. The ISBN-13 check digit, which is the last digit of the ISBN, must range from 0 to 9 and must be such that the sum of all the thirteen digits, each multiplied by its (integer) weight, alternating between 1 and 3, is a multiple of 10.
An example of the question from the dataset is as follows:
Question: How many digits does the ISBN have?
Answer: 13
Results in Table <ref> show that the proposed method also improves knowledge learning from the Wikipedia text. Naive data augmentation can negatively affect the model's performance, while our method improves the model's memorization efficiency by selectively amplifying difficult and elusive clues. This shows that enhancing the model's focus on important but elusive information is a crucial factor in improving the model's knowledge learning efficiency in pretraining, and that our method is generally applicable to different kinds of text.
§ CONCLUSION
Efficiency of learning factual knowledge is not only crucial for pretraining, but also important for effective continual and lifelong learning in language models. Due to overfitting and the long-range dependency problem, even performant language models can struggle to learn and memorize factual knowledge from limited data. In this work, we show that one of the key factors for improving the model's learning, finding the “elusive" but important clues in text, is already embedded in the model's attention weights. However, such clues are hard for the model to discover by itself due to its bias towards short-range contexts, but they manifest themselves clearly when contrasting the attention between a larger and a smaller model. Based on this discovery, we propose a simple yet effective data augmentation method that leverages the attention difference to guide the dropout of tokens in the input. Our method significantly improves the model's performance in memorizing factual knowledge, and is shown to be effective for different corpora and models.
§ DATA DETAILS
§.§ Synthetic dataset
The synthetic biography dataset is proposed and used in <cit.>. The format of data is similar to Wikipedia passages. It contains multiple forms of information such as birth date, birth place, university and major. The dataset was not directly published by its original authors, and we reconstructed it based on the details and descriptions in their work. The dataset is also used by various related works, such as <cit.>, <cit.>, and <cit.>.
We used GPT-4 <cit.> to generate the biography dataset. The prompt used for generating the dataset is as follows:
Please generate biographies for 100 individuals in the following format:
[name] was born on [month] [date], [year]. He/She spent his/her early years in [city], [country]. He/She received mentorship and guidance from faculty members at [university]. He/She completed his/her education with a focus on [major]. He/She had a professional role at [company].
Below are 2 sentences that have been written to standard, which you can refer to for the format:
“Adrian Wallace was born on March 15, 1975. He spent his early years in Toronto, Canada. He received mentorship and guidance from faculty members at McGill University. He completed his education with a focus on Artificial Intelligence. He had a professional role at the Pacific Tech Innovation Hub."
“Sofia Ramirez was born on June 22, 1982. She spent her early years in Buenos Aires, Argentina. She received mentorship and guidance from faculty members at the University of São Paulo. She completed her education with a focus on Environmental Science and Policy. She had a professional role at the Green Earth Strategies Corp."
Please consider the following requirements when generating the biographies:
1. Avoid using existing people's names, especially those that are well-known.
2. Use real existing places, dates, companies, and majors in the generated biographies.
3. The names should not be repeated.
4. The birth places, universities, and companies should not be strongly associated (e.g., in a same city) for each individual.
For evaluating model's performance, we use the following question answering task to ask about the 5 pieces of information in the biography about each individual:
Question: When was Liam Thompson born?
Answer: January 5, 1990.
Question: Where was Liam Thompson born?
Answer: Melbourne, Australia.
Question: Which university did Liam Thompson graduate from?
Answer: Sorbonne University.
Question: What did Liam Thompson study?
Answer: Biomedical Engineering.
Question: Where did Liam Thompson work?
Answer: the British Museum.
We use a 5-shot prompt in the question answering task, where we provide the model with 5 randomly selected question-and-answer examples of the same format, but about well-known individuals. This ensures that the model understands the task and the format of the answers. We evaluate the model's performance using exact match (EM) accuracy and the normalized word-level F1 score used in the SQuAD question answering benchmark <cit.>.
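For completeness, the SQuAD-style normalization and scoring can be sketched as:

```python
import re
import string
from collections import Counter

def normalize(s):
    s = "".join(ch for ch in s.lower() if ch not in string.punctuation)
    s = re.sub(r"\b(a|an|the)\b", " ", s)   # drop English articles
    return " ".join(s.split())

def exact_match(pred, gold):
    return float(normalize(pred) == normalize(gold))

def f1(pred, gold):
    p, g = normalize(pred).split(), normalize(gold).split()
    overlap = sum((Counter(p) & Counter(g)).values())
    if overlap == 0:
        return 0.0
    prec, rec = overlap / len(p), overlap / len(g)
    return 2 * prec * rec / (prec + rec)
```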
§ IMPLEMENTATION DETAILS AND DISCUSSION
§.§ Definition of Distance
“Distance" in Section 3.2 and Figure <ref> refers to the number of tokens between the tail entity and the head entity in text, for example, between the “company" field and the person's name in the biography. The distance is calculated by subtracting the average position of the tokens in the tail entity and the average position of the head entity. In Figure <ref>, the x-axis marks the information fields with increasing distance. The figure demonstrates a pattern whereby the further away relevant information resides from the name tokens, the lower the model's accuracy in assimilating this information.
§.§ Non-obvious but Important Clues
In the synthetic biography text, the head entity is an important clue that is obvious to humans, but not to language models, as seen in their low attention to the head entity (shown in Figure <ref>). It is not easy for language models to notice the head entity, likely due to the long distance and the presence of other distractions. We observed that even large language models pay only limited attention to such non-obvious but important clues, which explains why our method improves even large models such as LLaMA 3 70B on both the synthetic and the real-world dataset.
§.§ Generalization
The results in Table <ref> and Table <ref> demonstrate that our approach generalizes across different forms of the dataset. We will further show that our method is not confined to certain positions (for example, the end of the sentence) or certain tokens (for example, prepositions).
While we visualize attention at preposition tokens to better illustrate the idea, we show that using attention differences on all tokens can enhance knowledge learning (on the Wikipedia data) without depending on prepositions. For the biography data, when using attention differences on the information fields instead of just on the prepositions, our method achieves similar performance improvements on LLaMA 2 7B in Table <ref>.
§.§ Computational Costs
Since our approach uses a large language model and increases the amount of data during augmentation, the cost of model training may increase accordingly. However, we found that the cost increase is often mild in practice.
Firstly, when training a smaller language model, the larger language model is only used for inference and accounts for a small fraction of the computational costs compared to training. Secondly, the increase in training cost is manageable because the model requires significantly fewer epochs to converge with data augmentation. For example, LLaMA 2 70B converges at the 20th epoch without data augmentation, but already at the 6th epoch with data augmentation, as shown in Table <ref>.
§ MORE RESULTS
§.§ Results for other models
We have also run experiments with LLaMA 2 and Gemma models on both the synthetic and the real-world dataset. Results are shown in Table <ref> and Table <ref>; they further support our approach and demonstrate that the method generalizes across different models and data. We additionally introduce two more methods (distance-based dropout and attention difference on all tokens) in these experiments to illustrate the effectiveness and generality of our approach.
In the original synthetic biography dataset, the head entity (the person's name) appears at the beginning of the sentence. For more general scenarios, we also consider a version of the biography dataset, where the head entity (the person's name) appears in a random sentence within the paragraph. For example:
He was born on January 5, 1990. He spent his early years in Melbourne, Australia. Liam Thompson received mentorship and guidance from faculty members at Sorbonne University. He completed his education with a focus on Biomedical Engineering. He had a professional role at the British Museum.
We also implement a distance-based augmentation baseline that drops out tokens based on the distance between the head entity and the tail entity.
We test our method on this dataset and show that it also outperforms the distance-based augmentation baseline, demonstrating that our method adaptively enhances clues at arbitrary positions based on their semantic significance rather than simply enhancing clues that are far away. The results are listed in Table <ref> (LLaMA 2).
§.§ Attention visualization
Based on the principle of simplicity, we show that a simple averaging of attention weights across all layers and heads already demonstrated the main idea of the paper and produces substantial improvements. We believe that analysis of individual layers and heads may lead to more specific and better results, and we leave this for future work.
| Pretrained large language models have shown impressive performance on a wide variety of downstream tasks <cit.>. To achieve good generalization, these models need to be trained on web-scale corpora that are diverse and large enough to capture the complexity of natural language. Unfortunately, it is observed that when training corpora is limited in size or style variation, language models can struggle to generalize the information learned from the corpora <cit.>. This deficiency poses a challenge for injecting knowledge into pretrained language models via continual pretraining (finetuning). In many domains, the available corpora is often limited and knowledge-dense (e.g., in forms of textbooks, manuals, documentations). Such domain text may be difficult to be utilized effectively in finetuning, and the language models may not be able to effectively generalize the domain knowledge to downstream domain tasks.
Not very much is known about the causes of such deficiency in knowledge learning. One likely cause is overfitting to co-occurrence patterns in the limited training text, causing learning of spurious correlations instead of correct factual associations. Another possible reason is the difficulty of capturing long-range dependencies in text, which are crucial for understanding complex relationships. Such deficiency is sometimes a result of intentional design choice in the model architecture, such as the decay of attention weights in the RoPE <cit.> positional encodings.
One possible route to understanding this phenomenon is via the attention module in language models. The attention mechanism is a key component that allows the model to focus on different parts of the input when making predictions. The attention weights are shown to be interpretable and explaining the model's behaviors <cit.>.
Recently, <cit.> show that when predicting factual information, models are less likely to attend to the correct clue if the model does not know about the fact. This implies that for new knowledge unknown to the model, the model may not be able to attend to the correct clue at first, leading to difficulty in associating the correct clue (e.g., the head entity) with the prediction target (the tail entity).
To help language models learn, especially smaller models, a common approach is to use knowledge distillation <cit.> (the teacher-student method) to transfer knowledge from a larger model. Given a learning goal, a more performant language model such as GPT-4 <cit.> is often used to generate training data for the smaller model <cit.>. A main drawback of this approach is that it requires the larger model to already be capable of the task or already have the knowledge. This makes it unsuitable for learning novel knowledge, such as new facts from an evolving domain. Also, it can only help the smaller model learn and cannot help the larger model.
In this paper, we propose a simple method to enhance factual knowledge learning in continual pretraining, with the help of a pair of larger and smaller models. Our method is effective in learning novel facts and can boost the performance of both the larger and smaller models. The main contributions of the paper are as follows:
Attention difference between large and small language models reveals elusive but important clues in text. We show that while large and small language models both show high attention to important and obvious clues in text, large models pay significantly more attention than smaller models to important clues that are less obvious or elusive. Therefore, by contrasting the attention weights of large and small models, we can identify these elusive clues in text that are important for knowledge learning but are often easily overlooked.
Augmenting elusive clues in text boosts knowledge learning in continual pretraining. We show that by using the identified elusive clues as a guide, a token-dropout data augmentation that highlights the elusive clues can significantly boost the model's performance in knowledge learning. We experimented on both synthetic and real-world corpus and show that the proposed method outperforms other forms of data augmentation, and boosting elusive clues universally helps both the large and the small models.
Unlike previous work, which focused on distilling knowledge from large language models to make small language models perform better, our approach improves the performance of large language models not only on synthetic but also on real-world datasets.
To the best of our knowledge, we are the first to analyze the attention discrepancies between large and small models and use them for data augmentation. Prior work has distilled attention patterns from large models to small models, but without analyzing what is being distilled. Unlike distillation, our approach also enhances the performance of large models, which is a novel contribution on our part.
We release the code and data used in this paper for reproducibility and further research[< | §.§ Attention as behavior explanation
It is observed that attention weights in transformer models provide interpretable clues about the model's behavior. For example, attention heads within multi-head attention can spontaneously differentiate into distinct roles <cit.>. Certain heads play a more significant role and affect performance significantly <cit.>. More performant models tend to have attention weights that focus more on key information and features, a possible explanation of their superior performance <cit.>.
Some argue that while attention is somewhat interpretable, its interpretability is not an indicator of model performance <cit.>. There is divided opinion on the extent to which attention weights reflects true model behavior <cit.>. Our study extends these findings by comparing and contrasting attention weights of different models, and show that the difference between attention weights of large and small models can provide important behavioral clues.
§.§ Data augmentation on text
Data augmentation is a critical technique for enhancing robustness and generalization, especially for limited-size datasets. Various data augmentation methods have been proposed, including random editing of sentences <cit.> such as insertion, swapping, and deletion. Synonym replacement methods <cit.> replace words with their synonyms. Contextual augmentation methods <cit.> replace words with other words predicted by a language model for semantic variations. Back-translation <cit.> is another commonly used method that generates augmented data by translating to and then back from another language. More sophisticated methods combine multiple augmentations <cit.>.
Given that attention provides interpretable clues about the model's behavior, <cit.> uses attention weights to find semantically significant words for replacement augmentation. <cit.> uses attention weights to find significant input parts for mixup augmentation <cit.>. We go a step further and show that only augmenting the most significant words is insufficient for challenging knowledge learning scenarios, and augmenting hard-to-notice but important parts of the input boosts the model's performance even better than augmenting the significant parts.
§.§ Teacher-student methods for language models
To enhance the performance of smaller models, knowledge distillation methods have been extensively developed to transfer knowledge from larger models to smaller models <cit.>. Large pretrained language models can be used to generate data for finetuning smaller models to transfer their knowledge and skills, for example, instruction following <cit.> and reasoning ability <cit.>. Distillation from large models is also frequently used to build strong, compact domain- or task-specific models, for example for coding <cit.> and math <cit.>. Our work explores a different way to utilize large models: we find the behavior difference between large and small models and use it to guide the models towards the more difficult parts of the text.
§.§ Continual pretraining of language models
Continual pretraining takes a language model pretrained on a general corpus and continual the pretraining process with a new corpus, typically domain-specific text, to enhance the model's performance on domain tasks. Model acquires new knowledge and ability via continual pretraining, for example, in coding <cit.>, math <cit.>, and medicine <cit.>. We aim at learning new factual knowledge from text via continual pretraining, similar to those in <cit.>. | null | §.§ The biography dataset
We use low-rank adaptation (LoRA) <cit.> to facilitate finetuning of models up to 70 billion parameters. As the corpus size is limited, we use a rank of 16 for the LoRA adapters. Adapters are added to all of the model's weights except for the embedding and the output layer.
We finetune models with the Huggingface's transformer library <cit.> on NVIDIA 4090 GPUs.
We experiment with LLaMA 3 <cit.> and Gemma 2 <cit.> as two families of language models.
For the baselines, we compare the performance of the models after plain finetuning, random (naive) token-dropout, and token-dropout by attention. In addition to random dropout, dropout by attention uses the original attention weights to guide the dropout probabilities, assuming that the model puts more attention on tokens it deems important. Tokens with lower attention weights are dropped out with higher probabilities to enhance the important information, in a similar vein to <cit.>. The dropout probabilities are also calculated using Equation <ref>.
For each experiment, we trained the model from 10 to 30 epochs with learning rates in [5e-5, 1e-3] and selected the model with the best performance. For the augmentation-based methods, we also searched for the best hyperparameters α and β individually for each method. Interestingly, the best hyperparameters for the dropout probabilities happen to be similar for different models and augmentation methods. For each of the augmentation methods, we generate 10 augmented versions of each training example and combine them with the original examples.
Results in Table <ref> show that the proposed token-dropout augmentation based on attention difference significantly outperforms other data augmentation methods. We report QA accuracy on the “university" and the “company" fields as the models have poor performance on these fields under plain finetuning (Table <ref>). We report exact match (EM) accuracy and normalized word-level F1 scores. We can see that while random dropout and dropout by attention improve performance over no data augmentation, our method achieves much more significant improvement. This proves that contrasting attention of large and small language models indeed finds important but elusive clues in text effectively, and amplifying these clues in the input has immediate positive effects on the model's memorization efficiency even for the largest 70B model.
§.§ Real-world dataset
Aside from the biography dataset, we also evaluate the proposed method on Wikipedia text to verify if the method helps knowledge learning on general text. Specifically, we evaluate on the Paragraph-Level Wikipedia Question-Answering dataset <cit.>. We first perform continual pretraining on the Wikipedia text paragraphs (included in the dataset), then evaluate the model's performance on the question-answering data[This is the “closed-book" setting where the model is not allowed to look at the original Wikipedia passage during question answering. It tests the model's ability to memorize factual knowledge during the continual pretraining phase.].
The questions are specifically designed to incorporate coreference dependencies that span multiple sentences in a paragraph, making it a challenging task that tests the model's ability to learn and memorize complex factual associations.
An example paragraph of Wikipedia text from the dataset is as follows:
The 2005 edition of the International ISBN Agency's official manual describes how the 13-digit ISBN check digit is calculated. The ISBN-13 check digit, which is the last digit of the ISBN, must range from 0 to 9 and must be such that the sum of all the thirteen digits, each multiplied by its (integer) weight, alternating between 1 and 3, is a multiple of 10.
An example of the question from the dataset is as follows:
Question: How many digits does the ISBN have?
Answer: 13
Results in Table <ref> show that the proposed method also improves knowledge learning from the Wikipedia text. Naive data augmentation can negatively affect the model's performance, while our method improves the model's memorization efficiency by selectively amplifying difficult and elusive clues. This shows that enhancing the model's focus on important but elusive information in a crucial factor in improving the model's knowledge learning efficiency in pretraining, and our method is generally applicable to different kinds of text. | null | Efficiency of learning factual knowledge in not only crucial for pretraining, but also important for effective continual and lifelong learning in language models. Due to the overfitting and long-range dependency problem, even performant language models can struggle to learn and memorize factual knowledge from limited data. In this work, we show that one of the key factors to improving the model's learning, finding the “elusive" but important clues in text, is already embedded in the model's attention weights. However, such clues are hard to discover by the model itself due to the model's bias towards short-range contexts, but clearly manifests themselves when contrasting the attention between a larger and a smaller model. Based on this discovery, we propose a simple yet effective data augmentation method that leverages the attention difference to guide the dropout of tokens in the input. Our method significantly improves the model's performance in memorizing factual knowledge, and is shown to be effective for different corpora and models. |
http://arxiv.org/abs/2409.17132v1 | 20240925174934 | Complex-Phase, Data-Driven Identification of Grid-Forming Inverter Dynamics | [
"Anna Büttner",
"Hans Würfel",
"Sebastian Liemann",
"Johannes Schiffer",
"Frank Hellmann"
] | eess.SY | [
"eess.SY",
"cs.SY"
] |
Complex-Phase, Data-Driven Identification of Grid-Forming Inverter Dynamics
Anna Büttner, Non Member, IEEE,
Hans Würfel, Non Member, IEEE,
Sebastian Liemann, Member, IEEE,
Johannes Schiffer, Member, IEEE,
and Frank Hellmann, Non Member, IEEE,
A. Büttner, H. Würfel and F. Hellman are with the Potsdam Institute for Climate Impact Research, Potsdam, Germany, e-mail: {buettner,wuerfel,hellmann}@pik-potsdam.de
S. Liemann was with the Technical University Dortmund, Dortmund, Germany, e-mail: [email protected]
J. Schiffer is with the Brandenburg University of Technology Cottbus-Senftenberg and the Fraunhofer IEG, Fraunhofer Institution for Energy Infrastructures and Geothermal Energy Systems, Cottbus, Germany, e-mail: [email protected]
September 28, 2024
===========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
The increasing integration of renewable energy sources (RESs) into power systems requires the deployment of grid-forming inverters to ensure a stable operation. Accurate modeling of these devices is necessary. In this paper, a system identification approach to obtain low-dimensional models of grid-forming inverters is presented. The proposed approach is based on a Hammerstein-Wiener parametrization of the normal-form model. The normal-form is a gray-box model that utilizes complex frequency and phase to capture non-linear inverter dynamics. The model is validated on two well-known control strategies: droop-control and dispatchable virtual oscillators. Simulations and hardware-in-the-loop experiments demonstrate that the normal-form accurately models inverter dynamics across various operating conditions. The approach shows great potential for enhancing the modeling of RES-dominated power systems, especially when component models are unavailable or computationally expensive.
Inverters, Renewable energy sources, System identification, Data-driven modeling
§ INTRODUCTION
Renewable energy sources (RESs) and the power-electronic inverters that connect them to the grid play an increasingly important role in the electric power mix. Today, most inverters utilize grid-following strategies and require an external grid for synchronization. These inverters face challenges in maintaining grid stability under low shares of synchronous generation <cit.>. In contrast, grid-forming inverters can independently offer functionalities that have traditionally been provided by synchronous generators, including frequency and voltage control <cit.>.
Grid-forming inverters are vital for the safe operation of RES-dominated systems, which has led scientists and transmission grid operators to prioritize the development and integration of grid-forming inverters <cit.>. Given their significance in RES-dominated power grids, grid-forming inverters must be modeled appropriately.
Commercially available inverters often have to be treated as black boxes, as manufacturers typically do not disclose the details of their internal design. Without access to models, there are limited insights into how these devices behave in interconnected systems. Operating grid-forming inverters without this knowledge introduces risks and makes it harder to anticipate and mitigate potential failures.
Another challenge is that existing, disclosed models tend to be computationally intensive due to detailed cascaded control structures. The computational effort limits the number of fault scenarios that can be analyzed, rendering these models impractical for control-room applications such as dynamic stability assessment tools <cit.>. Therefore, identifying low-dimensional models from data is crucial. These models may enable stability assessments without comprehensive knowledge of the underlying design and with feasible computational effort.
Inspired by the recent success of the complex frequency concept <cit.>, the approach presented here is based on the complex frequency and complex phase, both of which will be introduced in Section <ref>. The normal-form of grid-forming inverters <cit.>, which employs complex frequency and phase, will be used to identify low-dimensional models. The normal-form is a theory-driven gray-box model class that captures the crucial non-linearities required for describing grid-forming inverter dynamics in a unified form. In this work, we utilize a Hammerstein-Wiener (H-W) parametrization of the normal-form, characterized by nonlinear input and output transformations, with a linear subsystem that determines the system dynamics. The model is introduced in Section <ref>.
The focus of this paper is two-fold. First, we introduce a novel system identification pipeline, detailed in Section <ref>. Second, we utilize this pipeline to identify the H-W normal-form for two types of grid-forming inverters: those based on droop-control <cit.> and those based on the dispatchable virtual oscillator (dVOC) <cit.>. These inverters are described in detail in Section <ref>. The identification of the dVOC-inverter is based on a Simulink EMT-model introduced in <cit.>. The droop-controlled inverter is identified based on experimental data measured in a power hardware-in-the-loop laboratory <cit.>.
The data-collection process and the data requirements are outlined in section <ref>. The major advantage of the approach introduced here is that it is purely data-driven, enabling model identification for inverters without available models, such as the laboratory inverter.
While this paper focuses on identifying models for grid-forming inverters, the approach can also be applied to other grid-forming components, such as static synchronous compensators with appropriate control strategies.
The results are presented in Section <ref>. It has been demonstrated that the normal-form succeeds at describing the slow dynamics of both inverters. The ability of the normal-form to capture the inverter dynamics across various operating conditions is showcased. The normal-form has also been implemented in Simulink to study its performance in a closed-loop set-up. Finally, Section <ref> discusses the limitations and potential solutions of the approach.
§ THEORETICAL BACKGROUND
§.§ Notation
A three-phase voltage signal V_a, V_b, V_c is considered balanced if it satisfies V_a + V_b + V_c = 0 at all times. Using the Park transformation <cit.>, such balanced signals can be represented in a co-rotating 2D reference frame, known as dq-coordinates. In this paper, we consider all voltages and currents to be balanced and thus transform all abc-signals into a common global dq-frame rotating at constant synchronous frequency.
The dq-voltage is represented as a complex number with phase φ and magnitude V ≥ 0, i.e.,
v(t) = v_d(t) + j v_q(t) = V(t) e^j φ(t).
Currents are transformed analogously to voltages. The complex power S is defined as:
S = P + j Q = v i^*
where P and Q are the active and reactive power respectively, and i^* is the complex conjugate of the current i.
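For illustration, the transformation of a balanced abc signal into the global dq-frame and the resulting complex power can be sketched as below; the amplitude-invariant scaling, the 50 Hz synchronous frequency, and the 10 kHz sampling rate are assumptions of this sketch, since conventions and set-ups differ:

```python
import numpy as np

f_s, f_grid = 10_000, 50                       # sampling and grid frequency (assumed)
t = np.arange(0.0, 0.1, 1.0 / f_s)
V_peak = 230 * np.sqrt(2)
va = V_peak * np.cos(2 * np.pi * f_grid * t)
vb = V_peak * np.cos(2 * np.pi * f_grid * t - 2 * np.pi / 3)
vc = V_peak * np.cos(2 * np.pi * f_grid * t + 2 * np.pi / 3)

def abc_to_dq(xa, xb, xc, t, omega_s):
    """Amplitude-invariant space vector rotated into a global dq-frame."""
    a = np.exp(2j * np.pi / 3)
    return (2.0 / 3.0) * (xa + a * xb + a**2 * xc) * np.exp(-1j * omega_s * t)

v = abc_to_dq(va, vb, vc, t, 2 * np.pi * f_grid)   # constant complex phasor
# With a current i obtained the same way, the complex power is S = v * np.conj(i).
```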
§.§ System Identification
System identification refers to the process of developing mathematical models of dynamical target systems based on measured data. In this study, the target systems are grid-forming inverters. System identification always deals with input-output relationships. Inputs are the external signals applied to the target system, while outputs are the measured responses resulting from these inputs. In this work, we utilize gray-box identification. Gray-box modeling leverages partial knowledge of the system behavior and completes the model using empirical data, e.g. extracting model parameters from measurements.
Selecting an appropriate model necessitates an understanding of the underlying target system. Grid-forming inverters control the voltage and frequency at their output. Hence, fundamentally, we assume that grid-forming inverters can be modeled as controllable voltage sources. The following generic model can emulate the behavior of such a controllable voltage source:
ẋ(t) = h^x(x, i),
v(t) = h^v(x),
where v and i are the inverter dq-voltages and currents, respectively, x is a set of internal model states and h^x and h^v are the, possibly, non-linear mappings to be learned.
From an engineering perspective, a natural consideration for inputs and outputs for the system identification of (<ref>) would be the current i and voltage v. However, the relationship between voltages and currents in grid-forming inverters is non-linear and non-stationary, which complicates the identification process. Motivated by the modeling approach for grid-forming actors given in <cit.>, we address this issue by using a different set of inputs and outputs that, based on the results reported in this paper, seems to be more suitable for system identification.
§.§ Complex Phase and Frequency
The currently accepted definition of frequency in power grids is meaningful only when the voltage magnitude is constant. Otherwise, this definition fails to separate the effects of variations in phase angle and voltage magnitude. The concept of complex frequency, introduced by F. Milano in <cit.>, addresses this limitation by incorporating the dynamics of both the phase angle φ and the voltage magnitude V.
The complex frequency can be derived by expressing the complex dq-voltage v(t), as defined in equation (<ref>), as
v(t) = e^ln(V) + j φ = e^Θ with
η = Θ̇ = ρ + j ω ,
where the imaginary part of the complex frequency, ω, represents the angular frequency, and the real part, ρ, denotes the relative rate of change of the voltage magnitude. Here, Θ is referred to as the complex phase. The complex frequency framework provides a definition of frequency that applies to all balanced conditions, including transients. The complex frequency concept has already been applied in various contexts, including inertia estimation of virtual power plants <cit.>.
In this paper, we employ a universal modeling approach for grid-forming devices based on complex frequency and phase to model inverter dynamics.
While both the complex phase and the dq-voltages contain identical information, the complex phase is more suitable for identification purposes as it separates phase and magnitude variations. In Fig. <ref> we compare exemplarily dq-voltage and complex phase transients. The d-component and phase angle φ exhibit similar behavior. The q-component and voltage logarithm ln(V) both display small, rapid oscillations following a fault. However, the q-component is additionally overlaid with a slower signal, which complicates the identification.
While the separation of magnitude and phase dynamics is also achieved by writing the voltage in polar coordinates it is not well suited for the modeling approach, which will be introduced in the following section.
§.§ Normal-Form of Grid-Forming Devices
Kogler et al. <cit.> introduced the normal-form, a technology-neutral, gray-box model of grid-forming devices. The normal-form is derived by employing a set of natural assumptions. For an in-depth introduction to these assumptions and the derivation of the model, readers are referred to the original publication <cit.>.
The main result of the normal-form approach is that the dynamics of the complex phase only depend on the active and reactive power P and Q, calculated via equation (<ref>), the squared voltage magnitude ν, and a set of internal model states x_c. The quantities P, Q, and ν are considered in terms of their error coordinates e relative to the set-points P^s, Q^s, and ν^s:
e = h^e(v, i, P^s, Q^s, ν^s)
  = (Re( v i^*), Im(v i^*), |v|^2)^T - (P^s, Q^s, ν^s)^T .
This yields the following most universal form of the normal-form model:
ẋ_c(t) = g(e, x_c) ,
η(t) = f(e, x_c) ,
Θ̇(t) = η ,
v(t) = e^Θ ,
where f and g are continuous, non-linear functions. If f and g are linear, the normal-form exhibits a Hammerstein-Wiener (H-W) structure. H-W models are characterized by a static non-linearity at the input, followed by a linear subsystem that defines the system dynamics, and ending with a non-linearity that computes the output <cit.>. In the normal-form, the input non-linearity is given by equation (<ref>). The output non-linearity is defined by the computation of the dq-voltage via the complex phase (<ref>). The H-W normal form is defined as:
e = h^e(v, i, P^s, Q^s, ν^s) ,
ẋ_c = A x_c + B e ,
η = C x_c + D e ,
Θ̇ = η ,
v = e^Θ .
Note that the matrices A and B are real in this formulation, while C and D are complex. The structure of the H-W normal-form is summarized in Fig. <ref>. In <cit.>, it was analytically demonstrated that both dVOC and droop-controlled inverters can be approximated by a H-W normal form. This H-W version is used in the subsequent analysis, and whenever the normal-form is mentioned, it refers to system (<ref>). To account for the relationship between the inputs e and the voltage v, the identification will be based on the complex phase Θ, as detailed in section <ref>.
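For illustration, a forward simulation of the H-W normal-form for a given input sequence can be sketched as follows; forward-Euler integration and the assumed array shapes are simplifications of this sketch:

```python
import numpy as np

def simulate_normal_form(A, B, C, D, e, theta0, dt):
    """Simulate ẋ_c = A x_c + B e, η = C x_c + D e, Θ̇ = η, v = exp(Θ).

    A: (n, n) real, B: (n, 3) real, C: (n,) complex, D: (3,) complex,
    e: (n_steps, 3) error coordinates, theta0: initial complex phase.
    """
    x = np.zeros(A.shape[0])            # internal states x_c
    theta = complex(theta0)             # complex phase Θ
    v = np.empty(len(e), dtype=complex)
    for k, ek in enumerate(e):
        v[k] = np.exp(theta)            # output voltage v = e^Θ
        eta = C @ x + D @ ek            # complex frequency η
        x = x + dt * (A @ x + B @ ek)   # integrate internal states
        theta = theta + dt * eta        # integrate the complex frequency
    return v
```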
In the normal-form, any structural differences between grid-forming inverters are encapsulated in the parameter matrices A, B, C, and D. These parameters can be identified from simulated or measured data. A data-driven normal-form for a simple example has been derived in <cit.>, demonstrating the approach's potential. The number of internal states n_ivars of the normal-form is variable and can be adjusted according to the complexity of the inverter dynamics.
§ DEVICES UNDER TEST
To validate the introduced modeling scheme, we find the parameter matrices for two, prominent, independent, grid-forming inverters based on recorded input-output data and assess the prediction performance of the normal-form models.
§.§ Dispatchable Virtual Oscillator
We identify the normal-form parameters from time series obtained from an EMT simulation to validate the approach in a simulation setting.
The simulation model consists of a single grid-forming converter using the dVOC control scheme <cit.>.
The Simulink implementation is based on the simulation files from <cit.>.
In this model, the power electronics are modeled as an ideal voltage source, i.e. the voltage is smooth, and discrete switching events are not considered.
The power electronics are connected to the grid via an LC-output filter.
The dVOC algorithm acts as the outer loop providing a voltage reference to the inner loop controllers.
The inner loop consists of two cascaded PI controllers in a local dq-frame, which control the inner filter current and voltage.
The structure of the overall system is shown in Fig. <ref>.
For data collection, the inverter is connected to an infinite bus or a resistive load, depending on the scenario.
The models are available online <cit.>, and details on the control design and model parameters can be found in the publications <cit.>.
§.§ Droop-Controlled Inverter
To validate the system identification pipeline on real data, we gathered experimental data from a grid-forming inverter in the power hardware-in-the-loop laboratory at BTU Cottbus-Senftenberg (see Fig. <ref>).
The device under test is a three-phase power converter with a rating of 15 kW at a voltage level of 400 V phase-to-phase. The implemented grid-forming control strategy is a standard droop control scheme <cit.>, where the desired voltage magnitude V and angle δ are given by
δ̇ = ω^s + K_P(P^s - P^m) and
V = V^s + K_Q(Q^s - Q^m),
where P^m and Q^m are the low-pass-filtered measurements of the active and reactive power P and Q:
τ_P Ṗ^m = P - P^m and τ_Q Q̇^m = Q - Q^m .
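A discretized sketch of this outer droop loop (forward Euler, with hypothetical parameter names) could read:

```python
def droop_step(delta, Pm, Qm, P, Q, p, dt):
    """One step of the outer droop loop with first-order power filtering."""
    Pm += dt / p["tau_P"] * (P - Pm)                          # low-pass filter of P
    Qm += dt / p["tau_Q"] * (Q - Qm)                          # low-pass filter of Q
    delta += dt * (p["omega_s"] + p["K_P"] * (p["P_s"] - Pm))  # frequency droop
    V = p["V_s"] + p["K_Q"] * (p["Q_s"] - Qm)                  # voltage droop
    return delta, V, Pm, Qm
```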
The inner-loop voltage controller is a two-level cascaded PI-controller in dq-frame with additional resonant terms in the innermost control loop to dampen harmonics in the output.
The structure of the control loops is presented in Fig. <ref>. More details on the laboratory and the inner control loops can be found in <cit.>.
§ PARAMETER IDENTIFICATION
In the following, the system identification pipeline will be introduced. Firstly, the data has to be collected and will then be preprocessed. The identification will be initialized as detailed in sections <ref>. The actual system identification, that is based on an optimization, is outlined in section <ref>.
§.§ Data Collection
To perform the identification, data of the excited system needs to be collected. The data collection is specifically tailored to the output, the complex phase, which will facilitate the identification process. The complex phase separates phase and magnitude responses. Consequently, all scenarios are designed to excite both phase and magnitude dynamics, which generates an output with a transient response that is suitable for identification.
For that, a grid-forming inverter is connected to a stiff voltage source. In the EMT simulations, a slack bus has been employed, while in the laboratory a more powerful inverter, that emulates a stiff external grid, has been used. To identify the normal-form parameters, three different scenarios have been measured:
* Step changes of the voltage magnitude of the external grid voltage,
* Step changes of the frequency of the external grid voltage and
* Rapid small changes of both the magnitude and frequency of the external grid voltage.
For each of the scenarios, the output voltages and currents are recorded. The two step-change scenarios 1) and 2) have been recorded to capture the inverter's ability to reach one stationary state when starting from another. The small changes in scenario 3) have been recorded to study the inverter's response to rapid changes in the external grid without reaching a steady-state between the changes. We found that these three scenarios are sufficient to capture the inverter dynamics fully.
The three scenarios recorded here are similar to those used to assess whether an inverter is grid-forming, according to the guidelines introduced in <cit.>. These guidelines are supported by the German transmission system operators <cit.>.
The data-sets have been partitioned into three categories using a so-called train-validation-test split, a strategy that is commonly employed in the machine learning community. The training set, constituting 70% of the total data, is utilized to identify the model parameters. The validation set is used to choose the best parameter configuration. The parameter configuration is selected based on the R^2 score, which we will discuss in section <ref>, on the validation set. This mitigates over-fitting on the training data. The validation data-set makes up 20% of the total data. Subsequently, the test-set evaluates the final performance of the best model, chosen via the validation set, and ensures the model's ability to generalize to unseen data. This data-set comprises 10% of the total data. While all three subsets contain different time series, they all include the scenarios introduced above.
To further challenge the model, a so-called out-of-distribution set has been recorded that differs significantly from the training, validation, and test data. This task is important because the responses in real-world power grids can vary widely, and it will never be feasible to include all scenarios that may occur in a power grid in the training data. Hence, a reduced model can only become a viable alternative if it can be shown to perform well on unseen scenarios.
In the simulation, the converter is connected to a resistive load instead of the slack, thus acting as a truly grid-forming component. To disturb the system, the resistance of the load decreases multiple times. Therefore, the system is not in power balance anymore, and the frequency drops.
In the laboratory, two droop-controlled grid-forming inverters and a constant power load are connected in a micro-grid, which is connected to an auxiliary grid, as shown in figure <ref>. During normal operation, the micro-grid imports active power from the auxiliary grid. In a discrete event, the connection to the auxiliary grid is cut and the grid-forming actors have to compensate for the power loss, which results in a slight frequency drop of the micro-grid.
§.§ Data Pre-Processing
Before the identification can be performed, the collected data must be preprocessed to fit the normal-form approach.
Firstly, the obtained abc time series of the output current and voltage are transformed into a global dq-frame rotating with the same synchronous frequency.
Then, the measured active power P, reactive power Q, and squared voltage magnitude |v|^2 are computed from the dq signals via the non-linear input transformation (<ref>). The difference between the setpoints and the computed values serves as the input e for the LTI system (<ref>). The data is down-sampled to a sampling interval of t_0 = 1ms to reduce the computational load, as the collected data contains redundant information, especially as we primarily aim to capture slow dynamics that occur on time scales slower than ∼20ms.
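A minimal sketch of this pre-processing chain (our own illustration, not the original tooling) could look as follows; the amplitude-invariant Park transformation, the 3/2 power convention and all variable names are illustrative assumptions.

```python
import numpy as np

def abc_to_dq(x_a, x_b, x_c, theta):
    """Amplitude-invariant Park transformation into a global dq-frame with angle theta."""
    d = 2.0 / 3.0 * (x_a * np.cos(theta)
                     + x_b * np.cos(theta - 2.0 * np.pi / 3.0)
                     + x_c * np.cos(theta + 2.0 * np.pi / 3.0))
    q = -2.0 / 3.0 * (x_a * np.sin(theta)
                      + x_b * np.sin(theta - 2.0 * np.pi / 3.0)
                      + x_c * np.sin(theta + 2.0 * np.pi / 3.0))
    return d, q

def error_inputs(v_d, v_q, i_d, i_q, p_set, q_set, v2_set):
    """Setpoint errors of active power, reactive power and squared voltage magnitude."""
    p = 1.5 * (v_d * i_d + v_q * i_q)      # active power (3/2 convention)
    q = 1.5 * (v_q * i_d - v_d * i_q)      # reactive power
    v2 = v_d ** 2 + v_q ** 2               # squared voltage magnitude
    return p_set - p, q_set - q, v2_set - v2

def downsample(x, t, dt_target=1e-3):
    """Down-sample a uniformly sampled signal to the 1 ms target interval."""
    step = max(1, int(round(dt_target / (t[1] - t[0]))))
    return x[::step], t[::step]
```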
§.§ Parameter Initialization
A good initial guess for the parameter matrices of the linear subsystems in the H-W models is beneficial to avoid getting trapped in a sub-optimal local minimum during the identification <cit.>. The inputs of the linear subsystem are the error coordinates e(t) and the output is the complex frequency η. It is possible to apply classical system identification tools to find the system matrices capable of generating the desired complex frequency. We utilized the implementation of the subspace-identification algorithm <cit.> available in the library to calculate the initial parameter matrices A^0, B^0, C^0, and D^0.
The subspace identification algorithm <cit.> assumes no feedback from the output back to the input, leading to errors in the predicted complex frequency. Any errors present in the predicted complex frequency accumulate over time during integration, resulting in inaccurate predictions of the dq-voltages. A subspace identification algorithm that accounts for feedback present in our data could be employed, but we have not pursued this, as our goal is to predict the voltages rather than the complex frequency. Measuring the complex frequency η directly in the laboratory is impossible and calculating η is prone to errors if the measured signals are noisy, as derivatives are inherently sensitive to noise. Therefore, we always focus on predicting the measured voltages. Nevertheless, the initial guess provided by the subspace identification algorithm has proven beneficial, as it allows the optimizer to start from a linearly stable normal-form model.
§.§ Parameter Optimization
The system identification is based on the optimization of the squared l_2-norm between the measured complex phase ξ^m and the predicted complex phase ξ^pred:
l = ∑_k | ξ^m(t_k) - ξ^pred(t_k) |^2 ,
where k runs over all time points in the time series. For each time step k, the LTI system predicts the complex frequency η^pred(t_k+1) based on the error coordinates e(t_k). The complex phase ξ^pred(t_k+1) is calculated by integrating the predicted complex frequency.
The parameter matrices A, B, C, and D are adjusted to minimize the loss l. For the optimization, the package <cit.> and the Julia implementation of the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm <cit.> have been employed. The full identification work-flow is summarized in Fig. <ref>.
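As an illustration of this work-flow, the following Python/SciPy sketch mirrors the loss above: simulate the LTI subsystem on the error inputs, integrate the predicted complex frequency into the complex phase by forward Euler, and minimize the squared error with BFGS. The dimensions, the discretization scheme and the flat parameter vector are illustrative assumptions; the actual identification uses the Julia tool-chain described in the text.

```python
import numpy as np
from scipy.optimize import minimize

N_X, N_U = 4, 3      # number of internal states and of error inputs (illustrative choices)
DT = 1e-3            # 1 ms sampling interval

def unpack(theta):
    """Reshape the flat real parameter vector into the matrices A, B, C, D."""
    sizes = [N_X * N_X, N_X * N_U, 2 * N_X, 2 * N_U]
    a, b, c, d = np.split(theta, np.cumsum(sizes)[:-1])
    return (a.reshape(N_X, N_X), b.reshape(N_X, N_U),
            c.reshape(2, N_X), d.reshape(2, N_U))

def predict_phase(theta, e, xi0):
    """Simulate the LTI subsystem and integrate the complex frequency into the complex phase."""
    A, B, C, D = unpack(theta)
    x = np.zeros(N_X)
    xi = np.empty(len(e), dtype=complex)
    xi[0] = xi0
    for k in range(len(e) - 1):
        out = C @ x + D @ e[k]              # two real outputs ...
        eta = out[0] + 1j * out[1]          # ... interpreted as the complex frequency
        xi[k + 1] = xi[k] + DT * eta        # forward-Euler integration of the complex phase
        x = x + DT * (A @ x + B @ e[k])     # explicit Euler step of the LTI state
    return xi

def loss(theta, e, xi_meas):
    """Squared l2-norm between measured and predicted complex phase."""
    return np.sum(np.abs(xi_meas - predict_phase(theta, e, xi_meas[0])) ** 2)

# result = minimize(loss, theta0, args=(e, xi_meas), method="BFGS")
```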
The inputs e^m for the normal-form are not updated during the identification process, which is the standard procedure for system identification. The inputs depend on the voltages and currents. Consequently, after predicting the voltage v^pred, updating the currents and the inputs e is necessary for a closed-loop identification. While fixing e may reduce the accuracy, it offers a significant advantage. This approach enables the identification of inverters where no model is available, for example, the laboratory inverter, introduced in section <ref>. In section <ref>, the normal-form has been implemented in Simulink to test the closed-loop performance, continuously updating the inputs according to the state of the network. This ensures that the identified model is truly accurate.
§.§ Performance Measures
To evaluate the model performance, the R^2 score, or coefficient of determination, is used. The R^2 score is defined as:
R^2 = 1 - ∑_i (y_i - f_i)^2 / ∑_i (y_i - ȳ)^2,
where y_i and f_i represent the observed data points and the corresponding predicted values, respectively. ȳ is the mean of all observed data points. The R^2 score ranges from 0 to 1, with 1 indicating a perfect fit and 0 signifying that the model predicts only the mean ȳ. In this study, we will consider the R^2 scores of the predicted dq voltages.
It is important to note that the R^2 score can only be used to compare different time series with caution, as the mean ȳ will vary for each scenario, leading to different R^2 scores even if the fit quality is similar. In this work, the R^2 score will be used primarily to compare the performance of models with different numbers of internal states. A detailed analysis of the measured and predicted voltage transients will be conducted to assess the model performance comprehensively.
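For reference, the score is a direct transcription of the definition above, computed separately for the predicted d- and q-voltage components:

```python
import numpy as np

def r2_score(y, f):
    """Coefficient of determination between observed values y and predictions f."""
    ss_res = np.sum((y - f) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot
```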
§ RESULTS
§.§ EMT-Simulations
In the subsequent analysis, we present the results for an identified normal-form with four internal variables (n_ivars = 4), as Fig. <ref> suggests that at least four internal variables are necessary to model the dynamics of the d-component, while the q-component is already captured with a single internal variable. Fig. <ref> illustrates the results for the third scenario within the test data-set, which involves rapid small changes in both magnitude and frequency. The normal-form successfully captures the voltage dynamics of the inverter in this scenario. The results for the other two scenarios are equally successful, which demonstrates the accuracy of the normal-form model across different operating conditions.
As detailed in section <ref>, an out-of-distribution scenario has been collected to assess the ability of the normal-form to predict unforeseen scenarios. Fig. <ref> presents a comparison between the predicted and measured dq -voltage of the out-of-distribution data-set. It reveals a close match between the measured and predicted voltages. It can be seen that the slow transients as well as the fast transients are captured by the normal-form.
To study the performance with respect to the model complexity, the parameters of the normal-form have been identified for one up to ten internal variables n_ivars. The results are depicted in Fig. <ref> for both the training and validation data-sets. Beyond n_ivars = 4, only marginal improvements of the fit are observed, which suggests that the returns of increasing the number of internal variables are diminishing. To underline this, Fig. <ref> shows a comparison between a normal-form with three and four internal variables. It shows voltage transients during a rapid change in slack frequency and magnitude. The normal-form with three internal variables fails to model the d-component with a high degree of accuracy, while the model with four internal variables successfully captures the dynamics.
To further validate the approach, the normal-form has been implemented in Simulink to test the model in a closed-loop configuration. As detailed in section <ref>, the input y remained static during the simulation in previous sections, which is sufficient for training and validation. It is vital to ensure the normal-form performance within a closed-loop context to establish its reliability as an alternative.
Fig. <ref> illustrates the performance of the closed-loop normal-form in Simulink on the out-of-distribution scenario. In contrast to the open loop performance, depicted in Fig. <ref>, it is evident that the models with 1-2 internal variables exhibit a significantly improved performance once the back reaction of the network is considered. The performance remains consistent for models with 3-7 internal variables. However, for 8-10 internal variables, the performance of the models decreases. This loss of performance is likely a result of over-fitting, which may occur for models with a higher number of parameters <cit.>.
Fig. <ref> illustrates a comparison between the closed-loop normal-form with four internal variables and the full dVOC on the out-of-distribution scenario. Notably, both the dVOC and the closed-loop normal-form yield similar voltage transients. The closed-loop normal-form captures the slow transients. However, the fast responses that have been captured by the open-loop model are not fully captured. This phenomenon will be discussed in more detail in the limitations section <ref>.
In summary, the normal-form has demonstrated good results in modeling the slow dynamics of the dVOC, whether in an open- or closed-loop configuration.
§.§ Laboratory Experiments
For the laboratory experiments, the same procedures as for the simulation data have been used. To emphasize the effectiveness of our approach, we will concentrate on the normal-form with a single internal variable in the following analysis.
The left pane of Fig. <ref> illustrates the results for the third scenario in the test data-set, where the normal-form successfully captures the voltage dynamics of the inverter. The model performs equally well in the first scenario, which involves step changes in the external grid's voltage magnitude. In the second scenario, involving step changes in the external grid's frequency, the normal-form shows slightly less accuracy, particularly in capturing the harmonics of the d-component, which can be seen in the right pane of Fig. <ref>. Despite this, the overall results remain satisfactory. A more detailed discussion on harmonics will be presented in section <ref>.
Fig. <ref> depicts the predicted and measured dq-voltage during the islanding process, which is the out-of-distribution task for the laboratory set-up. Remarkably, the normal-form accurately predicts the dq-voltage transients, underscoring its generalization capabilities.
Fig. <ref> illustrates the model performance with respect to the number of internal variables. There appears to be minimal dependence on n_ivars within the data-set. Notably, only a single internal variable is needed to describe the slow dynamics of the laboratory inverter.
In summary, the normal-form exhibits good results for the laboratory experiments. Since an exact model of the laboratory components is unavailable, testing the closed-loop performance of the inverter was not possible. Once access to a laboratory with a digital twin of all components is available, the closed-loop performance of a laboratory inverter will be studied in subsequent publications.
§.§ Limitations
In the following, we will focus on two phenomena that are currently not fully captured by either the identification process or the normal-form approach: harmonics and electromagnetic transients.
§.§.§ Electromagnetic Phenomena
For the dVOC, short voltage drops immediately after a load step can be observed in the dq-components, as we can see in Fig. <ref>.
The load and, thus, the output power of the inverter increases nearly instantaneously. However, the inner current is actively controlled by the cascaded inner loops and takes some time to match the output power. This short imbalance leads to a voltage drop over the capacitor until the inner-loop control takes effect and stabilizes the output voltage.
As shown in Fig. <ref>, the open-loop normal-form captures these peaks, whereas the closed-loop simulation does not. During those short, highly dynamic events, the feedback between the normal-form output and input in the closed-loop system plays an important role. The normal-form is trained without taking this causality into account and thus cannot predict the correct outcome in this highly dynamic EMT scenario. Hence, this issue does not stem from the normal-form approach itself but rather from the identification process.
To potentially enable the normal-form to capture these electromagnetic transients, a pseudo-closed-loop identification could be implemented. A relatively simple update to the open-loop identification introduced in section <ref> would be to close the loop between the predicted complex phase and the inputs of the normal-form e. This means that the inputs are continuously updated during the transients, which can then be added as an additional term to the loss function (<ref>). This approach would allow the normal-form to learn not only the correct output behavior but also how the inputs have manifested. This approach still does not require a model of the ambient power system and thus remains data-driven. We believe this approach may enable the capturing of EMT effects; however, further research is required.
§.§.§ Harmonic Resonances
This section investigates the capability of the normal-form to model harmonic resonances. In real-world power systems (for example the laboratory experiments), harmonic resonances of the fundamental frequency appear due to the non-linear behavior of certain components, e.g. the discrete switching behavior of the power electronics.
Studies have verified that the extensive use of power-electronic inverters can increase harmonic pollution in power systems <cit.>. The non-linear dynamics of these inverters significantly contribute to the harmonic response. While linear time-invariant (LTI) models can identify the amplitude and phase shifts introduced by the inverter at each harmonic frequency <cit.>, they are limited by their inability to generate frequency components not present in the input. Hence, they can not describe harmonics that are induced by the inverter dynamics themselves. Instead, linear time-periodic (LTP) systems could be employed, as demonstrated in <cit.>.
The normal-form, however, is not a pure LTI system. The relation between the non-linear voltage dynamics, given by equation (<ref>), and the harmonic response is not well understood. Despite this, we anticipate that the normal-form should exhibit similar capabilities as pure LTI-models. Thus, amplitude- and phase-shifts should be captured by the normal-form while the harmonics induced by inverter dynamics should not be covered.
This study examines the harmonics observed in the voltage magnitude |v|. Scenario three of the test data-set, which involves periodic changes in the slack voltage magnitude and frequency, has been employed for this. Fig. <ref> compares the power spectra of the laboratory measurements and the normal-form prediction. The measured signal indicates that harmonics of the 50Hz fundamental frequency are clearly present in the data. The high power visible in the low-frequency range is attributed to the slack changes occurring every second.
In Fig. <ref>, the power spectra of the voltage magnitude of a normal-form with one and ten internal variables are compared. It is evident that the model with one internal variable fails to capture harmonics above 150Hz and underestimates the power of all harmonic components. Although the performance improves slightly with ten internal variables, it is clear that merely increasing the number of internal variables is insufficient.
In contrast to the EMT-phenomena, the limitation here is given by the modeling approach and not by the identification method. Harmonics introduced by the switching do not satisfy the assumptions underlying the normal-form analysis. Further research on a normal-form approach that includes harmonics is required.
§ CONCLUSION AND OUTLOOK
Renewable energy sources (RESs) and the power-electronic inverters that integrate them into the grid are becoming increasingly significant in the generation mix. Grid-forming inverters are particularly crucial for the stable operation of RES-dominated systems which necessitates accurate modeling of these devices.
Despite extensive research, real-life inverters are often treated as black boxes due to undisclosed internal design choices. Existing models, though detailed, are computationally intensive and thus impractical for control-room applications such as dynamic stability assessment tools <cit.>. Hence, low-dimensional models derived from data are essential to enable efficient stability assessments.
Inspired by the success of the complex frequency concept <cit.>, in this paper we utilize the normal-form model <cit.>, which incorporates complex frequency and phase, to identify low-dimensional models of grid-forming inverters. This theory-driven gray-box model captures the necessary nonlinearities for accurately describing inverter dynamics.
We have introduced an identification pipeline that is based on the optimization of the L^2-norm of the difference between measured and predicted complex phases. This method yielded significantly more robust results compared to using the dq-voltages. Although both complex phase and dq-voltages contain the same information, the complex frequency separates phase angle and magnitude dynamics, which facilitates the identification process. The data collection involves connecting a grid-forming inverter to a stiff voltage source and recording the inverter bus voltages and currents under three different scenarios.
The normal-form model and the accompanying identification process have been validated on two common grid-forming inverter strategies: droop-control <cit.> and dispatchable virtual oscillators (dVOC) <cit.>. The dVOC was simulated in Simulink, while the droop-controlled inverter was tested in a power hardware-in-the-loop laboratory.
A normal-form model with four internal variables accurately modeled the dVOC dynamics across all studied data-sets. Both open- and closed-loop normal-form models captured the slow transients of the dVOC, but the closed-loop model struggled with fast EMT phenomena. A pseudo-closed-loop identification, which updates inputs based on predicted outputs, could enhance the model performance in closed-loop simulations without requiring an ambient power system model.
For the droop-controlled inverter, even a normal-form with a single internal variable sufficiently captured the dynamics across all data-sets. However, the normal-form struggled with the representation of harmonics due to its inability to generate frequency components that are not present in the input. Further research is needed to extend the normal-form approach to include harmonic resonances.
In conclusion, the presented results demonstrate the effectiveness of the normal-form approach, and the accompanying identification pipeline, in accurately describing the dynamics of grid-forming inverters. By addressing the challenges of existing computationally intensive models and undisclosed internal design, the normal-form provides a robust, low-dimensional, data-driven alternative.
§ ACKNOWLEDGMENTS
All authors gratefully acknowledge Land Brandenburg for supporting this project by providing resources on the high-performance computer system at the Potsdam Institute for Climate Impact Research. The work was in parts supported by DFG Grant Number KU 837/39-2 / 360332943, BMWK Project OpPoDyn (FKZ 03EI1071A) and BMWK Projekt MARiE (FKZ 03EI4012A). Anna Büttner acknowledges support from the German Academic Scholarship Foundation.
§ DATA AVAILABILITY
Both, the data that supports the findings of this study and the source code used to produce the results are openly available on Zenodo <cit.>.
http://arxiv.org/abs/2409.17628v1 | 20240926082209 | Convolutional Signal Propagation: A Simple Scalable Algorithm for Hypergraphs | [
"Pavel Procházka",
"Marek Dědič",
"Lukáš Bajer"
] | cs.LG | [
"cs.LG"
] |
Convolutional Signal Propagation
P. Procházka et al.
Cisco Systems, Inc., Karlovo náměstí 10, Prague, 120 00, Czech Republic {paprocha,madedic,lubajer}@cisco.com Faculty of Nuclear Sciences and Physical Engineering, Czech Technical University in Prague, Břehová 7, Prague, 110 00, Czech Republic
Convolutional Signal Propagation: A Simple Scalable Algorithm for Hypergraphs
Pavel Procházka1 Marek Dědič1 2 0000-0003-1021-8428 Lukáš Bajer1 0000-0002-9402-6417
September 28, 2024
===========================================================================================
§ ABSTRACT
The last decade has seen the emergence of numerous methods for learning on graphs, particularly Graph Neural Networks (GNNs). These methods, however, are often not directly applicable to more complex structures like bipartite graphs (equivalent to hypergraphs), which represent interactions among two entity types (e.g. a user liking a movie). This paper proposes Convolutional Signal Propagation (CSP), a non-parametric, simple and scalable method that natively operates on bipartite graphs (hypergraphs) and can be implemented with just a few lines of code. After defining CSP, we demonstrate its relationship with well-established methods like label propagation, Naive Bayes, and Hypergraph Convolutional Networks. We evaluate CSP against several reference methods on real-world datasets from multiple domains, focusing on retrieval and classification tasks. Our results show that CSP offers competitive performance while maintaining low computational complexity, making it an ideal first choice as a baseline for hypergraph node classification and retrieval. Moreover, despite operating on hypergraphs, CSP achieves good results in tasks typically not associated with hypergraphs, such as natural language processing.
§ INTRODUCTION
In the modern world, an overwhelming amount of data has an internal structure, oftentimes forming complex networks that can be represented as graphs. Efficiently mining information from this data is crucial for a wide range of applications spanning various domains such as social networks, biology, physics or cybersecurity. Graph Neural Networks (GNNs) have emerged as the dominant tool for handling such data due to their ability to leverage the graph structure for predictive and analytical tasks. However, despite their success, GNNs come with notable challenges, including high computational complexity during training, numerous hyperparameters that require fine-tuning, lack of straightforward interpretability, and the necessity of dedicated computational infrastructure such as GPUs.
Given these limitations, baseline algorithms play a vital role as complementary tools to GNNs. These baselines, often much less complex, provide an efficient way of generating preliminary results. In many cases, these simpler methods are even sufficiently effective to be used as-is for the problem at hand.
Section <ref> introduces the problem being solved in this work and Section <ref> provides an overview of other related works. Section <ref> is the most substantive part of this paper, introducing the CSP method (Section <ref>). In Section <ref>, we show CSP to be a straightforward extension of the well-known label propagation method to hypergraphs. Also provided is a comparison of CSP to other methods, such as hypergraph convolutional networks (Section <ref>) and the naive Bayes classifier (Section <ref>). Finally, Section <ref> provides an experimental evaluation and comparison of CSP and several alternative methods.
§ PROBLEM STATEMENT
Assume that we have structured data with relationships that can be translated into a hypergraph or a bipartite graph. These structures can represent various scenarios such as users rating movies, users accessing web domains, emails containing attachments, authors writing papers, papers being co-cited, and tokens being contained in texts. Some data might also come with constraints such as large volume or privacy restrictions, like those found in emails. These structures inherently provide a relationship between entities, enabling many applications to leverage these relationships, such as movie recommendations, mining malicious web content (like emails or domains), or text classification.
Our goal is to find a flexible method that can be applied to tasks involving structured data. This flexibility is sought in terms of performance, numerical complexity, and adaptability. We aim to develop a method that can efficiently handle and extract meaningful information from these complex relationships. Whether we are looking to extract interesting entities, which aligns with a retrieval scenario, or view the task as a simple classification problem, the method should be versatile enough to adapt to these needs.
We can formalize this problem using either a bipartite graph or a hypergraph. By representing the data in these graph structures, we can better understand and utilize the inherent relationships between entities. This formalization allows for the application of graph-based algorithms, which can improve the effectiveness of tasks like classification and retrieval.
§.§ Notations and Definitions
We consider a finite set of items of interest V = {v_1, …, v_n}, referred to as nodes. A family of subsets of V denoted by E = {e_1, …, e_m} ⊆ 2^V is referred to as hyperedges. The nodes and hyperedges together form a hypergraph G = (V, E). The structure of the hypergraph can also be described by an incidence matrix H ∈{0,1}^n× m, where H_i,j = 1 if v_i ∈ e_j, and 0 otherwise. Every hypergraph can alternatively be described by a bipartite incidence graph, also called the Levi graph <cit.>. This bipartite graph B = (V ∪ E, E_B) has as its two partitions the nodes and hyperedges of G, and its edges represent a node in V belonging to a hyperedge in E, formally E_B = {(v_i, e_j) ∈ V × E | v_i ∈ e_j }.
The degree deg( v ) of a node v is defined as the number of hyperedges that contain the node, that is deg( v ) = |{e_i ∈ E | v ∈ e_i }|. Similarly, the degree deg( e ) of the hyperedge e is defined as the number of nodes it contains, i.e. deg( e ) = |e|. We also establish a diagonal node-degree matrix D_V ∈N^n × n with (D_V)_i,i = deg( v_i ) and (D_V)_i,j = 0 for i ≠ j. Analogously, the hyperedge-degree matrix is a diagonal matrix D_E ∈N^m × m with (D_E)_i,i = deg( e_i ) and (D_E)_i,j = 0 for i ≠ j.
We consider for each node in the hypergraph some kind of signal that is to be propagated through the hyperedges. Let the signal be a d-dimensional vector x_i for each node, giving for the whole hypergraph a matrix X ∈R^n × d. In the following parts of this paper, we will explore several ways of defining such a signal, with an overview provided in Section <ref>.
Within this work, we are interested in two transductive tasks on hypergraphs: classification on V and retrieval of positive nodes from V. Both tasks assume a training set of nodes V_train ⊂ V where the labels of nodes are known. In the case of classification, the goal is to predict the label for all nodes in the testing set V ∖ V_train. The retrieval task aims to sort the nodes in the testing set V ∖ V_train such that the number of positive nodes in the top K positions is maximized.
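To make the notation concrete, the incidence matrix H and the degree matrices D_V and D_E can be assembled from a plain (node, hyperedge) edge list. The sketch below uses SciPy sparse matrices and is an illustration rather than part of the reference implementation.

```python
import numpy as np
import scipy.sparse as sp

def build_incidence(node_idx, hyperedge_idx, n, m):
    """Binary incidence matrix H (n x m) from parallel index lists of node/hyperedge memberships."""
    H = sp.csr_matrix((np.ones(len(node_idx)), (node_idx, hyperedge_idx)), shape=(n, m))
    H.data[:] = 1.0                          # collapse repeated (node, hyperedge) pairs to 1
    return H

def degree_matrices(H):
    """Diagonal node-degree matrix D_V and hyperedge-degree matrix D_E."""
    d_v = np.asarray(H.sum(axis=1)).ravel()
    d_e = np.asarray(H.sum(axis=0)).ravel()
    return sp.diags(d_v), sp.diags(d_e)
```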
§ RELATED WORK
Mining information from structured (graph-like) data is one of the central problems in machine learning. The most straightforward way to handle this is to translate the structure into features and apply traditional machine learning techniques, such as logistic regression, random forests, and naive Bayes, to these features. Naive Bayes <cit.>, in particular, provides a bridge to a second large family of learning methods on graphs: Bayesian methods, where the graph forms a structure for modelling random variables. A critical problem associated with Bayesian methods is inference, which is often intractable. The sum product algorithm <cit.> combats the intractability of a system by decomposing it into a product of local functions. One instance of this framework is probabilistic threat propagation <cit.>, which is used for spreading information about malicious actors through the network.
The translation of structured data into features is also a non-trivial problem. While some methods can handle sparse, high-dimensional feature vectors, the majority cannot. Several methods are suited for finding low-dimensional representations of structured data. For example, non-negative matrix factorization <cit.> (NMF) decomposes a large sparse matrix into the product of two low-dimensional matrices. Node2vec <cit.> applies contrastive learning to find node representations where neighboring nodes are close in feature space. Spectral positional encodings <cit.> provide representations given by eigenvectors of the Laplacian matrix, and distance encodings <cit.> offer global information about where a node is located in the graph. However, except for NMF, these methods are suited for graphs and cannot be directly applied to hypergraphs.
There are many papers on learning algorithms for graphs, such as GraphSAGE <cit.>, graph convolutional networks <cit.>, and graph attention networks <cit.>. Nevertheless, their application to hypergraphs is not straightforward. The origins of learning transductive tasks stretch back to the seminal work <cit.>. More recently, Hypergraph neural networks <cit.>, Dynamic HGNNs <cit.>, and HyperGCN <cit.> build upon the convolutional learning schema introduced in <cit.>, while works such as <cit.> aim to bring both convolutional and attention mechanisms to the context of hypergraphs. The proposed method can also be viewed as an extension of label propagation <cit.> or feature propagation <cit.>. While there do exist algorithms for label propagation in hypergraphs <cit.>, the proposed method aims to be comparatively simpler to understand, implement and calculate. The proposed method is an extension of our previous work in the specific domain of computer network security <cit.>.
§ CONVOLUTIONAL SIGNAL PROPAGATION
We present Convolutional Signal Propagation (CSP), a method for signal propagation on hypergraphs. In the following subsections, CSP is first introduced in the general setting, followed by a comparison to established approaches and a discussion of possible variants inspired by them. Finally, applications of CSP to different kinds of signals in hypergraph tasks are discussed.
§.§ Method overview
The proposed algorithm propagates a node signal (see Section <ref> for a discussion of possible signal types) through the hypergraph G. The basic version of CSP consists of a simple averaging of X across the hyperedges and nodes of the graph. This averaging can be repeated to obtain smoother final representations, resulting in a multi-step process generating a sequence of representations X^(l), where X^(0) = X.
In each step, the representation X^(l) of the nodes is first propagated to the hyperedges to obtain their representations
r_j^(l) = 1/deg( e_j ) ∑_v_i ∈ e_j x_i^(l),
that is the average of the representations of the individual nodes contained in the hyperedge. In the second step, this hyperedge representation is propagated again into the nodes:
x_k^(l+1) = 1/deg( v_k ) ∑_e_j : v_k ∈ e_j r_j^(l).
The steps <ref> and <ref> constitute the proposed algorithm, which can be summarily written as
x_k^(l+1) = 1/deg( v_k ) ∑_e_j : v_k ∈ e_j 1/deg( e_j ) ∑_v_i ∈ e_j x_i^(l).
Using the notation established in Section <ref>, Equation <ref> can be rewritten into the matrix form
X^(l+1) = D_V^-1 H D_E^-1 H^T X^(l).
Equation <ref> describes a basic variant of the proposed algorithm. In Section <ref>, various modifications of CSP are discussed.
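A direct transcription of the matrix form with sparse matrices might look as follows; the guard against zero degrees for isolated nodes is an added implementation detail, not part of the definition above.

```python
import numpy as np
import scipy.sparse as sp

def csp_step(H, X):
    """One CSP step: X <- D_V^{-1} H D_E^{-1} H^T X, evaluated right-to-left."""
    d_v = np.asarray(H.sum(axis=1)).ravel()
    d_e = np.asarray(H.sum(axis=0)).ravel()
    inv_v = sp.diags(1.0 / np.maximum(d_v, 1.0))   # guard against isolated nodes
    inv_e = sp.diags(1.0 / np.maximum(d_e, 1.0))   # guard against empty hyperedges
    return inv_v @ (H @ (inv_e @ (H.T @ X)))

def csp(H, X0, n_layers=2):
    """Repeated application of the CSP step."""
    X = X0
    for _ in range(n_layers):
        X = csp_step(H, X)
    return X
```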
§.§ On Efficient Implementation of CSP
While Equation <ref> shows an efficient way of mathematically expressing the algorithm, the algorithm itself is also efficient when it comes to its implementation and computational complexity. Algorithm <ref> shows an implementation of CSP in a single SQL query and Algorithm <ref> shows a simple implementation in Python using the Pandas <cit.> library. These implementations essentially materialize Equations <ref> and <ref>.
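Since the referenced listings are not reproduced here, the following sketch conveys the idea of the Pandas variant: assuming an edge-list DataFrame with columns node and hyperedge (the column names are illustrative) and a node-indexed score Series, one CSP step is two group-wise averages.

```python
import pandas as pd

def csp_step_pandas(edges: pd.DataFrame, score: pd.Series) -> pd.Series:
    """One CSP step on an edge list with columns ['node', 'hyperedge'].

    `score` is indexed by node id; the result is again indexed by node id.
    """
    df = edges.assign(score=edges["node"].map(score))
    df["edge_mean"] = df.groupby("hyperedge")["score"].transform("mean")  # average over each hyperedge
    return df.groupby("node")["edge_mean"].mean()                         # average back onto the nodes

# two CSP layers: scores = csp_step_pandas(edges, csp_step_pandas(edges, x0))
```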
§.§ Application of CSP to different signals in hypergraphs
The construction of CSP in Section <ref> was a general one, assuming a signal matrix X ∈R^n × d. In practice, one can use CSP to propagate various kinds of signals in the hypergraph. Namely, the matrix X may represent actual features as provided in the underlying graph dataset. This setting leads to a method similar to feature propagation <cit.> or hypergraph convolution <cit.>. Such an approach is elaborated further in Section <ref>. Alternatively, an approach based on label propagation <cit.> may be obtained by taking X as a version of the label matrix masked by the training set, a setting described in Section <ref>. Finally, the hyperedges of G may be used as the signal, essentially representing indicators of node similarity based on their presence in a common hyperedge. Such an approach corresponds to setting X = H and is described in Section <ref>.
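For example, the label-propagation-style signal can be sketched as a masked label vector; initializing the unknown nodes to zero is an illustrative choice.

```python
import numpy as np

def masked_label_signal(n_nodes, train_idx, train_labels):
    """Initial signal X^(0): known binary labels on training nodes, zeros elsewhere."""
    x0 = np.zeros(n_nodes)
    x0[train_idx] = train_labels     # e.g. 1.0 for positive, 0.0 for negative training nodes
    return x0
```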
§.§ Comparison with Hypergraph Convolution
A single layer of the Hyper-Conv hypergraph neural network by <cit.> is defined as
X^(l+1) = σ( D_V^-1 H W D_E^-1 H^T X^(l) Θ ) ,
where W and Θ are weight matrices that need to be optimized.
Comparing Equations <ref> and <ref>, it can be seen that CSP is a simplified special case of Hyper-Conv with the matrices W and Θ realized as non-learnable identity matrices. As the proposed method runs only the forward pass of Hyper-Conv, we do not use the non-linearity σ in the basic variant of CSP.
§.§ Comparison with Label Propagation
The label propagation algorithm as introduced in <cit.> is expressed for an ordinary graph (with edges connecting exactly 2 nodes) as
X^(l+1) = α D^-1 A X^(l) + ( 1 - α ) X^(l),
where D denotes the diagonal matrix of degrees of the graph and A stands for its adjacency matrix.
To compare label propagation with CSP, let us first express the value of H D_E^-1 H^T as
( H D_E^-1 H^T )_i,j = ∑_k 1/deg( e_k ) H_i,k H_j,k,
which represents for each pair of nodes the number of hyperedges connecting them, normalized by their degrees. Specifically, for an ordinary graph, this becomes
H D_E^-1 H^T = 1/2 ( A + D_V ).
With this simplification for ordinary graphs, Equation <ref> becomes
X^(l+1) = 1/2 D_V^-1 A X^(l) + 1/2 X^(l),
which is equivalent to Equation <ref> with X taken to be the label matrix (or a masked version thereof) and α = 1/2. CSP is in this instance therefore a generalization of label propagation with this particular value of α to hypergraphs (for a generalization with arbitrary values of α, see Section <ref>). There is, however, another compelling reason to use CSP over label propagation as presented in Equation <ref>. The matrix product H D_E^-1 H^T does not preserve the sparsity of H, which is typical for large datasets. Therefore the proposed implementation can be significantly more efficient than Equation <ref>, despite them being mathematically equivalent.
§.§ Comparison with the naive Bayes classifier
The naive Bayes classifier is a well-known classification method which calculates the posterior probability of a label y given a feature vector ξ as p(y|ξ) using the Bayes rule p(y|ξ) = p(ξ|y) p(y) / p(ξ), with the naive assumption that p(ξ|y) = ∏_i p(ξ_i|y), where p(ξ_i|y) is estimated on the training set for all pairs of features and labels.
In the case of a multinomial naive Bayes, the term p(ξ_i|y) is given by the empirical distribution observed on the training set. In the CSP setup, we therefore consider the hyperedges of G to represent the features ξ, that is X = H. In the binary case, the empirical distribution p(ξ_i|y) is equal to averaging the node signal across each hyperedge as described in Equation <ref>. The aggregation (product) then nicely demonstrates the difference between the filtering (convolutional) approach of CSP and the Bayesian (maximum a posteriori) approach.
§.§ Variants
The comparison with the methods presented in Sections <ref>, <ref> and <ref> naturally suggests several alternative variants and generalizations of the basic scheme.
§.§.§ Alternative normalizations of the adjacency matrix
In graph neural networks, the adjacency matrix A is normalized by multiplying it with the inverse of the node degree matrix D. While the DeepWalk algorithm <cit.> corresponds to the row-wise normalization D^-1 A, newer methods also consider the column-wise normalization A D^-1 and most predominantly the symmetric normalization D^-1/2 A D^-1/2 introduced in <cit.>. Because the matrix H D_E^-1 H^T plays in CSP a role similar to the adjacency matrix A in a GCN (see Equation <ref>), we can also consider the alternative column-wise normalized version of CSP:
X^(l+1) = H D_E^-1 H^T D_V^-1 X^(l)
and the symmetrically normalized version
X^(l+1) = D_V^-1/2 H D_E^-1 H^T D_V^-1/2 X^(l).
§.§.§ Generalization of label propagation with general values of α
While Section <ref> shows that CSP is a generalization of label propagation with α = 1/2 to hypergraphs, a more general variant of CSP allowing for α ∈ ( 0, 1 ) can be defined as
X^(l+1) = 2 α D_V^-1 H D_E^-1 H^T X^(l) + ( 1 - 2 α ) X^(l).
This version is a full generalization of label propagation as described in Equation <ref> to hypergraphs (equivalence can be proved by applying Equation <ref>).
All of the above variants of CSP can be implemented in a straightforward way by modifying Algorithms <ref> and <ref>, without requiring full matrix multiplication.
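The variants differ only in where the degree normalization is applied; a sketch on top of the sparse formulation from Section <ref> is given below. Note that the α-mixing is defined above for the row-normalized step; applying it uniformly to the other variants here is purely illustrative, and α = 1/2 recovers the plain propagation.

```python
import numpy as np
import scipy.sparse as sp

def csp_step_variant(H, X, variant="row", alpha=0.5):
    """One propagation step with row, column or symmetric degree normalization."""
    d_v = np.asarray(H.sum(axis=1)).ravel()
    d_e = np.asarray(H.sum(axis=0)).ravel()
    inv_e = sp.diags(1.0 / np.maximum(d_e, 1.0))
    inv_v = sp.diags(1.0 / np.maximum(d_v, 1.0))
    inv_v_sqrt = sp.diags(1.0 / np.sqrt(np.maximum(d_v, 1.0)))

    def propagate(Z):                       # H D_E^{-1} H^T Z, kept right-to-left for sparsity
        return H @ (inv_e @ (H.T @ Z))

    if variant == "row":                    # basic CSP
        prop = inv_v @ propagate(X)
    elif variant == "column":               # column-wise normalization
        prop = propagate(inv_v @ X)
    else:                                   # symmetric normalization
        prop = inv_v_sqrt @ propagate(inv_v_sqrt @ X)
    return 2.0 * alpha * prop + (1.0 - 2.0 * alpha) * X   # alpha = 1/2 yields the plain step
```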
§ EXPERIMENTAL EVALUATION
The goal of our experiments is to compare the performance and execution time of the proposed CSP method with several well-established baseline methods as well as with a simple hypergraph neural network (HGCN) on a variety of real-world datasets from multiple domains. In this section, we introduce the considered datasets, reference methods, and tasks used for evaluation. Our aim is to validate the comparable performance of the proposed method while highlighting its low execution time. While we discuss the various variants of the proposed method in Section <ref>, a comprehensive evaluation of these variants will be conducted in a future work.
§.§ Datasets
We evaluated our methods on datasets from three distinct areas:
* Citation datasets: This category includes the PubMed, Cora, and DBLP datasets in their hypergraph variants, as introduced in <cit.>. In these datasets, the hyperedges may be defined in two distinct ways - either by a common author, or by co-citation, yielding two variants of the dataset. Each publication is labeled based on its topic.
* Natural Language Processing (NLP): We used the Coronavirus tweets NLP dataset <cit.>, which contains Twitter posts about COVID-19. The tweets were tokenized, with each tweet representing a node, labeled by its sentiment. Hyperedges are formed by tokens appearing in multiple tweets. The tokenizer is trained on the whole corpus with a given number of tokens (1000) using the SentencePiece algorithm <cit.>[The number differs from the number of hyperedges in Table <ref> due to special reserved tokens that are not used in the corpus.]. Note that we can control the graph size (the number of hyperedges) by this tokenization parameter.
* Recommender Systems: We used the MovieLens 25M dataset <cit.>, which contains 25 million movie ratings and one million tag applications. The nodes are individual movies labeled by their genres (allowing for multiple labels for one node). The hyperedges correspond to users and may again be defined in two ways, giving us two versions of the dataset: Either by a user rating multiple movies, or by a user assigning tags to multiple movies.
An overview of datasets including their basic characteristics is shown in Table <ref>. Due to the interpolative nature of CSP, each multiclass dataset is transformed into a series of binary datasets, where each class is treated as the positive class, and all other classes are treated as negative. The results are then averaged over all such obtained binary datasets.
We consider only structural information that is available in the hypergraph. No other information (such as node or hyperedge features) is included.
§.§ Tasks
We address two primary tasks in our experiments: transductive node classification and retrieval.
§.§.§ Classification Task
* Aim: Prediction of binary labels on the testing set
* Method: We use leave-one-out cross-validation with 10 folds, where nodes are randomly assigned to folds. One fold is hidden for testing, and the method is trained on the remaining nine folds. For each dataset, we generate test predictions for each node (when it is in the testing set). For each class, we calculate the ROC-AUC and average these scores. This average is reported as the classification score for each method on the given dataset.
§.§.§ Retrieval Task
* Aim: Ranking of nodes in the testing set maximizing the precision in the top positions
* Method: Folds are assigned in the same way as in the classification task, with one fold being used for training and the other 9 used as the testing dataset. The training set consists of all nodes in the graph, with the positive nodes in the training fold labeled as positive and all other nodes labeled as unknown. The model is trained on this training set. For models that require negative training examples, we randomly sample a set of the same size as the testing set and consider these labels as negative. Although this introduces some label noise, we assume that the negative class is dominant, making the noise acceptable. The model then ranks the nodes from the testing folds, and we evaluate precision at the top 100 positions (P@100). This evaluation is performed for each fold and class, and the average P@100 over folds and classes is reported as the retrieval score for each method on the given dataset.
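For reference, the two reported metrics can be sketched as follows (ROC-AUC via scikit-learn for classification and a simple precision-at-K for retrieval); fold handling and averaging over classes are omitted here.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def precision_at_k(scores, labels, k=100):
    """Fraction of positive nodes among the k highest-scored test nodes (P@K)."""
    top = np.argsort(scores)[::-1][:k]
    return float(np.mean(np.asarray(labels)[top]))

# classification: auc = roc_auc_score(y_test, score_test)
```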
§.§ Evaluated Methods
We compare the following methods in our experiments. The variants of CSP mentioned in Section <ref> are not reported in this work. In the future, we would also like to compare multiple choices of feature representation for the reference methods on top of NMF. Nevertheless, as our goal is just to show the competitive performance of CSP, we believe that it is sufficient to compare it with baseline methods without their fine-tuning.
* The proposed CSP method: Evaluated with 1, 2, and 3 layers, where we consider the binary training labels as X^(0). After application of a given number of layers (see Equation <ref>), the resulting score vector X^(l), l ∈{1,2,3} is used both for retrieval (top-100 scored test nodes) and for binary classification (with a given threshold on the score).
* Multinomial Naive Bayes: Operates on one-hot feature vectors derived from hyperedges.
* Random Forest, Logistic Regression, and HGCN: These methods operate on feature vectors obtained from non-negative matrix factorization (NMF) of the incidence matrix <cit.>, with 10 iterations and a dimension of 60.[As an alternative to NMF, we also evaluated representations generated by Laplacian positional encoding <cit.>. As the results were worse compared to NMF, we decided not to include them in the results.]
* Method Settings:
* Random Forest, Logistic Regression, and Naive Bayes are used with their default settings.
* A single layer HGCN implements Equation (<ref>) with an output layer of dimension 2 and sigmoid non-linearity. We use logistic loss and train all datasets for 15,000 epochs using the Adam optimizer with default settings.
* Random Baseline: Included for comparison.
The results of these methods are evaluated and compared based on their performance on the classification and retrieval tasks across the datasets.
§.§ Classification Result
Table <ref> lists the ROC-AUC for all methods on all datasets. Due to the numerical intensity of HGCN, only 5 folds were evaluated, and for the Movies dataset, only 4 out of 20 classes were considered. Prediction on isolated nodes was nearly random as only structural information was used, likely contributing to the relatively weak performance of all methods on the PubMed dataset. Since CSP handles only binary labels, reference methods were translated to a one-vs-other scenario, even though they can handle multi-class classification directly. Feature extraction using Non-negative Matrix Factorization (NMF) was not fine-tuned for each dataset, potentially impacting the performance of NMF-based baselines. Naive Bayes emerged as the strongest baseline, as it does not require any feature preprocessing and works directly with the one-hot encoded incidence matrix. CSP was evaluated in three variants based on the number of layers, with the best choice varying by dataset. On the largest datasets (Corona and Movies), the best variant of CSP achieved performance comparable to the strongest competing baseline. Overall, CSP demonstrated comparable results with reference baselines. In larger datasets, where parameter tuning of baselines is more challenging, CSP proved to be one of the best-performing methods. Overall, CSP with fewer layers fared comparatively better than a version with multiple layers. We attribute this at first glance counter-intuitive result to the fact that the training set is fairly dense in the graph, which ensures sufficient information for all nodes even with fewer layers, while at the same time multiple layers may contribute to oversmoothing of the signal. CSP's simplicity and parameter-free nature confirm its suitability as a first-choice baseline method for classification tasks.
§.§ Retrieval Results
Table <ref> lists the P@100 for all methods on all datasets.
The evaluation of HGCN and the Movies dataset in the retrieval task is restricted similarly to the classification task. Isolated nodes no longer cause a performance drop as long as there is a sufficient number of non-isolated nodes in each class. Naive Bayes' performance is not as superior in this scenario as in the classification task, since the training set contains only positive nodes, preventing it from leveraging prior distribution knowledge about the target class. CSP, which does not use prior knowledge about the target class distribution, works very well on small datasets with a lower average degree of nodes and edges. For datasets with a higher average node degree (Movies), CSP does not extract the structural information as well as NMF, and therefore the methods utilizing the NMF features (mainly logistic regression) work much better, except for HGCN, which suffers from over-smoothing. In summary, CSP achieves superior performance on 4 of the 8 datasets in the retrieval task.
§.§ Complexity Evaluation
To evaluate the complexity of the studied methods, two approaches were taken – theoretical analysis of asymptotic complexity, and direct measurement of wall clock time to compute the methods on the available datasets. The asymptotic computational complexities of the used methods are listed in Table <ref>. Note that the non-negative matrix factorization is excluded from the table and has on its own a complexity of 𝒪( n^4 m^4 ), dominating the methods themselves. The execution time for individual methods is presented in Table <ref>. We evaluated the methods using standard implementations that would be widely used by practitioners. In particular, we used the Scikit-Learn <cit.> implementation with default settings for logistic regression, Naive Bayes and random forest. A Polars variant of Algorithm <ref> was applied for the proposed method. All these methods were executed on a GPU (Amazon EC2 G4 instance). The HGCN was executed using PyTorch-geometric <cit.>.
Comparing the execution times in Table <ref>, HGCN is the most numerically complex method. Although methods to improve training efficiency, such as batching, are available, they were not considered in this work. Logistic regression and random forest exhibit comparable or shorter execution times on the Corona and Movies datasets compared to CSP and Naive Bayes, largely because the most challenging part, the extraction of structural information, is handled by non-negative matrix factorization, which is not included in the reported times. The proposed CSP excels particularly in graphs with a low average node degree, similar to Naive Bayes. The measured execution time of CSP aligns with its expected asymptotic complexity (Table <ref>) and appears to be linear in the number of layers, as anticipated.
In summary, HGCN is by far the most numerically intensive method and shows potential in some examples; however, there is still a significant amount of fine-tuning needed to achieve superior performance across datasets. NMF-based methods work exceptionally well on the Movies dataset for the retrieval task, though further tuning is required to properly extract structural information. Compared to Naive Bayes, CSP is even slightly simpler and parameter-free. In some problems, CSP outperformed Naive Bayes, and vice versa. Thus, it is definitely a good idea to consider both CSP and Naive Bayes when establishing baselines for tasks on structural data.
§ CONCLUSION
This paper presents a signal propagation algorithm for hypergraphs, termed CSP. We formally describe the algorithm and demonstrate its simplicity and efficiency of implementation. This formal description allows us to show clear relationships between CSP and well-known algorithms such as Naive Bayes, label propagation, and hypergraph convolutional networks. These relationships suggest various algorithmic variants, which we leave for detailed future exploration.
We discuss the application of CSP to different types of signals. Our primary focus is propagating binary labels, which is used for classification and retrieval tasks, positioning CSP as a hypergraph variant of traditional label propagation. Additionally, propagating node features instead of labels as signals leads to feature propagation. This dual functionality showcases the versatility of CSP in handling various tasks on hypergraphs.
Our evaluation of CSP on several real-world datasets from multiple domains demonstrates its competitive performance in node classification and retrieval tasks compared to a range of reference methods. Furthermore, we compare the numerical complexity of these methods by examining their execution times, highlighting the simplicity and efficiency of CSP. This combination of competitive performance and low computational complexity makes CSP a promising approach for applications involving hypergraph-based data.
§.§.§
This work was supported by the Grant Agency of the Czech Technical University in Prague, grant No. SGS20/183/OHK4/3T/14. Large language models were employed for proofreading, drafting, and clarifying the text; however, the authors always validated and take responsibility for the final result.
§.§.§
All authors are currently employed by Cisco Systems, Inc. and conducted research presented in this work as part of their job. Pavel Procházka and Lukáš Bajer also own stock in Cisco Systems, Inc. The authors declare that their employment and stock ownership does not influence the objectivity of the presented research.
| In the modern world, an overwhelming amount of data has an internal structure, oftentimes forming complex networks that can be represented as graphs. Efficiently mining information from this data is crucial for a wide range of applications spanning various domains such as social networks, biology, physics or cybersecurity. Graph Neural Networks (GNNs) have emerged as the dominant tool for handling such data due to their ability to leverage the graph structure for predictive and analytical tasks. However, despite their success, GNNs come with notable challenges, including high computational complexity during training, numerous hyperparameters that require fine-tuning, lack of straightforward interpretability, and the necessity of dedicated computational infrastructure such as GPUs.
Given these limitations, baseline algorithms play a vital role as complementary tools to GNNs. These baselines, often much less complex, provide an efficient way of generating preliminary results. In many cases, these simpler methods are even sufficiently effective to be used as-is for the problem at hand.
Section <ref> introduces the problem being solved in this work and Section <ref> provides an overview of other related works. Section <ref> is the most substantive part of this paper, introducing the method (Section <ref>). In Section <ref>, we show to be a straightforward extension of the well-known label propagation method to hypergraphs. Also provided is a comparison of to other methods, such as hypergraph convolutional networks (Section <ref>) and the naive Bayes classifier (Section <ref>). Finally, Section <ref> provides an experimental evaluation and comparison of and several alternative methods. | Mining information from structured (graph-like) data is one of the central problems in machine learning. The most straightforward way to handle this is to translate the structure into features and apply traditional machine learning techniques, such as logistic regression, random forests, and naive Bayes, to these features. Naive Bayes <cit.>, in particular, provides a bridge to a second large family of learning methods on graphs: Bayesian methods, where the graph forms a structure for modelling random variables. A critical problem associated with Bayesian methods is inference, which is often intractable. The sum product algorithm <cit.> combats the tractability of a system by its decomposition to a product of local functions. One instance of this framework is probabilistic threat propagation <cit.>, which is used for spreading information about malicious actors through the network.
The translation of structured data into features is also a non-trivial problem. While some methods can handle sparse, high-dimensional feature vectors, the majority cannot. Several methods are suited for finding low-dimensional representations of structured data. For example, non-negative matrix factorization <cit.> (NMF) decomposes a large sparse matrix into the product of two low-dimensional matrices. Node2vec <cit.> applies contrastive learning to find node representations where neighboring nodes are close in feature space. Spectral positional encodings <cit.> provide representations given by eigenvectors of the Laplacian matrix, and distance encodings <cit.> offer global information about where a node is located in the graph. However, except for NMF, these methods are suited for graphs and cannot be directly applied to hypergraphs.
There are many papers on learning algorithms for graphs, such as GraphSAGE <cit.>, graph convolutional networks <cit.>, and graph attention networks <cit.>. Nevertheless, their application to hypergraphs is not straightforward. The origins of learning transductive tasks stretch back to the seminal work <cit.>. More recently, Hypergraph neural networks <cit.>, Dynamic HGNNs <cit.>, and HyperGCN <cit.> build upon the convolutional learning schema introduced in <cit.> while works such as <cit.> aim to bring both convolutional as well as attention to the context of hypergraphs. The proposed method can also be viewed as an extension of label propagation <cit.> or feature propagation <cit.>. While there do exists algorithm for label propagation in hypergraphs <cit.>, the proposed method aims to be comparatively simpler to understand, implement and calculate. The proposed method is an extension of our previous work in the specific domain of computer network security <cit.>.
§
We present CSP, a method for signal propagation on hypergraphs. In the following subsections, CSP is first introduced in the general setting, followed by a comparison to established approaches and a discussion of possible variants inspired by them. Finally, applications of CSP to different kinds of signals in hypergraph tasks are discussed.
§.§ Method overview
The proposed algorithm propagates a node signal X (see Section <ref> for a discussion of possible signal types) through the hypergraph. The basic version of CSP consists in a simple averaging of X across the hyperedges and nodes of the graph. This averaging can be repeated to obtain smoother final representations, resulting in a multi-step process generating a sequence of representations X^(l), where X^(0) = X.
In each step, the representation X^(l) of the nodes is first propagated to the hyperedges to obtain their representations
r_j^(l) = 1/deg(e_j) ∑_{i: v_i ∈ e_j} x_i^(l),
that is, the average of the representations of the individual nodes contained in the hyperedge. In the second step, these hyperedge representations are propagated back to the nodes:
x_k^(l+1) = 1/deg(v_k) ∑_{j: v_k ∈ e_j} r_j^(l).
Steps <ref> and <ref> constitute the proposed algorithm, which can be summarily written as
x_k^(l+1) = 1/deg(v_k) ∑_{j: v_k ∈ e_j} 1/deg(e_j) ∑_{i: v_i ∈ e_j} x_i^(l).
Using notation established in Section <ref>, Equation <ref> can be rewritten into the matrix form
X^(l+1) = D_v^-1 H D_e^-1 H^T X^(l),
where H is the incidence matrix and D_v, D_e are the diagonal matrices of node and hyperedge degrees.
Equation <ref> describes the basic variant of the proposed algorithm. In Section <ref>, various modifications of CSP are discussed.
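As an illustration of Equation <ref>, a minimal sparse-matrix sketch is given below; H, D_v and D_e follow the notation above, while the function name and the guards for isolated nodes and empty hyperedges are our own additions, not taken from the paper.

```python
# One or more CSP layers: X <- D_v^-1 H D_e^-1 H^T X, keeping every product sparse-friendly.
import numpy as np
import scipy.sparse as sp

def csp(H, X, layers=1):
    """H: n_nodes x n_edges sparse incidence matrix; X: n_nodes x d signal."""
    deg_v = np.asarray(H.sum(axis=1)).ravel()
    deg_e = np.asarray(H.sum(axis=0)).ravel()
    Dv_inv = sp.diags(1.0 / np.maximum(deg_v, 1))   # guard isolated nodes
    De_inv = sp.diags(1.0 / np.maximum(deg_e, 1))   # guard empty hyperedges
    for _ in range(layers):
        X = Dv_inv @ (H @ (De_inv @ (H.T @ X)))
    return X
```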
§.§ On Efficient Implementation of
While Equation <ref> shows an efficient way of expressing the algorithm mathematically, the algorithm itself is also efficient when it comes to its implementation and computational complexity. Algorithm <ref> shows an implementation of CSP in a single SQL query and Algorithm <ref> shows a simple implementation in Python using the Pandas <cit.> library. These implementations essentially materialize Equations <ref> and <ref>.
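Algorithms <ref> themselves are not reproduced here; the fragment below merely illustrates the same two averaging steps on an incidence table with columns `node` and `edge` (assumed column names), in the spirit of the Pandas implementation mentioned above.

```python
# One CSP step on a (node, edge) incidence table; `score` holds the current node signal.
import pandas as pd

def csp_step(incidence: pd.DataFrame, score: pd.Series) -> pd.Series:
    df = incidence.assign(score=score.reindex(incidence["node"]).to_numpy())
    edge_mean = df.groupby("edge")["score"].mean()                  # average over each hyperedge
    df = df.assign(edge_score=edge_mean.reindex(df["edge"]).to_numpy())
    return df.groupby("node")["edge_score"].mean()                  # average back over each node
```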
§.§ Application of to different signals in hypergraphs
The construction of CSP in Section <ref> was a general one, assuming a signal matrix X ∈ R^{n × d}. In practice, one can use CSP to propagate various kinds of signals in the hypergraph. Namely, the matrix X may represent actual features as provided in the underlying graph dataset. This setting leads to a method similar to feature propagation <cit.> or hypergraph convolution <cit.>. Such an approach is elaborated further in Section <ref>. Alternatively, an approach based on label propagation <cit.> may be obtained by taking X as a version of the label matrix masked by the training set, a setting described in Section <ref>. Finally, the hyperedges themselves may be used as the signal, essentially representing indicators of node similarity based on their presence in a common hyperedge. Such an approach corresponds to setting X = H and is described in Section <ref>.
§.§ Comparison with Hypergraph Convolution
A single layer of the Hyper-Conv hypergraph neural network by <cit.> is defined as
X^(l+1) = σ( D_v^-1 H W D_e^-1 H^T X^(l) Θ ) ,
where W and Θ are weight matrices that need to be optimized.
Comparing Equations <ref> and <ref>, it can be seen that CSP is a simplified special case of Hyper-Conv with the matrices W and Θ realized as non-learnable identity matrices. As the proposed method runs only the forward pass of Hyper-Conv, we do not use the non-linearity σ in the basic variant of CSP.
§.§ Comparison with Label Propagation
The label propagation algorithm as introduced in <cit.> is expressed for an ordinary graph (with edges connecting exactly 2 nodes) as
X^(l+1) = α D^-1 A X^(l) + ( 1 - α ) X^(l),
where D denotes the diagonal matrix of node degrees of the graph and A stands for its adjacency matrix.
To compare label propagation with CSP, let us first express the value of H D_e^-1 H^T as
( H D_e^-1 H^T )_{i,j} = ∑_k 1/deg(e_k) H_{i,k} H_{j,k},
which represents, for each pair of nodes, the number of hyperedges connecting them, each normalized by its degree. Specifically, for an ordinary graph, this becomes
H D_e^-1 H^T = 1/2 ( A + D ).
With this simplification for ordinary graphs, equation <ref> becomes
X^(l+1) = 1/2 D^-1 A X^(l) + 1/2 X^(l),
which is equivalent to Equation <ref> with the signal X set to the label matrix (or a masked version thereof) and α = 1/2. CSP is in this instance therefore a generalization of label propagation with this particular value of α to hypergraphs (for a generalization with arbitrary values of α, see Section <ref>). There is, however, another compelling reason to use CSP over label propagation as presented in Equation <ref>. The matrix product H D_e^-1 H^T does not preserve the sparsity of H, which is typical for large datasets. Therefore the proposed implementation can be significantly more efficient than Equation <ref>, despite the two being mathematically equivalent.
§.§ Comparison with the naive Bayes classifier
The naive Bayes classifier is a well-known classification method which calculates the posterior probability of a label y given a feature vector ξ as p(y|ξ), using Bayes' rule p(y|ξ) = p(ξ|y) p(y) / p(ξ) with the naive assumption that p(ξ|y) = ∏_i p(ξ_i|y), where p(ξ_i|y) is estimated on the training set for all pairs of features and labels.
In the case of multinomial Naive Bayes, the term p(ξ_i|y) is given by the empirical distribution observed on the training set. In the CSP setup, we therefore consider the hyperedges to represent the features ξ, that is, X = H. In the binary case, the empirical distribution p(ξ_i|y) is equal to averaging the node signal across each hyperedge as described in Equation <ref>. The aggregation (product) then nicely demonstrates the difference between the filtering (convolutional) approach of CSP and the Bayesian (maximum a posteriori) approach.
§.§ Variants
The comparison with the methods presented in Sections <ref>, <ref> and <ref> naturally suggests several alternative variants and generalizations of the basic scheme.
§.§.§ Alternative normalizations of the adjacency matrix
In graph neural networks, the adjacency matrix A is normalized by multiplying it with the inverse of the node degree matrix D. While the DeepWalk algorithm <cit.> corresponds to the row-wise normalization D^-1 A, newer methods also consider the column-wise normalization A D^-1 and, most predominantly, the symmetric normalization D^-1/2 A D^-1/2 introduced in <cit.>. Because the matrix H D_e^-1 H^T plays a role similar to the adjacency matrix A in a GCN (see Equation <ref>), we can also consider the alternative column-wise normalized version of CSP:
X^(l+1) = H D_e^-1 H^T D_v^-1 X^(l)
and the symmetrically normalized version
X^(l+1) = D_v^-1/2 H D_e^-1 H^T D_v^-1/2 X^(l).
§.§.§ Generalization of label propagation with general values of α
While Section <ref> shows that CSP is a generalization of label propagation with α = 1/2 to hypergraphs, a more general variant of CSP allowing for α ∈ ( 0, 1 ) can be defined as
X^(l+1) = 2 α D_v^-1 H D_e^-1 H^T X^(l) + ( 1 - 2 α ) X^(l).
This version is a full generalization of label propagation as described in Equation <ref> to hypergraphs (equivalence can be proved by applying Equation <ref>).
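A sketch of how these variants fit into the same sparse scheme is given below; the α-blend is stated above for the row-normalized version, and applying it to the other normalizations here is our own, purely illustrative, extension.

```python
# Propagation step with selectable normalization and the alpha generalization;
# alpha = 0.5 with norm="row" recovers the basic CSP update.
import numpy as np
import scipy.sparse as sp

def csp_variant(H, X, alpha=0.5, norm="row", layers=1):
    deg_v = np.asarray(H.sum(axis=1)).ravel()
    deg_e = np.asarray(H.sum(axis=0)).ravel()
    Dv_inv = sp.diags(1.0 / np.maximum(deg_v, 1))
    De_inv = sp.diags(1.0 / np.maximum(deg_e, 1))
    Dv_isqrt = sp.diags(1.0 / np.sqrt(np.maximum(deg_v, 1)))
    for _ in range(layers):
        if norm == "row":
            P = Dv_inv @ (H @ (De_inv @ (H.T @ X)))
        elif norm == "col":
            P = H @ (De_inv @ (H.T @ (Dv_inv @ X)))
        else:  # "sym"
            P = Dv_isqrt @ (H @ (De_inv @ (H.T @ (Dv_isqrt @ X))))
        X = 2 * alpha * P + (1 - 2 * alpha) * X
    return X
```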
All of the above variants of can be implemented in a straightforward way by modifying Algorithms <ref> and <ref>, without requiring full matrix multiplication. | null | null | null | This paper presents a signal propagation algorithm for hypergraphs, termed (). We formally describe the algorithm and demonstrate its simplicity and efficiency of implementation. This formal description allows us to show clear relationships between and well-known algorithms such as Naive Bayes, label propagation, and hypergraph convolutional networks. These relationships suggest various algorithmic variants, which we left for detailed future exploration.
We discuss the application of CSP to different types of signals. Our primary focus is propagating binary labels, which is used for classification and retrieval tasks, positioning CSP as a hypergraph variant of traditional label propagation. Additionally, propagating node features instead of labels as signals leads to feature propagation. This dual functionality showcases the versatility of CSP in handling various tasks on hypergraphs.
Our evaluation of CSP on several real-world datasets from multiple domains demonstrates its competitive performance in node classification and retrieval tasks compared to a range of reference methods. Furthermore, we compare the numerical complexity of these methods by examining their execution times, highlighting the simplicity and efficiency of CSP. This combination of competitive performance and low computational complexity makes CSP a promising approach for applications involving hypergraph-based data.
§.§.§
This work was supported by the Grant Agency of the Czech Technical University in Prague, grant No. SGS20/183/OHK4/3T/14. Large language models were employed for proofreading, drafting, and clarifying the text; however, the authors always validated and take responsibility for the final result.
§.§.§
All authors are currently employed by Cisco Systems, Inc. and conducted research presented in this work as part of their job. Pavel Procházka and Lukáš Bajer also own stock in Cisco Systems, Inc. The authors declare that their employment and stock ownership does not influence the objectivity of the presented research. |
http://arxiv.org/abs/2409.17863v1 | 20240926141153 | A 5T-2MTJ STT-assisted Spin Orbit Torque based Ternary Content Addressable Memory for Hardware Accelerators | [
"Siri Narla",
"Piyush Kumar",
"Azad Naeemi"
] | cs.ET | [
"cs.ET",
"cs.AR"
] |
A 5T-2MTJ STT-assisted Spin Orbit Torque based Ternary Content Addressable Memory for Hardware Accelerators
Siri Narla, Piyush Kumar, Student Member, IEEE, and Azad Naeemi, Senior Member, IEEE. This work was supported in part by CoCoSys, one of seven centers in JUMP 2.0, a Semiconductor Research Corporation (SRC) program sponsored by DARPA. Siri Narla, Piyush Kumar, and Azad Naeemi are with the School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332 USA, email: [email protected].
September 28, 2024
====================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
In this work, we present a novel non-volatile spin transfer torque (STT) assisted spin-orbit torque (SOT) based ternary content addressable memory (TCAM) with 5 transistors and 2 magnetic tunnel junctions (MTJs). We perform a comprehensive study of the proposed design from the device level to the application level. At the device level, various write characteristics such as write error rate, time, and current have been obtained using micromagnetic simulations. The array-level search and write performance have been evaluated based on SPICE circuit simulations with layout extracted parasitics for bitcells while also accounting for the impact of interconnect parasitics at the 7nm technology node. A search error rate of 3.9x10^-11 is projected for exact search while accounting for various sources of variation in the design. In addition, the resolution of the search operation is quantified under various scenarios to understand the achievable quality of the approximate search operations. Application-level performance and accuracy of the proposed design have been evaluated and benchmarked against other state-of-the-art CAM designs in the context of a CAM-based recommendation system.
§ INTRODUCTION
With the unprecedented growth of retrieval based applications in a diverse range of areas from recommendation systems for e-commerce <cit.> and video or image retrieval applications <cit.> to search engines <cit.> and genomics <cit.>, performing accurate and fast similarity search has become increasingly important since it is a vital function for all these use cases. Graph-based similarity search algorithms <cit.> have been extensively studied and have proven to be effective for low-dimensional data. However, these algorithms become inefficient as the dimensionality of the data increases <cit.>. In high-dimensional applications, such as image retrieval, hash encoding <cit.> is often employed to improve performance. Despite these advancements, both graph-based and hashing-based algorithms face challenges with scalability, as their execution time increases significantly with the number of samples. In modern applications, dataset sizes can be large in both dimensions (D) and number of samples (n), because of which similarity search becomes increasingly challenging <cit.>. Thus, hardware solutions like content addressable memories (CAMs), that can perform parallel in-memory search over an entire database, are being studied with great interest <cit.>. The speed-up in search they offer and their linear storage requirements have made CAMs suitable for applications such as memory augmented neural networks <cit.>, hyper-dimensional computing <cit.>, recommendation systems <cit.>, and dataset searches.
Many designs and devices have been explored to implement CAMs. Traditional CMOS implementations of TCAM (ternary content addressable memories) using SRAMs <cit.> require 16 transistors which limits the memory capacity. SRAMs also suffer from leakage which can become a major limiting factor for large data-sets and for applications with large idle times <cit.>. To address these challenges, CAM cells based on emerging non-volatile memory (NVM) devices such as resistive memories (RRAMs) <cit.>, ferroelectric field effect transistors (FeFETs) <cit.>, and spin-orbit torque (SOT) devices <cit.>, have been studied. Each of these technology options comes with its own set of advantages and limitations. RRAM-based TCAMs <cit.> need very sensitive sense amplifiers due to their low sensing margin. Phase change memories (PCMs) suffer from resistance drift issues <cit.> and require large write voltages. FeFETs are voltage-controlled and can be quite energy efficient and compact. However, they generally suffer from poor endurance <cit.>, have a finite read-after-write <cit.>, and generally require large write voltages (4V) <cit.>. SOT devices theoretically have unlimited endurance, good retention times <cit.>, and compatibility with CMOS back-end-of-line (BEOL) <cit.>. Large-scale wafer-level implementations of SOT devices have already been demonstrated <cit.>. Although they need larger write energy than FeFET and CMOS-based CAMs due to their current-based write scheme <cit.>, most CAM-based applications are search-intensive, with a limited number of writes.
For SOT devices, the magnetic orientation of the ferromagnetic free layers is determined by their anisotropy direction, and they can have either in-plane magnetic anisotropy (IMA) or perpendicular magnetic anisotropy (PMA) <cit.>. The IMA-based devices rely on the aspect ratio of the ferromagnet to maintain sufficient thermal stability and retention time <cit.>. This makes IMA devices hard to scale to advanced technology nodes, as the thermal stability becomes very sensitive to lithography variations. On the other hand, PMA devices rely on interface anisotropy <cit.> for thermal stability and are easier to scale to advanced nodes.
Despite these advantages, one major concern with PMA based SOT devices is that they rely on external magnetic field to achieve the required symmetry breaking for deterministic write operation <cit.>.
While there has been research to mitigate this issue by using magnetic hard mask <cit.> or exchange bias <cit.>, these solutions may have further challenges related to scaling to advanced nodes and may sacrifice magnetic immunity <cit.>.
To solve this issue, we use STT-assisted SOT switching which eliminates the need for a magnetic field for the write operation. Also, compared to the previous SOT-CAM <cit.> implementations, our current design does not require any changes to the bit-cell to enable ternary storage capabilities.
The rest of the paper is organized as follows. After this introduction, we discuss the cell design in section II. Section III discusses the write operation and Section IV discusses the exact and approximate search operations, search error rate, and resolution. In Section V we evaluate and benchmark our design against other CAMs at the 7nm technology node using resolution, energy, and delay as metrics. In addition, we benchmark the design at the application-level using a CAM-based recommendation system. Finally, Section VI concludes the paper.
§ CELL DESIGN AND OPERATION
The proposed TCAM cell consists of 5 transistors and 2 MTJs as shown in Fig. <ref>. It can store and search for `0', `1', and `don’t care' (`X') bits. Fig. <ref> shows how these cells can form a TCAM array.
§.§ Write operation
The write operation is based on STT-assisted SOT switching of perpendicular magnets <cit.>. In this scheme, first an SOT current is applied for a small duration of time (∼1 ns), which causes the magnetizations of the free layers of the two MTJs to move towards the in-plane meta-stable state. After that, an STT current of appropriate polarity is applied for deterministic switching. Table <ref> shows the various write voltage values along with the MTJ states for writing `0', `1', and `X' bits. The simulation methodology used to derive these values is discussed in detail in Section III.
§.§ Search operation
The search operation is based on discharging the matchline (ML) if there is a mismatch between the stored data and the search data. ML is precharged before every search operation. During the search operation, WWL is kept off, while SWL is switched on and the complementary search data is applied via SBL and SBLB. The voltage (V_sot) at the gate of the discharge transistor (T5 in Fig. <ref>) is determined by the voltage division of the search voltage (V_s) between the two MTJs. In the case of a mismatch, V_sot is designed to be larger than the threshold voltage of the discharge transistor, which results in the ML getting discharged through T5. If the stored bit is `don't care' ('X'), or there is a match, V_sot is designed to be adequately lower than the threshold voltage of T5 to prevent ML discharge. If the search bit is `X', SBL and SBLB are grounded and T5 remains off regardless of the stored data. Table <ref> summarizes the search operation for all possible stored bit and search bit combinations. The output of every ML at a certain point in time is stored in an output latch. By adjusting this time period, the latched data can represent the rows within a certain Hamming distance from the search data, as will be discussed in Section IV.
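A purely behavioral sketch of this search rule (ignoring all circuit-level timing and voltages; the function names are ours): a stored ternary word matches a query when every position where both bits are binary agrees, and the number of disagreeing cells is the Hamming distance that drives the ML discharge.

```python
# Functional model of one TCAM row over the alphabet {'0', '1', 'X'}.
def mismatch_count(stored: str, query: str) -> int:
    """Number of cells that discharge the ML: both bits binary and different."""
    return sum(s != q for s, q in zip(stored, query) if s != "X" and q != "X")

def row_matches(stored: str, query: str) -> bool:
    return mismatch_count(stored, query) == 0

# An 'X' in either the stored word or the query never discharges the ML.
assert row_matches("1X0", "110") and not row_matches("1X0", "011")
```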
§.§ Layout and parasitic extraction
To accurately estimate the impact of various sources of parasitics in the circuit, we design the layout of the cell at the 7nm node using the ASAP7 PDK <cit.>, where we assume an optimistic fin height of 52 nm. Fig. <ref> shows the layout of 2 adjacent bit-cells sharing the bitline (WBL/WBLB/SBL/SBLB) contacts among them. The SOT layer and MTJ are placed between the M1 and M2 metal layers. The proposed bit-cell uses metal levels M1-M5. The write wordline (WWL) and matchline (ML) are routed on M5 while the search wordline (SWL) is routed on M3. Write bitlines (WBL/WBLB) and search bitlines (SBL/SBLB) are routed on M4. Since the performance and accuracy of the cell depend significantly on the interconnect resistance <cit.>, we use wider wires for the bitlines. For WBL/WBLB, we use a wire width of 48 nm which is 2x the minimum wire width of M4, while SBL/SBLB use 72 nm wide wires (3x minimum width of M4). The effective bit-cell area is 0.076 um2.
To extract the cell parasitics, the BEOL resistance values from <cit.> are used to generate the nxtgrd database containing the resistance and capacitance information for various layers. This nxtgrd database is then used in Synopsys’ StarRC for extracting the parasitic resistance and capacitance from the layout.
These parasitics are then used in SPICE simulations to measure ML discharge delays for various array sizes, Hamming distance (HDist) values and row positions. For SOT-CAMs we use an MTJ low resistance state of 25 kΩ, a tunnel magnetoresistance ratio (TMR) of 1.8, and half bias values based on experimental results from <cit.>. The TMR degradation due to bias voltage is also incorporated as shown in <cit.>. In the previous SOT-CAM designs <cit.> a much larger LRS resistance (1 MΩ) was used to reduce the search energy. However, in this design, we have to use LRS resistance = 25 kΩ, since for larger LRS values the current through the MTJ is too small to write to the device. Resistance area product of an MTJ depends on its oxide thickness, hence by changing the oxide thickness value we can change the MTJ resistance.
§ WRITE SIMULATIONS
To obtain the minimum required duration of the STT current required, we perform OOMMF <cit.> based micromagnetic simulations augmented with rare event enhancement <cit.>. In the past, we have validated the results of these kinds of simulations with experimental data <cit.>. Fig. <ref>(a) shows the write error rate (WER) obtained for an STT spin current of 6 uA. To obtain WER of 1e-5, we use a write time of 30 ns. Increasing the STT current can reduce the write time; however, it requires lowering the MTJ resistance which negatively impacts the search performance and accuracy as will be explained in Section IV. Thus, we choose a small enough value for STT current which can reliably achieve low WER in the presence of thermal noise. While the write speed for SOT+STT switching is slower compared to the field-assisted SOT case (∼1-2 ns), the former has better magnetic immunity than the later. The spin current generated from SOT is 700 uA and the SOT layer considered here is 3.5 nm thick β-W <cit.>. The resistivity and damping-like spin torque efficiency values used for β-W are 160 μΩ-cm and 0.33, respectively <cit.>. The magnitude of SOT current does not have any impact on WER as long as it is sufficient to drive the magnetization towards the in-plane meta-stable state. Fig. <ref>(b) shows the distribution of the z component of magnetization (m_z) at the end of 1 ns SOT pulse for varying values of SOT spin current. For SOT spin current below 600 μA the m_z distribution starts to widen and too much deviation from m_z=0 can lead to switching failures. For the free layer ferromagnet, we consider 1.3 nm thick CoFeB with room temperature saturation magnetization of 1.2 MA/m and interface anisotropy of 1.15 mJ/m2 which gives a room temperature thermal stability of 65. The MTJ diameter is 60 nm. In addition, we assume that the STT efficiency is 0.6 for anti-parallel (AP) to parallel (P) switching and 0.3 for P to AP switching <cit.>.
The required electric write currents are 210 uA and 10/20 uA for SOT and STT phases, respectively. We choose the write voltages by adding 10% margin over the required currents.
To achieve a write current of 20 uA with a 1.56V supply, the MTJ resistance in the parallel state has to be 25 kΩ or smaller. Since the search function is more energy efficient when the MTJ resistance is larger, we chose the largest possible value.
Table <ref> describes the various voltages used during the write operation. The direction of the SOT current is fixed and is applied by driving WBL to 1.56V and WBLB to ground while keeping the WWL on. During the STT phase, both WBL/WBLB are driven to 0.42V while applying either 1.56V or 0V on SBL/SBLB to write the appropriate data. During the STT phase, both SWL and WWL remain enabled. Based on full array simulation, we obtain write energies of 1.74 pJ per bit for writing binary data (0/1) and 3.05 pJ per bit for writing `X' for an array size of 64x64.
§ SEARCH SIMULATIONS
The discharge rate of the ML increases with Hamming Distance which is equal to the number of mismatching bits between the two vectors. For a row whose stored data perfectly matches the search data, the voltage on the ML remains high since all the discharge transistors in the row are off and the only current discharging the ML is the subthreshold leakage currents from these transistors. Thus, finding a fully matching vector requires identifying the ML with discharge rate slower than the worst-case mismatch, i.e. a row with single bit mismatch.
On the other hand, finding the vector closest to a search query requires identifying the ML with the slowest discharge rate.
Hence, the discharge rate becomes an important factor for approximate search (or nearest neighbor search).
Fixed-radius near neighbor search is the search of all items with an HDist smaller than a given value (HDist limit).
It can be implemented using delay thresholding with a latch connected to the output of the inverters connected to the match lines and by controlling the timing of the clock (Clk) falling edge at the latches.
All rows whose ML discharge remains undetected by their inverters before the arrival of the Clk falling edge are considered to be neighbors within the specified radius.
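A simplified sketch of this delay-thresholding scheme, under the first-order assumption that the ML discharge delay decreases monotonically with the number of mismatching bits (names and the delay model are hypothetical):

```python
# Fixed-radius search: rows whose ML has not yet discharged at the clock edge are neighbors.
import numpy as np

def fixed_radius_rows(ml_delays, clk_edge):
    """ml_delays[i]: discharge delay of row i; returns indices latched as matches."""
    return np.flatnonzero(ml_delays > clk_edge)

def clk_edge_for_radius(delay_of_hdist, hdist_limit):
    """Place the sampling edge between the typical delays of HDist = limit and limit + 1."""
    return 0.5 * (delay_of_hdist(hdist_limit) + delay_of_hdist(hdist_limit + 1))
```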
While performing a search, each search line in the array has a driver that drives the column to the associated search voltage. Due to the low LRS value used, the effective resistance from all the rows between the SBL/SBLB drivers becomes comparable to the driver resistance. Thus, the voltage drops across the search drivers are more pronounced, which reduces the search voltage window available for voltage division between the MTJs within a cell. A large number of fins on the driver transistors can mitigate this effect by lowering the driver resistance. As the number of rows in an array increases, the search voltage window further shrinks and makes the design more vulnerable to non-idealities. Hence, to ensure good search capabilities, we need to limit the number of rows to 64.
§.§ Exact Match Search Error Rate
To evaluate the robustness of our design during the search operation, we first calculate the search error rate (SER) for detecting an exact match.
A search error occurs when the off-state leakages of the transistors, due to magnetic and CMOS device variability, discharge the ML of a `match row' at a faster rate than that of a `mismatch row', thus sensing a match as a mismatch. Stored `don't care' bits in a row with full match contribute to more leakage than stored binary bits because `don't care' bits have a larger V_sot value (∼V_sot/2). Hence, we focus on the case of sensing a match on a row storing 32 `X' bits in a 128-bit vector to better see the impact of the device parameters on the SER. We use 3σ MTJ resistance variation of 15% <cit.> and a 3σ transistor threshold voltage variation of 42 mV <cit.>. We assume Gaussian distribution and run 1000 Monte Carlo simulations for various row positions in the array.
We obtain the ML delay distributions corresponding to full match with 32 stored `X' bits and worst case mismatch with one mismatching bit.
The resulting delay data is fitted with a Gaussian distribution to calculate the SER (overlapping area between the two curves) as shown in Fig. <ref>.
For V_s = 0.8 V and 1 V, we achieve SER values of 1.53% and 3.9x10^-9%, respectively. Increasing the search voltage increases the difference between the V_sot for a mismatch ((1-k)V_s) and for an `X' bit match (V_s/2), hence improving the design's variation tolerance at the expense of increased power consumption.
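The SER values quoted above correspond to the overlap of the two fitted Gaussians; a sketch of that computation (with the fitted means and standard deviations assumed given) is:

```python
# Overlap area between the match and worst-case-mismatch delay distributions.
import numpy as np
from scipy.stats import norm

def gaussian_overlap(mu_match, sd_match, mu_mis, sd_mis, n_grid=200001):
    lo = min(mu_match, mu_mis) - 8.0 * max(sd_match, sd_mis)
    hi = max(mu_match, mu_mis) + 8.0 * max(sd_match, sd_mis)
    t = np.linspace(lo, hi, n_grid)
    overlap = np.minimum(norm.pdf(t, mu_match, sd_match), norm.pdf(t, mu_mis, sd_mis))
    return np.trapz(overlap, t)   # area under the smaller of the two densities
```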
§.§ Minimum Detectable Distance (Resolution)
Search resolution is the ability to clearly distinguish between rows with similar Hamming distance values. This can be measured in terms of the minimum detectable distance (MDD). If the discharge delay distribution corresponding to HDist=n has no overlap with the discharge delay distribution corresponding to HDist=(n ± δ), then MDD = δ (the smallest such δ). The MDD is plotted in Fig. <ref> for every HDist for V_s = 0.8 V and 1 V.
Due to the low LRS resistance value, the SOT-5T case has a large current flowing through SBL/SBLB. This contributes to a substantial IR drop over the SBL/SBLB because of which the V_sot mismatch for the row furthest from the driver is lower than the V_sot mismatch for the row closest to the driver. Widening the SBL/SBLB wires helps in reducing the IR drop.
Increasing the search voltage to 1V increases V_sot for mismatch. A larger mismatch V_sot value reduces the overall delay due to larger discharge current. Hence, the difference in delays for consecutive Hamming distances decreases. However, since the ML delay is inversely proportional to the discharge current, the impact of V_sot mismatch on delay variation at various row positions is reduced for Vs=1 V compared to Vs=0.8 V. Thus, the resolution improves with an increase in the search voltage as seen in Fig. <ref> due to lower variation in the delay values for a fixed HDist.
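Given Monte-Carlo delay samples per Hamming distance, the MDD defined above can be estimated, for instance, as the smallest offset at which the delay ranges of HDist = n and HDist = n + δ stop overlapping; the sketch below is illustrative and not the authors' exact procedure.

```python
# MDD at HDist = n from per-HDist delay samples (dict: HDist -> numpy array of delays).
import numpy as np

def minimum_detectable_distance(delay_samples, n):
    base = delay_samples[n]
    for delta in range(1, max(delay_samples) - n + 1):
        other = delay_samples.get(n + delta)
        if other is None:
            break
        # Delays fall with HDist, so separation means the slowest HDist = n+delta row
        # is still faster than the fastest HDist = n row.
        if other.max() < base.min():
            return delta
    return None   # not separable within the available range
```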
§.§ Fixed-radius Search
Table <ref> defines the metrics used to evaluate the accuracy for a fixed-radius search operation. Table <ref> shows the results for a fixed radius search operation for a Hamming distance threshold of 20 over a randomly generated dataset with 10000 128-bit vectors stored over multiple CAM arrays of size 64x128. For the ideal case in Table <ref>, we ignore interconnect parasitics.
We see a drop in precision for the realistic case since the rows (with HDist > 20) further away from the search drivers have ML delays larger than the threshold delay used to capture all the rows with Hamming distances ≤ 20. Precision is closely related to the resolution of the CAM design. Since Vs=1 V has higher resolution, it shows better precision. We use this fixed radius search to implement a CAM-based recommendation system to analyze results at the application-level in Section V.
§ APPLICATION-LEVEL EVALUATION AND BENCHMARKING
To evaluate and benchmark the application-level accuracy and performance, we compare the proposed 5T SOT-CAM design with the previously proposed 3T SOT-CAM design <cit.> along with the FeFET <cit.> and SRAM based CAM <cit.> designs using layout extracted netlists<cit.>.
In all the cases, complementary data is stored in a CAM cell, the matchlines are precharged to Vdd (0.7 V) before evaluation and the searchlines (SBL and SBLB) are driven to Vs/0 depending on the search data.
For the 7nm FeFET-CAM, we use an FeFET with memory window of 0.46 V which has been reported in <cit.>. The capacitance of ferroelectric layer is calculated considering the ferroelectric layer thickness of 5nm and a dielectric coefficient of 35 <cit.>.
§.§ Array-level Results
Fig. <ref> compares the minimum detectable distance for various CAM designs. The FeFET CAM resolution suffers due to the FeFET capacitance which contributes to significant RC delay. The SOT-3T case uses a much larger LRS resistance value of 1 MΩ; hence, the design has a lower IR drop and a larger RC delay than the SOT-5T case. When Vs=0.8 V, the SOT-3T design has a better resolution than the SOT-5T design. As the HDist increases, <cit.> shows that the difference in the consecutive delays drops which worsens the resolution. With the increasing HDist values, the delay values for the SOT-5T case drop at a steeper rate in comparison to the SOT-3T case where due to the large RC delay for charging the gate of the discharge transistor, ML discharge starts at a lower V_sot value. This means that the difference in the discharge delays of the consecutive Hamming distances decrease at a slower rate for the SOT-3T case than for the SOT-5T case, which is why the former has better resolution than the latter.
When Vs is increased to 1V, the resolution for the SOT-5T case improves because the difference between the discharge delays for the rows furthest and closest to the driver decreases. However, for the SOT-3T case, the RC delay dominates the ML discharge delay especially at the larger Hamming distance values. This prevents the difference in delays between the rows closest and furthest to the driver from shrinking as much as it does for the SOT-5T case when the search voltage is increased. Thus, for larger search voltages, the SOT-3T case shows lower improvement in the resolution than the SOT-5T case.
Table <ref> shows the results for a fixed radius search with HDist=20. The precision is better for the designs with better resolution, i.e. lower detectable distance.
§.§ Application-level Results
To benchmark the results at the application-level in comparison to the other state-of-the-art designs, we look at a sequential recommendation system where the similarity search is implemented using CAM arrays.
We use the sequential recommendation model from <cit.>. In this model, a self-attention block is used to predict the embedding of the next item that should be recommended to a user. In <cit.>, they use dot product ranking (DPR) to find the top-k items that are closest to the predicted item embedding. Using CAMs for ranking can be very costly, as it would require ML discharge delay ranking to rank the top-k items, which needs a significant amount of peripheral circuitry. Instead, CAMs are used for candidate generation with fixed-radius near neighbor search and these candidates are passed to the ranking stage to rank the top-k items. In this way, one can significantly reduce the number of items that need to go through DPR, which is computationally expensive. We use the MovieLens 1M dataset from <cit.> to train and test our model. Item embeddings are stored over multiple 64x128 CAM arrays using LSH encoding <cit.> with 128 bits. The attention model was trained for 100 epochs. During inference, we used a test case with 1000 randomly selected negative items and 1 ground truth next item using the strategy from <cit.>. HR@10 counts the fraction of times that the ground-truth next item is among the top 10 items after ranking for all valid test users. Our results show that with the use of CAMs, the final HR@10 (Table <ref>) achieved is the same as that obtained using DPR for the entire test set (0.32) while requiring fewer dot product operations.
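An end-to-end sketch of this candidate-generation pipeline is given below (random-hyperplane LSH encoding, Hamming-radius candidate generation emulating the CAM search, then dot-product ranking of the candidates only); the 128-bit codes and radius mirror the text, while all function names and the specific LSH construction are illustrative assumptions.

```python
# LSH-encode item embeddings, generate candidates within a Hamming radius, rank by dot product.
import numpy as np

def lsh_encode(embeddings, n_bits=128, seed=0):
    planes = np.random.default_rng(seed).standard_normal((embeddings.shape[1], n_bits))
    return (embeddings @ planes > 0).astype(np.uint8)

def cam_candidates(item_codes, query_code, radius=20):
    hdist = np.count_nonzero(item_codes != query_code, axis=1)
    return np.flatnonzero(hdist <= radius)

def hit_at_10(query_emb, item_embs, item_codes, truth_idx, radius=20):
    query_code = lsh_encode(query_emb[None, :])[0]
    cand = cam_candidates(item_codes, query_code, radius)
    top10 = cand[np.argsort(-(item_embs[cand] @ query_emb))[:10]]
    return int(truth_idx in top10)   # HR@10 is the mean of this over test users
```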
Table <ref> shows the results of using SOT-5T, SOT-3T, SRAM, and FeFET based CAMs for fixed-radius near neighbor search for candidate generation. The improvement in the number of dot product ranking operations is inversely proportional to the pool size of the candidates. The number of candidates outputted by the various CAM designs follows a pattern similar to the trend in resolution. FeFET-CAM shows the least amount of improvement. Larger search voltages work better for the SOT-3T case and more so for the SOT-5T case.
§.§ Energy and Delay Results
Tables <ref> and <ref> show the search energy and delay for various designs for exact search and approximate search, respectively. The search delay for the SOT-3T case is higher than that for the SOT-5T case for larger Hamming distances since the RC delay in the circuit keeps the delay high as compared to the SOT-5T case where delay reduction with increasing Hamming distance is steeper. Increasing search voltages reduces delay due to larger V_sot mismatch voltage and as seen previously, improves the quality of search. Due to the large reduction in the delay, the search energy for Vs=1V decreases despite the increase in the search voltage. The energy consumption for the SOT-5T case is larger than the SOT-3T case due to the lower LRS resistance value used in the former design. The SOT based designs have larger search energy values compared to the SRAM and FeFET based designs since the former have large currents flowing through their arrays while the SRAM and FeFET based designs only require charging the gates of the transistors or FeFETs.
§ CONCLUSION
In summary, we have presented a novel non-volatile spin transfer torque assisted spin-orbit torque based ternary content addressable memory with 5 transistors and 2 magnetic tunnel junctions. By using an STT-assisted write process, the design eliminates the need for a magnetic field for the write operation, thereby improving magnetic immunity in comparison to the previous SOT-CAM design which used magnetic field-assisted write. We have performed a comprehensive study of the proposed design in terms of write, exact search, and approximate search. To accurately account for the impact of various sources of circuit parasitics at advanced nodes like 7nm, we have used SPICE circuit simulations with layout extracted parasitics for bit-cells. We optimized our layout, array size, and search voltages to ensure accurate search operations. We project a search error rate for exact search operations lower than 3.9x10^-9%, when various sources of variation are considered. For the approximate search, we show that the SOT-5T case with a search voltage of 1V has the best resolution amongst its counterparts. Our results show that moving to the STT-assisted SOT write operation to improve the magnetic immunity comes at the cost of a 1.3x increase in area and a 7x increase in search energy. Finally, we benchmarked our design against SOT-3T, SRAM, and FeFET-based CAM designs using a CAM-based recommendation system, where our design achieves a 4.88x operational speedup.
§ ACKNOWLEDGMENT
The authors gratefully thank Professors D. Ralph, S. X. Wang, and Drs. S. Dutta, and V. Kumar for insightful discussions.
| With the unprecedented growth of retrieval based applications in a diverse range of areas from recommendation systems for e-commerce <cit.> and video or image retrieval applications <cit.> to search engines <cit.> and genomics <cit.>, performing accurate and fast similarity search has become increasingly important since it is a vital function for all these use cases. Graph-based similarity search algorithms <cit.> have been extensively studied and have proven to be effective for low-dimensional data. However, these algorithms become inefficient as the dimensionality of the data increases <cit.>. In high-dimensional applications, such as image retrieval, hash encoding <cit.> is often employed to improve performance. Despite these advancements, both graph-based and hashing-based algorithms face challenges with scalability, as their execution time increases significantly with the number of samples. In modern applications, dataset sizes can be large in both dimensions (D) and number of samples (n), because of which similarity search becomes increasingly challenging <cit.>. Thus, hardware solutions like content addressable memories (CAMs), that can perform parallel in-memory search over an entire database, are being studied with great interest <cit.>. The speed-up in search they offer and their linear storage requirements have made CAMs suitable for applications such as memory augmented neural networks <cit.>, hyper-dimensional computing <cit.>, recommendation systems <cit.>, and dataset searches.
Many designs and devices have been explored to implement CAMs. Traditional CMOS implementations of TCAM (ternary content addressable memories) using SRAMs <cit.> require 16 transistors which limits the memory capacity. SRAMs also suffer from leakage which can become a major limiting factor for large data-sets and for applications with large idle times <cit.>. To address these challenges, CAM cells based on emerging non-volatile memory (NVM) devices such as resistive memories (RRAMs) <cit.>, ferroelectric field effect transistors (FeFETs) <cit.>, and spin-orbit torque (SOT) devices <cit.>, have been studied. Each of these technology options comes with its own set of advantages and limitations. RRAM-based TCAMs <cit.> need very sensitive sense amplifiers due to their low sensing margin. Phase change memories (PCMs) suffer from resistance drift issues <cit.> and require large write voltages. FeFETs are voltage-controlled and can be quite energy efficient and compact. However, they generally suffer from poor endurance <cit.>, have a finite read-after-write <cit.>, and generally require large write voltages (4V) <cit.>. SOT devices theoretically have unlimited endurance, good retention times <cit.>, and compatibility with CMOS back-end-of-line (BEOL) <cit.>. Large-scale wafer-level implementations of SOT devices have already been demonstrated <cit.>. Although they need larger write energy than FeFET and CMOS-based CAMs due to their current-based write scheme <cit.>, most CAM-based applications are search-intensive, with a limited number of writes.
For SOT devices, the magnetic orientation of the ferromagnetic free layers is determined by their anisotropy direction, and they can have either in-plane magnetic anisotropy (IMA) or perpendicular magnetic anisotropy (PMA) <cit.>. The IMA-based devices rely on the aspect ratio of the ferromagnet to maintain sufficient thermal stability and retention time <cit.>. This makes IMA devices hard to scale to advanced technology nodes, as the thermal stability becomes very sensitive to lithography variations. On the other hand, PMA devices rely on interface anisotropy <cit.> for thermal stability and are easier to scale to advanced nodes.
Despite these advantages, one major concern with PMA based SOT devices is that they rely on external magnetic field to achieve the required symmetry breaking for deterministic write operation <cit.>.
While there has been research to mitigate this issue by using magnetic hard mask <cit.> or exchange bias <cit.>, these solutions may have further challenges related to scaling to advanced nodes and may sacrifice magnetic immunity <cit.>.
To solve this issue, we use STT-assisted SOT switching which eliminates the need for a magnetic field for the write operation. Also, compared to the previous SOT-CAM <cit.> implementations, our current design does not require any changes to the bit-cell to enable ternary storage capabilities.
The rest of the paper is organized as follows. After this introduction, we discuss the cell design in section II. Section III discusses the write operation and Section IV discusses the exact and approximate search operations, search error rate, and resolution. In Section V we evaluate and benchmark our design against other CAMs at the 7nm technology node using resolution, energy, and delay as metrics. In addition, we benchmark the design at the application-level using a CAM-based recommendation system. Finally, Section VI concludes the paper. | null | null | null | null | In summary, we have presented a novel non-volatile spin transfer torque assisted spin-orbit torque based ternary content addressable memory with 5 transistors and 2 magnetic tunnel junctions. By using an STT-assisted write process, the design eliminates the need for a magnetic field for the write operation, therefore, improving magnetic immunity in comparison to the previous SOT-CAM design which used magnetic field-assisted write. We have performed a comprehensive study of the proposed design in terms of write, exact search, and approximate search. To accurately account for the impact of various sources of circuit parasitics at advanced nodes like 7nm, we have used SPICE circuit simulations with layout extracted parasitics for bit-cells. We optimized our layout, array size, and search voltages to ensure accurate search operations. We project a search error rate for exact search operations lower than 3.9x10-9%, when various sources of variation are considered. For the approximate search, we show that the SOT-5T case with a search voltage of 1V have the best resolution amongst its counterparts. Our results show that moving to STT-assisted SOT write operation to improve the magnetic immunity comes at the cost of a 1.3x increase in the area and 7x increase in the search energy. Finally, we benchmarked our design against SOT-3T, SRAM, and FeFET-based CAM designs using a CAM-based recommendation system, where our design shows 4.88x improvement in the operational speedup. |
http://arxiv.org/abs/2409.17740v1 | 20240926111515 | AnyLogo: Symbiotic Subject-Driven Diffusion System with Gemini Status | [
"Jinghao Zhang",
"Wen Qian",
"Hao Luo",
"Fan Wang",
"Feng Zhao"
] | cs.CV | [
"cs.CV"
] |
AnyLogo: Symbiotic Subject-Driven Diffusion System with Gemini Status
Jinghao Zhang, Wen Qian, Hao Luo, Fan Wang, Feng Zhao
==============================================================================================================================================================================================================================================================================
§ ABSTRACT
Diffusion models have made compelling progress on facilitating high-throughput daily production.
Nevertheless, appealing customized requirements still suffer from instance-level finetuning to attain authentic fidelity.
Prior zero-shot customization works achieve the semantic consistence through the condensed injection of identity features, while addressing detailed low-level signatures through complex model configurations and subject-specific fabrications, which significantly break the statistical coherence within the overall system and limit the applicability across various scenarios.
To facilitate the generic signature concentration with rectified efficiency,
we present AnyLogo, a zero-shot region customizer with remarkable detail consistency,
building upon the symbiotic diffusion system with eliminated cumbersome designs.
Streamlined as vanilla image generation,
we discern that the rigorous signature extraction and creative content generation are promisingly compatible and can be systematically recycled within a single denoising model.
In place of external configurations, the gemini status of the denoising model promotes reinforced subject transmission efficiency and a disentangled semantic-signature space with continuous signature decoration.
Moreover, a sparse recycling paradigm is adopted to prevent the duplication risk with a compressed transmission quota for diversified signature stimulation.
Extensive experiments on constructed logo-level benchmarks demonstrate the effectiveness and practicability of our methods.
§ INTRODUCTION
Diffusion models have demonstrated impressive capabilities in creative content generation, such as marvelous image generation <cit.>, image manipulation
<cit.>, video generation <cit.>, audio-synchronized generation <cit.>, etc., and profoundly impact daily creative production.
The large volume of generative requests necessitates expeditious responses, while appealing customization aspirations still suffer from instance-level finetuning for authentic fidelity.
Promisingly, recent efforts have been witnessed toward zero-shot customization with a single forward pass, which achieves semantic consistency through the condensed injection of identity features, with
applications involving the object-level manipulation <cit.>, facial fidelity generation <cit.>, etc,
while the meticulous low-level signatures are less concentrated.
In this regard, one may ask whether we really need the low-level signatures in daily customization.
Beyond semantically recognizable similarity, there are numerous fidelity-disciplined scenarios such as text glyph generation <cit.>, image character animation <cit.>, virtual tryon <cit.>, etc., that strive for extraordinary signature consistency with low visual tolerance.
Moreover, brand logo customization directly involves copyright licensing,
which renders moderate semantic injection insufficient for signature-consistent customization, as shown in Fig. <ref>.
Current practices incorporate the auxiliary configured ControlNet <cit.> and ReferenceNet <cit.> for reinforced detail enhancement through progressive residual complement or hierarchical mutual spatial attention.
However, the complex model overconfigurations significantly break the statistical coherence within the overall system, as shown in Fig. <ref> (a), resulting in suboptimal signature transmission efficiency with a declined proportion of the accumulated subject attention score; we refer to this as an overconfigured system.
Besides, there are also specialized attempts involving the utilization of the OCR model in precise text generation <cit.> and the posture steered warping operation in tryon application <cit.>, however, the subject-specific fabrications considerably narrow the applicable scenario potential.
In light of this, it is prospective to facilitate the generic signature concentration in daily customization with rectified transmission efficiency.
To this end, we present AnyLogo, a zero-shot region customizer with remarkable low-level signatures consistency,
proficient in diversified graphic patterns and text glyphs.
Streamlined as vanilla image generation,
AnyLogo is built upon the symbiotic diffusion system with economic model recycling policy,
where we discern that the rigorous signature extraction and creative content generation can be systematically recycled within a single denoising network.
The symbiotic diffusion system manifests promisingly compatible generation capability with several peculiarities:
* In place of the external configuration, the gemini status of the denoising network, i.e., the signature extraction and content generation, largely preserves the statistical coherence of the system owing to self-delivery, yielding efficient signature transmission with reinforced subject attention, as shown in Fig. <ref> (a) and (b).
The efficiency gain of the symbiotic system grows as more operations are performed within the denoising model.
* The low-level signatures derived from the symbiotic system are highly semantic-independent: the native diversified generation capability is conserved even with blocked signature delivery, in contrast to the collapsed status of overconfigured systems, as shown in Fig. <ref>. Moreover, the symbiotic system enjoys a continuous signature decoration space.
Apart from that, the overloaded signatures generically incur the potential duplicated risk.
Preceding works introduce various compressed signals such as
landmark representation <cit.>,
high-frequency map <cit.> for signature delivering,
which are not accommodated by the symbiotic system owing to the altered signal states.
In consequence, we adopt a sparse recycling paradigm that randomly discards the self-delivered signatures for a trimmed transmission quota, stimulating diversified hyper-representations of the signature with scene harmonization.
Additionally, we show that AnyLogo also supports diversified highlighting of the specified subject area with sensible background translation.
To comprehensively evaluate our method, a logo-level customization benchmark is constructed, involving ∼1k high-quality pairs with rich textures, collecting from the open source wild logo detection datasets, tryon datasets with brand annotation, and text glyph datasets.
Extensive experiments on constructed benchmarks demonstrate the effectiveness and practicability of our methods.
§ RELATED WORK
§.§ Prompt-driven Image Region Manipulation
Creative and convenient image region manipulation has attracted increasing number of people to exercise in their daily production, accredited to the remarkable progress of the prevailing generative model.
Basically, the user practice can be categorized into the following three paradigms.
The text-driven image region manipulation <cit.> operates on the candidate region with a single text prompt, which specifies the desired attribute or object in principle, resulting in a semantically aligned appearance.
Despite the flexibility, the precise control manifests to be imperative.
The image-driven image region manipulation
<cit.> transports the image prompt to the candidate region, delivering the concrete manipulative intention, resulting in content preservation and a realistic outlook.
Traditional image composition methods have investigated such aspiration for a long while, including image harmonization <cit.>, image blending <cit.>, and geometric correction <cit.>.
However, the intricate subbranches perplex the feasible pipelines, and the low-level manipulation is more concentrated with restricted semantic conformity.
Recent diffusion models <cit.> have shown impressive image composition capability, which incorporate the image prompt into the denoising process through condensed encoding embedding injection <cit.>.
While the details are further complemented with low-level feature interaction <cit.>.
Toward the synergic, the multimodal-driven image manipulation <cit.> are further investigated with interleaved text and image prompt, fortified with multimodal large-language models for diversified token injection.
§.§ ID-preserved Image Generation
ID-preserved image generation is widely requisite in applications. Prior works on customized concept learning <cit.> generate subject-relevant images with arbitrary text prompts from a few user-provided images. However, the optimization typically involves intensive instance-level finetuning, which is cumbersome for large-scale deployment.
To this end, recent efforts on facial identity preservation <cit.> accomplish the customization with a single forward pass, depending on the injection of identity features extracted from CLIP or a facial model, resulting in semantically recognizable fidelity and flexible text controllability.
In the field of human animation, the image-to-video methods <cit.> generate reference-based video sequences following the motion signals, and rely heavily on identity consistency. The common practice predefines a configured ControlNet or ReferenceNet to retain precious subject details with hierarchical representation interaction, resulting in impressive ID preservation.
Additionally, there are also attempts toward training-free customization <cit.> through multi-branch attention manipulation with parallel reconstructive diffusion processes; despite the flexibility, they rely on the pretrained model with confined fidelity and concentrate on the original text-driven manipulation.
§ METHOD
In this section, we provide a brief background of the text-to-image diffusion models and current subject-driven customization practices in Sec. <ref>.
Then, we introduce our symbiotic diffusion system built with model recycling policy in Sec. <ref>.
The sparse recycling with compressed transmission quota is presented in Sec. <ref>.
Finally, we briefly show the data collection criterion in Sec. <ref>.
§.§ Preliminary
Diffusion models <cit.> were introduced as latent variable generative models with forward and reverse Markov chain, which gradually perturb the data with noise until tractable distribution and reverse the process with score matching or noise prediction for sampling.
Combined with prompt conditions, diffusion models are capable of generating images with aligned user aspiration.
In this work, we conduct experiments on the prevailing Stable Diffusion <cit.>, which comprises an encoder ℰ: 𝒳→𝒵, and a decoder 𝒟: 𝒵→𝒳, where x̃ = 𝒟(ℰ(x)).
The denoising network ϵ_θ is operated in the latent space with attached conditional encoder τ_θ.
The training objective for stable diffusion is to minimize the denoising objective by
ℒ = 𝔼_z,c,ϵ,t[‖ϵ-ϵ_θ(z_t,t,τ_θ(c))‖^2_2],
where z_t is the latent feature at timestep t, and c is the given prompt condition.
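To make the objective concrete, the following is a minimal PyTorch-style sketch of one training step of this denoising loss; the module names (vae, unet, cond_encoder, scheduler) are illustrative placeholders supplied by the caller, not the actual implementation.

import torch
import torch.nn.functional as F

def sd_denoising_loss(vae, unet, cond_encoder, scheduler, image, prompt_cond):
    # z = E(x): encode the image into the latent space.
    with torch.no_grad():
        z = vae.encode(image)
    # Sample a timestep and Gaussian noise, and form the noisy latent z_t.
    t = torch.randint(0, scheduler.num_train_timesteps, (z.shape[0],), device=z.device)
    noise = torch.randn_like(z)
    z_t = scheduler.add_noise(z, noise, t)
    # Predict the noise conditioned on tau_theta(c) and regress it with an L2 loss.
    cond = cond_encoder(prompt_cond)
    noise_pred = unet(z_t, t, cond)
    return F.mse_loss(noise_pred, noise)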
Precedent customization methods involve instance-level optimization on a specific subject with a prior preservation loss <cit.> or concept embedding <cit.>.
Recent practices on zero-shot customization largely derive from the stable diffusion and substitute the conditional encoder τ_θ with image modal for semantic injection.
Moreover, the low-level signatures are complemented with paralleled model configurations such as ControlNet or ReferenceNet, which formulate the denoising network as ϵ_θ(z_t,t,τ_θ(c),ζ_θ(𝒯(c))), where ζ_θ provides the hierarchical subject interactions, and 𝒯 is the transformation for delivering diversified hint signals <cit.> with potential information bottleneck.
§.§ Symbiotic Diffusion System with Model Recycling
The intermediate latents in diffusion models contain rich semantic clues and copious fine-grained details <cit.>, while the derived downstream applications range from zero-shot classification, segmentation, to generative prompt-driven manipulation <cit.>, and 3D rendering <cit.>.
As shown in Fig. <ref> (a), we first exclude the impact of the public VAE component, where the decoded VAE latents with zoom-in transmission disclose almost spotless fidelity, i.e., x̃≃x, implying that the VAE is not the bottleneck for signature conservation.
Additionally, without bells and whistles, the glamorous images can be exactly generated through advanced diffusion inversion techniques <cit.> with a single initial noise, i.e., x≃𝒟(Γ(z_t,t,ϵ_θ(z_t,t,τ_θ(c)))), where Γ is the noising schedule updated from t: T→0, and z_T=Γ^-1(z_t,t,ϵ_θ(z_t,t,τ_θ(c))) obtained from t: 0→ T.
As shown in Fig. <ref> (b), the appealing fidelity indicates the huge potential of the latent denoising model in signature collection, transmission, and representation along the denoising process.
Therefore, we suppose that the denoising network ϵ_θ is sufficient for grasping the informative evidence we desire about the signature.
In contrast to prevailing approaches that are dedicated to external overconfigured assistant models ζ_θ, we direct our efforts toward the inside of the diffusion system for holistic efficiency.
Task Formulation. We consider subject-driven region customization, which ordinarily necessitates scrupulous signature-level consistency compared to full-size image generation.
The overall process involves the transportation of the arbitrary subject prompt c_sub to the candidate region specified by the binary mask ℳ_sce of the scene image x_sce for seamless consolidation, formulated as
x = Γ(ℳ̂_sce⊙z_sce^t + (1-ℳ̂_sce) ⊙z_t,t,ϵ_t)
where ℳ̂_sce is the interpolated mask formed to the latent size, z_sce^t is the encoded vae latent of x_sce with forward diffusion noise <cit.>, ϵ_t is the predicted t-step noise from the denoising network ϵ_θ.
The progressively infused scene latents provide the reliable background preservation.
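A minimal sketch of one step of this formulation is given below, assuming a generic scheduler object for the reverse update and forward noising; which side of the mask is preserved depends on the mask convention, so the equation above is simply followed literally.

import torch

def background_preserving_step(scheduler, noise_pred, z_t, z_scene, mask_latent, t):
    # One reverse-diffusion update followed by the blending of the equation above:
    # the part selected by the interpolated mask is overwritten with the scene
    # latents noised to the current level t, so that region is re-infused at
    # every denoising step.
    z_prev = scheduler.step(noise_pred, t, z_t)                         # denoise the full latent
    z_scene_t = scheduler.add_noise(z_scene, torch.randn_like(z_scene), t)
    return mask_latent * z_scene_t + (1.0 - mask_latent) * z_prev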
In the following, we present the symbiotic mechanism inside ϵ_θ for signature delivering.
Recycling Policy. The symbiotic diffusion system is built upon the model recycling policy with self-delivered signature payload.
As shown in Fig. <ref>, the holistic subject-driven diffusion workflow is streamlined as vanilla image generation, where the accessorily configured consistency-relevant component ζ_θ is discarded.
We discern that the rigorous signature collection and the creative content representation are promisingly compatible and can be systematically recycled within a single denoising network.
Principally, the denoising objective of the symbiotic subject-driven diffusion system reinforces the vanilla image generation in Eq. <ref>, and given by
ℒ = 𝔼_z,c,ϵ,t[‖ϵ-ϵ_θ(ẑ_t,t,τ_θ(c),ϵ_θ(ẑ^t_𝒯(c)))‖^2_2],
where ẑ_t is the composition of the noisy latents z_t, mask image latents ℳ̂_sce⊙z_sce, and binary mask ℳ̂_sce, z^t_𝒯(c) is the encoded subject latent under the potential transformation 𝒯 with t-step forward noise, and forming the input space in the same way.
The signature extraction status is abbreviated from ϵ_θ(z^t_𝒯(c),t,τ_θ(c)) for simplicity, which shares the same workflow as content generation, except for the intermediate delivered signatures.
We enforce the denoising objective solely on the content generation status and release it from the signature extraction; the gemini status alternate with a concurrent timestep.
Consistent with <cit.>, we replace the conditional encoder τ_θ with image modal for semantic injection,
while the gemini status provide the hierarchical interactions inside the denoising network ϵ_θ between the transmitted subjects z^t_𝒯(c) and generated contents z_t.
Explicitly, we cache the subject signatures at each self-attention procedure within the decoder part of the denoising network, and operate the inclusive mutual spatial attention in content generation flow.
The delivered hierarchical signatures formulate the intersection of the gemini status, given by
𝐂^t,i_z = Softmax (Q^t,i_z·K̂^t,i/√(d_i))·V̂^t,i,
where Q^t,i_z is the query and derived from the generated content latents z_t^i at i-th layer index, K̂^t,i and V̂^t,i are key and value and derived from the gemini status [z_t^i,z^t,i_𝒯(c)],
d_i is the feature dimension, and 𝐂^t,i_z is the output of the current attention procedure.
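The cached-signature attention above can be sketched as follows; to_q, to_k, and to_v stand in for the layer's existing projection modules and are assumptions about the interface, not the paper's exact implementation.

import torch

def mutual_spatial_attention(to_q, to_k, to_v, z_content, z_subject_cache):
    # Self-attention in the content-generation flow where keys and values are
    # built from the gemini status [z_t^i, z^{t,i}_subject], i.e., the content
    # latents concatenated with the cached subject signatures of the same layer.
    q = to_q(z_content)                                      # queries from generated content
    kv_input = torch.cat([z_content, z_subject_cache], dim=1)
    k, v = to_k(kv_input), to_v(kv_input)
    d = q.shape[-1]
    attn = torch.softmax(q @ k.transpose(-2, -1) / d ** 0.5, dim=-1)
    return attn @ v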
Symbiotic Temperament. Peculiarly, the low-level signatures derived from the symbiotic diffusion system are highly semantic-independent, and the creative content generation capability is widely preserved even with the blocked signature flow ϵ_θ(z_t,t,τ_θ(c),∅), which is significantly different from the collapsed quality and diversity in overconfigured systems, as shown in Fig. <ref>.
We suppose the auxiliary ζ_θ induces the leakage of generative expertise through the entangled forward interdependence with the denoising network ϵ_θ.
Moreover, we show that the symbiotic system enjoys the continuous transmission space for progressive signature decoration with controllable signature flow.
§.§ Sparse Recycling
Despite the scrupulous signature concentration, the overloaded transmission could cause a potential duplication risk, with ultimate subject fidelity but inferior scene conformity.
Prior investigations settle the information bottleneck with suppressive transformation 𝒯 for signature delivering, including landmark representation and high-frequency map.
However, the symbiotic diffusion system predefines the congruous status behavior and requests the delivered subject within the same signal space as the denoising latents, i.e., the identity 𝒯, resulting in complete signature disclosure.
In light of this, the sparse recycling paradigm is adopted for the symbiotic system with compressed transmission quota.
Sidestepping the transformation of the input signal, we randomly discard the self-delivered signatures within the denoising network at cached attention procedures in the content generation flow.
Table: The transmission gain of the sparse recycling paradigm in delivering the faithful subject in both the overconfigured system and the symbiotic system.
Configured CLIP-S ↑ LPIPS ↓ MUSIQ ↑
ControlNet 87.9 (+ 0.2) 0.141 (+ 0.004) 68.6 (+ 0.5)
ReferenceNet 90.2 (+ 0.4) 0.134 (+ 0.011) 68.2 (+ 0.6)
Recycling 91.3 (+ 0.3) 0.127 (+ 0.009) 68.9 (+ 0.8)
For consistency, we still utilize 𝒯 to denote the suppressive transformation, given by
𝒯_ϵ_i,Λ(c) =
z^i_c, if k ≤Λ and k ∼𝒰(0,1)
∅, otherwise
where k is sampled from the uniform distribution 𝒰(0,1) for comparison with the threshold Λ, z^i_c is the cached subject signature at i-th attention procedure, ∅ represents the null with discarding operation,
𝒯_ϵ_i,Λ is the layer-wise transformation attached to the denoising network ϵ_θ that compress the signature flow.
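A minimal sketch of this layer-wise transformation is shown below; the function name and the fallback behavior when the signature is dropped are assumptions for illustration.

import random

def sparse_recycle(cached_signature, threshold=0.6, training=True):
    # Layer-wise transformation T_{eps_i,Lambda}: during training the cached
    # subject signature is kept with probability Lambda and discarded otherwise,
    # trimming the transmission quota of the self-delivered signatures.
    if training and random.random() > threshold:
        return None   # discarded: the layer falls back to plain self-attention
    return cached_signature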
In particular, besides the prevention of the duplicated risk, the compressed transmission quota implicitly steers the diversified hyper-representations of the signature, facilitating the symbiotic system with improved subject transmission quality for scene harmonization,
as shown in Tab. <ref>, which is also competent for the overconfigured system.
Consequently, the denoising objective of the symbiotic diffusion system with sparse recycling is given by
ℒ = 𝔼_z,c,ϵ,t[‖ϵ-ϵ_θ(z_t,t,τ_θ(c),𝒯_ϵ_θ,Λ(c))‖^2_2],
where 𝒯_ϵ_θ,Λ is the collection of 𝒯_ϵ_i,Λ along the denoising network.
At inference, sparse recycling is disabled by default with complete signature transmission, and enabled for continuous signature decoration.
Additionally, the classifier-free guidance <cit.> is incorporated to provide the conditional direction, formulated as
ϵ_θ,c(z_t,t,w,c) =
ϵ_θ(z_t,t,∅,∅) + w(ϵ_θ(z_t,t,τ_θ(c),𝒯_ϵ_θ,Λ(c)) - ϵ_θ(z_t,t,∅,∅)),
where w is the guidance scale parameter.
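The guidance rule can be sketched as below, assuming the denoising network accepts optional semantic and signature conditions; the keyword arguments are illustrative placeholders.

def guided_noise(unet, z_t, t, cond, signatures, w):
    # Classifier-free guidance: extrapolate from the unconditional prediction
    # toward the prediction conditioned on the semantic embedding tau_theta(c)
    # and the (possibly sparsified) self-delivered signatures, with scale w.
    eps_uncond = unet(z_t, t, cond=None, signatures=None)
    eps_cond = unet(z_t, t, cond=cond, signatures=signatures)
    return eps_uncond + w * (eps_cond - eps_uncond)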
§.§ Dataset Collection
As there is a lack of publicly available datasets tailored for logo-level customization with rich graphic patterns,
we present BrushLogo-70k, collected from open data sources comprising wild logo detection datasets such as OpenLogo <cit.> and OSLD <cit.>, virtual tryon datasets with brand regions such as Dresscode <cit.> and VITON-HD <cit.>, and the text glyph dataset AnyWord-3M <cit.>.
The regions of interest are either ancillarily provided or internally annotated, and data with irregular region size, extreme aspect ratio, occlusion, or distorted quality are strictly excluded.
The detailed data composition is provided in the Appendix <ref>.
The evaluation set is constructed with 1k high-quality entities extracted from the collection under raised criteria, referred to as the AnyLogo-Benchmark.
§ EXPERIMENTS
§.§ Implementation Details
We implement AnyLogo based on the Stable Diffusion v1.5 for weights initialization, while the VAE module is replaced with the SDXL version for advanced regression quality.
We train our model on the constructed BrushLogo-70k dataset with 4 A800 GPUs for 50,000 steps.
We preprocess the scene images with zoom-in strategy, and the subject images are augmented with horizontal flip, rotation, optical distortion, and super resolution.
Image sizes are set to be 512×512.
We choose AdamW optimizer with the fixed learning rate of 1 × 10^-5 and the batch size of 64.
The weighted sampling strategy is adopted for unbalanced data composition.
The threshold in sparse recycling is set to be 0.6,
and the conditional encoder is employed as DINOv2 <cit.> for visual semantic injection.
During inference, we adopt the DDIM sampler with 20 denoising steps. More details in Appendix <ref>.
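For reference, the settings stated above can be summarized in a small configuration sketch; the keys are illustrative only and do not correspond to an actual released config file.

train_config = {
    "base_model": "stable-diffusion-v1.5",   # weight initialization
    "vae": "sdxl-vae",                        # VAE replaced with the SDXL version
    "image_size": 512,
    "optimizer": "AdamW",
    "learning_rate": 1e-5,
    "batch_size": 64,
    "train_steps": 50_000,
    "sparse_threshold": 0.6,                  # Lambda in sparse recycling
    "semantic_encoder": "dinov2",             # conditional encoder for semantic injection
    "sampler": "ddim",
    "inference_steps": 20,
}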
Evaluation Metrics.
The evaluation involves two aspects: the customized scene region is supposed to be consistent with the provided subject, and the overall generated image should be photorealistic.
To this end, we introduce the following four metrics,
the CLIP-score and DINO-score are adopted for measuring the subject fidelity from the profile-level with the cosine similarity between the extracted embeddings of the customized region and the subject, and the LPIPS <cit.> is adopted for measuring the signature-level consistency.
The quality assessment metric MUSIQ <cit.> is engaged to evaluate the authenticity and harmony of the overall generated image.
Additionally, following <cit.>, the diversified generation capability of the subject-driven diffusion system with blocked signature flow is further quantified with averaged LPIPS similarity between the generated images under the same subject.
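A sketch of how the region-level fidelity scores could be computed is given below; the three model objects are assumed to expose a simple feature/forward call, which is an assumption rather than the exact APIs of the underlying libraries.

import torch.nn.functional as F

def region_fidelity_scores(clip_model, dino_model, lpips_model, region_crop, subject_img):
    # CLIP-/DINO-score: cosine similarity between embeddings of the customized
    # region crop and the reference subject (profile-level fidelity); LPIPS
    # measures signature-level (perceptual) consistency.
    clip_sim = F.cosine_similarity(clip_model(region_crop), clip_model(subject_img), dim=-1)
    dino_sim = F.cosine_similarity(dino_model(region_crop), dino_model(subject_img), dim=-1)
    lpips_dist = lpips_model(region_crop, subject_img)
    return clip_sim.mean().item(), dino_sim.mean().item(), lpips_dist.mean().item()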
Baselines.
We perform the comparison with following zero-shot region customization methods, including Paint-by-Example <cit.>, AnyDoor <cit.>, Graphit <cit.>, SD-Inpainting <cit.>, and IP-Adapter <cit.>.
SD-Inpainting is the text-driven method and we boost it with replaced CLIP image embeddings for subject transmission.
IP-Adapter is implemented with inpainting version.
The overconfigured system is implemented in the same experimental settings as AnyLogo for comparison, except for the extra configured ControlNet and ReferenceNet for signature delivering.
§.§ Comparison with Existing Alternatives
We provide the quantitative comparison results in Tab. <ref>, where AnyLogo is superior in maintaining signature consistency and semantic fidelity across diverse logo-level subjects, ranging from wild brand logos to tryon patterns and text glyphs.
It can be observed that solitary semantic injection struggles with signature-preserved customization.
Despite the complementary hint signals employed by AnyDoor for signature transmission, the contour profile is informatively insufficient due to lossy compression, and acts more like a spatial structure arrangement.
The qualitative results are presented in Fig. <ref>. Compared to the AnyLogo where the richly textured subjects are well transported to the candidate regions in the scene image with less distortion, other alternatives struggle in delivering the accurate low-level signatures with coarse semantic consistency, deviating from the color, pattern structure, and hallucination rendering.
Note that signature-level consistency differs dramatically from object-level concentration: dispersed and disconnected subjects are toilsome to grasp compared to strongly semantic, compact entities, and the interleaved text and graphic layouts pose a raised challenge.
§.§ Ablation Study
Overconfigured System.
We provide the comparison against the overconfigured system in Tab. <ref> with extra equipped ControlNet and ReferenceNet.
The baseline denotes the blocked signature flow with purely semantic injection.
It can be observed that, despite the improved faithfulness brought by the extra configured signature extractors, the model recycling policy with self-delivered signatures achieves superior fidelity.
Note that the sparse recycling is excluded for validation.
The visual comparisons are provided in Fig. <ref>, where the transmission distortion of the overconfigured system is manifested as distorted color and structure compared to the symbiotic system.
We further evaluate the diversity of the two systems with a disabled signature flow, where the symbiotic system exhibits higher diversity as quantified in Tab. <ref>, owing to the holistic system construction that precludes entangled generative expertise leakage; the visual comparisons are shown in Fig. <ref>.
[Figure] Tendency of sparse recycling with progressive signature delivering thresholds.
Position of the transmission. In Tab. <ref>, we provide the ablation experiments about the transmission position of the signature flow in model recycling policy.
It can be observed that the transmission is efficient in the decoder part and encounters obstacles in the encoder, which implies that the shallow layers are not yet prepared with a steady semantic layout for signature delivering, and the overloaded signatures are undesirable during content infancy.
Sparse Threshold Λ. The sparse recycling paradigm is evaluated in Fig. <ref> with various transmission thresholds in optimization and fully released in inference.
We present the comparison curves in terms of both subject fidelity and image quality.
Uncompressed signature transmission induces harmed image quality with a disharmonious duplication risk, as also shown in Fig. <ref>.
The blocked signature flow manifests restricted subject consistency with purely semantic injection.
Combined with sparse recycling, the symbiotic system is much more efficient at semantic-relevant signature concentration with excluded signature noise.
§.§ Discussion about the Fidelity and Future Works
We show that AnyLogo is not only proficient in region customization with arbitrary user-provided subjects, but also favours diversified outpainting with faithful subject highlighting, as shown in Fig. <ref>, where the subject regions are preserved with absolute accuracy, and the scene background is regenerated with diversified presentation.
We are delighted to point out that there are two ways to maintain signature-level consistency.
In case of the lower fidelity requests for the scene region,
it is of great potential to preserve the subject area for definitive fidelity with scale and shift manipulation, and the desired background could be complemented with semantic-faithful practices.
We note that the outpainting version of AnyLogo can be effortlessly implemented by simply reversing the binary mask of the scene image both in the input space and in the latent complement.
§ CONCLUSION
In this work, we proposed AnyLogo, a symbiotic subject-driven diffusion system with remarkable low-level signature consistency.
Streamlined as vanilla image generation, we discern that the imperative customized gemini status, i.e., the rigorous signature extraction and creative content generation, can be systematically recycled within a single denoising model and are promisingly compatible.
The model recycling policy promotes reinforced subject transmission efficiency with preserved systematic coherence, and a disentangled semantic-signature space with continuous signature decoration.
Besides, the sparse recycling paradigm is adopted to prevent the potential duplicated risk with compressed transmission quota for diversified signature stimulation.
Extensive experiments on constructed AnyLogo-Benchmark demonstrate the effectiveness and practicability of our method.
§ MORE MODEL DETAILS
We implement the conditional encoder τ_θ with hierarchical DINOv2 features for semantic injection, which is conducted at all cross-attention layers of the denoising model.
Specifically, we extract four groups of embeddings from DINOv2 corresponding to the four scales of the denoising model.
Each embedding incorporates the spatial patch tokens with size ℝ^257×1024, broadly distributed from the shallow to the deeper layers with a step size of 6.
The four groups of extracted semantic embeddings are progressively injected into the denoising model with symmetrical variation between encoder and decoder.
To perceive the background of the scene image for harmonious subject transmission, we incorporate an additional background embedding concatenated to each group of subject embeddings, which is extracted in the same hierarchical manner; each background embedding is represented by only a single global token of size ℝ^1×1024 to exclude the scene details.
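A sketch of this extraction is given below; get_intermediate_layers follows the public DINOv2 interface, and the exact call pattern and layer indices should be treated as assumptions rather than the paper's implementation.

import torch

def hierarchical_semantic_embeddings(dino, subject_img, scene_img, layer_step=6, num_groups=4):
    # Four groups of DINOv2 token embeddings (roughly R^{257x1024} each), one per
    # scale of the denoising model, taken from layers spaced by `layer_step`,
    # plus one global background token per group taken from the scene image.
    layer_ids = [i * layer_step - 1 for i in range(1, num_groups + 1)]  # 0-based block indices
    subj = dino.get_intermediate_layers(subject_img, n=layer_ids, return_class_token=True)
    scene = dino.get_intermediate_layers(scene_img, n=layer_ids, return_class_token=True)
    groups = []
    for (s_patch, s_cls), (b_patch, b_cls) in zip(subj, scene):
        subject_tokens = torch.cat([s_cls.unsqueeze(1), s_patch], dim=1)  # class + patch tokens
        background_token = b_cls.unsqueeze(1)                             # single global token
        groups.append(torch.cat([subject_tokens, background_token], dim=1))
    return groups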
§ DATA COMPOSITIONS
The detailed data composition of the BrushLogo-70k and AnyLogo-Benchmark is illustrated in Tab <ref>, which are adopted as train set and test set, respectively.
As presented in Sec. <ref>, the wild logos are collected from OpenLogo <cit.> and OSLD <cit.> with 23,947 and 25,955 subject-scene pairs, the brands in virtual tryon are collected from Dresscode <cit.> and VITON-HD <cit.> with 5,434 and 2,986 pairs, and the text glyphs are collected from AnyWord-3M-Laion <cit.> with 13,778 pairs.
Table: The composition of the collected dataset.
Data Split Wild Logo Virtual Tryon Text Glyph
BrushLogo-70k 49382 8420 13492
AnyLogo-Benchmark 520 223 286
The test set is constructed with 1k high-quality pairs extracted from the aforementioned collections under raised criteria.
The data are filtered with CLIP-IQA <cit.> along three dimensions, with prompts of quality, contrast, and sharpness.
Apart from that, irregular region sizes and aspect ratios are excluded.
The regions of the logo pattern or text in the scene images are either ancillarily provided (wild logo and text glyph) or internally annotated (virtual tryon).
For the wild logos, the subjects are paired by high DINO-similarity with other entities under the same brand, and in the virtual tryon they are provided by the paired product.
The subjects for text are extracted from their own scene images to prevent glyph distortion.
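An illustrative filtering rule in the spirit of this pipeline is sketched below; the thresholds and the precomputed CLIP-IQA score dictionary are assumed values for demonstration, not the ones actually used to build BrushLogo-70k.

def keep_sample(clipiqa_scores, region_box, image_size,
                min_score=0.5, min_frac=0.05, max_aspect=4.0):
    # CLIP-IQA is queried along the quality, contrast, and sharpness dimensions,
    # and pairs with a too-small region or an extreme aspect ratio are discarded.
    w, h = region_box[2] - region_box[0], region_box[3] - region_box[1]
    frac = (w * h) / float(image_size[0] * image_size[1])
    aspect = max(w, h) / max(1.0, min(w, h))
    quality_ok = all(clipiqa_scores[k] >= min_score for k in ("quality", "contrast", "sharpness"))
    return quality_ok and frac >= min_frac and aspect <= max_aspect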
§ LIMITATIONS
The limitations are encountered with two main problems, as shown in Fig. <ref>. The first is an unmatched aspect ratio between the subject and the customized region in the scene image. In order to maintain the
[Figure] The failure cases. The model tends to repeat the subject or hallucinate a new concept to fill the customized region when it encounters an unmatched aspect ratio.
subject fidelity with less distortion, we do not perform a resize operation in preprocessing, and the transmission tends to repeat the subject at the original ratio to fill the specified region.
The second is hallucination related to the scene image, which also occurs with an unmatched aspect ratio between the subject and the customized region. The model tends to hallucinate a new concept related to the background of the scene image, e.g., the side windows of a car, to fill the candidate region.
§ BROADER IMPACTS
The ability to manipulate logos could be beneficial for product promotion, poster making, logo alteration in advertising positions, etc., and the outpainting version with diversified highlighting backgrounds could ease the cost of venue rental and model hiring.
However, misuse could incur potential copyright problems and legal disputes.
Generative logos may confuse consumers attempting to discern authenticity and may impact the reputation of brands.
Furthermore, the popularization of generative models could impact graphic designers and brand professionals, as automated logo alteration with a similar semantic layout might reduce the demand for manual design work.
We preclude copyright infringement by infusing watermarks <cit.> to fingerprint the generated images.
§ COMPARISON OF THE OVERCONFIGURED SYSTEM AND SYMBOTIC SYSTEM
We provide the detailed illustration of the overconfigured diffusion system and the symbotic diffusion system for subject customization in Fig. <ref>.
As can be seen, the overconfigured system employs an individual model to extract the signatures of the subject for reinforced detail enhancement, with progressive residual complement in ControlNet <cit.> and hierarchical spatial attention in ReferenceNet <cit.>.
The symbiotic diffusion system is built upon the model recycling policy with eliminated signature-relevant model configurations, and the signature extraction and content generation are systematically recycled within a single denoising model.
We provide the detailed calculation process of Fig. <ref>, where the transmitted statistic latent difference (SLD) is calculated as the ℓ_1 error between the averages of the delivered subject latents and the corresponding denoising latents in the delivered layer.
We provide a comparison over four layers along the denoising model, including middle-attention0, up1-attention2, up2-attention2, and up3-attention2, where middle refers to the middle block of the denoising model, up refers to the decoder of the denoising model, and attention refers to the self-attention layer.
It can be observed that the overconfigured system exhibits a larger statistic discrepancy between the delivered subject latents and the corresponding denoising latents, compared to the symbiotic system, owing to the external engagement that breaks the statistic coherence.
Consequently, the accumulative subject attention (ASA) is significantly hampered, as shown in the shaded region of Fig. <ref> (a), which is calculated as the proportion of the accumulated attention score of the subject against the overall attention map in the delivered self-attention layers.
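The two diagnostics can be sketched as follows; the tensor layouts (batch and token dimensions, and the subject tokens occupying the last key positions) are assumptions for illustration.

import torch

def sld_and_asa(subject_latents, denoising_latents, attn_map, num_subject_tokens):
    # SLD: l1 error between the means of the delivered subject latents and the
    # corresponding denoising latents of a delivered layer. ASA: fraction of the
    # attention mass that content queries place on the appended subject tokens.
    sld = (subject_latents.mean(dim=(0, 1)) - denoising_latents.mean(dim=(0, 1))).abs().mean()
    asa = attn_map[..., -num_subject_tokens:].sum() / attn_map.sum()
    return sld.item(), asa.item()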
§ MORE VISUAL COMPARISONS
We provide the visual results of AnyLogo on object-level customization in Fig. <ref>, without tuning.
It can be observed that AnyLogo is proficient in rendering the portrait of a natural object for personalization, which is applicable to customizing user assets with personalized preference, such as pets and favoured cuisine.
In Fig. <ref>, we provide more visual comparison results of AnyLogo with other competing methods on logo-level customization, together with the overconfigured system equipped by ControlNet and ReferenceNet for visual ablation.
It can be observed that AnyLogo is proficient in maintaining the subject-specific signatures rather than hallucinating general semantics, including curve contours, graphic layouts, etc., and deals well with the scene background, such as occlusion.
http://arxiv.org/abs/2409.17250v1 | 20240925180654 | Kernelization Complexity of Solution Discovery Problems | [
"Mario Grobler",
"Stephanie Maaz",
"Amer E. Mouawad",
"Naomi Nishimura",
"Vijayaragunathan Ramamoorthi",
"Sebastian Siebertz"
] | cs.DS | [
"cs.DS",
"cs.CC",
"math.CO"
] |
Kernelization complexity of solution discovery problems
Mario Grobler
University of Bremen, Germany Stephanie MaazFunded by a grant from the Natural Sciences and Engineering Research Council of Canada.
University of Waterloo, Canada Amer E. Mouawad
American University of Beirut, Lebanon Naomi Nishimura[1]
University of Waterloo, CanadaVijayaragunathan RamamoorthiFunded by the “Mind, Media, Machines” high-profile area at the University of Bremen.
University of Bremen, GermanySebastian Siebertz
University of Bremen, Germany
===========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
In the solution discovery variant of a vertex (edge) subset problem Π on graphs, we are given an initial configuration of tokens on the vertices (edges) of an input graph G together with a budget b.
The question is whether we can transform this configuration into a feasible solution of Π on G with at most b modification steps.
We consider the token sliding variant of the solution discovery framework, where each modification step consists of sliding a token to an adjacent vertex (edge).
The framework of solution discovery was recently introduced by Fellows et al. [Fellows et al., ECAI 2023] and for many solution discovery problems the classical as well as the parameterized complexity has been established.
In this work, we study the kernelization complexity of the solution discovery variants of Vertex Cover, Independent Set, Dominating Set, Shortest Path, Matching, and Vertex Cut with respect to the parameters number of tokens k and discovery budget b, as well as structural parameters such as pathwidth.
§ INTRODUCTION
In the realm of optimization, traditional approaches revolve around computing optimal solutions to problem instances from scratch.
However, many practical scenarios can be formulated as the construction of a
feasible solution from an infeasible starting state.
Examples of such scenarios include reactive systems involving human interactions.
The inherent dynamics of such a system is likely to lead to an infeasible state.
However, computing a solution from scratch may yield one that differs arbitrarily from the starting state.
The modifications required to reach such a solution from the starting state may be costly, difficult to implement, or sometimes unacceptable.
Let us examine a specific example to illustrate.
A set of workers is assigned tasks so that every task is handled by a qualified worker.
This scenario corresponds to the classical matching problem in bipartite graphs.
Suppose one of the workers is now no longer available (e.g. due to illness); hence, the schedule has to be changed.
An optimal new matching could be efficiently recomputed from scratch, but it is desirable to find one that is as close to the original one as possible, so that most of the workers keep working on the task that they were initially assigned.
Such applications can be conveniently modeled using the solution discovery framework, which is the central focus of this work.
In this framework, rather than simply finding a feasible solution to an instance of a source problem , we investigate whether it is possible to transform a given infeasible configuration into a feasible one by applying a limited number of transformation steps.
In this work we consider vertex (edge) subset problems Π on graphs, where the configurations of the problem are sets of vertices (edges).
These configurations are represented by the placement of tokens on the vertices (edges) of the configuration.
An atomic modification step consists of moving one of the tokens, and the question is whether a feasible configuration is reachable after at most b modification steps.
Inspired by the well-established framework of combinatorial reconfiguration <cit.>, commonly allowed modification steps are the addition/removal of a single token, the jumping of a token to an arbitrary vertex/edge, or the slide of a token to an adjacent vertex (edge).
Problems defined in the solution discovery framework are useful and have been appearing in recent literature.
Fellows et al. <cit.> introduced the term solution discovery, and along with Grobler et al. <cit.> initiated the study of the (parameterized) complexity of solution discovery problems for various NP-complete source problems including Vertex Cover (VC), Independent Set (IS), Dominating Set (DS), and Coloring (Col) as well as various source problems in P such as Spanning Tree (ST), Shortest Path (SP), Matching (Mat), and Vertex Cut (VCut) / Edge Cut (ECut).
Fellows et al. <cit.> and Grobler et al. <cit.> provided a full classification of polynomial-time solvability vs. NP-completeness of the above problems in all token movement models (token addition/removal, token jumping, and token sliding).
For the NP-complete solution discovery problems, they provided a classification of fixed-parameter tractability vs. W[1]-hardness.
Recall that a fixed-parameter tractable algorithm for a problem with respect to a parameter p is one that solves it in time f(p) · n^O(1), where n is the size of the instance and f is a computable function dependent solely on p, while W[1]-hardness provides strong evidence that the problem is likely not fixed-parameter tractable (i.e., does not admit a fixed-parameter tractable algorithm) <cit.>.
A classical result in parameterized complexity theory is that every problem that admits a fixed-parameter tractable algorithm necessarily admits a kernelization algorithm as well <cit.>.
A kernelization algorithm for a problem is a polynomial-time preprocessing algorithm that, given an instance x of the problem with parameter p, produces a kernel – an equivalent instance x' of the problem with a parameter p', where both the size of x' and the parameter p' are bounded by a computable function depending only on p <cit.>.
Typically, kernelization algorithms generated using the techniques of Cai et al. <cit.> yield kernels of exponential (or even worse) size.
In contrast, designing problem-specific kernelization algorithms frequently yields more efficiently-sized kernels, often quadratic or even linear with respect to the parameter.
Note that once a decidable problem with parameter p admits a kernelization algorithm, it also admits a fixed-parameter tractable algorithm, as a kernelization algorithm always produces a kernel of size that is simply a function of p.
The fixed-parameter tractable solution discovery algorithms of Fellows et al. <cit.> and Grobler et al. <cit.> are not based on kernelization algorithms.
Unfortunately, it is unlikely that all fixed-parameter tractable problems admit polynomial kernels.
Bodlaender et al. <cit.> developed the first framework for proving kernel lower bounds and Fortnow and Santhanam <cit.> showed a connection to the hypothesis NP ⊈ coNP/poly.
Specifically, for several NP-hard problems, a kernel of polynomial size with respect to a parameter would imply that NP ⊆ coNP/poly, and thus an unlikely collapse of the polynomial hierarchy to its third level <cit.>.
Driven by the practical benefits of kernelization algorithms, we explore the size bounds on kernels for most of the above-mentioned solution discovery problems in the token sliding model, particularly those identified as fixed-parameter tractable in the works of Fellows et al. <cit.> and Grobler et al. <cit.>.
Overview of our results. We focus on the kernelization complexity of solution discovery in the token sliding model for the following source problems: Vertex Cover, Independent Set, Dominating Set, Shortest Path, Matching, and Vertex Cut.
For a base problem Π we write Π-D for the discovery version in the token sliding model.
<Ref> summarizes our results.
All graph classes and width parameters appearing in this introduction are defined in the preliminaries.
Fellows et al. <cit.> and Grobler et al. <cit.> gave fixed-parameter tractable algorithms with respect to the parameter k for IS-D on nowhere dense graphs, for VC-D, SP-D, Mat-D, and VCut-D on general graphs and for DS-D on biclique-free graphs.
We show that IS-D, VC-D, DS-D, and Mat-D parameterized by k admit polynomial size kernels (on the aforementioned classes), while VCut-D does not admit kernels of size polynomial in k. For SP-D, we show that the problem does not admit a kernel of polynomial size parameterized by k + b unless NP ⊆ coNP/poly.
As NP-hardness provides strong evidence that a problem admits no polynomial-time algorithm, W[t]-hardness (for a positive integer t) with respect to a parameter p provides strong evidence that a problem admits no fixed-parameter tractable algorithm with respect to p.
Fellows et al. <cit.> proved that VC-D, IS-D, and DS-D are W[1]-hard with respect to parameter b on d-degenerate graphs but provided fixed-parameter tractable algorithms on nowhere dense graphs.
They also showed that these problems are slicewise polynomial (XP) with respect to the parameter treewidth and left open the parameterized complexity of these problems with respect to the parameter treewidth alone.
We show that these problems remain XNLP-hard (which implies W[t]-hardness for every positive integer t) for parameter pathwidth (even if given a path decomposition realising the pathwidth),
which is greater than or equal to treewidth, and that they admit no polynomial kernels (even if given a path decomposition realising the pathwidth)
with respect to the parameter b + pw, where pw is the pathwidth of the input graph, unless NP ⊆ coNP/poly.
Finally, we also consider the parameter feedback vertex set number (fvs), which is an upper bound on the treewidth of a graph, but is incomparable to pathwidth.
We complement the parameterized complexity classification for the results of Fellows et al. <cit.> by showing that IS-D, VC-D, and DS-D are W[1]-hard for the parameter fvs.
Several interesting questions remain open.
For instance, while their parameterized complexity was determined, the kernelization complexity of Col-D and ECut-D remains unsettled.
Similarly, the kernelization complexity of IS-D and DS-D with respect to parameter k is unknown on d-degenerate and semi-ladder-free graphs, respectively, where the problems are known to be fixed-parameter tractable.
In addition, it remains open whether VCut-D parameterized by k + b admits a polynomial kernel or whether Mat-D parameterized by b admits polynomial kernels on restricted classes of graphs.
Organization of the paper. We introduce all relevant notation in <Ref>. In <Ref>, we provide fundamental graph gadgets that appear in many constructions presented in the paper and provide several lemmas describing useful properties of those gadgets. Afterwards, we present our results for VC-D in <Ref>, IS-D in <Ref>, DS-D in <Ref>, SP-D in <Ref>, Mat-D in <Ref>, and VCut-D in <Ref>.
§ PRELIMINARIES
We use the symbol ℕ for the set of non-negative integers (including 0), ℤ for the set of all integers, and ℤ_+ for the set of positive non-zero integers.
For k ∈ℕ, we define [k] = {1, …, k} with the convention that [0] = ∅.
Graphs.
We consider finite and simple graphs only.
We denote the vertex set and the edge set of a graph G by V(G) and E(G), respectively, and denote an undirected edge between vertices u and v by uv (or equivalently vu) and a directed edge from u to v by (u,v).
We use N(v) to denote the set of all neighbors of v and E(v) to denote the set of all edges incident with v. Furthermore, we define the closed neighborhood of v as N[v] = N(v) ∪{v}.
For a set X of vertices we write G[X] for the subgraph induced by X.
A sequence of edges e_1 … e_ℓ for some ℓ≥ 1 is a (simple) path of length ℓ if every two consecutive edges in the sequence share exactly one endpoint and each other pair of edges share no endpoints.
For vertices u and v, we denote the length of a shortest path e_1… e_ℓ that connects u to v by d(u,v), where d(v,v) = 0 for all v ∈ V(G).
For edges e, e' ∈ E(G), we denote by d(e, e') the length of a shortest path e_1 … e_ℓ with e_1 being incident to e and e' = e_ℓ.
For a vertex v ∈ V(G) and a non-negative integer i, we denote by V(v, i) = {u ∈ V(G) | d(u,v) = i}.
For an edge e ∈ E(G), we denote by E(e, i) = {e' ∈ E(G) | d(e,e') = i}.
The complete graph (clique) on n vertices is denoted by K_n and a complete bipartite graph (biclique) with parts of size m and n, respectively, by K_m,n.
For an in-depth review of general graph theoretic definitions we refer the reader to the textbook by Diestel <cit.>.
Pathwidth and treewidth.
A tree decomposition of a graph G is a pair 𝒯=(T, (X_i)_i ∈ V(T)) where T is a tree and X_i ⊆ V(G) for each i ∈ V(T), such that
* ⋃_i ∈ V(T) X_i = V(G),
* for every edge uv = e ∈ E(G), there is an i ∈ V(T) such that u, v ∈ X_i, and
* for every v ∈ V(G), the subgraph T_v of T induced by {i ∈ V(T) | v ∈ X_i} is connected, , T_v is a tree.
We refer to the vertices of T as the nodes of T.
For a node i, we say that the corresponding set X_i is the bag of i.
The width of the tree decomposition (T, (X_i)_i ∈ V(T)) is max_i ∈ V(T) |X_i| - 1.
The treewidth of G, denoted tw(G), is the smallest width of any tree decomposition of G.
A path decomposition of a graph G is a tree decomposition 𝒫=(T, (X_i)_i ∈ V(T)) in which T is a path.
We represent a path decomposition 𝒫 by the sequence of its bags only.
The pathwidth of G, denoted pw(G), is the smallest width of any path decomposition of G.
A nice path decomposition of G is one that begins and ends with nodes corresponding to empty bags and such that each other node in the decomposition corresponds to a bag that either introduces a vertex v ∈ V(G) (X_i = X_i-1∪{v} for some v ∉ X_i-1) or forgets one (X_i = X_i-1∖{v} for some v ∈ X_i-1).
Every path decomposition can be efficiently turned into a nice path decomposition of the same width <cit.>.
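For a small worked illustration (our own example, not part of the original text), consider the cycle C_4 on vertices v_1, v_2, v_3, v_4 with edges v_1v_2, v_2v_3, v_3v_4, v_4v_1. The sequence of bags
X_1 = {v_1, v_2, v_4}, X_2 = {v_2, v_3, v_4}
is a path decomposition of width 2: every vertex and every edge is contained in some bag, and every vertex occurs in a contiguous block of bags. Introducing the vertices of X_1 one by one, then forgetting v_1 and introducing v_3, and finally forgetting the remaining vertices one by one (with empty bags at both ends) turns it into a nice path decomposition of the same width.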
Subdividing or deleting edges of a graph G preserves its path- or treewidth <cit.>.
Additionally, the following holds.
Let G be a graph and X ⊆ V(G). Then pw(G) ≤ pw(G - X) + |X| and tw(G) ≤ tw(G - X) + |X|.
A class 𝒞 of graphs has bounded treewidth (bounded pathwidth) if there exists a constant t such that all G ∈ 𝒞 have treewidth (pathwidth) at most t.
Feedback vertex set number (fvs). For a graph G, by fvs(G) we mean the minimum size of a vertex set whose deletion leaves the graph acyclic.
Nowhere dense graphs.
A graph H is a minor of a graph G, denoted H ≼ G, if there exists a mapping that associates each vertex v of H with a non-empty connected subgraph G_v of G such that G_u and G_v are disjoint for u ≠ v and whenever there is an edge between u and v in H, there is an edge between a vertex of G_u and a vertex of G_v.
The subgraph G_v is referred to as the branch set of v. We call
H a depth-r minor of G, denoted H ≼_r G, if each branch set of the mapping induces a graph of radius at most r.
A class 𝒞 is nowhere dense if there exists a function t: ℕ→ℕ such that K_t(r) ⋠_r G for all r ∈ℕ and all G ∈ 𝒞.
An r-independent set in a graph G is a set of vertices I such that the distance between any two vertices of I is at least r + 1.
We make use of the fact that nowhere dense classes are uniform quasi-wide, as clarified by the following theorem.
Let 𝒞 be a nowhere dense class of graphs.
For all r ∈ℕ, there is a polynomial N_r: ℕ→ℕ and a constant x_r ∈ℕ such that the following holds. Let G ∈ 𝒞 and let A ⊆ V(G) be a vertex subset of size at least N_r(m), for a given m ∈ℕ.
Then there exists a set X ⊆ V(G) of size |X| ≤ x_r and a set B ⊆ A ∖ X of size at least m that is r-independent in G - X.
Moreover, given G and A, such sets X and B can be computed in time O(|A| · |E(G)|).
Biclique-free graphs. A graph is said to be d-biclique-free if it excludes the biclique K_d,d as a subgraph.
A class 𝒞 of graphs is biclique-free if there exists a number d such that all G ∈ 𝒞 are d-biclique-free.
An inclusion diagram of all presented graph classes is depicted in <Ref>.
Solution discovery.
A vertex (edge) subset problem is a problem defined on graphs such that a solution consists of a subset of vertices (edges) satisfying certain requirements.
For a vertex (edge) subset problem on an instance with an input graph G, a configuration C on G is a subset of its vertices (edges).
Alternatively, a configuration can be seen as the placement of tokens on a subset of vertices (edges) in G.
In the token sliding model, a configuration C' can be obtained (in one step) from a configuration C, written C⊢ C', if C' = (C ∖{y}) ∪{x} for elements y ∈ C and x ∉ C such that x and y are neighbors in G, that is, if x,y ∈ V(G), then xy ∈ E(G); and if x,y ∈ E(G), then they share an endpoint.
Alternatively, when a token slides from a vertex to an adjacent one or from an edge to an incident one, we get C⊢ C'.
A discovery sequence of length ℓ in G is a sequence of configurations C_0 C_1 … C_ℓ of G such that C_i ⊢ C_i+1 for all 0 ≤ i < ℓ.
The Π-Discovery problem is defined as follows. We are
given a graph G, a configuration S ⊆ V(G) (resp. S⊆ E(G)) of size k (which at this point is not necessarily a solution
for Π), and a budget b (a non-negative integer).
We denote instances of Π-Discovery by (G,S,b).
The goal is to decide whether there exists a discovery sequence C_0 C_1 … C_ℓ in G for some ℓ≤ b such that S = C_0 and C_ℓ is a solution for Π.
When a path decomposition is given as part of the input, the instances are denoted by (G,𝒫_G,S,b) to highlight that the path decomposition 𝒫_G of G is provided.
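To make the token sliding model concrete, the following minimal Python sketch (our own illustration, not part of the paper) checks whether a sequence of configurations is a valid discovery sequence of length at most b whose final configuration satisfies a given solution predicate. The adjacency-dictionary representation and all function names are assumptions made for the example.

from itertools import pairwise  # Python 3.10+

def is_single_slide(adj, c_from, c_to):
    # Check C ⊢ C': exactly one token slides along an edge of the graph.
    # adj maps each vertex to the set of its neighbors; configurations are
    # frozensets of vertices carrying tokens.
    if len(c_from) != len(c_to):
        return False
    left, gained = c_from - c_to, c_to - c_from
    if len(left) != 1 or len(gained) != 1:
        return False
    y, x = next(iter(left)), next(iter(gained))
    return x in adj[y]  # the token on y slides to the adjacent vertex x

def is_discovery_sequence(adj, configs, budget, is_solution):
    # configs = C_0, ..., C_ℓ must satisfy ℓ ≤ budget, consecutive slides,
    # and the last configuration must be a solution.
    if len(configs) - 1 > budget:
        return False
    if not all(is_single_slide(adj, c, d) for c, d in pairwise(configs)):
        return False
    return is_solution(configs[-1])

# Toy example: discover a vertex cover of the path 1-2-3-4 from tokens on {1, 4}.
adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
edges = [(1, 2), (2, 3), (3, 4)]
vc = lambda c: all(u in c or v in c for u, v in edges)
seq = [frozenset({1, 4}), frozenset({2, 4}), frozenset({2, 3})]
print(is_discovery_sequence(adj, seq, budget=2, is_solution=vc))  # True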
Parameterized complexity and kernelization. Downey and Fellows <cit.> developed a framework for parameterized problems which include a parameter p in their input.
A parameterized problem has inputs of the form (x, p) where |x| = n and p ∈ ℕ.
Fixed-parameter tractable problems belong to the complexity class FPT.
The class XNLP consists of the parameterized problems that can be solved with a non-deterministic algorithm that uses f(p)·log n space and f(p)· n^O(1) time, for a computable function f.
The W-hierarchy is a collection of parameterized complexity classes FPT ⊆ W[1] ⊆ W[2] ⊆ … ⊆ XNLP, where the inclusions are conjectured to be strict.
For parameterized problems Q and Q', an fpt-reduction from Q to Q' is a reduction that, given an instance (x,p) of Q, produces an instance (x',p') of Q' in time f(p) · |x|^O(1) and such that p' ≤ g(p), where f,g are computable functions.
A pl-reduction from Q to Q' is one that additionally computes (x',p') using O(h(p) + log |x|) working space, where h is a computable function.
We write Q ≤_fpt Q' (resp. Q ≤_pl Q') if there is an fpt-reduction (resp. pl-reduction) from Q to Q'.
If Q is W[t]-hard for a positive integer t and Q ≤_fpt Q', then Q' is also W[t]-hard.
If Q is XNLP-hard and Q ≤_pl Q', then Q' is XNLP-hard and, in particular, W[t]-hard for all t ≥ 1.
Every problem that is in FPT admits a kernel, although it may be of exponential size or larger.
Under the complexity-theoretic assumption that NP ⊈ coNP/poly, we can rule out the existence of a polynomial kernel for certain fixed-parameter tractable problems.
The machinery for such kernel lower bounds heavily relies on composing instances that are equivalent according to a polynomial equivalence relation <cit.>.
An equivalence relation ℛ on the set of instances of a problem Q is called a polynomial equivalence relation if the following two conditions hold.
* There is an algorithm that given two instances x and y of Q decides whether x and y belong to the same equivalence class in time polynomial in |x| + |y|.
* For any finite set S of instances of Q, the equivalence relation partitions the elements of S into at most (max_x ∈ S |x|)^O(1) classes.
We can compose equivalent instances in more than one way.
We focus here on or-cross-compositions.
Let Q' be a problem and let Q be a parameterized problem. We say that Q' or-cross-composes into Q if there is a polynomial equivalence relation ℛ on the set of instances of Q' and an algorithm that, given t instances (where t ∈ℤ_+) x_1, x_2, … , x_t of Q' belonging to the same equivalence class of ℛ, computes an instance (x^*, k^*) of Q in time polynomial in Σ^t_i=1 |x_i| such that the following properties hold.
* (x^*, k^*) is a yes-instance of Q if and only if there exists at least one index i such that x_i is a yes-instance of Q'.
* k^* is bounded above by a polynomial in max^t_i=1 |x_i| + log t.
The inclusion NP ⊆ coNP/poly holds if an NP-hard problem or-cross-composes into a parameterized problem having a polynomial kernel. As this inclusion is believed to be false, we will constantly make use of the following theorem to show that the existence of a polynomial kernel is unlikely.
If a problem Q' is NP-hard and Q' or-cross-composes into the parameterized problem Q, then there is no polynomial kernel for Q unless NP ⊆ coNP/poly.
We refer the reader to textbooks <cit.> for more on parameterized complexity and kernelization.
§ AN AUXILIARY PROBLEM AND FOUNDATIONAL GADGETS
In this section, we describe foundational gadgets used in our reductions and compositions and explain how combining such gadgets preserves a bound on the pathwidth of the constructed graphs (assuming we start with graphs of bounded pathwidth). We show first that starting from a graph of bounded pathwidth H, we can construct new graphs G_H, G_H, G_t, and Ĝ_t, using our gadgets such that G_H, G_H, G_t, and Ĝ_t still have bounded pathwidth (in addition to other useful properties).
The following problem will be used in the reductions that establish the XNLP-hardness of IS-D, VC-D, and DS-D with respect to parameter pw and subsequently in the or-cross-compositions that render it unlikely for any of these problems to have a polynomial kernel with respect to parameter b + pw.
We denote by an orientation of a graph G a mapping λ: E(G) → V(G) × V(G) such that λ(uv) ∈{(u,v), (v,u)}.
Minimum Maximum Outdegree (MMO):
Input: Undirected weighted graph H, a path decomposition 𝒫_H of H of width pw, an edge weighting σ: E(H) → ℤ_+ and a positive integer r (all integers are given in unary).
Question: Is there an orientation of H such that for each v ∈ V(H), the total weight of the edges directed away from v is at most r?
Bodlaender et al. <cit.> showed that MMO is -complete with respect to pathwidth given a path decomposition realising the pathwidth.
If all edge weights are identical, then MMO (on general graphs) can be solved in polynomial time using network flows <cit.>.
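As a purely illustrative aside (not from the original text), MMO can be stated operationally: an orientation is feasible if no vertex has total outgoing weight above r. The following brute-force Python sketch makes this precise for tiny instances; the data representation and function names are assumptions of the sketch.

from itertools import product

def max_outdegree_weight(edges, weight, orientation):
    # Total weight of edges directed away from each vertex, maximized over vertices.
    # edges: list of pairs (u, v); weight: dict edge -> positive int;
    # orientation: dict edge -> (tail, head), i.e., λ(uv) ∈ {(u, v), (v, u)}.
    out = {}
    for e in edges:
        tail, _ = orientation[e]
        out[tail] = out.get(tail, 0) + weight[e]
    return max(out.values(), default=0)

def mmo_brute_force(edges, weight, r):
    # Decide MMO by trying all 2^m orientations (only sensible for tiny instances).
    for choice in product((0, 1), repeat=len(edges)):
        orientation = {e: (e if c == 0 else (e[1], e[0])) for e, c in zip(edges, choice)}
        if max_outdegree_weight(edges, weight, orientation) <= r:
            return True
    return False

# Toy instance: a triangle with unit weights can be oriented as a directed cycle,
# so every vertex has outgoing weight exactly 1.
edges = [("a", "b"), ("b", "c"), ("c", "a")]
weight = {e: 1 for e in edges}
print(mmo_brute_force(edges, weight, r=1))  # True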
For an instance (H, 𝒫_H, σ, r) of MMO, we define σ = ∑_e ∈ E(H)σ(e), n = |V(H)| and m = |E(H)|.
We construct for an instance (H, 𝒫_H, σ, r) of MMO, a graph G_H consisting of disjoint subgraphs G_e for each e ∈ E(H) and G_v for each v ∈ V(H).
We refer to the edge-based and vertex-based subgraphs as MMO-edge-gadgets and MMO-vertex-gadgets, respectively. For an edge e ∈ E(H) we refer to G_e as MMO-edge-e. Similarly, for a vertex v ∈ V(H) we refer to G_v as MMO-vertex-v.
MMO-edge-e. For an edge e = uv ∈ E(H), an MMO-edge-e G_e contains σ(e) + 1 edges with endpoints a_e^i and b_e^i for i ∈ [σ(e) + 1], and an edge e^ue^v such that b_e^σ(e) + 1 is adjacent to each of e^u and e^v.
We define A_e = ∪_i ∈ [σ(e)] a_e^i and B_e = ∪_i ∈ [σ(e)] b_e^i (see <Ref>).
We refer to the connected component inside G_e (or any subdivision of G_e) containing e^u and e^v by G^sel_e.
MMO-vertex-v. For a vertex v in V(H), an MMO-vertex-v G_v contains a representative vertex of v denoted by w_v, adjacent to r target vertices of v denoted by x_v^1, x_v^2, …, x_v^r and one extra vertex x_v^r + 1.
Additionally, for each edge e ∈ E(H) incident to v, the MMO-vertex-v contains σ(e) edges with endpoints y_e^v(i) and z_e^v(i) for i ∈ [σ(e)] such that y_e^v(i) is adjacent to w_v, the representative vertex of v (see <Ref>).
We define X_v = ∪_i ∈ [r] x_v^i, Y^v_e = ∪_i ∈ [σ(e)] y_e^v(i), Z^v_e = ∪_i ∈ [σ(e)] z_e^v(i), Y^v = ∪_e ∈ E(H) Y^v_e, and Z^v = ∪_e ∈ E(H) Z^v_e.
The graph G_H. We let A = ∪_e ∈ E(H) A_e, A^+ = ∪_e ∈ E(H) a_e^σ(e)+1, B = ∪_e ∈ E(H) B_e, B^+ = ∪_e ∈ E(H) b_e^σ(e)+1, X = ∪_v ∈ V(H) X_v, X^+ = ∪_v ∈ V(H) x_v^r+1, Y = ∪_v ∈ V(H) Y^v, and Z = ∪_v ∈ V(H) Z^v.
We form G_H by connecting its MMO-edge-gadget vertices to its MMO-vertex-gadget vertices as follows.
For a vertex v ∈ V(H) and edge e ∈ E(H) incident to v, we connect each vertex of B_e to a corresponding distinct vertex in Z_e^v (in other words, each b_e^i to z_e^v(i) for i ∈ [σ(e)]). Similarly, we connect e^v to each vertex of Y_e^v (see <Ref> for an example).
The supplier gadget and the graph G_H.
In some of our reductions, we add a new gadget to G_H and make one of its vertices the supplier vertex adjacent to various vertices within G_H.
We denote the graph thus obtained by G_H and refer to the gadget containing the supplier vertex as the supplier gadget.
We let G_s be the supplier gadget that we connect to G_H, and we let s denote the supplier vertex.
In particular, G_s contains a supplier vertex s that is adjacent to donor vertices d_1^i of the donor paths D^i ={d_1^i, d_2^i, d_3^i} for i ∈ [rn - σ] as well as another vertex d_1^rn - σ + 1 (see <Ref>).
Pathwidth of G_H, G_H, and their subdivisions.
Our reductions and compositions must use at most O(h(pw) + log |x|) working space, for an input instance of size |x| and parameter pw, and a computable function h.
We show that our reductions/compositions can be performed on a log-space transducer and are pl-reductions.
A log-space transducer is a type of Turing machine with a read-only input tape, a read/write work tape of logarithmic size and a write-only, write-once output tape.
Let (H, 𝒫_H, w, r) be an instance of MMO.
Then, there exists a log-space transducer that transforms a path decomposition of H to one of G_H (resp. G_H, or any subdivision of G_H, or any subdivision of G_H) with width at most pw(H) + 6 (resp. pw(H) + 7, or pw(H) + 6, or pw(H) + 7).
Thus, pw(G_H) ≤ pw(H) + 6, and pw(G_H) ≤ pw(H) + 7, and any subdivision of G_H or G_H results in a graph of bounded pathwidth.
The final statement of the lemma follows from the preceding statement.
Thus, we build log-space transducers for the graphs G_H, G_H and their subdivisions and we start with G_H.
Given the path decomposition of H, we first ensure that it is nice (which can be done via a log-space transducer <cit.>).
Then, we pass through the bags from left to right.
For a forget bag in the path decomposition of H, we output a bag containing the representative vertices (of the vertex gadgets) of the vertices in the forget bag.
For a bag that introduces a vertex v ∈ V(H), we output the following bags in order:
* for j ∈ [r+ 1], we output one bag that introduces the vertex x_v^j, followed by one that forgets the same vertex (these bags have a size larger than the pathwidth of H by only 1),
* for each vertex u in the bag such that u ≠ v and uv = e ∈ E(H):
* we output four bags that introduce the vertices e^u, e^v, b_e^σ(e)+1, and a_e^σ(e)+1, respectively, followed by two bags that forget the vertices b_e^σ(e)+1, and a_e^σ(e)+1, respectively (these bags have a size larger than the pathwidth of H by only 4),
* for j ∈ [σ(e)], we output bags that introduce the vertices a_e^j, b_e^j, z_e^u(j), y_e^u(j), z_e^v(j) and y_e^v(j), followed by bags that forget all those vertices (these bags have a size larger than the pathwidth of H by only 6),
* then, we output two bags that forget the vertices e^u and e^v, respectively.
It is easy to verify that the result is a nice path decomposition of the graph G_H. The width has increased by at most 6.
For the graph G_H, we change the log-space transducer for G_H to output at first a bag that introduces the supplier vertex. Additionally, we then let the same log-space transducer output for each i ∈ [rn - σ], bags that represent the path decompositions of each of the donor paths (augmented by the supplier vertex).
The log-space transducer then outputs a bag containing the supplier vertex and the vertex d^rn-σ+1 followed by a bag that forgets d^rn-σ+1.
The log-space transducer finally behaves similarly to that of G_H but augments each of the then outputted bags by the supplier vertex.
It is easy to verify that the result is a nice path decomposition of the graph G_H. The width has increased by at most 7.
The following claim finalizes the proof of the lemma.
A log-space transducer that takes as input a nice path decomposition of a graph G and outputs a path decomposition of a graph G' can be adapted to output a path decomposition of any subdivision of G' with width pw(G').
Note that for any edge subdivision of edges uv_1, …, uv_q incident to a vertex u in the graph G, that introduce vertices w_1, …, w_q to the graph, the log-space transducer can be adapted to output directly before the bag that introduces v_i for i ∈ [q], a bag that introduces w_i and before any bag that introduces a neighbor of v_i one bag that forgets the vertex w_i.
If some vertices v_i, …, v_i' have no neighbors, the log-space transducer can be adapted to output bags that introduce w_i, …, w_i' just before the vertex u is forgotten and only output the bags that introduce the vertices v_i, …, v_i' after the vertex u is forgotten.
It is easy to see then that <Ref> can be used to prove the existence of a log-space transducer that outputs a path decomposition of any subdivision of G_H (resp. G_H) of the same width of a path decomposition of G_H (resp. G_H).
We note here that in the DS-D reduction and composition, we may augment subdivisions of G_H (resp. G_H), by an edge dd' (d and d' are new vertices, and we refer to d as the dominator vertex) where d is adjacent to various vertices in the subdivisions of G_H (resp. G_H).
We denote the resulting graphs by augmented subdivisions of G_H (resp. G_H).
By Observation <ref>, this modification can increase the pathwidth of those graphs by at most 2.
A log-space transducer for such modified graphs can be built by adapting one of the log-space transducers from <Ref> to first output bags that introduce the vertices d and d' and only forget them at the end of the path decomposition.
Let (H, 𝒫_H, w, r) be an instance of MMO.
Then, there exists a log-space transducer that transforms a path decomposition of H to one of an augmented subdivision of G_H (resp. G_H) with width at most pw(H) + 8 (resp. pw(H) + 9).
MMO-instance-selector. In our or-cross-compositions, we assume that we are given as input a family of t MMO instances (H_j, 𝒫_H_j, σ_j, r_j), where for each j ∈ [t], H_j is a bounded-pathwidth graph with path decomposition 𝒫_H_j, |V(H_j)| = n, |E(H_j)| = m, σ_j: E(H_j) →ℤ_+ such that ∑_e_j ∈ E(H_j)σ_j(e_j) = σ and r_j = r ∈ℤ_+ (integers are given in unary).
It is not hard to see that these instances belong to the same equivalence class of a polynomial equivalence relation ℛ (<Ref>) whose polynomial-time algorithm decides that two instances are equivalent if they have the same number of vertices, number of edges, and total weight on the edges.
ℛ also has at most max_x ∈ S |x|^(1) equivalence classes, where S is a set of MMO instances of the form (H, 𝒫_H, σ, r), where H is of bounded pathwidth. In particular it has at most m · n · (max_x ∈ S |x|) equivalence classes.
Some of our or-cross-compositions will encode, in a graph G_t, all t input instances of MMO in (y^*, k^*) (<Ref>) using the multiple induced subgraphs G_H_j for j ∈ [t].
We must also encode the OR behavior.
An instance selector is a gadget with t possible states, each corresponding to a distinct instance x_j for j ∈ [t] and compelling us to select x_j so that (x^*, k^*) is solved.
We form an instance selector by constructing a new gadget, called a MMO-instance-selector.
In G_t, we make some of the MMO-instance-selector vertices adjacent to various vertices of G_H_j for j ∈ [t].
An MMO-instance-selector contains, for each j ∈ [t], an edge with endpoints Select_j and Unselect_j.
It also contains edges f^1g^1, f^2g^2, …, f^σ g^σ and a weights-hub vertex h adjacent to each vertex f^i for i ∈ [σ].
It also comprises edges o^1p^1, o^2p^2, …, o^m p^m and an orientations-quay vertex q adjacent to each vertex o^i for i ∈ [m].
In G_t, we will make the vertices h and q adjacent to different vertices in the t induced subgraphs G_H_j for j ∈ [t].
Additionally, the vertices Select_j or Unselect_j for j ∈ [t] will also be adjacent to different vertices within their corresponding induced subgraph G_H_j.
For some source problems, we may additionally augment subdivisions of G_t by attaching to h and q a number of pendant vertices.
We denote the resulting graphs by augmented subdivisions of G_t.
There exists a log-space transducer that given t MMO instances (H_j, 𝒫_H_j, σ_j, r_j), where for each j ∈ [t], |V(H_j)| = n, |E(H_j)| = m, σ_j: E(H_j) →ℤ_>0 is such that ∑_e_j ∈ E(H_j)σ_j(e_j) = σ, and r_j = r ∈ℤ_+ (integers are given in unary), outputs a path decomposition of the graph G_t (resp. any augmented subdivision of G_t) with width at most max_j ∈ [t] pw(H_j) + 10.
Thus, the graph G_t and any augmented subdivision of it are graphs of bounded pathwidth.
The last statement of the lemma follows from the preceding statement.
Thus, we build log-space transducers for the graphs G_t and its augmented subdivisions and we start with a log-space transducer for (an augmented) G_t.
An augmented G_t is G_t with a number of pendant vertices attached to h and q.
We first ensure that the path decompositions of H_j for each j ∈ [t] are nice (which can be done via a log-space transducer).
Afterwards, the log-space transducer outputs bags that introduce the vertices h and q, introduce and forget the pendant vertices attached to h or q in the case of an augmented G_t one-by one, and then outputs bags that represent the path decompositions of the edges f^1g^1, f^2g^2, …, f^σ g^σ, o^1p^1, o^2p^2, …, o^m p^m, augmented by the vertices h and q.
Next, the log-space transducer outputs for each j ∈ t, the bags in the path decomposition of the graph G_H_j (, behaves as the log-space transducer of <Ref> that outputs a path decomposition of the graph G_H) but augmented by the vertices h, q, Select_j and Unselect_j, followed by bags that forget the vertices Select_j and Unselect_j.
It is easy to see that the result is a path decomposition of (an augmented) G_t with width max_j ∈ [t] pw(H_j) + 10 (as the path decomposition of G_H_j for any j ∈ [t] has width at most max_j ∈ [t] pw(H_j) + 6).
Using <Ref>, we can build a log-space transducer for any augmented subdivision of G_t that outputs a path decomposition of the subdivision with width max_j ∈ [t] pw(H_j) + 10.
In the DS-D composition, we form the graph Ĝ_t in a manner akin to a subdivision of G_t except, we encode each of the t input instances of MMO using the multiple induced subgraphs that are augmented subdivisions of G_H_j for j ∈ [t].
Using <Ref> instead of <Ref> in the proof of <Ref>, the log-space transducer can output for each j ∈ [t], the bags in the path decomposition of an augmented subdivision of G_H_j, and we get the following.
There exists a log-space transducer that given t MMO instances (H_j, 𝒫_H_j, σ_j, r_j), where for each j ∈ [t], |V(H_j)| = n, |E(H_j)| = m, σ_j: E(H_j) →ℤ_+ is such that ∑_e_j ∈ E(H_j)σ_j(e_j) = σ, and r_j = r ∈ℤ_+ (integers are given in unary), outputs a path decomposition of the graph Ĝ_t (resp. any subdivision of Ĝ_t) with width at most max_j ∈ [t] pw(H_j) + 12.
Thus, the graph Ĝ_t and any subdivision of it are graphs of bounded pathwidth.
Given an MMO instance (H, 𝒫_H, σ, r), one can build a log-space transducer that outputs a path decomposition of (any subdivision of) G_H, (resp. (any subdivision of) G_H, or an augmented subdivision of G_H, or an augmented subdivision of G_H) with width at most pw(H) + 6 (resp. pw(H) + 7, or pw(H) + 8, or pw(H) + 9), along with a representation of the graph, any subset of its vertices, and an integer with at most a polynomial (in the input size) number of bits.
Given t MMO instances (H_j, 𝒫_H_j, σ_j, r_j), where for each j ∈ [t], |V(H_j)| = n, |E(H_j)| = m, σ_j is such that ∑_e_j ∈ E(H_j)σ_j(e_j) = σ and r_j = r ∈ℤ_+ (integers are given in unary), one can build a log-space transducer that outputs a path decomposition (of an augmented subdivision) of the graph G_t (resp. a path decomposition of the graph Ĝ_t), with width at most max_j ∈ [t] pw(H_j) + 10 (resp. max_j ∈ [t] pw(H_j) + 12), along with a representation of the graph, any subset of its vertices, and an integer with at most a polynomial (in the input size) number of bits.
§ VERTEX COVER DISCOVERY
Fellows et al. <cit.> showed that VC-D is in FPT with respect to parameter k on general graphs and in FPT with respect to parameter b on nowhere dense classes of graphs.
We show in this section that the problem has a polynomial kernel with respect to parameter k.
With respect to the parameter b + pw, where pw is the pathwidth of the input graph, we show that the problem does not have a polynomial kernel unless NP ⊆ coNP/poly.
VC-D has a kernel of size O(k^2).
Let (G, S, ) be an instance of VC-D.
Let G' be the graph obtained from G by deleting the vertices of degree greater than k. If G' has more than k^2 edges or more than 2k^2 non-isolated vertices we can reject the instance.
The vertices of degree greater than k must be in any vertex cover of size at most k.
The remaining vertices of the vertex cover can cover at most k edges each; as we have at most k vertices for the vertex cover, there can be at most k^2 edges left, and these have at most 2k^2 endpoints.
We now construct the kernel (H,S,b).
We define H to be the graph obtained from G as follows: We keep all non-isolated vertices of G' as well as all vertices that contain a token. Furthermore, we keep all vertices of G with degree greater than k (but not all their neighbors).
For all u,v with degree greater than k in G, if N_G(u)∩ N_G(v) contains only isolated vertices in G', then we keep one arbitrary vertex of this intersection and name it x_uv.
These vertices need to be kept to ensure that all discovery sequences survive in H.
Finally, for every vertex u of degree greater than k in G, if u has degree d<k+1 in H, we add arbitrary k+1-d isolated neighbors to u.
We claim that (H,S,b) is equivalent to (G,S,b) and has at most 3k^2+2k vertices:
at most k vertices with degree at least k+1 (yielding k(k+1) vertices), at most
2k^2 non-isolated vertices from G' and at most k isolated vertices with tokens on them.
It remains to show that the instances are equivalent.
Assume (G,S,b) is a yes-instance.
Consider a shortest discovery sequence S=C_0⊢ C_1⊢…⊢ C_ℓ for ℓ≤ b.
We claim that there exists a discovery sequence S=C_0⊢ C_1'⊢…⊢ C_ℓ-1'⊢ C_ℓ in H of the same length that ends in the same configuration C_ℓ and that also constitutes a vertex cover in H.
First observe that, because we consider a shortest discovery sequence, no C_i contains isolated vertices of G' unless they belong to S or to N_G(u)∩ N_G(v) for vertices u,v with degree greater than k in G. Furthermore, C_ℓ contains no isolated vertices of G' unless they belong to S.
Now every slide along a vertex x of N(u)∩ N(v) that does not belong to H can be replaced by the slide along x_uv, that is, either C_i'=C_i or C_i'=(C_i∖{x})∪{x_uv}. As C_ℓ is a vertex cover of G and H is a subgraph of G, C_ℓ is also a vertex cover of H.
Conversely, assume (H,S,b) is a yes-instance.
As H is a subgraph of G, every discovery sequence is also a discovery sequence in G.
It remains to show that every vertex cover of size k of H is also a vertex cover of G.
This easily follows from the fact that every vertex of degree greater than k in G is also a vertex of degree greater than k in H and every vertex cover of H must contain all vertices of degree greater than k.
This implies that all edges between high degree vertices and isolated vertices in G' are covered in G.
All other edges appear also in H and are hence covered in H as well as in G.
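The first, counting-based phase of this kernelization can be sketched in a few lines (our own illustration, not the authors' implementation); the adjacency-dictionary representation and the function name are assumptions, and the remaining construction of H is omitted.

def vc_discovery_preclean(adj, k):
    # adj: dict vertex -> set of neighbors of an undirected graph G.
    # Vertices of degree greater than k must be in every vertex cover of size at most k.
    high = {v for v, nbrs in adj.items() if len(nbrs) > k}

    # G' = G minus the high-degree vertices.
    g_prime = {v: {u for u in nbrs if u not in high}
               for v, nbrs in adj.items() if v not in high}
    edges_prime = {frozenset((u, v)) for u, nbrs in g_prime.items() for v in nbrs}
    non_isolated = {v for v, nbrs in g_prime.items() if nbrs}

    # At most k remaining cover vertices, each covering at most k edges of G',
    # so more than k^2 edges (or 2k^2 non-isolated endpoints) means we can reject.
    if len(edges_prime) > k * k or len(non_isolated) > 2 * k * k:
        return "reject", None, None
    return "continue", high, g_prime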
VC-D is XNLP-hard parameterized by pathwidth.
As stated in <Ref>, we present a pl-reduction from MMO.
Let (H, 𝒫_H, σ, r) be an instance of MMO where H is a bounded pathwidth graph, |V(H)| = n, |E(H)| = m, σ: E(H) →ℤ_+ such that ∑_e ∈ E(H)σ(e) = σ and r ∈ℤ_+ (integers are given in unary).
We construct an instance (G_H, 𝒫_G_H, S, b) of VC-D as follows.
We form the graph G_H as outlined below (see <Ref>):
* We subdivide each edge a^ib^i for each i ∈ [σ(e)] for each edge e ∈ E(H), of a subgraph G_e (which is the MMO-edge-e described in <Ref>), and add it to G_H.
We denote the introduced vertex (from a subdivision of an edge a^ib^i) by c^i.
We let C_e = ∪_i ∈ [w(e)] c^i, C = ∪_e ∈ E(H) C_e.
* We subdivide each edge w_v x^v(i) for i ∈ [r], of a subgraph G_v (which is the MMO-vertex-v described in <Ref>), for each vertex v ∈ V(H), and add it to G_H.
We denote the introduced vertex (from a subdivision of an edge w_v x^v(i)) by c(x^v(i)).
We let C(X_v) = ∪_i ∈ [r] c(x^v(i)), C(X) = ∪_v ∈ V(H) C(X_v).
* As described in <Ref> (under the graph G_H heading), we make each vertex b^i_e, for each edge uv=e ∈ E(H), adjacent to the vertices in Z_e^v and Z_e^u, and each vertex e^v for each edge uv=e ∈ E(H) adjacent to all vertices in Y_e^v.
* We let the supplier gadget be G_s as described in <Ref>, add it to G_H, and make the vertex s adjacent to all vertices in X.
By <Ref>, G_H is of bounded pathwidth (it is a subdivision of the original graph G_H constructed in <Ref>).
We set S = C ∪ B ∪ Y ∪⋃_uv=e ∈ E(H) (e^u ∪ e^v) ∪⋃_v ∈ V(H) w_v ∪ s ∪⋃_i ∈ [rn - σ] (d_1^i ∪ d_3^i) and b = m + 3rn.
Given that all integers are given in unary, the construction of the graph G_H, or its path decomposition (as described in <Ref>), and as a consequence the reduction, take time polynomial in the size of the input instance.
Additionally, by <Ref>, this reduction is a pl-reduction.
We claim that (H, 𝒫_H, σ, r) is a yes-instance of MMO if and only if (G_H, 𝒫_G_H, S, b) is a yes-instance of VC-D.
If (H, 𝒫_H, σ , r) is a yes-instance of MMO, then (G_H, 𝒫_G_H, S, b) is a yes-instance of VC-D.
Let λ: E(H) → V(H) × V(H) be an orientation of the graph H such that for each v ∈ V(H), the total weight of the edges directed out of v is at most r.
In G_H, the edges between c(X) and X are not covered.
The same applies for the edges between A^+ and B^+.
To fix that, for each edge uv=e ∈ E(H) such that λ(e) = (v, u):
* we move, for each i ∈ [σ(e)], the token on y_e^v(i) to any free vertex of c(X_v) and the token on b_e^i to z_e^v(i) (this consumes 3σ(e) slides),
* we slide the token on e^u to b_e^σ(e)+1, hence covering a_e^σ(e)+1b_e^σ(e)+1 (this consumes 1 slide).
This constitutes 3σ + m slides.
We cover the rn - σ remaining uncovered edges between c(X) and X using three slides per D^i path for i ∈ [rn - σ] (by sliding the token on d^i_3 to d^i_2, and moving the token on d^i_1 to a token-free vertex in X).
If (G_H, 𝒫_G_H, S, b) is a yes-instance of VC-D, then (H, 𝒫_H, σ, r) is a yes-instance of MMO.
First, note that for an edge a_e^σ(e)+1b_e^σ(e)+1, where uv=e ∈ E(H), to be covered with a minimal number of slides, the token on the vertex e^u or the token on the vertex e^v must move to b_e^σ(e)+1 (note that any other token on the vertices of the graph must pass through either e^u or e^v to get to b_e^σ(e)+1, thus we can safely assume that the token already on either of e^u or e^v is the token that slides to b_e^σ(e)+1). This consumes at least m slides, leaving 3rn slides.
No vertex cover formed with a minimal number of slides would need to make the token on the vertex s or the token on a vertex w_v for a vertex v ∈ V(H) slide (as this token must always be replaced by another to cover the incident edges sd^rn-σ+1 or w_vx^r+1 for a vertex v ∈ V(H), respectively, with a minimal number of slides, thus we can always assume that the token has not been moved).
Thus, an edge between a pair of vertices in X_v and C(X_v) can be covered by either moving the token on a vertex d^i_1 for an integer i ∈ [rn - σ] towards the vertex in X_v, or moving a token from a vertex y_e^v(i_1), for an edge uv=e ∈ E(H) and an integer i_1 ∈ [σ(e)], towards the vertex in C(X_v).
If the token on d^i_1 moves towards the vertex in X_v, the token on d^i_3 must slide to d^i_2.
If a token on y_e^v(i_1) moves towards the vertex in C(X_v), it must be the case that another token has moved to either the vertex z_e^v(i_1) or to y_e^v(i_1).
This however requires at least one slide per such a token.
Thus, if an edge between a pair of vertices in X and C(X) is covered by moving a token on one vertex d_1^i for an integer i ∈ [rn-σ] towards the vertex in X, it does not consume more slides than moving a token from a vertex in Y towards the vertex in C(X).
Given that at most rn - σ edges can be covered using tokens from the donor paths (as rn - σ + 1 tokens are needed to cover G_s), each of the at least σ remaining uncovered edges must be covered by moving a token from a vertex in Y towards a vertex in c(X).
Additionally, each of the remaining uncovered edges between c(X) and X will require at least one additional slide (besides the two slides needed to move a token from Y) and thus, tokens on distinct vertices in Y must be used to cover the edges, as the remaining at most σ slides do not allow to get any token not initially on a vertex in Y to a vertex in Y.
This is true because, each G_e_1^sel component for an edge u_1v_1=e_1 ∈ E(H) cannot have less than two tokens, thus if, w.l.o.g., we move a token from e_1^u_1 to Y^u_1, then we have to move another token onto the component.
This also implies that if the token on y_e^v(i_1) moves towards a vertex in c(X_v), and consequently the token on the vertex e^v slides to y_e^v(i_1), it will require at least one more slide as e^ue^v will not be covered.
Given that we have only σ remaining slides, and at least one slide for each of the remaining at least σ remaining uncovered edges is needed, the token on the vertex b_e^(i_1) must slide to z_e^v(i_1).
This totals b slides.
Each vertex that is token-free in Y after the slides are consumed must be adjacent to a vertex of the form e_2^u_2 with a token, for an edge u_2v_2=e_2 ∈ E(H) (so that the edges between e_2^u_2 and the vertices in Y are covered).
This implies that for each edge u_2v_2= e_2 ∈ E(H), at most σ(e_2) tokens can move to c(X) from tokens on the vertices of the sets Y^v_2_e_2 and Y^u_2_e_2, and from only one of those sets, as only one of e_2^u_2 and e_2^v_2 has a token.
To cover the σ remaining uncovered edges, each edge u_2v_2=e_2 ∈ E(H) must allow σ(e_2) tokens to move from either vertices in Y^v_2_e_2 and Y^u_2_e_2 and from at most one.
This gives a feasible orientation for the instance (H, 𝒫_H, σ, r) as any of c(X_u) or c(X_v) can receive at most r tokens.
The proofs of Lemmas <ref> and <ref> complete the proof of Theorem <ref>.
There exists an or-cross-composition from MMO into VC-D on bounded pathwidth graphs with respect to parameter b. Consequently, VC-D does not admit a polynomial kernel with respect to b + pw, where pw denotes the pathwidth of the input graphs, unless NP ⊆ coNP/poly.
As stated in <Ref>, we can assume that we are given a family of t MMO instances (H_j, 𝒫_H_j, σ_j, r_j), where H_j is a bounded pathwidth graph with path decomposition 𝒫_H_j, |V(H_j)| = n, |E(H_j)| = m, σ_j: E_j →ℤ_+ is a weight function such that ∑_e_j ∈ E(H_j)σ_j(e_j) = σ and r_j = r ∈ℤ_+ (integers are given in unary).
The construction of the instance (G_t, 𝒫_G_t, S, b) of VC-D is twofold.
For each instance H_j for j ∈ [t], we add to G_t the graph G_H_j formed as per the construction in <Ref>, but without the supplier gadget.
We refer to the sets A, B, X, X^+, C, Y, and c(X), subsets of vertices of a subgraph G_H_j of G_t, by A_j, B_j, X_j, X_j^+, C_j, Y_j, and c(X_j), respectively.
Subsequently, we let A = ∪_j ∈ [t] A_j, B = ∪_j ∈ [t] B_j, X = ∪_j ∈ [t] X_j, and so on.
We attach to each of the weights-hub vertex h and the orientations-quay vertex q of the MMO-instance-selector described in <Ref>, 2m + 5σ + 2 pendant vertices.
We add the MMO-instance selector and connect it to the rest of G_t as follows (see <Ref>).
We make for each j ∈ [t], the vertex Select_j adjacent to the vertices in V(G_H_j) ∩ S, where S is as defined later.
We make the vertex h adjacent to each vertex in X and the vertex q adjacent to each vertex of e^u and e^v for each edge uv=e ∈ E(H_j) for each j ∈ [t].
By <Ref>, G_t is of bounded pathwidth (it is an augmented subdivision of the original graph G_t appearing in <Ref>).
Now, we set S = C ∪ B ∪ B^+ ∪ Y ∪ X ∪ ⋃_j ∈ [t], uv=e ∈ E(H_j) (e^u ∪ e^v) ∪ ⋃_j ∈ [t], v ∈ V(H_j) w_v ∪ ⋃_j ∈ [t] Unselect_j ∪ h ∪ q and b = 2m + 5σ + 1.
Given that all integers are given in unary, the construction of the graph G_t, or its path decomposition (as described in <Ref>), and as a consequence the reduction take time polynomial in the size of the input instances.
Additionally, by <Ref>, this composition is a pl-reduction.
We claim that (G_t, 𝒫_G_t, S, b) is a yes-instance of VC-D if and only if for some 𝔧∈ [t], (H_𝔧, 𝒫_H_𝔧, σ_𝔧, r_𝔧) is a yes-instance of MMO.
If for some 𝔧∈ [t], (H_𝔧, 𝒫_H_𝔧, σ_𝔧, r_𝔧) is a yes-instance of MMO, then (G_t, 𝒫_G_t, S, b) is a yes-instance of VC-D.
Let (H_𝔧, 𝒫_H_𝔧, σ_𝔧, r_𝔧) be a yes-instance of MMO and let λ be a feasible orientation of H_𝔧 such that for each v ∈ V(H_𝔧), the total weight of the edges directed out of v is at most r.
In G_t, the edges f^1g^1, …, f^σg^σ, o^1p^1, …, o^mp^m are not covered.
First, we slide the token on Unselect_𝔧 to Select_𝔧.
Using 2m slides, we move for each edge e ∈ E(H_𝔧) the token on e^u (e^v), if λ(e) = (v, u) (λ(e) = (u, v)), towards a token-free vertex in o^1, …, o^m.
We additionally slide each token on a vertex b^i_e for i ∈ [σ_𝔧(e)] to the vertex z_e^v(i), move the token on y_e^v(i) towards a token-free vertex in c(X_v) and consequently, move the token on the adjacent vertex in X_v towards a token-free vertex in f^1, …, f^σ.
The total number of slides performed is b, and the resulting configuration of the tokens is a vertex cover of G_t.
If (G_t, 𝒫_G_t, S, b) is a yes-instance of VC-D, then there exists an integer 𝔧∈ [t], such that (H_𝔧, 𝒫_H_𝔧, σ_𝔧, r_𝔧) is a yes-instance of MMO.
In any solution that uses the minimal number of slides, the tokens on the vertices h, q, w_v for each v ∈ V(H_j) and integer j ∈ [t], and on the vertices in C do not need to be moved (as these tokens must be replaced by others to cover the edges incident to the pendant vertices and h, or the pendant vertices and q, or to cover the edges w_v x^r+1, or the edges incident to the vertices in A, thus we can assume these tokens remain stationary).
In the same solution, we can similarly assume that a token on one of Unselect_j and Select_j for each j ∈ [t] remains on either one of those vertices.
To cover the edges o^1p^1, …, o^mp^m and f^1g^1, …, f^σg^σ, at least 2m + 2σ slides are needed to get tokens from one or more of the vertices in X onto the vertices f^1, …, f^σ, and from one or more of the vertices of the form e^u for an edge e ∈ E(H_j) incident to a vertex u ∈ V(H_j) for an integer j ∈ [t], onto the vertices o^1, …, o^m.
If a token is moved out of a subgraph G_H_j (for an integer j ∈ [t]) of G_t, which is bound to happen to get tokens onto the vertices f^1, …, f^σ, o^1, …, o^m, at least one slide is needed to cover the edge between now token-free vertices in G_H_j and Select_j and exactly one slide can only be achieved by sliding the token on Unselect_j to Select_j (since otherwise a token has to move from one of the vertices of a subgraph G_H_j' for j' ≠ j ∈ [t] into G_H_j, and this requires more than one slide).
W.l.o.g. assume a token on a vertex, denoted x^i_v, in X, for a vertex v ∈ V(H_𝔧), and integers 𝔧∈ [t] and i ∈ [r], is moved to one of the vertices f^1, …, f^σ, then at least 2 slides are needed to move a token into either x^i_v or c(x^i_v) (since the tokens on w_v and h are assumed to be stationary).
Since a token moving from any other vertex, except x^i_v, in X into x^i_v can replace the token on x^i_v in moving into one of the vertices f^1, …, f^σ, in a solution that uses the minimal number of slides 2 slides can only be achieved by moving a token on a vertex, denoted y_e^v(i_1), in Y^v, for some edge e ∈ E(H_𝔧) adjacent to v and some integer i_1 ∈ [σ_𝔧(e)], to c(x^i_v).
Additionally, 3 slides can only be achieved by moving a token on the same vertex to x^i_v.
Since in a solution that uses the minimal number of slides a token on one of Unselect_𝔧 and Select_𝔧 is assumed to remain on either of those vertices, and a token in B can slide at most one slide to a vertex in Z, if a token on y_e^v(i_1) is moved to a vertex in c(X) (or X), either a token has to move to the vertex z_e^v(i_1), or a token has to slide from the vertex e^v to y_e^v(i_1).
Given the budget and the fact that σ tokens in any solution must move from X onto the vertices f^1, …, f^σ, tokens must move from distinct vertices in X onto the vertices f^1, …, f^σ and from distinct vertices of the form e^u for an edge e ∈ E(H_j) incident to a vertex u ∈ V(H_j) for an integer j ∈ [t], onto the vertices o^1, …, o^m.
Additionally, given the budget, tokens in the same solution must move onto f^1, …, f^σ from only the vertices in X_𝔧 and onto o^1, …, o^m from only the vertices of the form e_1^u, for an edge uw=e_1 ∈ E(H_𝔧) (note that one token sliding to Select_𝔧 will cover the edge between Select_𝔧 and e_1^u and the edge between Select_𝔧 and a vertex in X_𝔧).
In the same solution, if a token moves from the vertex e_1^u onto one of the vertices o^1, …, o^m, the token on e_1^w remains stationary as the budget does not allow for another token to move into either one of the adjacent vertices e_1^u and e_1^w.
To fill all of o^1, …, o^m with tokens, exactly one token must move from G_e_2^sel onto the vertices o^1, …, o^m for each edge e_2 ∈ E(H_𝔧).
The latter implies that the token on e^v does not move to y_e^v(i_1) and given the budget that the token on b_e^i_1 slides to z_e^v(i_1).
W.l.o.g. assume that the token on e^v does not move to one of o^1, …, o^m, then at most the σ(e) tokens on the vertices of Y_e^v can be sent to c(X_v).
This implies that for each edge in H, at most its weight in tokens can move to c(X) from and to exactly one of the vertex gadgets corresponding to the vertices incident to that edge in H.
Given that σ tokens are needed on the vertices of c(X), it must be the case that for each edge, all its weight in tokens must move to c(X).
This gives a feasible orientation for (H_𝔧, 𝒫_H_𝔧, σ_𝔧, r_𝔧), since for each v ∈ V(H_𝔧), we have at most r vertices in c(X_v).
This concludes the proof of the theorem.
Next we consider the feedback vertex set number (fvs) parameterization of the VC-D problem. In Theorem <ref>, we proved that the VC-D problem is XNLP-hard for the parameter pathwidth of the input graph.
The feedback vertex set number (fvs) and the pathwidth are both upper bounds on the treewidth of a graph, but are incomparable to each other.
We will show that the VC-D problem is W[1]-hard for the parameter fvs.
The VC-D problem is W[1]-hard when parameterized by the feedback vertex set number, i.e., fvs, of the input graph.
We present a parameterized reduction from the Multicolored Clique problem.
We utilize the reduction presented in Theorem <ref>, and apply some changes over the constructed graph to obtain a reduced instance of the VC-D problem.
Consider the graph H constructed in the proof of Theorem <ref>.
For each i ∈ [κ], we add a neighbor t_i to t_i in the vertex-block H_i.
For each , we add a neighbor t_i,j to t_i,j in the edge-block H_i,j.
For each and l ∈{i,j}, we do the following changes in the connector _i,j^l:
* Add four new vertices s_i,j^l, ŝ_i,j^l, r_i,j^l and r̂_i,j^l.
* Connect s_i,j^l, ŝ_i,j^l with s_i,j^l, and r_i,j^l, r̂_i,j^l with r_i,j^l.
* For each neighboring vertex v of s_i,j^l from the vertex-blocks, remove the edge s_i,j^lv and add the edge s_i,j^lv.
* For each neighboring vertex v of s_i,j^l from the edge-blocks, remove the edge s_i,j^lv and add the edge ŝ_i,j^lv.
* For each neighboring vertex v of r_i,j^l from the vertex-blocks, remove the edge r_i,j^lv and add the edge r_i,j^lv.
* For each neighboring vertex v of r_i,j^l from the edge-blocks, remove the edge r_i,j^lv and add the edge r̂_i,j^lv.
An illustration of a connector connecting a vertex-block and an edge-block is given in Figure <ref>.
This completes the construction of graph H for the VC-D instance.
Next we describe the initial configuration S as follows:
S = ⋃_i ∈ [κ], x ∈ [n] Q_i,x ∪ ⋃_e ∈ E Q_e ∪ ⋃_i ∈ [κ]{t_i , t_i} ∪ ⋃_1 ≤ i < j ≤ κ{t_i,j, t_i,j} ∪ ⋃_1 ≤ i < j ≤ κ, l∈{i,j}{s_i,j^l, ŝ_i,j^l, r_i,j^l, r̂_i,j^l}.
Finally, we set b = (12n+2)\binom{κ}{2}+2κ and the reduced VC-D instance is (H, S, b).
The fvs of the graph H is at most 8\binom{κ}{2}.
Let F = {s_i,j^l, ŝ_i,j^l, r_i,j^l, r̂_i,j^l | 1 ≤ i < j ≤ κ, l∈{i,j}}.
The removal of F from H results in a forest.
Therefore, the fvs of H is at most |F| = 8\binom{κ}{2}.
Next we prove the correctness of the reduction.
If (G, κ) is a yes-instance of the problem, then (H, S, b) is a yes-instance of the VC-D problem.
Let C ⊆ V(G) be a κ-clique in G.
For each i ∈ [κ], let u_i,x_i be the vertex in C ∩ V_i for some x_i ∈ [n].
For each pair {i,j} with 1 ≤ i < j ≤ κ, let e_i,j = u_i,x_iu_j,x_j.
For each i ∈ [κ], we slide the token on t_i to p_i,x_i.
Then, for each j ≠ i ∈ [κ], we slide x_i-tokens in Q_i,x_i^j towards s^i_i,j and n-x_i-tokens in Q_i,x_i^j towards r_i,j^i.
For each pair {i,j} with 1 ≤ i < j ≤ κ, we slide the token on t_i,j to p_e_i,j.
Then, we slide
* n-x_i tokens in Q_e_i,j to ŝ_i,j^i,
* x_i tokens in Q_e_i,j to r̂_i,j^i,
* n-x_j tokens in Q_e_i,j to ŝ_i,j^j, and
* x_j tokens in Q_e_i,j to r̂_i,j^j.
The tokens received at the vertices s_*,*^* and ŝ_*,*^* are pushed to s_*,*^*.
Similarly, the tokens received at the vertices r_*,*^* and r̂_*,*^* are pushed to r_*,*^*.
For each pair {i,j} with 1 ≤ i < j ≤ κ and for each l ∈ {i,j}, s_i,j^l receives x_l tokens from H_l and n-x_l tokens from H_i,j.
Similarly, r_i,j^l receives n-x_l-tokens from H_l and x_l-tokens from H_i,j.
Further, we push the n-tokens received by s_i,j^l to A_i,j^l and n-tokens received by r_i,j^l to D_i,j^l.
The resulting configuration is a valid vertex cover.
Finally, let S' ⊆ V(H) be the solution obtained from the above token sliding steps.
More precisely,
S' = ⋃_i ∈ [κ] ({t_i, p_i,x_i} ∪ (Q_i ∖ Q_i,x_i)) ∪ ⋃_1 ≤ i < j ≤ κ ({t_i,j, p_e_i,j} ∪ (Q_i,j ∖ Q_e_i,j)) ∪ ⋃_1 ≤ i < j ≤ κ, l ∈ {i,j} ({s_i,j^l, ŝ_i,j^l, r_i,j^l, r̂_i,j^l} ∪ A_i,j^l ∪ D_i,j^l).
It is clear that the set S' is a vertex cover in H.
Next we count the number of token steps used to obtain S' from S.
In each vertex-block, we spend 2(κ-1)n+2 steps to push tokens towards the connectors.
Similarly, at each edge-block, we spend (4n+2) steps.
At each connectors, we spend 2n steps.
Therefore, we spend κ·(2(κ-1)n+2) + \binom{κ}{2}·(4n+2) + 2\binom{κ}{2}· 2n = (12n+2)\binom{κ}{2} + 2κ = b token steps in total.
Hence, (H, S, b) is a yes-instance of the VC-D problem.
If (H, S, b) is a yes-instance of the VC-D problem, then (G, κ) is a yes-instance of the problem.
Let S^* be a feasible solution for the instance (H, S, b) of the VC-D problem.
At each connector _i,j^l for a pair 1 ≤ i < j ≤ κ and l ∈{i,j}, at least 2n tokens need to be slid from either vertex-blocks or edge-blocks.
This is because each H[_i,j^l] contains a matching of size 2n+4, but it has only four tokens in the initial configuration.
It is clear that every token must move at least 3 steps to reach the sets A_*,*^* and D_*,*^* in order to cover the uncovered edges.
This saturates a budget of 4n·3 = 12n token steps per pair, that is, 12n\binom{κ}{2} in total.
Therefore, we are left with a budget of exactly 2κ+2\binom{κ}{2} to adjust the tokens on the vertex-blocks and edge-blocks.
For any pair 1 ≤ i < j ≤ κ, let q_i,x^j,z for some integers x,z ∈ [n] be a vertex that loses its token because the token is moved to some vertex in a connector.
Since the edge p_i,xq_i,x^j,z becomes uncovered, we need to slide a token to q_i,x^j,z or p_i,x.
This costs at least two token steps.
By construction of the vertex-block H_i, by sliding a token (for two steps) to the vertex p_i,x for some x ∈ [n], one can release at most n(κ-1) tokens from the neighboring set Q_i,x.
Similarly, on an edge-block H_i,j for some pair 1 ≤ i < j ≤ κ, by sliding a token (for two steps) to the vertex p_e for some e ∈ E_i,j, one can release at most 2n tokens from the neighboring set Q_e.
This implies that, using at most 2κ token steps on the vertex-blocks (one token sliding two steps per block), one can release at most κ·n(κ-1) = 2n\binom{κ}{2} tokens from the vertex-blocks.
Similarly, using at most 2\binom{κ}{2} token steps on the edge-blocks, one can release at most 2n\binom{κ}{2} tokens from the edge-blocks.
Therefore, we need to slide exactly one token (for two steps) in each vertex-block and each edge-block.
For each i∈ [κ], let p_i,x_i for some x_i ∈ [n] be the vertex in H_i that gets token in S^* and releases all the tokens in Q_i,x_i.
Similarly, for each pair 1 ≤ i < j ≤ κ, let p_e for some e=u_i,z_iu_j,z_j∈ E_i,j with z_i,z_j ∈ [n] be the vertex in H_i,j that gets a token in S^* and releases all tokens in Q_e.
Consider the connector _i,j^i.
The set Q_i,x_i^j pushes x_i tokens to s_i,j^i and n-x_i tokens to r_i,j^i.
The set Q_e pushes z_i tokens to r_i,j^i and n-z_i tokens to s_i,j^i.
The number of tokens passed through s_i,j^i to A_i,j^i is x_i + (n - z_i).
Since A_i,j^i needs n tokens, it is mandatory that x_i=z_i.
By the symmetric argument on the connector _i,j^j, we also get x_j=z_j, and these equalities hold for every pair 1 ≤ i < j ≤ κ.
Therefore, for each pair 1 ≤ i < j ≤ κ, there exists an edge u_i,x_iu_j,x_j.
Hence (G,κ) is a yes-instance of the Multicolored Clique problem.
The proofs of Lemmas <ref>, <ref> and <ref> complete the proof of Theorem <ref>.
§ INDEPENDENT SET DISCOVERY
Fellows et al. <cit.> showed that IS-D is fixed-parameter tractable with respect to parameter k + b on nowhere dense classes of graphs.
We show in this section that IS-D has a polynomial kernel with respect to parameter k on nowhere dense classes of graphs and does not admit a polynomial kernel with respect to the parameter + pw, where pw is the pathwidth of the input graph, unless NP ⊆ coNP/poly.
For any instance = (G, S,) of a -Discovery problem for some vertex (resp. edge) selection problem , we call a vertex v ∈ V(G) ∖ S (resp. e ∈ E(G) ∖ S) irrelevant with respect to s ∈ S if there exists a configuration C_ℓ such that ℓ≤ b, C_ℓ is a solution for , and the token on s is not on v (resp. e) in C_ℓ.
The kernelization algorithm for nowhere dense graphs uses <Ref>, along with other structural properties of the input graph, to form a “sunflower” and find an irrelevant vertex. It then removes from the graph some of the vertices that are irrelevant with respect to every token.
A sunflower with p petals and a core Y is a family of sets F_1, …, F_p such that F_i ∩ F_j = Y for all i ≠ j; the sets F_i ∖ Y are petals and we require none of them to be empty <cit.>.
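For example, the family F_1 = {1,2,3}, F_2 = {1,2,4}, F_3 = {1,2,5} (an illustrative family, unrelated to our constructions) is a sunflower with three petals {3}, {4}, {5} and core Y = {1,2}, since any two of its sets intersect exactly in Y.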
Let (G, S, ) be an instance of IS-D where |S|=k, and let G' be the subgraph of G induced by the vertices of ⋃_s ∈ S,i ∈ [3k] V(s,i) ∪ S. Then (G', S, ) is equivalent to (G, S, ).
We show that in any solution to (G, S, ), if a token on a vertex s ∈ S moves to a vertex u ∈ C_ℓ such that d(s, u) > 3k, it can instead move towards a vertex v with d(s, v) ≤ 3k, while keeping the rest of the solution unchanged.
First, the vertices in C_ℓ∖{u} can appear in at most k - 1 of the 3k sets V(s,i) for i ∈ [3k], and every such vertex that appears in a set V(s,𝔦) for a specific 𝔦∈ [3k] can only have neighbors in V(s,𝔦-1), V(s,𝔦) and V(s,𝔦+1).
This implies that the token on s is excluded from at most 3(k-1) ≤ 3k - 3 of the 3k sets V(s,i) for i ∈ [3k] (those containing tokens or neighbors of tokens, which might result in adjacent tokens), and thus there exists a vertex v which the token on s can move to while maintaining an independent set in (C_ℓ∖{u})∪{v}.
Let (G, S, ) be an instance of IS-D where |S|=k, and let 𝒱 = {v_1, v_2, …, v_t} be a set of vertices of G ∖ S such that for a given token on a vertex s ∈ S, d(s, v_i) = d(s, v_j) for i ≠ j ∈ [t]. If 𝒜 = {N[v_1], …, N[v_t]} contains a sunflower with k + 1 petals, then any vertex whose closed neighborhood corresponds to one of those petals is irrelevant with respect to s.
Let v_del be one such vertex whose closed neighborhood corresponds to one of the k + 1 petals, and consider a solution to (G, S, ) in which the token on s is on the vertex v_del in C_ℓ.
First, note that no vertex of C_ℓ∖{v_del} is in the core of the sunflower since that would contradict the fact that v_del∈ C_ℓ as v_del is a neighbor of every vertex in the core.
Second, since the sunflower has k remaining petals (besides the one corresponding to the closed neighborhood of v_del) and |C_ℓ∖{v_del}| = k - 1, there must remain one vertex (denoted v_rep) whose closed neighborhood corresponds to one of the remaining k petals and such that v_rep is not in the closed neighborhood of any of the vertices in C_ℓ∖{v_del}.
Thus, C_ℓ∖{v_del}∪{v_rep} forms an independent set.
Additionally, since d(s, v_del) = d(s, v_rep), the number of slides in the solution remains constant.
As a result, v_del is irrelevant with respect to s.
IS-D has a polynomial kernel with respect to parameter k on nowhere dense graphs.
Let (G, S, ) be an instance of IS-D where G is nowhere dense.
Without loss of generality, we assume the graph G to be connected.
For each vertex s ∈ S and integer i ∈ [3k], we compute V(s,i).
We maintain the invariant that we remove from V(s,i) for each s ∈ S and i ∈ [3k], irrelevant vertices with respect to s (note that a vertex can appear in multiple sets V(s,i)).
We remove an irrelevant vertex with respect to a vertex s ∈ S from V(s,i) for an integer i ∈ [3k] as follows.
If |V(s,i)| > N_2(2^x_2· (k + 1)), where N_2 and x_2 are as per <Ref> (here V(s,i) plays the role of the set A), we can compute sets X, B ⊆ V(s,i) such that |X| ≤ x_2, |B| ≥ 2^x_2· (k + 1) and B is 2-independent in G - X.
Let ℬ' = {B'_1, B'_2, …} be a family of sets that partitions the vertices in B such that for any two vertices u, v ∈ B, u, v ∈ B'_j if and only if N[u] ∩ X = N[v] ∩ X.
Since |B| ≥ 2^x_2· (k + 1) and |X| ≤ x_2, at least one set B'_𝔧∈ℬ', for a specific 𝔧, contains at least k + 1 vertices of B.
All vertices in B'_𝔧 have the same neighborhood in X and they are 2-independent in G-X (i.e., no vertex from outside of X can be in the closed neighborhood of two vertices in B'_𝔧); thus their closed neighborhoods form a sunflower with at least k + 1 petals and a core that is contained in X (<Ref>).
By <Ref>, one vertex of B'_𝔧 is irrelevant with respect to s and can be removed from V(s,i).
We can repeatedly apply <Ref> on the set
V(s,i) until |V(s,i)| ≤ N_2(2^x_2· (k + 1)).
We form the kernel (G', S, ) of the original instance (G, S, ) as follows.
We set V(G') = ⋃_s ∈ S,i ∈ [3k] V(s,i) ∪ S.
By <Ref>, any vertex v ∈ V(G) such that d(s,v) > 3k for every s ∈ S is irrelevant with respect to every s ∈ S and not required in the kernel (G', S, ).
For each vertex v ∈ V(s,i), for s ∈ S and i ∈ [3k], we add to V(G') at most i vertices that are on the shortest path from s to v, if such vertices are not already present in V(G').
G' is the subgraph of G induced by the vertices in V(G').
By the end of this process, |V(G')| ≤ k + [9k^3 · N_2(2^x_2· (k + 1))], as for each s ∈ S and i ∈ [3k], V(s,i) ≤ N_2(2^x_2· (k + 1)) and for each vertex in the latter sets, we added to V(G') at most 3k - 1 vertices that are on a shortest path from that vertex to the vertex s.
(G', S, ) is a kernel as only vertices that are irrelevant with respect to every token in S might not be in V(G') and all vertices needed to move tokens from vertices in S towards an independent set using only b slides are present in V(G').
IS-D is -hard with respect to parameter pathwidth.
As stated in <Ref>, we present a pl-reduction from MMO.
Let (H, 𝒫_H, σ, r) be an instance of MMO where H is a bounded pathwidth graph, |V(H)| = n, |E(H)| = m, σ: E(H) →ℤ_+ such that ∑_e ∈ E(H)σ(e) = σ and r ∈ℤ_+ (integers are given in unary).
We construct an instance (G_H, 𝒫_G_H, S, ) of IS-D where G_H is exactly as described in <Ref> (under the graph G_H heading).
See <Ref>.
From <Ref>, G_H is of bounded pathwidth.
We set S = A ∪ A^+ ∪ B ∪ B^+ ∪ Y ∪ X^+ and the budget to m + 3σ.
Given that all integers are given in unary, the construction of the graph G_H, or its path decomposition (as described in <Ref>), and as a consequence the reduction, take time polynomial in the size of the input instance.
Additionally, by <Ref>, this reduction is a pl-reduction.
We claim that (H, 𝒫_H, σ, r) is a yes-instance of MMO if and only if (G_H, 𝒫_G_H, S, ) is a yes-instance of IS-D.
If (H, 𝒫_H, σ, r) is a yes-instance of MMO, then (G_H, 𝒫_G_H, S, ) is a yes-instance of IS-D.
Let λ: E(H) → V(H) × V(H) be an orientation of the graph H such that for each v ∈ V(H), the total weight of the edges directed out of v is at most r.
In the initial configuration on G_H, the vertices in A and B contain tokens, and each a_e^i is adjacent to b_e^i.
The same applies for the vertices in A^+ and B^+.
To fix that, for each edge e ∈ E(H) such that λ(e) = (v, u):
* we slide, for each i ∈ [σ(e)], the token on b_e^i to z_e^v(i) (this consumes σ(e) slides),
* we move, for each i ∈ [σ(e)], the token on y_e^v(i) to any free vertex of X_v (this consumes 2σ(e) slides),
* we slide the token on b_e^σ(e)+1 to e^v (this consumes 1 slide).
This constitutes m + 3σ slides and we get an independent set in G_H.
Step 2 above is possible (i.e. a token-free vertex exists in X^v) since λ is an orientation of the graph H such that for each v ∈ V(H), the total weight of the edges directed out of v is at most r.
Step 3 is possible for each edge e ∈ E(H) since in Step 2 all tokens were removed from the vertices in Y_e^v.
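In detail, each edge e ∈ E(H) with λ(e) = (v, u) accounts for σ(e) + 2σ(e) + 1 = 3σ(e) + 1 slides in the three steps above, so summing over all edges gives ∑_e ∈ E(H) (3σ(e) + 1) = 3σ + m slides, matching the budget.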
If (G_H, 𝒫_G_H, S, ) is a yes-instance of IS-D, then (H, 𝒫_H, σ, r) is a yes-instance of MMO.
The minimum number of slides used inside any induced subgraph G_e for an edge uv = e ∈ E(H) is one and it can only be achieved by sliding the token on b_e^σ(e)+1 to one of either e^u or e^v.
Thus, at least m slides are required inside the MMO-edge-gadgets and the budget remaining is 3σ.
Additionally, each token on a vertex b^i_e in B_e, for an edge uv=e ∈ E(H) and an integer i ∈ [σ(e)] must slide to either z^u(i)_e or z^v(i)_e, consuming σ slides.
Since a solution that moves the token on a_e^σ(e)+1 but not the token on b_e^σ(e)+1 is not minimal, we can safely assume that the described m + σ slides are executed in any minimal solution.
In the same solutions, each token on a vertex z^u(i)_e for an edge uv= e ∈ E(H) and an integer i ∈ [σ(e)] requires the token on y^u(i)_e to slide to either e^u or w_u, utilizing as a result σ other slides.
A token that slides from y^u(i)_e to the vertex w_u must slide again at least once, since any independent set that is achieved through the minimal number of slides would never require the sliding of the tokens on the vertices in X^+ (the token that moves to the vertex w_u can be moved, using one less slide, to the vertex the token on x_u^r+1 moves to).
Since G_e^sel can contain at most 2 tokens, a token on y^u(i)_e that slides to the vertex e^u must either slide again at least once to a vertex, denoted y_e^u(i_1) (for an integer i_1 ∈ [σ(e)]) in Y_e^u, or require another token on a vertex in G_e^sel to slide at least once to either a vertex, denoted y_e^u(i_2) (for an integer i_2 ∈ [σ(e)]) in Y_e^u, or a vertex, denoted y_e^v(i_2) (for an integer i_2 ∈ [σ(e)]) in Y_e^v, while the token initially on y^u(i)_e stays on e^u.
Given that at most σ slides remain in any minimal solution, and that each of the σ tokens initially on vertices in Y that moved to either vertices of the form e_1^u_1 or w_u_1, for an edge e_1 ∈ E(H) incident to a vertex u_1 ∈ V(H), uses or requires at least one additional slide, each one such token can use or require exactly one additional slide.
If the token on y^u(i)_e slides to w_u, then either in exactly one more slide it can move to a free vertex in X_u, or it can slide back to a vertex, denoted y_e_2^u(i_3) (for an edge e_2 adjacent to u in H and an integer i_3 ∈ [σ(e_2)]) in Y^u.
However, either y_e_2^u(i_3) (resp. y_e^u(i_1), y_e^u(i_2), or y_e^v(i_2)) or its adjacent vertex, denoted z_e_2^u(i_3) in Z^u (resp. z_e^u(i_1) in Z_e^u, z_e^u(i_2) in Z_e^u, or z_e^v(i_2) in Z_e^v), contains a token, thus requiring at least one other additional slide, which is impossible.
As a result, it can only be the case that a token on y^u(i)_e slides to w_u and then in exactly one more slide it moves to a free vertex in X_u.
For any edge uv = e ∈ E(H), if e^v ∈ C_ℓ (resp. e^u ∈ C_ℓ), then no vertex of Y_e^v (resp. Y_e^u) appears in C_ℓ and the tokens on the vertices of Y_e^v (resp. Y_e^u) have been moved to some of the free vertices of X_v (resp. X_u).
Given the latter, we produce an orientation λ to H, where λ(e) = (v, u) (resp. λ(e) = (u, v)) if e^v ∈ C_ℓ (resp. e^u ∈ C_ℓ).
Since |X_v| = |X_u| ≤ r, λ is such that the total weight directed out of any vertex v ∈ V(H) is at most r.
The proofs of Lemmas <ref> and <ref> complete the proof of Theorem <ref>.
By making some of the vertices of our MMO-instance-selector adjacent to some of the vertices in G_H_1, …, G_H_t constructed following the reduction of <Ref> for t MMO input instances H_1, …, H_t, we prove the following.
There exists an or-cross-composition from MMO into IS-D on bounded pathwidth graphs with respect to . Consequently, IS-D does not admit a polynomial kernel with respect to + pw, where pw denotes the pathwidth of the input graphs, unless NP ⊆ coNP/poly.
As stated in <Ref>, we can assume that we are given a family of t MMO instances (H_j, 𝒫_H_j, σ_j, r_j), where H_j is a bounded pathwidth graph with path decomposition 𝒫_H_j, |V(H_j)| = n, |E(H_j)| = m, σ_j: E_j →ℤ_+ is a weight function such that ∑_e_j ∈ E(H_j)σ_j(e_j) = σ and r_j = r ∈ℤ_+ (integers are given in unary).
The construction of the instance (G_t, 𝒫_G_t, S, ) of IS-D is twofold.
For each instance H_j for j ∈ [t], we add to G_t the graph G_H_j as per the reduction in <Ref>.
We refer to the sets A, B, X, and X^+, subsets of vertices of a subgraph G_H_j of G_t, by A_j, B_j, X_j, and X_j^+, respectively.
Subsequently, we let A = ∪_j ∈ [t] A_j, B = ∪_j ∈ [t] B_j, X = ∪_j ∈ [t] X_j, and X^+ = ∪_j ∈ [t] X^+_j.
We add the MMO-instance-selector described in <Ref> (under the MMO-instance-selector heading) and connect it to the rest of G_t as follows (see <Ref>).
We make for each j ∈ [t], the vertex Unselect_j adjacent to the vertices in V(G_H_j) ∖ S, where S is as defined later.
We make the vertex h adjacent to the vertices in A and the vertex q adjacent to the vertices in A^+.
The result is the original graph G_t appearing in <Ref>.
By <Ref>, G_t is of bounded pathwidth.
Now, we set
S = B ∪ B^+ ∪ X^+ ∪ Y ∪⋃_j ∈ [t] Unselect_j
∪⋃_i ∈ [σ] (f^i ∪ g^i)
∪⋃_i ∈ [m] (o^i ∪ p^i)
∪ h ∪ q
and we set the budget to 3m + 5σ + 1.
Given that all integers are given in unary, the construction of the graph G_t, or its path decomposition (as described in <Ref>), and as a consequence the reduction take time polynomial in the size of the input instances.
Additionally, by <Ref>, this composition is a pl-reduction.
We claim that (G_t, 𝒫_G_t, S, ) is a yes-instance of IS-D if and only if for some integer 𝔧∈ [t], (H_𝔧, 𝒫_H_𝔧, σ_𝔧, r_𝔧) is a yes-instance of MMO.
If for some 𝔧∈ [t], (H_𝔧, 𝒫_H_𝔧, σ_𝔧, r_𝔧) is a yes-instance of MMO, then (G_t, 𝒫_G_t, S, ) is a yes-instance of IS-D.
Let (H_𝔧, 𝒫_H_𝔧, σ_𝔧, r_𝔧) be a yes-instance of MMO and let λ be a feasible orientation of H_𝔧 such that for each v ∈ V(H_𝔧), the total weight of the edges directed out of v is at most r.
In G_t, the tokens on f^i and g^i are adjacent for each i ∈ [σ], and the tokens on o^i and p^i are adjacent for each i ∈ [m].
First, we slide the token on Unselect_𝔧 onto the vertex Select_𝔧.
Using 2σ + 2m slides, we move the token on the vertex f_i for each i ∈ [σ] towards a token-free vertex in A_𝔧 and we move the token on the vertex o_i for each i ∈ [m] towards a token-free vertex in A^+_𝔧.
This achieves a configuration of the tokens on G_H_𝔧 that is similar to the starting configuration of the tokens on the graph G_H of the reduction of <Ref>.
From <Ref>, given a feasible orientation of the MMO instance (H_𝔧, 𝒫_H_𝔧, σ_𝔧, r_𝔧), we can achieve a configuration of the tokens that constitutes an independent set in G_H_𝔧 in m + 3σ slides.
This totals 3m + 5σ + 1 slides and achieves a configuration of the tokens that constitutes an independent set in G_t.
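For completeness, the slide count is 1 (sliding the token on Unselect_𝔧 to Select_𝔧), plus 2σ + 2m (moving the tokens on f^1, …, f^σ and o^1, …, o^m into A_𝔧 and A^+_𝔧), plus m + 3σ (the slides of <Ref>), which sums to 3m + 5σ + 1 and thus equals the budget.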
If (G_t, 𝒫_G_t, S, ) is a yes-instance of IS-D, then there exists an integer 𝔧∈ [t], such that (H_𝔧, 𝒫_H_𝔧, σ_𝔧, r_𝔧) is a yes-instance of MMO.
In any solution that uses the minimal number of slides, at least 2m + 2σ slides are needed to get the tokens on f^1, …, f^σ and o^1, …, o^m to vertices in A and A^+, respectively.
Note that a solution that moves the token on g^i for some integer i ∈ [σ] but not the token on f^i or that moves the token on p^i for some integer i ∈ [m] but not the token on o^i is not minimal.
When a token is moved from one of the vertices f^1, …, f^σ, o^1, …, o^m to one of the vertices in V(G_H_j) for an integer j ∈ [t], which is bound to happen, at least one slide is needed so that the token on Unselect_j is not adjacent to any other token and one slide can be achieved by sliding the token on Unselect_j to Select_j.
Note that a solution that uses the minimal number of slides and slides the token on Unselect_j for any j ∈ [t] to any vertex v ∈ V(G_t) ∖{ Select_j} (and possibly slides another token to Select_j) can safely be replaced by a solution that performs the same number of slides and the same slides except that it slides the token on Unselect_j to Select_j (and possibly the other token to v).
Thus, we consider minimal solutions that slide Unselect_j for any j ∈ [t] only to Select_j.
Assume that for some integer j ∈ [t], Unselect_j has moved to Select_j.
In a solution that uses the minimal number of slides, if a token is the first to move from one of the vertices o^1, …, o^m to a vertex a_e^σ_j(e)+1 for an edge uv=e ∈ E(H_j), it will require the token on b_e^σ_j(e)+1 to slide to one of e^u or e^v.
Each other token that moves from one of the vertices o^1, …, o^m to a_e^σ(e)+1 will require in the same solution at least 3 additional slides as one token must exit the induced subgraph G_e^sel and the two remaining tokens must not be on adjacent vertices.
Thus, if we assume that the m tokens on o^1, …, o^m move to distinct vertices in A^+, they require at least m additional slides.
Assume that for some integer j_1 ∈ [t], Unselect_j_1 has moved to Select_j_1.
In a solution that uses the minimal number of slides, if a token is the first to move from one of the vertices f^1, …, f^σ to a vertex a^i_e_1 for an edge u_1v_1= e_1 ∈ E(H_j_1) and i ∈ [σ_j_1(e_1)], it will require the token on b^i_e_1 to slide to one of z^u_1(i)_e_1, or z^v_1(i)_e_1.
W.l.o.g., assume the token on b^i_e_1 slides to z^u_1(i)_e_1.
In the same solution, the token on z^u_1(i)_e_1 will in turn require the token on y^u_1(i)_e_1 to slide to either w_u_1 or e_1^u_1.
Each other token that moves from one of the vertices f^1, …, f^σ to a^i_e_1 will require in the same solution at least 4 additional slides to leave the path a^i_e_1 b^i_e_1 z_e_1^u_1(i) y_e_1^u_1(i) which cannot accommodate more tokens.
Given that the remaining budget is at most 3σ, a second token sliding to the vertex a^i_e_1 is only possible if some token initially on a vertex in Y that moved
to a vertex w_u_2 or a vertex e_2^u_2, for a vertex u_2 ∈ V(H_j_2), edge u_2v_2=e_2 ∈ E(H_j_2), and integer j_2 ∈ [t], neither slides again nor requires another token to slide.
In any solution that uses the minimal number of slides, the tokens on the vertices in X^+ do not need to be moved (since we can move any token that moves into the adjacent representative vertices to the vertices the tokens on the vertices of X^+ were being moved to).
Thus, a token on w_u_2 must slide again.
Similarly, a token on e_2^u_2 must either slide again, or require another token (on the vertex in B^+) to slide.
Thus, with a remaining budget of at most 3σ, no token can afford to move from f^1, …, f^σ to a^i_e_1 once another token has already moved to the same vertex.
In other words, given , it must be the case that the σ tokens on f^1, …, f^σ slide towards distinct vertices in A.
Additionally, each such token will require at least 3 slides.
Consequently, the m tokens on o^1, …, o^m must also slide to distinct vertices in A^+.
Given that one slide remains for moving the token on one vertex Unselect_𝔧 to Select_𝔧 for an integer 𝔧∈ [t], it must be the case that the tokens on f^1, …, f^σ (resp. o^1, …, o^m) move to distinct vertices in A_𝔧 (resp. A_𝔧^+).
This achieves a configuration of the tokens on G_H_𝔧 that is similar to the starting configuration of the tokens inside G_H of the reduction of <Ref>. From <Ref>, with a remaining budget of m + 3σ, we get that (H_𝔧, 𝒫_H_𝔧, σ_𝔧, r_𝔧) is a yes-instance of MMO.
This concludes the proof of the theorem.
Next we consider the fvs parameterization of the IS-D problem.
The IS-D problem is W[1]-hard for the parameter fvs of the input graph.
We present a parameterized reduction from the Multicolored Clique problem.
We utilize the reduction given in Theorem <ref>, and apply some changes to the constructed graph to obtain the reduced IS-D instance.
Consider the graph H constructed in the proof of Theorem <ref>.
For each i ∈ [κ], we add a vertex t_i with n(n-1)(κ-1) pendent neighbors (call this set T_i) in the vertex-block H_i.
For each vertex v in Q_i, we add a pendent neighbor b(v).
For any set B ⊆ Q_i, we use b(B) to refer to the set of all pendent neighbors of the vertices in B.
The vertex t_i is made adjacent to all the vertices in b(Q_i).
We add a pendent neighbor t̂_i to t_i.
For each pair 1 ≤ i < j ≤ κ, we add a vertex t_i,j with 2n|E_i,j|-2n pendent neighbors (call this set T_i,j) in the edge-block H_i,j.
For each vertex v in Q_i,j, we add a pendent neighbor c(v).
For any set C ⊆ Q_i,j, we use c(C) to refer to the set of all pendent neighbors of the vertices in C.
The vertex t_i,j is made adjacent to all the vertices in c(Q_i,j).
We add a pendent neighbor t̂_i,j to t_i,j.
For each pair 1 ≤ i < j ≤ κ and l ∈{i,j}, we remove the vertex subsets B_i,j^l and C_i,j^l, and the edges incident on them, from the connector _i,j^l.
We add a pendent neighbor s_i,j^l to s_i,j^l and r_i,j^l to r_i,j^l.
An illustration of a connector connecting a vertex-block and an edge-block is given in Figure <ref>.
This completes the construction of graph H for the IS-D instance.
Next we describe the initial configuration as follows:
= ⋃_i ∈ [κ], x ∈ [n] (Q_i,x ∪ b(Q_i,x)) ∪ ⋃_e ∈ E (Q_e ∪ c(Q_e)) ∪ ⋃_i ∈ [κ]{t_i, t̂_i} ∪ ⋃_1 ≤ i < j ≤ κ{t_i,j, t̂_i,j}.
Finally, we set the budget to (4n^2+1)\binom{κ}{2}+4nm+κ and the reduced IS-D instance is (H, , ).
The fvs of the graph H is at most 5\binom{κ}{2}+κ.
Let F = {s_i,j^l, r_i,j^l | 1 ≤ i < j ≤ κ, l∈{i,j}}∪{t_i,j| 1 ≤ i < j ≤ κ}∪{t_i | i ∈ [κ]}.
Removal of F from H results in a forest.
Therefore, the fvs of H is at most |F| = 5\binom{κ}{2} + κ.
Next we prove the correctness of the reduction.
If (G, κ) is a yes-instance of the Multicolored Clique problem, then (H, , ) is a yes-instance of the IS-D problem.
Let C ⊆ V(G) be a κ-clique in G.
For each i ∈ [κ], let u_i,x_i be the vertex in C ∩ V_i for some x_i ∈ [n].
For each , let e_i,j = u_i,x_iu_j,x_j.
For each i ∈ [κ], we slide the token on t_i to p_i, x_i.
For each x ∈ [n],
* if x = x_i, then for each j ≠ i ∈ [κ], we slide x_i-tokens in Q_i,x_i^j towards s^i_i,j and n-x_i-tokens in Q_i,x_i^j towards r_i,j^i.
* if x ≠ x_i, then we slide all n(κ-1) tokens in b(Q_i,x) to T_i.
For each , we slide the token on t_i,j to p_e_i,j.
For each e ∈ E_i,j, if e=u_i,x_iu_j, x_j, then we slide
* n-x_i tokens in Q_e_i,j to s_i,j^i,
* x_i tokens in Q_e_i,j to r_i,j^i,
* n-x_j tokens in Q_e_i,j to s_i,j^j, and
* x_j tokens in Q_e_i,j to r_i,j^j.
Otherwise, we slide all 2n tokens in c(Q_e) to T_i,j.
For each pair 1 ≤ i < j ≤ κ and each l ∈{i,j}, s_i,j^l receives x_l tokens from H_l and n-x_l tokens from H_i,j.
Similarly, r_i,j^l receives n-x_l-tokens from H_l and x_l-tokens from H_i,j.
Further, we push the n-tokens received by s_i,j^l to A_i,j^l and n-tokens received by r_i,j^l to D_i,j^l.
The resulting configuration is a valid independent set.
Finally, let S' ⊆ V(H) be the solution obtained from the above token sliding steps.
It is clear that the set S' is an independent set in H.
Next we count the number of token steps used to obtain S' from .
In each vertex-block, we spend 2(κ-1)n^2+1 steps to push tokens towards the connectors and T_i.
Similarly, at each edge-block, we spend 4n|E_i,j|+1 steps.
Therefore, we spend κ·(2(κ-1)n^2+1) + 4nm + \binom{κ}{2} = (4n^2+1)\binom{κ}{2} + 4nm + κ token steps, which equals the budget.
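In detail, κ·2(κ-1)n^2 = 4n^2\binom{κ}{2} and ∑_1 ≤ i < j ≤ κ (4n|E_i,j|+1) = 4nm + \binom{κ}{2}, so the total is 4n^2\binom{κ}{2} + κ + 4nm + \binom{κ}{2} = (4n^2+1)\binom{κ}{2} + 4nm + κ.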
Hence, (H, , ) is a yes-instance of IS-D problem.
If (H, , ) is a yes-instance of the IS-D problem, then (G, κ) is a yes-instance of the Multicolored Clique problem.
Let S^* be a feasible solution for the instance (H, , ) of the IS-D problem.
In each vertex-block H_i, we need to slide at least n^2(κ-1) tokens as the vertices in both sets Q_i and b(Q_i) have tokens in the initial configuration.
We can accommodate at most n(n-1)(κ-1) tokens at the vertices in T_i.
Therefore, at least n(κ-1) tokens should be pushed towards the connectors.
Each such token must slide at least two steps to find a free vertex.
Therefore, we need at least 2n^2(κ-1) token steps to settle the tokens on Q_i and b(Q_i).
Similarly, at each edge-block H_i,j, we need to slide at least 2n|E_i,j| tokens as the vertices in the sets Q_i,j and c(Q_i,j) have tokens in the initial configuration.
We can accommodate at most 2n(|E_i,j| - 1) tokens at the vertices in T_i,j.
Therefore, at least 2n tokens should be pushed towards connectors.
Each such token must slide at least two steps to find a free vertex.
Therefore, we need at least 4n|E_i,j| token steps to settle the tokens on Q_i,j and c(Q_i,j).
This saturates a budget of 2n^2(κ-1)κ + 4nm = 4n^2\binom{κ}{2} + 4nm.
We are left with a budget of at most \binom{κ}{2} + κ.
In each vertex-block H_i, we still need to fix the tokens on t_i and t̂_i.
We can use at most one token step to fix this.
Therefore, the token on t_i should move to a neighbor p_i,x for some x ∈ [n].
Similarly, the token on t_i,j for each , should move to a neighbor p_e for some e ∈ E_i,j.
For each i∈ [κ], let p_i,x_i for some x_i ∈ [n] be the vertex in H_i that gets token in S^* and releases all the tokens in Q_i,x_i.
Similarly, for each pair 1 ≤ i < j ≤ κ, let p_e for some e=u_i,z_iu_j,z_j∈ E_i,j with z_i,z_j ∈ [n] be the vertex in H_i,j that gets a token in S^* and releases all tokens in Q_e.
Consider the connector _i,j^i.
The set Q_i,x_i^j pushes x_i tokens to s_i,j^i and n-x_i tokens to r_i,j^i.
The set Q_e pushes z_i tokens to r_i,j^i and n-z_i tokens to s_i,j^i.
The number of tokens passed through s_i,j^i to A_i,j^i is x_i + (n - z_i).
Since A_i,j^i needs n tokens, it is mandatory that x_i=z_i.
By the symmetric argument on the connector _i,j^j, we also get x_j=z_j, and these equalities hold for every pair 1 ≤ i < j ≤ κ.
Therefore, for each pair 1 ≤ i < j ≤ κ, there exists an edge u_i,x_iu_j,x_j.
Hence (G,κ) is a yes-instance of the Multicolored Clique problem.
The proofs of Lemmas <ref>, <ref> and <ref> complete the proof of Theorem <ref>.
§ DOMINATING SET DISCOVERY
DS-D was shown to be W[1]-hard with respect to parameter k + on general graphs and with respect to parameter on 2-degenerate graphs.
On the positive side, however, it is fixed-parameter tractable for parameter k on biclique-free graphs as well as with respect to parameter on nowhere dense classes of graphs <cit.>.
We show in this section that the problem has polynomial kernels with respect to parameter k on biclique-free classes.
Additionally, via a slight modification to the proofs of <Ref>, we show that DS-D is -hard with respect to parameter pw and does not have a polynomial kernel with respect to the parameter + pw, where pw is the pathwidth of the input graph, unless NP ⊆ coNP/poly.
Let be a biclique-free class of graphs. Then DS-D has a polynomial kernel on with respect to parameter k.
For the proof of <Ref> we use the concept of k-domination cores, which were introduced by Dawar
and Kreutzer to approach domination type problems <cit.>.
Let G be a graph and k≥ 1. A set C⊆ V(G) is a k-domination core if every set of size at most k that dominates C also dominates G.
Bounded size domination cores do not exist for general graphs, however, they do exist for
many important graph classes, see e.g. <cit.>, most generally for semi-ladder free graphs <cit.>.
For our construction of polynomial kernels it is important that biclique-free classes admit polynomial domination cores. For semi-ladder free graphs no polynomial cores are known and the proof of existence only yields an fpt and no polynomial-time algorithm.
Note that the notion of cores does not appear explicitly in <cit.>, however, it is easily observed that the set of black vertices in the auxiliary RWB-dominating set problem considered in that work yields a k-domination core.
Let be a biclique-free class of graphs. Then there exists a polynomial-time algorithm that, given G∈ and k∈, either decides that G cannot be dominated by k vertices or computes a k-domination core C⊆ V(G) of size polynomial in k.
Let (G, S, ) be an instance of DS-D, where G∈ and |S|=k.
We first compute a domination core C⊆ V(G)
of size polynomial in k, which is possible by <Ref>.
We then compute the projection classes of all vertices towards C, where we classify two vertices u,v∈ V(G) as equivalent if and only if N(u)∩ C=N(v)∩ C.
The number of
projection classes is polynomially bounded in |C|, as biclique-free classes have bounded VC-dimension.
As |C| is polynomially bounded we derive a polynomial bound also for the number of projection classes.
For a set M⊆ V(G) and a vertex v∈ V(G) we define d(v,M)=min_w∈ Md(v,w).
For every projection class X we now fix a minimal set R_X of representative vertices such that for each token t on a vertex v_t the set R_X contains a vertex v_t,X such that d(v_t,v_t,X)=d(v_t,X).
Note that R_X contains at most k vertices and that such a set can be computed in polynomial time by simple breadth-first searches.
We define M ⊆ V(G) as the union of the vertices of C, the vertices of S, and the vertices of a shortest path between v_t and the vertex v_t,X for each v_t∈ S and projection class X. We define the kernel as (G[M], S, b).
We first prove that (G[M], S, b) is an equivalent instance.
Assume first that (G, S, b) is a positive instance.
Let C_0⊢ C_1⊢…⊢ C_ℓ with ℓ≤ b be a discovery sequence.
We may assume that in each step a token moves on a shortest path to its final destination in C_ℓ.
As C_ℓ is a dominating set, it dominates in particular the core C; say token t is moved to a vertex of projection class X_t.
Then we obtain an equivalent discovery sequence where the token t is moved to v_t,X instead.
The same sequence exists in G[M] and ends in a set of size at most k that dominates C.
Hence, it also dominates G[M], which shows that also (G[M],S,) is a positive instance.
Conversely, a discovery sequence in G[M] to a dominating set of G[M] leads to a dominating set of C in G[M], which exists exactly like this in G. By definition of a k-domination core, we also discover a dominating set in G.
Finally, it remains to show that G[M] has size bounded by a polynomial in k.
As we argued already, we have a polynomial size core C and at most polynomially many projection classes.
From each class we keep at most k representative vertices.
It remains to show that b can be upper bounded by a polynomial in k.
This is easily derived from the fact that a graph with a dominating set of size k can have diameter at most 3k+2, as a shortest path with 3k+3 vertices cannot be dominated by k vertices.
Hence, every token arrives in its final position after at most 3k+2 steps and we can assume that b≤ 3k^2+2k.
DS-D is -hard with respect to parameter pathwidth.
As stated in <Ref>, we present a pl-reduction from MMO.
Let (H, 𝒫_H, σ, r) be an instance of MMO where H is a bounded pathwidth graph with path decomposition 𝒫_H, |V(H)| = n, |E(H)| = m, σ: E(H) →ℤ_+ such that ∑_e ∈ E(H)σ(e) = σ and r ∈ℤ_+ (integers are given in unary).
We construct an instance (G_H, 𝒫_G_H, S, ) of DS-D as follows (see <Ref>).
We form the graph G_H as outlined below:
* We subdivide twice each edge a^i_eb^i_e for each i ∈ [σ(e)] for each edge e ∈ E(H), and once each edge a^σ(e)+1_eb^σ(e)+1_e, of a subgraph G_e (which is the MMO-edge-e described in <Ref>), and add it to G_H.
We denote the introduced vertices between a^i_e and b^i_e by, in order, c^i_e and c'^i_e.
We denote the introduced vertex from a subdivision of an edge a^σ(e)+1_e b^σ(e)+1_e by c^σ(e)+1_e.
We let C_e = ∪_i ∈ [σ(e)] c_e^i, C = ∪_e ∈ E(H) C_e, C'_e = ∪_i ∈ [σ(e)] c'^i_e, C' = ∪_e ∈ E(H) C'_e, and C^+ = ∪_e ∈ E(H) c_e^σ(e)+1.
* We subdivide each edge w_v x^v(i) for i ∈ [r+1] for each vertex v ∈ V(H), and each edge w_v y_e^v(i) for uv=e ∈ E(H) and i ∈ [σ(e)], of a subgraph G_v (which is the MMO-vertex-v described in <Ref>), and add it to G_H.
We denote the introduced vertex from a subdivision of an edge w_v x^v(i) by c(x^v(i)) and the introduced vertex from a subdivision of an edge w_v y_e^v(i) by c(y_e^v(i)).
We let c(X_v) = ∪_i ∈ [r] c(x^v(i)),
c(X) = ∪_v ∈ V(H) c(X_v),
c(X^+) = ∪_v ∈ V(H) c(x^v(r+1)),
c(Y_e^v) = ∪_i ∈ [σ(e)] c(y_e^v(i)), c(Y^v) = ∪_e ∈ E(H) c(Y_e^v), and
c(Y) = ∪_v ∈ V(H) c(Y_e^v).
* We make each vertex b^i_e adjacent to the vertices z_e^v(i) and z_e^u(i), for each uv=e ∈ E(H) and i ∈ [σ(e)].
* We connect each vertex e^v and vertex in Y_e^v, for each edge uv=e ∈ E(H), via paths of length 2.
We denote the vertex between e^v and y_e^v(i) for i ∈ [σ(e)] by c'(y_e^v(i)).
We let c'(Y_e^v) = ∪_i ∈ [σ(e)] c'(y_e^v(i)), c'(Y^v) = ∪_e ∈ E(H) c'(Y_e^v), and c'(Y) = ∪_v ∈ V(H) c'(Y^v).
* We subdivide the edges d^i_1d^i_2, and d^i_2d^i_3 for i ∈ [rn - σ] of the supplier gadget G_s (described under the supplier gadget and the graph G_H heading in <Ref>) twice, and denote the introduced vertices by d^i_1^+ (for the vertex adjacent to d^i_1), d^i_2^- (for the vertex adjacent to d^i_2 and d^i_1^+), d^i_2^+ (for the other vertex adjacent to d^i_2), and d^i_3^- (for the vertex adjacent to d^i_3).
We denote the subgraph resulting from subdividing the edges of G_s by the subdivision of G_s.
* We add the subdivision of G_s to G_H and make the vertex s adjacent to all vertices in X.
* We add the edge dd' to G_H and make the dominator vertex d adjacent to all vertices in c(Y).
By <Ref>, G_H has bounded pathwidth (as an augmented subdivision of the original graph G_H constructed in <Ref>).
We set S = C ∪ B ∪ Y ∪ c(X^+) ∪⋃_uv=e ∈ E(H) (e^u ∪ e^v) ∪ ⋃_i ∈ [rn - σ] (d_1^i ∪ d_2^i ∪ d_3^i) ∪ s ∪ d and the budget to 2m + 4rn.
Given that all integers are given in unary, the construction of the graph G_H, or its path decomposition (as described in the discussion for <Ref>), and as a consequence the reduction, take time polynomial in the size of the input instance. Additionally, by <Ref>, this reduction is a pl-reduction.
We claim that (H, 𝒫_H, σ, r) is a yes-instance of MMO if and only if (G_H, 𝒫_G_H, S, ) is a yes-instance of DS-D.
If (H, 𝒫_H, σ, r) is a yes-instance of MMO, then (G_H, 𝒫_G_H, S, ) is a yes-instance of DS-D.
Let λ: E(H) → V(H) × V(H) be an orientation of the graph H such that for each v ∈ V(H), the total weight of the edges directed out of v is at most r.
In G_H, the vertices in c(X), A^+, and C^+ are not dominated.
To fix that, for each edge uv=e ∈ E(H) such that λ(e) = (v, u):
* we move, for each i ∈ [σ(e)], the token on y_e^v(i) to any free vertex of c(X_v) and the token on b_e^i to z_e^v(i) (this consumes 4σ(e) slides),
* we slide the token on e^u to c_e^σ(e)+1, hence dominating a_e^σ(e)+1 and c_e^σ(e)+1 (this consumes 2 slides).
This constitutes 4σ + 2m slides.
We dominate the rn - σ remaining non-dominated vertices in c(X), using 4 slides per D^i path for i ∈ [rn - σ] (by sliding the token on d^i_3 to d^i_3^-, the token on d^i_2 to d^i_2^- and moving the token on d^i_1 to a token-free vertex in X that neighbors a non-dominated vertex in c(X)).
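For completeness, the total number of slides is ∑_e ∈ E(H) (4σ(e) + 2) + 4(rn - σ) = 4σ + 2m + 4rn - 4σ = 2m + 4rn, which equals the budget.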
If (G_H, 𝒫_G_H, S, ) is a yes-instance of DS-D, then (H, 𝒫_H, σ, r) is a yes-instance of MMO.
First, note that for a vertex a_e^σ(e)+1, where uv=e ∈ E(H) to be dominated with a minimal number of slides, the token on the vertex e^u or the token on the vertex e^v must move to c_e^σ(e)+1 (note that any other token on the vertices of the graph must pass through either e^u or e^v to get to c_e^σ(e)+1, thus we can safely assume that the token already on either of e^u or e^v is the token that slides to c_e^σ(e)+1).
This consumes at least 2m slides, leaving 4rn slides.
No dominating set formed with a minimal number of slides would need to make the token on a vertex in c(X^+) or the tokens on either of the vertices s or d slide (as this token must always be replaced by another to dominate the vertices in X^+, or the vertex d_1^rn-σ+1, or the vertex d', respectively, with a minimal number of slides, thus we can always assume that the token has not been moved).
Thus, a pair of vertices in c(X_v) and X_v for a vertex v ∈ V(H) can be dominated by either moving the token on a vertex d_1^i for an integer i ∈ [rn - σ] towards the vertex in X^v, or moving a token from a vertex y_e^v(i_1) for an edge uv=e ∈ E(H) and an integer i_1 ∈ [σ(e)], towards the non-dominated vertex in c(X_v).
If the token on d_1^i moves towards the vertex in X_v, the token on d_2^i must slide to d_2^-^i and, the token on d_3^i must slide to d_3^-^i, so that d^i_1^+ is dominated.
If a token on y_e^v(i_1) moves towards the vertex in c(X_v), it must be the case that another token has moved to either the vertex z_e^v(i_1) or, the vertex c(y_e^v(i_1)) or, the vertex c'(y_e^v(i_1)) or, to y_e^v(i_1) itself (to dominate y_e^v(i_1)).
This however requires at least one slide per such a token (as no vertex that dominates more than one vertex in Y exists).
Thus, if a vertex in c(X) is dominated by moving a token from one vertex d_1^i for an integer i ∈ [rn - σ] towards the vertex in X, it does not consume more slides than moving a token from a vertex in Y towards the vertex in c(X).
Given that at most rn - σ vertices can be dominated using tokens from the donor paths (as rn - σ +1 tokens are needed to dominate the vertices in G_s), each of the at least σ remaining vertices in c(X) must be dominated by moving a token from a vertex in Y towards a vertex in c(X).
Additionally, each of the remaining vertices in c(X) will require at least one additional slide (besides the two slides needed to move a token from Y) and thus, tokens on distinct vertices in Y must be used to dominate the vertices in c(X), as the remaining at most σ slides do not allow to get any token not initially on a vertex in Y to a vertex in Y.
If the token on the vertex e^v, slides to c'(y_e^v(i_1)), it will require at least one more slide as e^u will not be dominated.
Thus, the token on the vertex b_e^i_1 slides to z_e^v(i_1) when the token on y_e^v(i_1) moves towards a vertex in c(X^v).
This totals slides.
For each vertex that is token-free in Y after the slides are consumed, the adjacent vertices in c'(Y) must be adjacent to a vertex of the form e_2^u_2 with a token, for an edge u_2v_2 = e_2 ∈ E(H) (so that they are dominated).
This implies that for each edge u_2v_2= e_2 ∈ E(H), at most σ(e_2) tokens can move to c(X) from tokens on the vertices of the sets Y^v_2_e_2 and Y^u_2_e_2, and from only one of those sets, as only one of e_2^u_2 and e_2^v_2 has a token.
To dominate the σ remaining non-dominated vertices in c(X), each edge u_2v_2=e_2 ∈ E(H) must allow σ(e_2) tokens to move from either vertices in Y^v_2_e_2 and Y^u_2_e_2 and from at most one.
This gives a feasible orientation for the instance (H, 𝒫_H, σ, r) as any of c(X_u) or c(X_v) can receive at most r tokens.
The proofs of Lemmas <ref> and <ref> complete the proof of Theorem <ref>.
There exists an or-cross-composition from MMO into DS-D on bounded pathwidth graphs and where the parameter is . Consequently, DS-D does not admit a polynomial kernel with respect to + pw, where pw denotes the pathwidth of the input graphs, unless NP ⊆ coNP/poly.
As stated in <Ref>, we can assume that we are given a family of t MMO instances (H_j, 𝒫_H_j, σ_j, r_j), where H_j is a bounded pathwidth graph with path decomposition 𝒫_H_j, |V(H_j)| = n, |E(H_j)| = m, σ_j: E_j →ℤ_+ is a weight function such that ∑_e_j ∈ E(H_j)σ_j(e_j) = σ and r_j = r ∈ℤ_+ (integers are given in unary).
The construction of the instance (Ĝ_t, 𝒫_G_t, S, ) of DS-D is twofold.
For each instance H_j for j ∈ [t], we add to Ĝ_t the graph G_H_j formed as per the construction in <Ref>, but without the supplier gadget.
We refer to the sets A, B, X, X^+, C, C', C^+, Y, c(X), c(X^+), c(Y), and c'(Y), subsets of vertices of a subgraph G_H_j of Ĝ_t, by A_j, B_j, X_j, X_j^+, C_j, C_j', C_j^+, Y_j, c(X_j), c(X_j^+), c(Y_j), and c'(Y_j), respectively.
Similarly, we refer to the vertices d and d' of a subgraph G_H_j of G, by d_j and d'_j, respectively.
Subsequently, we let A = ∪_j ∈ [t] A_j, B = ∪_j ∈ [t] B_j, X = ∪_j ∈ [t] X_j, and so on.
We add the MMO-instance-selector (described in <Ref>) and connect it to the rest of Ĝ_t as follows (see <Ref>).
We connect, for each j ∈ [t], the vertex Select_j to the vertices in V(G_H_j) ∩ S, where S is as defined later, via paths of length 2.
We make the vertex h adjacent to each vertex in X and the vertex q adjacent to each vertex of e^u and e^v for each edge uv=e ∈ E(H_j) for each j ∈ [t].
By <Ref>, Ĝ_t is of bounded pathwidth.
Now, we set
S = C ∪ B ∪ A^+ ∪ Y ∪ X ∪ c(X^+) ∪ d ∪⋃_j ∈ [t] Unselect_j ∪⋃_j ∈ [t]
uv=e ∈ E(H_j) (e^u ∪ e^v)
and we set the budget to 2m + 6σ + 1.
Given that all integers are given in unary, the construction of the graph Ĝ_t or its path decomposition (as described in the discussion for <Ref>), and as a consequence the reduction take time polynomial in the size of the input instances. Additionally, by <Ref>, this composition is a pl-reduction. We claim that (Ĝ_t, 𝒫_Ĝ_t, S, ) is a yes-instance of DS-D if and only if for some integer 𝔧∈ [t], (H_𝔧, 𝒫_H_𝔧, σ_𝔧, r_𝔧) is a yes-instance of MMO.
If for some 𝔧∈ [t], (H_𝔧, 𝒫_H_𝔧, σ_𝔧, r_𝔧) is a yes-instance of MMO, then (Ĝ_t, 𝒫_Ĝ_t, S, ) is a yes-instance of DS-D.
Let (H_𝔧, 𝒫_H_𝔧, σ_𝔧, r_𝔧) be a yes-instance of MMO and let λ be a feasible orientation of H_𝔧 such that for each v ∈ V(H_𝔧), the total weight of the edges directed out of v is at most r.
In Ĝ_t, the vertices f^1, …, f^σ, o^1, …, o^m and their neighbors are non-dominated.
First, we slide the token on Unselect_𝔧 to Select_𝔧.
Using 2m slides, we move for each edge e ∈ E(H_𝔧) the token on e^u (resp. e^v) if λ(e) = (v, u) (resp. λ(e) = (u, v)), towards a token-free vertex in o^1, …, o^m.
We additionally slide each token on a vertex b^i_e for i ∈ [σ_𝔧(e)] to the vertex z_e^v(i) (resp. z_e^u(i)), move the token on y_e^v(i) (resp. y_e^u(i)) towards a token-free vertex in c(X_v) (resp. c(X_u)) and consequently, move the token on the adjacent vertex in X_v (resp. X_u) towards a token-free vertex in f^1, …, f^σ.
The total number of slides performed is 2m + 6σ + 1, and they achieve a configuration for the tokens that dominates all of Ĝ_t.
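In detail, the count is one slide for Unselect_𝔧 to Select_𝔧, two slides per edge of H_𝔧 to bring a token from e^u or e^v onto one of o^1, …, o^m (2m slides in total), and, per unit of weight, one slide for b_e^i to z_e^v(i), three slides for y_e^v(i) to reach c(X_v), and two slides for the freed token in X_v to reach one of f^1, …, f^σ (6σ slides in total); this sums to 2m + 6σ + 1, which equals the budget.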
If (Ĝ_t, 𝒫_Ĝ_t, S, ) is a yes-instance of DS-D, then there exists an integer 𝔧∈ [t], such that (H_𝔧, 𝒫_H_𝔧, σ_𝔧, r_𝔧) is a yes-instance of MMO.
In any solution that uses the minimal number of slides, the tokens on the vertices d_j for each j ∈ [t], and on the vertices in c(X^+) do not need to be moved (as these
tokens must be replaced by others to dominate the vertices d'_j for each j ∈ [t], or the vertices in X^+, thus we can assume these tokens remain stationary).
In the same solution, we can similarly assume that a token on one of Unselect_j and Select_j for each j ∈ [t] remains on either one of those vertices.
To dominate g^1, …, g^σ, p^1, …, p^m, at least 2m + 2σ slides are needed to get tokens from one or more of the vertices in X onto the vertices f^1, …, f^σ, and from one of more of the vertices of the form e^u for an edge e ∈ E(H_j) incident to a vertex u ∈ V(H_j) for an integer j ∈ [t], onto the vertices o^1, …, o^m.
If a token is moved out of a subgraph G_H_j (for an integer j ∈ [t]) of Ĝ_t, which is bound to happen to get tokens onto the vertices f^1, …, f^σ, o^1, …, o^m, at least one slide is needed to dominate the vertex between now token-free vertices in G_H_j and Select_j and exactly one slide can only be achieved by sliding the token on Unselect_j to Select_j (since otherwise a token has to move from one of the vertices of a subgraph G_H_j' for j' ≠ j ∈ [t] into G_H_j, and this requires more than one slide).
W.l.o.g. assume a token on a vertex, denoted x^i_v, in X, for a vertex v ∈ V(H_𝔧), and integers 𝔧∈ [t] and i ∈ [r], is moved to one of the vertices f^1, …, f^σ, then at least 3 slides are needed to move a token into either x^i_v or c(x^i_v) (since the tokens on c(X^+) are assumed to be stationary and a token moving from any other vertex, except x^i_v, in X into x^i_v can replace the token on x^i_v in moving into one of the vertices f^1, …, f^σ).
In a solution that uses the minimal number of slides, 3 slides can only be achieved by moving a token on a vertex, denoted y_e^v(i_1), in Y^v, for some edge e ∈ E(H_𝔧) adjacent to v and some integer i_1 ∈ [σ_𝔧(e)], to c(x^i_v).
Additionally, 4 slides can only be achieved by moving a token on the same vertex to x^i_v.
Since in a solution that uses the minimal number of slides a token on one of Unselect_𝔧 and Select_𝔧 is assumed to remain on either of those vertices,
and a token in B can slide at most one slide to a vertex in Z, if a token on y_e^v(i_1) is moved to a vertex in c(X) (or X), either a token has to move to the vertex z_e^v(i_1) (while a token has to be on the vertex e^v), or a token has to slide from the vertex e^v to c'(y_e^v(i_1)) or to y_e^v(i_1) itself.
Moving the token on e^v to y_e^v(i_1) requires two slides.
Given the budget and the fact that σ tokens in any solution must move from X onto the vertices f^1, …, f^σ, tokens must move from distinct vertices in X onto the vertices f^1, …, f^σ and from distinct vertices of the form e^u for an edge e ∈ E(H_j) incident to a vertex u ∈ V(H_j) for an integer j ∈ [t], onto the vertices o^1, …, o^m.
Additionally, given the budget, tokens in the same solution must move onto f^1, …, f^σ from only the vertices in X_𝔧 and onto o^1, …, o^m from only the vertices of the form e_1^u, for an edge uw=e_1 ∈ E(H_𝔧) (note that one token sliding to Select_𝔧 from Unselect_𝔧 will dominate all vertices on the paths between Select_𝔧 and V(G_H_𝔧)).
In the same solution, if a token moves from the vertex e_1^u onto one of the vertices o^1, …, o^m, the token on e_1^w remains stationary as the budget does not allow for another token to move into either one of the vertices e_1^u and e_1^w (to additionally dominate b_e_1^σ_𝔧(e_1)+1).
To fill all of o^1, …, o^m with tokens, exactly one token must move from G_e_2^sel onto the vertices o^1, …, o^m for each edge e_2 ∈ E(H_𝔧).
The latter implies that the token on e^v does not move to c'(y_e^v(i_1)) and given the budget that the token on b_e^i_1 slides to z_e^v(i_1).
W.l.o.g. assume that the token on e^v does not move to one of o^1, …, o^m, then at most the σ(e) tokens on the vertices of Y_e^v can be sent to c(X_v).
This implies that for each edge in H, at most its weight in tokens can move to c(X) from and to exactly one of the vertex gadgets corresponding to the vertices incident to that edge in H.
Given that σ tokens are needed on the vertices of c(X), it must be the case that for each edge, all its weight in tokens must move to c(X).
This gives a feasible orientation for (H_𝔧, 𝒫_H_𝔧, σ_𝔧, r_𝔧), since for each v ∈ V(H_𝔧), we have at most r vertices in c(X_v).
This concludes the proof of the theorem.
Next, we consider the DS-D problem with respect to the parameter fvs.
The DS-D problem is [1]-hard for the parameter fvs of the input graph.
We present a parameterized reduction from the Multicolored Clique problem, which is known to be W[1]-hard <cit.> with respect to the solution size κ.
In the Multicolored Clique problem, we are given a graph G and an integer κ, where V(G) is partitioned into κ independent sets {V_1,… V_κ}.
The goal is to decide whether there exists a clique of size κ.
Let (G, κ) be an instance of the Multicolored Clique problem.
The edge set E(G) can be partitioned into sets {E_i,j = {uv : u ∈ V_i, v ∈ V_j}| 1 ≤ i < j ≤ κ}.
Without loss of generality, we assume that for each i ∈ [κ], |V_i| = n.
Otherwise, add isolated vertices in respective subsets.
Although we usually use n to denote the number of vertices of the input graph, in this proof we use n to denote the number of vertices in each color class.
For each i ∈ [κ], let V_i = {u_i,ℓ|ℓ∈ [n]}.
For an instance (G, κ) of the Multicolored Clique problem, the reduction outputs an instance (H, , ) of the DS-D problem.
The graph H has an induced subgraph H_i for each i ∈ [κ] and an induced subgraph H_i,j for each pair 1 ≤ i < j ≤ κ.
We refer to these induced subgraphs as vertex-blocks and edge-blocks, respectively.
Finally, the vertex-blocks and edge-blocks are connected by connectors.
Vertex-block. For each i ∈ [κ], we construct a vertex-block H_i as follows.
We start by adding a vertex t_i.
For each x ∈ [n], we add a star-tree rooted at p_i,x with n(κ-1) leaves {q_i, x^j,1, …, q_i, x^j,n| j ≠ i}.
Each vertex p_i, x is connected with t_i by an edge.
For each x ∈ [n] and j ≠ i ∈ [κ], let Q_i, x^j = {q_i, x^j,ℓ|ℓ∈ [n]}.
For each x ∈ [n], Q_i,x = ⋃_j≠i ∈ [κ] Q_i,x^j.
Further, for each i ∈ [κ], let Q_i = ⋃_x ∈ [n] Q_i,x.
Edge-block. For each pair 1 ≤ i < j ≤ κ, we construct an edge-block H_i,j as follows.
We start by adding a vertex t_i,j.
For each edge e ∈ E_i,j,
we add a star-tree rooted at p_e with 2n leaves q_e^1, …, q_e^2n.
Each vertex p_e is connected with t_i,j by an edge.
For each e ∈ E_i,j, let Q_e = {q_e^ℓ|ℓ∈ [2n]}.
Further, for each pair 1 ≤ i < j ≤ κ, Q_i,j = ⋃_e ∈ E_i,j Q_e.
Connector. For each pair 1 ≤ i < j ≤ κ and for each l ∈{i, j}, we construct a connector ^l_i,j as follows.
Let A_i,j^l = {a_i,j^l,1, …, a_i,j^l, n}, B_i,j^l = {b_i,j^l,1, …, b_i,j^l, n}, C_i,j^l = {c_i,j^l,1, …, c_i,j^l, n} and D_i,j^l = {d_i,j^l,1, …, d_i,j^l, n}.
We add 4n+2 vertices ({s_i,j^l, r_i,j^l}∪ A_i,j^l ∪ B_i,j^l ∪ C_i,j^l ∪ D_i,j^l).
For each x ∈ [n], we add the edges s_i,j^l a_i,j^l,x, a_i,j^l,xb_i,j^l,x, r_i,j^l c_i,j^l,x and c_i,j^l,xd_i,j^l,x.
For each pair 1 ≤ i < j ≤ κ and l ∈{i,j}, the vertex-block H_l is connected with the connector _i,j^l as follows.
Let l' ≠ l ∈{i,j}.
For each x, z ∈ [n],
* if z ≤ x, then add an edge q_l,x^l', z s_i,j^l, and
* if z > x, then add an edge q_l,x^l',z r_i,j^l.
For each pair 1 ≤ i < j ≤ κ, the edge-block H_i,j is connected with the connectors _i,j^i and _i,j^j as follows.
For each e = u_i,x u_j,y∈ E_i,j for some x,y ∈ [n], and for each z,w ∈ [n],
* if z ≤ x, then add an edge q_e^z r_i,j^i,
* if z > x, then add an edge q_e^z s_i,j^i,
* if w ≤ y, then add an edge q_e^n+w r_i,j^j, and
* if w > y, then add an edge q_e^n+w s_i,j^j.
For a pair (i,j) with i < j, an illustration of a connector _i,j^i that connects the vertex-block H_i and the edge-block H_i,j is given in Figure <ref>.
This completes the construction of the graph H.
Further, we set the budget to (8n+1)\binom{κ}{2}+κ, and
we define the initial configuration as follows:
= ⋃_i ∈ [κ], x ∈ [n] Q_i,x∪⋃_e ∈ E Q_e ∪{t_i | i ∈ [κ]}∪{t_i,j| 1 ≤ i < j ≤ κ}.
The fvs of the graph H is at most 4\binom{κ}{2}.
Let F = {s_i,j^i, r_i,j^i, s_i,j^j, r_i,j^j | 1 ≤ i < j ≤ κ}. Removal of F from H results in a forest.
Therefore, the fvs of H is at most |F| = 4\binom{κ}{2}.
If (G, κ) is a yes-instance of the Multicolored Clique problem, then (H, , ) is a yes-instance of the DS-D problem.
Let C ⊆ V(G) be a κ-clique in G.
For each i ∈ [κ], let u_i,x_i be the vertex in C ∩ V_i for some x_i ∈ [n].
For each , let e_i,j = u_i,x_iu_j,x_j.
For each i ∈ [κ], we slide the token on t_i to p_i,x_i.
Then, for each j ≠ i ∈ [κ], we slide x_i-tokens in Q_i,x_i^j towards s^i_i,j and n-x_i-tokens in Q_i,x_i^j towards r_i,j^i.
For each , we slide the token on t_i,j to p_e_i,j.
Then, we slide
* n-x_i tokens in Q_e_i,j to s_i,j^i,
* x_i tokens in Q_e_i,j to r_i,j^i,
* n-x_j tokens in Q_e_i,j to s_i,j^j, and
* x_j tokens in Q_e_i,j to r_i,j^j.
For each , and for each l ∈i,j, s_i,j^l receives x_l-tokens from H_l and n-x_l-tokens from H_i,j.
Similarly, r_i,j^l receives n-x_l-tokens from H_l and x_l-tokens from H_i,j.
Further, we push the n-tokens received by s_i,j^l to A_i,j^l and n-tokens received by r_i,j^l to D_i,j^l.
The above token slides result in the following.
For each i ∈ [κ],
* t_i is dominated by p_i,x_i,
* for each j ≠ i ∈ [κ], the vertices in Q_i, x_i^j are dominated by p_i, x_i, and
* for each ℓ≠ x_i ∈ [n], p_i, ℓ is dominated by Q_i,ℓ^j for any j ≠ i.
For each pair 1 ≤ i < j ≤ κ,
* t_i,j is dominated by p_e_i,j,
* the vertices in Q_e_i,j are dominated by p_e_i,j, and
* for each e ≠ e_i,j∈ E_i,j, p_e is dominated by the vertices in Q_e.
Finally, let S' ⊆ V(H) be the solution obtained from the above token sliding steps.
More precisely,
S' = ⋃_i ∈ [κ]{{p_i, x_i}∪ (Q_i∖ Q_i, x_i)}∪⋃_1 ≤ i < j ≤ κ{{p_e_i,j}∪ (Q_i,j∖ Q_e_i,j)}∪⋃_1 ≤ i < j ≤ κ, l ∈{i,j} (A_i,j^l ∪ D_i,j^l).
It is clear that the set S' is a dominating set in H.
Next we count the number of token steps used to obtain S' from .
In each vertex-block, we spend (κ-1)n+1 steps to push tokens towards the connectors.
Similarly, at each edge-block, we spend (2n+1) steps.
At each connectors, we spend 2n steps.
Therefore, we spend κ·((κ-1)n+1) + \binom{κ}{2}·(2n+1) + 2\binom{κ}{2}·2n = (8n+1)\binom{κ}{2} + κ token steps, which equals the budget.
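For completeness, κ·(κ-1)n = 2n\binom{κ}{2}, so the vertex-blocks contribute 2n\binom{κ}{2} + κ steps, the edge-blocks contribute 2n\binom{κ}{2} + \binom{κ}{2} steps, and the connectors contribute 4n\binom{κ}{2} steps; in total this is 8n\binom{κ}{2} + \binom{κ}{2} + κ = (8n+1)\binom{κ}{2} + κ.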
Hence, (H, , ) is a yes-instance of DS-D problem.
If (H, , ) is a yes-instance of the DS-D problem, then (G, κ) is a yes-instance of the Multicolored Clique problem.
Let S^* be a feasible solution for the instance (H, , ) of the DS-D problem.
At each connector _i,j^l for a pair 1 ≤ i < j ≤ κ and l ∈{i,j}, at least 2n tokens need to be slid from either vertex-blocks or edge-blocks.
It is clear that every token must move at least 2 steps to reach the sets A_*,*^* and D_*,*^* in order to dominate the vertices in the set B_*,*^* and C_*,*^*, respectively.
This saturates a budget of 4n·2 = 8n token steps per pair, that is, 8n\binom{κ}{2} in total.
Therefore, we are left with a budget of exactly κ + \binom{κ}{2} to adjust the tokens on the vertex-blocks and edge-blocks.
For any pair 1 ≤ i < j ≤ κ, let q_i,x^j,z for some integers x,z ∈ [n] be a vertex that loses its token because the token is moved to some vertex in a connector.
Since none of its neighbors has a token, we need to slide a token to this vertex or to one of its neighbors.
This costs at least one token step.
By construction of the vertex-block H_i, by sliding a token to the vertex p_i,x for some x ∈ [n], one can release at most n(κ-1) tokens from the neighboring set Q_i,x.
Similarly, on an edge-block H_i,j for some , by sliding a token to the vertex p_e for some e ∈ E_i,j, one can release at most 2n tokens from the neighboring set Q_e.
This implies that by sliding at most κ tokens on the vertex-blocks, one can release at most κ·n(κ-1) = 2n\binom{κ}{2} tokens from the vertex-blocks.
Similarly, by sliding at most \binom{κ}{2} tokens on the edge-blocks, one can release at most 2n\binom{κ}{2} tokens from the edge-blocks.
Therefore, we need to slide exactly one token in each vertex-block and each edge-block.
For each i∈ [κ], let p_i,x_i for some x_i ∈ [n] be the vertex in H_i that gets token in S^* and releases all the tokens in Q_i,x_i.
Similarly, for each pair 1 ≤ i < j ≤ κ, let p_e for some e=u_i,z_iu_j,z_j∈ E_i,j with z_i,z_j ∈ [n] be the vertex in H_i,j that gets a token in S^* and releases all tokens in Q_e.
Consider the connector _i,j^i.
The set Q_i,x_i^j pushes x_i tokens to s_i,j^i and n-x_i tokens to r_i,j^i.
The set Q_e pushes z_i tokens to r_i,j^i and n-z_i tokens to s_i,j^i.
The number of tokens passed through s_i,j^i to A_i,j^i is x_i + (n - z_i).
Since A_i,j^i needs n tokens, it is mandatory that x_i=z_i.
By the symmetric argument on the connector _i,j^j, we also get x_j=z_j, and these equalities hold for every pair 1 ≤ i < j ≤ κ.
Therefore, for each pair 1 ≤ i < j ≤ κ, there exists an edge u_i,x_iu_j,x_j.
Hence (G,κ) is a yes-instance of the Multicolored Clique problem.
The proofs of Lemmas <ref>, <ref> and <ref> complete the proof of Theorem <ref>.
§ SHORTEST PATH DISCOVERY
Finally, we show that SP-D does not admit a polynomial kernel unless NP ⊆ coNP/poly. The employed or-cross-composition is similar to the construction in the hardness proof of SP-D presented in <cit.>.
We denote an instance of SP-D by (G, S, b, a, b) to emphasize that the solution must be a shortest path between the vertices a and b in V(G) (for consistency with the previous sections we do not speak of s-t-connectivity but use t for the number of instances in the cross composition).
There exists an or-cross-composition from Hamiltonian Path into SP-D, parameterized by k + b. Consequently, SP-D does not admit a polynomial kernel with respect to k + b, unless NP ⊆ coNP/poly.
Let be the polynomial equivalence relation whose equivalence classes are defined by graphs with the same number of vertices, that is, two graphs G and H are equivalent with respect to if and only if |V(G)| = |V(H)|.
Let G_1, …, G_t be a sequence of instances of Hamiltonian Path, where every G_j, j ≤ t, is an n-vertex graph, say V(G_j) = {1, …, n}.
For every G_j we create a new graph H_j that consists of n^2 vertices, say (x,y) for x, y ≤ n. For every x < n and y, y' ≤ n, we connect the vertex (x,y) with the vertex (x+1, y') if and only if yy' ∈ E(G_j).
We construct the following graph G. First, G consists of a disjoint union of all H_j, j ≤ t. Furthermore, we add two fresh vertices a and b, as well as n fresh vertices (we simply call them {1, …, n}, too) to the vertex set of G.
For every y ≤ n we connect the vertex a with every vertex (1, y) in every H_j. Also, for every y ≤ n we connect every vertex (n, y) in every H_j with b. Finally, for every x ≤ n we connect the vertex x in G with every vertex (x, y) in every H_j for all y ≤ n with a path of length n. This finishes the construction of G.
Let S = {a,b,1, …, n}, hence k = n + 2 and b = n^2.
Observe that the size of every G_j is (given a suitable encoding) bounded by n^2. Hence, the parameter k + b = n^2 + n + 2 is bounded by a polynomial in max_j=1^t |G_j| + log t.
We claim that (G, S, b, a, b) is a yes-instance of SP-D if and only if at least one G_j admits a Hamiltonian path.
We begin with the backward direction, that is, let G_j admit a Hamiltonian path i_1 … i_n.
Then we can move the token on vertex x in G
to (x,i_x) in H_j using n slides for each token. This forms a shortest a-b-path in G which is discovered with the budget b = n^2.
For the other direction assume that (G, S, b, a, b) is a yes-instance of Shortest Path Discovery and observe that every shortest a-b-path in G (which is of length n+1 and hence uses n internal vertices) uses internal vertices from one H_j only.
By the choice of the budget and the connections between vertices x and (x, y), every solution can only move the token from vertex x to a vertex of the form (x, y) for some y ≤ n in H_j. Let a (1, y_1) (2, y_2) … (n, y_n) b be the discovered a-b-path in G. By construction, we have y_i ≠ y_i' for i ≠ i'. Hence y_1 … y_n is a Hamiltonian path in G_j.
§ MATCHING DISCOVERY
Grobler et al. <cit.> show that Mat-D is W[1]-hard with respect to the parameter on 3-degenerate graphs, yet it is fixed-parameter tractable with respect to the parameter k on general graphs.
We show that, similarly to VC-D, Mat-D admits a polynomial kernel with respect to the parameter k.
In a manner akin to <Ref>, our kernelization algorithm for Mat-D with respect to the parameter k will remove from the graph edges that are irrelevant for every token.
Here however, to find irrelevant vertices or edges, we will make use of a classical result of Erdős and Rado <cit.> known in the literature as the sunflower lemma.
Let 𝒜 be a family of sets (without duplicates) over a universe 𝒰, such that each set in 𝒜 has cardinality at most d. If |𝒜| > d!(p-1)^d, then 𝒜 contains a sunflower with p petals and such a sunflower can be computed in time polynomial in |A|, |U|, and p.
Mat-D admits a kernel of size 𝒪(k^5).
Let (G, S, ) be an instance of Mat-D.
Without loss of generality, we assume the graph G to be connected.
For each edge s ∈ S and each integer i ∈ [3k], we compute E(s,i), the set of edges at distance exactly i from s.
We maintain the invariant that, for each s ∈ S and i ∈ [3k], we only remove from E(s,i) edges that are irrelevant with respect to s.
We remove an irrelevant edge with respect to a vertex s ∈ S from E(s,i) for an integer i ∈ [3k] as follows.
From the sunflower lemma (<Ref>), if |E(s,i)| > 8k^2, then it has a sunflower with 2k + 1 petals that can be computed in polynomial time in k.
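For concreteness, instantiating the sunflower lemma with sets of cardinality d = 2 (the edges in E(s,i)) and p = 2k + 1 petals yields exactly the threshold used above:

d!·(p-1)^d = 2!·((2k+1)-1)^2 = 2·(2k)^2 = 8k^2.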
We arbitrarily choose one edge e corresponding to one petal of the sunflower and remove it from E(s,i).
To see why e is irrelevant with respect to s, assume that the token on s slides to e ∈ C_ℓ, where C_ℓ is a matching in G.
The 2k - 2 vertices of C_ℓ∖{e} can be incident to at most 2k - 2 of the edges corresponding to the petals of the sunflower, leaving at least one petal with an edge e_1 that can replace e in the matching C_ℓ in G.
Since also all edges in E(s,i) are at the same distance i from s, replacing e by e_1 will not increase the number of slides needed to achieve C_ℓ∖{e}∪{e_1}.
We form the kernel (G', S, ) of the original instance (G, S, ) as follows.
First, note that for a token s ∈ S and an edge e ∈ E(H) ∩ C_ℓ such that d(s,e) > 3k, the edges in C_ℓ∖{e} can appear in at most k-1 of the 3k sets of edges E(s,i) for i ∈ [3k] and every such edge that appears in a set E(s,𝔦) for a specific 𝔦∈ [3k] can be incident to at most all the edges in E(s,𝔦-1) and E(s,𝔦+1).
This implies that the token on s cannot move towards any edge of at most 3k - 3 of the 3k sets E(s,i) for i ∈ [3k] (as these contain tokens and thus might result in incident tokens) and thus there exists an edge e_1 which the token on s can move to while maintaining a matching in C_ℓ∖{e}∪{e_1}.
Thus, in any solution to (G, S, ), if a token on an edge s ∈ S moves to an edge e ∈ C_ℓ such that d(s, e) > 3k, it can instead move towards an edge e_1 ∈ E(H) such that d(s, e_1) ≤ 3k, while keeping the rest of the solution unchanged.
Consequently, we set E(G') = ⋃_s ∈ S,i ∈ [3k] E(s,i) ∪ S and for each edge e ∈ E(s,i), for s ∈ S and i ∈ [3k], we add to E(G') at most i edges that are on the shortest path from s to e (if such edges are not already in E(G')).
G' is the subgraph of G induced by the edges in E(G').
By the end of this process, |E(G')| ≤ k + 9k^3 · 8k^2, as for each s ∈ S and i ∈ [3k], |E(s,i)| ≤ 8k^2, and for each edge of the latter 3k^2 sets of edges, we added to E(G') at most 3k - 1 edges that are on a shortest path from that edge to the edge s.
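Spelling the bound out: there are at most k · 3k = 3k^2 sets E(s,i), each of size at most 8k^2, and every edge in such a set contributes itself plus at most 3k - 1 shortest-path edges, so

|E(G')| ≤ k + 3k^2 · 8k^2 · 3k = k + 9k^3 · 8k^2 = k + 72k^5 = 𝒪(k^5),

matching the kernel size claimed above.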
(G', S, b) is a kernel as only edges that are irrelevant with respect to every token in S might not be in E(G') and all edges needed to move tokens from edges in S towards a matching using only b slides are present in E(G').
§ VERTEX CUT DISCOVERY
Grobler et al. <cit.> showed that VCut-D is W[1]-hard with respect to the parameter on 2-degenerate bipartite graphs but is fixed-parameter tractable with respect to the parameter k on general graphs.
We show that the problem admits no polynomial kernels unless NP ⊆ coNP/poly.
We denote an instance of VCut-D by (G, S, b, a_1, b_1) to emphasize that the solution must be a vertex cut between a_1 and b_1 in V(G).
Given a graph H and an edge coloring ϕ: E(H) → [c], we say ϕ is proper if, for all distinct edges e, e_1 ∈ E(H), ϕ(e) ≠ϕ(e_1) whenever e and e_1 share a vertex.
We form our or-cross-composition from the Rainbow Matching problem, which is NP-complete even on properly colored 2-regular graphs and where every i ∈ [c] is used exactly twice in the coloring <cit.>:
Rainbow Matching:
Input: Graph H, a proper edge coloring ϕ and an integer κ.
Question: Does H have a rainbow matching, i.e., a matching whose edges have distinct colors, with at least κ edges?
There exists an or-cross-composition from Rainbow Matching into VCut-D where the parameter is the number of tokens, k. Consequently, VCut-D does not admit a polynomial kernel with respect to k, unless NP ⊆ coNP/poly.
By choosing an appropriate polynomial equivalence relation ℛ, we may assume that we are given a family of t Rainbow Matching instances (H_r, ϕ_r, κ_r), where H_r is a 2-regular graph, |V(H_r)| = n, |E(H_r)| = m, κ_r = κ∈ℕ, and ϕ_r : E(H_r) → [c] is a mapping that properly colors H_r and in which every i ∈ [c] is used exactly twice.
We may duplicate some input instances so that t = 2^s for some integer s.
Note that this step at most doubles the number of input instances.
The construction of the instance (G, S, , a_1, b_1) of VCut-D is twofold.
For each instance (H_r, ϕ_r, κ_r), we create G_r, formed of two vertices, s_r and t_r, as well as κ - 1 sets {E^1_r, …, E_r^κ - 1} of 2m + 2 vertices each.
A set E^p_r for p ∈ [κ - 1] contains 2m vertices, denoted edge-vertices, that represent the edges in H_r twice and two other vertices which are denoted by s^p_r and t^p_r (see <Ref>).
We denote the edge-vertices in a set E^p_r as v_e_h^p,r(1) (v_e_h^p,r(2)) to refer to the first (second) vertex representing the same edge e_h of E(H_r) in E^p_r.
We denote by E^p_r(1) the set of all vertices v_e_h^p,r(1), and by E^p_r(2) the set of all vertices v_e_h^p,r(2).
In G_r, we connect:
* through paths of length m^3 + log t, s_r to each of s_r^p for p ∈ [κ-1] and t_r to each of t_r^p for p ∈ [κ-1],
* through paths of length m^3 + log t, s_r^p to all vertices v_e_h^p,r(1) and t_r^p to all vertices v_e_h^p,r(2) for each e_h ∈ E(H_r) and each p ∈ [κ - 1],
* through paths of length m^3 + log t, all vertices v_e_h^p,r(1) and v_e_g^q,r(2) such that ϕ_r(e_h) = ϕ_r(e_g) for each p ≤ q ∈ [κ-1],
* through paths of length m^3 + log t, v_e_h^p,r(1) and v_e_g^q,r(2), for each p ≤ q ∈ [κ-1], whenever e_h and e_g are incident in H_r,
* through paths of length m^3 + log t, v_e_h^p,r(2) and v_e_g^q,r(1), for each p ∈ [κ - 2], q = p + 1, whenever e_h ≠ e_g.
We form G of all G_r for r ∈ [t] as follows (see <Ref>).
We create two global vertices a_1 and b_1 such that b_1 is connected through paths of length m^3+ log t to t_r for r ∈ [t].
Additionally, we create a binary tree 𝒯 rooted at a_1, with log t + 1 levels, and whose leaves constitute s_r for r ∈ [t].
For each depth d of 𝒯 for d ∈{1, …, log t}, we create a vertex v^d that contains a token and is connected through a single edge to each vertex of 𝒯 that is at depth d.
The edges of 𝒯 are all replaced by paths of length m^3 + log t.
Finally, we create 2(κ - 1) sets {M_1, …, M_2(κ - 1)}, of m - 1 edges each.
We connect each edge e^(i,j)∈ M_i for i ∈ [2(κ-1)] and j ∈ [m-1], from one of its endpoints, denoted u^(i,j), to each vertex v_e_h^i/2,r(1) for each r ∈ [t] if i is odd, and to each vertex v_e_h^i/2,r(2) for each r ∈ [t] if i is even.
Additionally, we connect through paths of length m^3+ log t, each s_r and t_r for r ∈ [t] to all of u^(i,j) for i ∈ [2(κ - 1)] and j ∈ [m - 1].
All vertices in the sets {M_1, …, M_2(κ - 1)} contain tokens.
Setting the budget to log t + 2(2κ - 2) · (m - 1) finalizes the construction of (G, S, , a_1, b_1).
Since we perform only a polynomial number of operations per instance as well as some polynomial in t other operations while creating the tree 𝒯 and connecting some vertices, the reduction is polynomial in Σ^t_i=1 |x_i|.
Additionally, k is O(m^2 + log t) since κ≤ m.
If for some 𝔯∈ [t], (H_𝔯, ϕ_𝔯, κ_𝔯) is a yes-instance of Rainbow Matching, then the constructed instance (G, S, , a_1, b_1) is a yes-instance of VCut-D.
Let ℳ_𝔯 be a solution to the instance (H_𝔯, ϕ_𝔯, κ_𝔯).
ℳ_𝔯⊆ E(H_𝔯) forms a matching in H_𝔯 such that ϕ_𝔯(e_h) ≠ϕ_𝔯(e_g), for all e_h, e_g ∈ℳ_𝔯.
We apply the following slides in (G, S, , a_1, b_1) to disconnect a_1 from b_1.
First, we choose one edge e_h of ℳ_𝔯 and using m - 1 slides, we slide the tokens on u^(1,j) for j ∈ [m-1] onto all vertices in E^1_𝔯(1) except v^1,𝔯_e_h(1).
Then, using (2κ - 3) · (m - 1) slides, for each i ∈ [κ-1], we choose one other edge e_s ∈ℳ_𝔯 and slide the tokens on u^(2i,j) and u^(2i+1,j) (when applicable) for j ∈ [m-1] onto all vertices in E_𝔯^i(2) and E_𝔯^i+1(1) except v^i,𝔯_e_s(2) and v^i+1,𝔯_e_s(1), respectively.
We slide onto u^(i,j) for all i ∈ [2(κ - 1)] and j ∈ [m-1] the tokens adjacent to the latter vertices, on the edges in {M_1, …, M_2(κ-1)}, using (2κ - 2) · (m - 1) slides.
Finally, in 𝒯, we use the tokens on the vertices v^d for d ∈{1, …, log t}, to disconnect all paths from the root a_1 to all of s_r for r ∈ [t] - {𝔯}, using one slide per token.
This ensures that, using at most log t slides, every remaining path from a_1 to b_1 goes through both s_𝔯 and t_𝔯.
Following the described steps, we have executed a total of 2(2κ - 2) · (m - 1) + log t slides, which matches the budget exactly.
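Indeed, summing the slides used in the individual steps gives

(m-1) + (2κ-3)·(m-1) + (2κ-2)·(m-1) + log t = 2(2κ-2)·(m-1) + log t,

which is precisely the budget fixed in the construction.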
To see that a_1 and b_1 are now disconnected, note that after the slides of the tokens on v^d for d ∈{1, …, log t} are performed, all paths from a_1 to b_1 in G go through s_𝔯 and t_𝔯.
Thus it suffices to argue that the remaining 2(2κ - 2) · (m - 1) slides disconnect s_𝔯 and t_𝔯.
Suppose, towards a contradiction, that this is not the case. Then no path between s_𝔯 and t_𝔯 goes through any u^(i,j) for i ∈ [2(κ - 1)] and j ∈ [m-1], since the tokens that left those vertices have been replaced.
Also, the last four vertices on any path between s_𝔯 and t_𝔯 must be v_e_h^p,𝔯(1) for some p ∈ [κ-1] and some e_h ∈ E(H_𝔯), v_e_g^q,𝔯(2) for some q ∈{p, …, κ - 1} and some e_g ∈ E(H_𝔯), t_𝔯^q and t_𝔯.
However, by construction, there exists no paths between all vertices v_e_h^p,𝔯(1) and v_e_g^q,𝔯(2) for each p ≤ q ∈ [κ-1], such that ϕ_𝔯(e_h) ≠ϕ_𝔯(e_g) and e_h and e_g are non-adjacent.
Thus, given our choice of the free vertices remaining in E_𝔯^p(.) for all p ∈ [κ-1], no path exists between s_𝔯 and t_𝔯, and therefore none between a_1 and b_1.
If (G, S, , a_1, b_1) is a yes-instance of VCut-D, then there exists an integer 𝔯∈ [t] for which (H_𝔯, ϕ_𝔯, κ_𝔯) is a yes-instance of Rainbow Matching.
Assume C_ℓ for ℓ≤ b, is a solution to (G, S, , a_1, b_1) that is reached with only 2(2κ - 2) · (m - 1)+ log t slides and disconnects a_1 from b_1, then any token that slides in G slides at most once, given that everything except:
* for d ∈{1, …, log t}, the vertex v^d and each vertex of 𝒯 that is at level d,
* u^(i,j) for i ∈ [2(κ - 1)] and j ∈ [m - 1], to each vertex v_e_h^i/2,r(1) for each r ∈ [t] if i is odd, and to each vertex v_e_h^i/2,r(2) for each r ∈ [t] if i is even,
* the endpoints of each edge e^(i,j)∈ M_i for i ∈ [2(κ - 1)] and j ∈ [m-1],
is connected by paths of length (m^3+ log t) >.
Thus, we know that the tokens on the vertices v^d for d ∈{1, …, log t} can use at most log t slides in total, and must therefore leave open some path from a_1 to b_1 through at least one pair of vertices s_𝔯 and t_𝔯 for some 𝔯∈ [t].
We know that in G ∖ C_ℓ, no path exists between s_𝔯 and t_𝔯.
Since no token can reach s_𝔯 or t_𝔯 within the allocated budget, the remaining slides can only disconnect s_𝔯 from t_𝔯.
Note also that u^(i,j)∈ C_ℓ, for i ∈ [2(κ - 1)] and j ∈ [m - 1], as otherwise, a path from a_1 to b_1 that goes through s_𝔯, u^(i,j) and t_𝔯 would remain token-free.
This implies that at most m - 1 tokens can be slid into any one level {E^1_𝔯(·), …, E^κ - 1_𝔯(·)}.
We show via an inductive argument that the set of edges in H_𝔯 represented by the vertices in {E^1_𝔯(·), …, E^κ - 1_𝔯(·)} but not in C_ℓ must form a matching ℳ_𝔯 in H_𝔯 of size κ_𝔯=κ, such that for e_h , e_g ∈ℳ_𝔯, ϕ_𝔯(e_h) ≠ϕ_𝔯(e_g) and the claim follows.
Let P(q) be the proposition that the set ℰ_q of edges represented by vertices in {E^1_𝔯(·), …, E^q_𝔯(·)} but not in C_ℓ form a matching such that for e_h , e_g ∈ℰ_q, ϕ_𝔯(e_h) ≠ϕ_𝔯(e_g) and that vertices that remain free in E^q+1_𝔯(1) for q < κ - 1 represent the same edges as the vertices that remain free in E^q_𝔯(2).
We show that P(q) holds by induction on the levels q = {1, …, κ - 1}.
We prove the base case by contradiction and assume that a vertex v_e_g^1,𝔯(2) that remains free in E^1_𝔯(2) either represents an edge e_g that is incident to an edge e_h represented by a vertex v_e_h^1,𝔯(1) that remains free in E^1_𝔯(1) or it holds that ϕ_𝔯(e_g) = ϕ_𝔯(e_h).
This implies that there exists a path between s_𝔯 and t_𝔯 that goes from s_𝔯 to s^1_𝔯, to
v_e_h^1,𝔯(1), v_e_g^1,𝔯(2), t^1_𝔯 and to t_𝔯 and thus C_ℓ is not a solution to (G, S, , a_1, b_1).
As for the second part of the statement, assume that a vertex v_e_h^1,𝔯(2) that remains free in E^1_𝔯(2) does not represent the same edge as any of the vertices that remain free in E^2_𝔯(1), then there exists a path between s_𝔯 and t_𝔯 that goes through, s^2_𝔯, then any of the latter vertices, followed by v_e_h^1,𝔯(2) and t^1_𝔯 and thus C_ℓ is not a solution to (G, S, , a_1, b_1).
Note that the same arguments used in the base case apply for the inductive step.
In other words, given the second part of the statement, we may assume (for contradiction purposes) that a vertex v_e_g^i,𝔯(2) for i ≤ q (that remains free in E^i_𝔯(2)) either represents an edge e_g that is incident to an edge e_h represented by a vertex v_e_h^i',𝔯(1) for i' ≤ i (that remains free in E^i'_𝔯(1)) or it holds that ϕ_𝔯(e_g) = ϕ_𝔯(e_h).
By construction, this implies that there exists a path from s_𝔯 and t_𝔯 that goes from s_𝔯 to s^i'_𝔯, v_e_h^i',𝔯(1), v_e_g^i,𝔯(2), t^i_𝔯, and to t_𝔯 and thus C_ℓ is not a solution to (G, S, , a_1, b_1).
As for the second part of the statement, assume that a vertex v_e_h^q,𝔯(2) (that remains free in E^q_𝔯(2)) does not represent the same edge as any of the vertices that remain free in E^q+1_𝔯(1), then there exists a path between s_𝔯 and t_𝔯 that goes through, s^q+1_𝔯, then any of the latter vertices, followed by v_e_h^q,𝔯(2) and t^q_𝔯 and thus C_ℓ is not a solution to (G, S, , a_1, b_1).
Thus, P(κ - 1) holds and the set ℰ_κ - 1 of edges represented by vertices in {E^1_𝔯(·), …, E^κ-1_𝔯(·)} but not C_ℓ form a matching of size κ such that for e_h , e_g ∈ℰ_κ-1, ϕ_𝔯(e_h) ≠ϕ_𝔯(e_g).
This concludes the proof of the theorem.
| In the realm of optimization, traditional approaches revolve around computing optimal solutions to problem instances from scratch.
However, many practical scenarios can be formulated as the construction of a
feasible solution from an infeasible starting state.
Examples of such scenarios include reactive systems involving human interactions.
The inherent dynamics of such a system is likely to lead to an infeasible state.
However, computing a solution from scratch may lead to a solution that may
differ arbitrarily from the starting state.
The modifications required to reach such a solution from the starting state may be costly, difficult to implement, or sometimes unacceptable.
Let us examine a specific example to illustrate.
A set of workers is assigned tasks so that every task is handled by a qualified worker.
This scenario corresponds to the classical matching problem in bipartite graphs.
Suppose one of the workers is now no longer available (e.g. due to illness); hence, the schedule has to be changed.
An optimal new matching could be efficiently recomputed from scratch, but it is desirable to find one that is as close to the original one as possible, so that most of the workers keep working on the task that they were initially assigned.
Such applications can be conveniently modeled using the solution discovery framework, which is the central focus of this work.
In this framework, rather than simply finding a feasible solution to an instance of a source problem , we investigate whether it is possible to transform a given infeasible configuration into a feasible one by applying a limited number of transformation steps.
In this work we consider vertex (edge) subset problems Π on graphs, where the configurations of the problem are sets of vertices (edges).
These configurations are represented by the placement of tokens on the vertices (edges) of the configuration.
An atomic modification step consists of moving one of the tokens, and the question is whether a feasible configuration is reachable after at most b modification steps, for a given budget b.
Inspired by the well-established framework of combinatorial reconfiguration <cit.>, commonly allowed modification steps are the addition/removal of a single token, the jumping of a token to an arbitrary vertex/edge, or the slide of a token to an adjacent vertex (edge).
Problems defined in the solution discovery framework are useful and have been appearing in recent literature.
Fellows et al. <cit.> introduced the term solution discovery, and along with Grobler et al. <cit.> initiated the study of the (parameterized) complexity of solution discovery problems for various NP-complete source problems including Vertex Cover (VC), Independent Set (IS), Dominating Set (DS), and Coloring (Col), as well as various source problems in P such as Spanning Tree (ST), Shortest Path (SP), Matching (Mat), and Vertex Cut (VCut) / Edge Cut (ECut).
Fellows et al. <cit.> and Grobler et al. <cit.> provided a full classification of polynomial-time solvability vs. NP-completeness of the above problems in all token movement models (token addition/removal, token jumping, and token sliding).
For the NP-complete solution discovery problems, they provided a classification of fixed-parameter tractability vs. W[1]-hardness.
Recall that a fixed-parameter tractable algorithm for a problem with respect to a parameter p is one that solves the problem in time f(p) · n^𝒪(1), where n is the size of the instance and f is a computable function dependent solely on p, while W[1]-hardness provides strong evidence that the problem is likely not fixed-parameter tractable (i.e., does not admit a fixed-parameter tractable algorithm) <cit.>.
A classical result in parameterized complexity theory is that every problem that admits a fixed-parameter tractable algorithm necessarily admits a kernelization algorithm as well <cit.>.
A kernelization algorithm for a problem is a polynomial-time preprocessing algorithm that, given an instance x of the problem with parameter p, produces a kernel – an equivalent instance x' of the problem with a parameter p', where both the size of x' and the parameter p' are bounded by a computable function depending only on p <cit.>.
Typically, kernelization algorithms generated using the techniques of Cai et al. <cit.> yield kernels of exponential (or even worse) size.
In contrast, designing problem-specific kernelization algorithms frequently yields more efficiently-sized kernels, often quadratic or even linear with respect to the parameter.
Note that once a decidable problem with parameter p admits a kernelization algorithm, it also admits a fixed-parameter tractable algorithm, as a kernelization algorithm always produces a kernel of size that is simply a function of p.
The fixed-parameter tractable solution discovery algorithms of Fellows et al. <cit.> and Grobler et al. <cit.> are not based on kernelization algorithms.
Unfortunately, it is unlikely that all fixed-parameter tractable problems admit polynomial kernels.
Bodlaender et al. <cit.> developed the first framework for proving kernel lower bounds, and Fortnow and Santhanam <cit.> showed a connection to the hypothesis NP ⊈ coNP/poly.
Specifically, for several NP-hard problems, a kernel of polynomial size with respect to a parameter would imply that NP ⊆ coNP/poly, and thus an unlikely collapse of the polynomial hierarchy to its third level <cit.>.
Driven by the practical benefits of kernelization algorithms, we explore the size bounds on kernels for most of the above-mentioned solution discovery problems in the token sliding model, particularly those identified as fixed-parameter tractable in the works of Fellows et al. <cit.> and Grobler et al. <cit.>.
Overview of our results. We focus on the kernelization complexity of solution discovery in the token sliding model for the following source problems: Vertex Cover, Independent Set, Dominating Set, Shortest Path, Matching, and Vertex Cut.
For a base problem Π we write Π-D for the discovery version in the token sliding model.
<Ref> summarizes our results.
All graph classes and width parameters appearing in this introduction are defined in the preliminaries.
Fellows et al. <cit.> and Grobler et al. <cit.> gave fixed-parameter tractable algorithms with respect to the parameter k for IS-D on nowhere dense graphs, for VC-D, SP-D, Mat-D, and VCut-D on general graphs and for DS-D on biclique-free graphs.
We show that IS-D, VC-D, DS-D, and Mat-D parameterized by k admit polynomial size kernels (on the aforementioned classes), while VCut-D does not admit kernels of size polynomial in k. For SP-D, we show that the problem does not admit a kernel of polynomial size parameterized by k + b, where b is the budget, unless NP ⊆ coNP/poly.
As NP-hardness provides strong evidence that a problem admits no polynomial-time algorithm, W[t]-hardness (for a positive integer t) with respect to a parameter p provides strong evidence that a problem admits no fixed-parameter tractable algorithm with respect to p.
Fellows et al. <cit.> proved that VC-D, IS-D, and DS-D are W[1]-hard with respect to parameter on d-degenerate graphs but provided fixed-parameter tractable algorithms on nowhere dense graphs.
They also showed that these problems are slicewise polynomial (XP) with respect to the parameter treewidth and left open the parameterized complexity of these problems with respect to the parameter treewidth alone.
We show that these problems remain -hard (which implies W[t]-hardness for every positive integer t) for the parameter pathwidth (even if given a path decomposition realising the pathwidth), which is greater than or equal to treewidth, and that they admit no polynomial kernels (even if given a path decomposition realising the pathwidth) with respect to the combined parameter b + pw, where b is the budget and pw is the pathwidth of the input graph, unless NP ⊆ coNP/poly.
Finally, we also consider the parameter feedback vertex set number (fvs), which is an upper bound on the treewidth of a graph, but is incomparable to pathwidth.
We complement the parameterized complexity classification for the results of Fellows et al. <cit.> by showing that IS-D, VC-D, and DS-D are W[1]-hard for the parameter fvs.
Several interesting questions remain open.
For instance, while their parameterized complexity was determined, the kernelization complexity of Col-D and ECut-D remains unsettled.
Similarly, the kernelization complexity of IS-D and DS-D with respect to parameter k is unknown on d-degenerate and semi-ladder-free graphs, respectively, where the problems are known to be fixed-parameter tractable.
In addition, it remains open whether VCut-D parameterized by k + b admits a polynomial kernel, or whether Mat-D parameterized by b admits polynomial kernels on restricted classes of graphs.
Organization of the paper. We introduce all relevant notation in <Ref>. In <Ref>, we provide fundamental graph gadgets that appear in many constructions presented in the paper and provide several lemmas describing useful properties of those gadgets. Afterwards, we present our results for VC-D in <Ref>, IS-D in <Ref>, DS-D in <Ref>, SP-D in <Ref>, Mat-D in <Ref>, and VCut-D in <Ref>. | null | null | null | null | null |
http://arxiv.org/abs/2409.17264v1 | 20240925182105 | Mnemosyne: Parallelization Strategies for Efficiently Serving Multi-Million Context Length LLM Inference Requests Without Approximations | [
"Amey Agrawal",
"Junda Chen",
"Íñigo Goiri",
"Ramachandran Ramjee",
"Chaojie Zhang",
"Alexey Tumanov",
"Esha Choukse"
] | cs.LG | [
"cs.LG",
"cs.DC"
] |
Amey Agrawal (Georgia Institute of Technology), Junda Chen (UC San Diego), Íñigo Goiri (Microsoft), Ramachandran Ramjee (Microsoft), Chaojie Zhang (Microsoft), Alexey Tumanov (Georgia Institute of Technology), Esha Choukse (Microsoft)
§ ABSTRACT
As large language models (LLMs) evolve to handle increasingly longer contexts, serving inference requests for context lengths in the range of millions of tokens presents unique challenges.
While existing techniques are effective for training, they fail to address the unique challenges of inference, such as varying prefill and decode phases and their associated latency constraints – like Time to First Token (TTFT) and Time Between Tokens (TBT).
Furthermore, no long context inference solution today allows batching requests to increase hardware utilization.
In this paper, we propose three key innovations for efficient interactive long context LLM inference, without resorting to any approximation: adaptive chunking to reduce prefill overheads in mixed batching, Sequence Pipeline Parallelism (SPP) to lower TTFT, and KV Cache Parallelism (KVP) to minimize TBT. These contributions are combined into a 3D parallelism strategy, enabling Mnemosyne to scale interactive inference to context lengths at least up to 10 million tokens with high throughput enabled by batching.
To our knowledge, Mnemosyne is the first system able to support 10-million-token long context inference efficiently, while satisfying production-grade SLOs on TBT (30ms) on contexts up to and including 10 million tokens.
§ INTRODUCTION
Correspondence to: Amey Agrawal <[email protected]>, Esha Choukse <[email protected]>.
Emerging applications are pushing large language models (LLMs) to process contexts orders of magnitude longer than current systems support. Tasks like book summarization, movie analysis, multi-agent dialogues with extensive knowledge retrieval, and multi-modal reasoning demand models capable of retaining and reasoning over millions of tokens (<Ref>).
Recently, ring and striped attention mechanisms <cit.> have emerged as powerful tools for training LLMs with long contexts. However, these techniques do not extend well to inference, whose dynamics differ significantly: the prefill and decode phases behave differently and carry distinct latency metrics, Time to First Token (TTFT) and Time Between Tokens (TBT). Inference services also need to handle mixed workloads with varying context lengths, and ring/striped attention falls short in such scenarios because of rigid parallelism and head-of-line (HOL) blocking, a situation where small requests are inordinately delayed behind large prefill operations. This paper addresses these gaps by introducing Mnemosyne, a system that provides scalable and efficient long-context inference.
The first challenge Mnemosyne tackles is HOL blocking. A naive solution is to chunk the input into smaller pieces as suggested by previous work <cit.>. However, chunking introduces read amplification (i.e., large quadratic overheads in GPU memory reads), leading to the prevailing wisdom that the chunking overhead increases significantly with context length <cit.>. Surprisingly, we show the opposite to be true, i.e., the chunking overhead due to quadratic read amplification is high for small prefills, but this overhead reduces as context length increases! In fact, even an extremely small chunk of size 32 incurs only a marginal overhead for attention prefill computation as shown in <Ref>.
Thus, chunking of long context prefills is a key mechanism that helps circumvent HOL blocking. However, choosing the right chunk size presents a continuous trade-off between chunking overhead and maintaining the latency targets for the ongoing prefill and decode phases.
Thus, Mnemosyne proposes adaptive chunking, a dynamic approach to chunk sizing based on workload characteristics.
The second challenge lies in minimizing TTFT. The traditional parallelism strategy for this, tensor parallelism (TP), is insufficient to meet the latency requirements, as it cannot scale beyond a single server due to the slower interconnects between GPUs across different servers. To address this, we combine another traditional solution, pipeline parallelism (PP), with adaptive chunking and a more efficient use of the pipeline during prefill, thereby processing the prefill chunks of a long request in parallel. We refer to this approach as sequence pipeline parallelism (SPP); it enables significant reductions in TTFT.
The third major challenge is TBT, where neither TP, PP, nor SPP offer adequate improvements for very large context requests. To address this, Mnemosyne introduces KV Cache Parallelism (KVP), which distributes the key-value cache across multiple workers spanning servers during the decode phase, effectively parallelizing and accelerating token generation.
With Mnemosyne, we propose an innovative combination of TP, adaptive chunking with SPP, and KVP, resulting in a new 3D parallelism implementation. Mnemosyne enables exact inference with long contexts, while achieving performance scaling for context lengths at least up to 10 million tokens — a first in the field!
In summary, we make the following contributions to long context inference, without adding any approximation:
* Sequence pipeline parallelism (SPP), a novel strategy combining prefill chunking and pipeline parallelism to deal with head-of-line blocking during multi-million context prefill without compromising on the TTFT latency.
* Adaptive chunking and KV cache parallelism (KVP) to dynamically trade-off the TTFT and TBT across requests batched together.
* 3D parallelism, combining TP, SPP, and KVP. We demonstrate the first system to scale LLM inference at least up to 10 million tokens, meeting stringent latency requirements while maintaining efficiency through mixed batching across various context lengths.
§ BACKGROUND AND MOTIVATION
§.§ Long Context Transformers
Auto-regressive generative transformer models operate in two phases during inference. The prefill phase processes the input context, building internal representations of all input tokens (KV cache). The subsequent decode phase generates output tokens one by one, based on previously processed context.
Recent research has demonstrated that large language models can be fine-tuned to handle context lengths spanning millions of tokens. This is achieved by re-scaling positional embeddings <cit.>. These long-context transformers unlock new capabilities, including multi-modal processing and reasoning over several books' worth of textual data. Google's Gemini 1.5 model <cit.> exemplifies these advancements, with support for up to 2 million context length in production.
However, this expanded context window introduces significant computational challenges, particularly in the prefill phase where complexity grows quadratically with input size. The tension between these expanded capabilities and the associated computational demands form the core challenge addressed in this work.
To illustrate the magnitude of the challenges in serving LLMs with long context lengths, consider the following example: for a single request with a million input tokens, we need 320 GB of memory just to store the KV cache, and
a massive 2.4 exaFLOPs for prefill computation.
§.§ Performance Metrics
The inference performance of transformer models is characterized by three key metrics:
Time to First Token (TTFT): The latency from input submission to first output token generation. TTFT is determined by the prefill phase and is critical for interactive applications.
Time Between Tokens (TBT): The delay between consecutive token generations during the decode phase. TBT affects the perceived fluency of model output.
Throughput: The request load that an online inference serving system can sustain while satisfying Service Level Objective (SLO) constraints on request latency. Throughput captures the overall system efficiency.
These metrics capture the fundamental tension between latency and efficiency in serving systems. Optimizing for one often involves trade-offs with the other, necessitating careful system design to balance performance across different use cases.
Long Context Inference. Current production services <cit.> support interactive inference for requests up to one million tokens, achieving TTFT of ∼50 seconds and TBT of ∼20ms. However, as context length approaches 10 million tokens, the quadratic complexity of attention operations makes interactive prefill increasingly challenging. To address this, operators have introduced on-demand context caching <cit.>. This functionality allows users to ingest long prompts as batch jobs, then leverage the resulting KV cache for interactive processing of the followup queries.
§.§ Popular Parallelism Strategies
Modern large transformer models – with billions of parameters and complex architectures – necessitate distributed computation across multiple processors to achieve high throughput with interactive latencies.
Pipeline Parallelism (PP) splits the model layers across different stages, each running on a separate device. As one stage finishes processing a batch of data, it passes it to the next stage. By splitting the memory load across multiple devices, pipeline parallelism frees up more memory for KV cache, thereby enabling higher batch sizes and throughput. The communication between two stages is minimal in PP, allowing to scale inference across several nodes. However, a key limitation of pipeline parallelism is that it does not help with improving inference latency due to sequential dependency between pipeline stages.
On the other hand, Tensor Parallelism (TP) divides tensors within individual model layers, distributing matrix operations (, those in attention mechanisms) across multiple devices (<Ref>). By parallelizing computation at intra-layer granularity, TP enables faster execution of large operations and reduces memory bottlenecks. Consequently, tensor parallelism is effective at both improving latency and throughput of the system.
However, frequent and large communication operations in tensor parallelism require low latency and high bandwidth. This constrains TP's scalability to the NVLINK domain (typically within a single compute node on current hardware), in contrast to PP's broader scalability. In practice, PP and TP are often combined to optimize resource utilization, minimize latency, and maximize throughput.
Long Context Inference. While a combination of PP and TP enables scaling out inference deployment to meet the memory requirements of long context inference, only TP contributes to latency reduction. However, TP's scaling is constrained to a single node. Consequently, when serving requests with millions of tokens, these traditional parallelization approaches fall short in achieving latency objectives.
§.§ Batching Policies
In inference systems, one of the most common ways to increase utilization and throughput is to batch execution of multiple requests together. Orca <cit.> introduced continuous batching wherein requests can dynamically enter or exit a batch at the granularity of individual iterations. However, naively performing iteration-level batching can result in high-tail decode latency due to interference from long prefill requests <cit.>. In order to concurrently achieve high-throughput and low-latency, two popular strategies have emerged the address this prefill-decode interference challenge:
Chunked Prefills <cit.> is a technique to divide the input context into smaller chunks and piggyback the prefill computation of these chunks with existing decode iterations. This mixed batching approach allows efficient computation of prefills at a small delta cost, with predictable decode latency.
<Ref> shows the scheduling of prefill chunks of request B in this approach, unblocking decode phases of request A.
Prefill-Decode Disaggregation <cit.> on the other hand, decouples the execution of prefill and decode phases and splits their computation onto different devices. The dedicated prefill and decode devices run homogeneous batches, with much more predictable latency.
Long context inference. Although chunked prefill is effective at reducing decode latency, previous studies <cit.> suggest that prefill computation with chunking becomes inefficient due to KV cache read amplification, deeming the mechanism unsuitable for long-context inference. On the other hand, the disaggregation techniques require moving the whole KV cache between the prefill and decode devices. This reduces the memory available for the KV cache, making disaggregation less appealing for long context inference where the KV cache can span hundreds of GBs.
§ CHALLENGES
Serving extremely long context requests strains computational resources, challenges system efficiency, and pushes the boundaries of achievable latency. In this section, we explore the fundamental technical challenges that make supporting long context inference challenging.
§.§ Resource Demands
The two primary phases of LLM inference – prefill and decode – exhibit distinct resource requirements <cit.>. The prefill phase is compute intensive, due to the concurrent processing of several prompt tokens in a batched manner. In contrast, the decode phase is memory bound due to sequential generation of each decode token.
As the context length of requests increases, the resource requirement across each of these phases grows asymmetrically. We analyze the resource requirements across three primary axes – computational FLOPs, memory bandwidth, and memory capacity. <Ref> shows the notation used across the paper.
During the prefill process, each prompt token needs to attend to all the prior tokens in the sequence. As a result, the arithmetic operations required for prefill attention computation grow quadratically. For a prompt with n input tokens, the computational FLOPs (F_a(n)) can be captured as follows:
F_a(n) = 4n^2dh_q
Subsequently, during the decode phase, we need to scan through the entire prompt KV cache, which results in linear increase in memory reads. Memory capacity requirement for the KV cache (M_kv(n)), and memory reads (R_a(n)) for decode phase can be computed as:
M_kv(n) = 4ndh_kv
R_a(n) = M_kv(n) = 4ndh_kv
Note that while the prefill computational FLOPs increases quadratically with the number of input tokens, the KV cache memory requirement only grows linearly.
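As a quick sanity check of these formulas, the sketch below evaluates M_kv(n) and F_a(n) in Python for an assumed 70B-scale model with grouped-query attention (80 layers, 8 KV heads and 64 query heads per layer, head dimension 128, fp16 cache); these hyperparameters are illustrative assumptions rather than values taken from the paper's evaluated model.

# Back-of-the-envelope resource estimates; all hyperparameters below are assumptions.
LAYERS = 80
KV_HEADS_PER_LAYER = 8
Q_HEADS_PER_LAYER = 64
HEAD_DIM = 128

d = HEAD_DIM
h_kv = KV_HEADS_PER_LAYER * LAYERS   # total KV heads across all layers
h_q = Q_HEADS_PER_LAYER * LAYERS     # total query heads across all layers

def kv_cache_bytes(n):
    # M_kv(n) = 4 n d h_kv: K and V values, 2 bytes each (fp16)
    return 4 * n * d * h_kv

def prefill_attention_flops(n):
    # F_a(n) = 4 n^2 d h_q
    return 4 * n * n * d * h_q

n = 1_000_000
print(f"KV cache: {kv_cache_bytes(n) / 1e9:.0f} GB")                           # ~328 GB
print(f"Prefill attention: {prefill_attention_flops(n) / 1e18:.1f} exaFLOPs")  # ~2.6 exaFLOPs

Under these assumed hyperparameters the KV cache lands in the same ~320 GB range quoted earlier, and the prefill attention compute is of the same exaFLOP order.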
There are existing online inference services like Gemini <cit.>, that serve inference for 1M input context with a sub-one minute time to first token and over forty tokens per second output rate.
<Ref> shows the theoretical maximum number of input tokens for which the SLO of 30s TTFT and 20ms TBT can be met on a single DGX-H100 node. Compute quickly becomes the bottleneck at 768K input tokens, while memory capacity scales the furthest.
Conversely, <Ref> shows the number of GPUs needed to meet this TTFT and TBT SLO as the number of input tokens increases.
20 GPUs are needed for 1M context length, and 80 GPUs for 2M input context.
Production serving systems like Gemini aim to provide an interactive user interface that enables processing and analysis of large input prompts – as a result, the SLO for prefill processing cannot grow quadratically with the prompt length even if the compute requirements do. This dichotomy leads to a stark skew in the resource requirements, where the quadratic compute of the prefill phase becomes the most significant bottleneck.
§.§ Parallelizing Long Context Computation
Recent works <cit.> have proposed new attention parallelism techniques to tackle the computational needs for training of long context transformer models. Ring Attention <cit.>, as shown in <Ref>, proposes to parallelize computation across the sequence dimension by partitioning the query across participating workers.
Each worker is responsible for the computation of a shard of query tokens. At the beginning, each worker performs attention computation with the query (Q) and key-value (KV) blocks from the shard assigned to it. Subsequently, each worker transfers its KV block to the next worker, while concurrently receiving the KV block from the previous worker in a ring formation. This allows each query shard to attend to all the KV shards. For better efficiency, Liu et al. <cit.> overlap the attention computation of each block with the KV cache transfer.
The causal nature of attention in modern LLMs (where each token only needs to attend to tokens prior to itself), leads to a workload imbalance in Ring Attention – where each worker is assigned a query block sequentially without accounting for the causal masking.
Striped Attention <cit.> addresses this challenge by assigning non-contiguous strips of query tokens to each worker such that the workload across all the workers is divided almost uniformly, resulting in upwards of 1.5× speedup compared to Ring Attention.
However, adopting these state-of-the-art training parallelization techniques for inference presents several challenges:
C1. Head-of-line blocking:
Ring and striped attention do not allow preemption of a long context prefill phase. For multi-million context lengths, this can cause head-of-line blocking that can last several minutes.
C2. Batching support:
Ring and striped attention as applied to inference can help solely with reducing prefill latency.
These parallelization approaches do not support batching, or mixed batching. This leads to low system utilization and throughput.
C3. Rigid parallelism and variable input lengths:
In a real world service, the input sequences are of various lengths <cit.>, ranging from 10s to 1000s, and now millions of tokens.
The way ring and striped attention distribute tokens across GPUs is optimized for the longest context they need to serve within a latency SLO.
However, this distribution is rigid, and has to be reused for smaller requests.
Building upon <Ref>, the following equation shows the arithmetic intensity of attention:
I_a(n) ≃F_a(n)/R_a(n) = n h_q/h_kv
Thus, the arithmetic intensity of attention prefill operation is directly proportional to the number of tokens. In ring attention, the n input tokens are distributed evenly across all the participating workers (p_ra), resulting in proportionate decrease in the arithmetic intensity of the operation. Consequently, when the number of tokens per worker becomes too small, the operation becomes memory/network bound and the KV cache block transfers become the bottleneck. Thus the optimal parallelization degree in ring attention is dependent on the sequence length.
This creates a trade-off between latency objectives and hardware utilization. If the deployment is optimized for minimizing latency of long context requests (with a large number of workers), the hardware utilization suffers for shorter requests, and vice versa.
C4. Lacking support for decode computation:
Since ring and striped attention were both originally proposed for long context training, they can be used to parallelize the prefill phase of the long context inference, but do not directly extend to the decode phase.
This renders these solutions incomplete for long context inference and calls for an end-to-end solution for long context inference.
§.§ Requirements from an Efficient Long Context Serving System
As shown above, existing techniques like ring/striped attention <cit.> do not meet the needs of long context LLM inference.
Based on our analysis, to build an efficient long context serving system, we need to:
R1: Meet the stringent TTFT and TBT SLOs of long context interactive services.
R2: Drive up the hardware utilization to increase throughput per device, and reduce the cost per inference.
R3: Efficiently handle a wide range of context length requests at the same time within a single system.
§ MNEMOSYNE: SYSTEM DESIGN
§.§ Revisiting Chunked Prefills for Long Context
As described in <Ref>, distributing the input prompt into smaller chunks for prefill phase allows better scheduling across requests and prefill/decode phases. This is because the maximum batch processing time can now be decoupled from the input length of the prefill request.
This technique holds a lot of promise for supporting batching in long context LLM inference. However, previous analyses <cit.> concluded that chunked prefills were unsuitable for long context handling due to read amplification.
The following equation shows the amount of data read into the GPU cores during a contiguous prefill phase for a request with n input tokens:
R_a(n) = M_kv(n) = 4ndh_kv
Since the chunked prefill technique temporally distributes a single prefill phase across several forward passes of the model, the total reads increase linearly with number of chunks.
This is depicted in <Ref>.
Different parts of the query Q are scheduled sequentially. However, each Q_n needs to attend to the KV cache from all the previous chunks.
This read amplification can be formulated as follows:
R_cp(n, c) = ∑_i=1^n/c R_a(ic) = 𝒪(n^2)
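Expanding this sum makes the quadratic growth explicit:

∑_i=1^n/c 4icdh_kv = 4cdh_kv · (n/c)((n/c) + 1)/2 ≈ 2n^2dh_kv/c,

so halving the chunk size roughly doubles the total KV-cache traffic, even though each individual chunk remains cheap.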
This read amplification from chunked prefill increases the cache reads from O(n) to O(n^2), leading to the belief that chunked prefills are inefficient at processing long context requests. We challenge this view by examining the problem through the lens of arithmetic intensity.
Our key insight is that the arithmetic intensity of a prefill chunk depends solely on the chunk size, not the sequence length. This counter-intuitive result arises because in chunked prefills, even though processing each chunk requires us to fetch all the previously generated KV-cache tokens from memory, we also need to perform c arithmetic operations for each of the KV-cache tokens – corresponding to each token in the prefill chunk. Thus, while longer sequences do require more KV-cache reads, the number of arithmetic operations per read remains constant, determined by the chunk size.
Modern LLMs further amplify this effect through grouped-query attention <cit.>, as shown in <Ref>. In the model we study, for instance, 8 query heads share a single KV head, boosting arithmetic intensity approximately 8-fold compared to linear layers <cit.>. This leads to a surprising conclusion: a prefill chunk of merely ∼40 tokens suffices to saturate GPU compute. This observation enables us to implement effective batching and fine-grained preemption policies by decomposing multi-million token prefills into thousands of small, manageable chunks. Each chunk executes in tens of milliseconds, contrasting sharply with ring attention's minutes-long, monolithic prefill computations.
I^i_cp(n, c) = F^i_cp(n, c)/R^i_cp(n, c)≃4ic^2dh_q/4icdh_kv = c h_q/h_kv
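The ∼40-token figure quoted above follows from this intensity expression via a simple roofline argument; the sketch below reproduces it for assumed accelerator characteristics (≈989 TFLOP/s dense BF16 and ≈3.35 TB/s of HBM bandwidth, roughly H100-class) and an 8:1 query-to-KV head ratio; all of these numbers are assumptions used purely for illustration.

# Roofline-style estimate of the smallest prefill chunk that keeps attention compute-bound.
# The accelerator numbers are assumptions (roughly H100-class), not values from this paper.
PEAK_FLOPS = 989e12        # dense BF16 FLOP/s
MEM_BANDWIDTH = 3.35e12    # bytes/s
GQA_RATIO = 8              # query heads per KV head, i.e. h_q / h_kv

machine_balance = PEAK_FLOPS / MEM_BANDWIDTH      # FLOPs that must accompany each byte read

# Per-chunk intensity is I_cp = c * h_q / h_kv FLOPs per byte of KV cache read (see above),
# so attention stays compute-bound once c * GQA_RATIO >= machine_balance.
min_chunk = machine_balance / GQA_RATIO
print(f"machine balance: {machine_balance:.0f} FLOPs/byte")        # ~295
print(f"smallest compute-bound chunk: ~{min_chunk:.0f} tokens")    # ~37, consistent with ~40 above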
We have identified an additional factor that led to the mischaracterization of chunked prefills for longer contexts in prior studies <cit.>. Traditional attention kernels parallelized prefill computation by distributing work across query tokens. This approach is effective for standard prefill operations where query and KV token counts are equal. However, it falls short in chunked prefills, where the limited number of query tokens constrains parallelization opportunities.
Recently, FlashDecoding <cit.> introduced a method to accelerate decode computations for long requests by sharding work across KV tokens. Building on this insight, state-of-the-art attention kernels <cit.> now parallelize prefill computation across both query and KV token dimensions. This two-dimensional parallelization strategy enables efficient chunked prefill computation, even for very long contexts.
§.§ Adaptive Chunked Prefills
<Ref> shows that using a chunk size of 32 leads to only a moderate overhead of 11% compared to a chunk size of 2048 in attention computation. However, operating with a small chunk size can result in significant end-to-end performance degradation due to inefficient computation of linear layers and other fixed CPU overheads. Running on 8 GPUs, we observe that a chunk size of 32 has 1.75× higher prefill latency for a 1-million-token request compared to a chunk size of 4096. On the other hand, larger chunk sizes lead to higher decode latency for the requests that are batched alongside, as shown in <Ref>. This leads to an undesirable trade-off between prefill and decode latency.
To avoid this trade-off, we adopt a dynamic chunk size adjustment policy. This policy is based on the observation that in the later iterations of prefill processing, where the per-chunk latency is high, attention accounts for the dominant fraction of the runtime – as a result, smaller chunks become relatively more efficient in this later phase. To maintain low decode latency without sacrificing prefill efficiency, we start with a large chunk size and dynamically reduce it as the prefill progresses. We adopt the runtime prediction component from the Vidur simulator <cit.> to identify the largest chunk size that can be processed without violating the decode latency SLO. Adaptive chunking allows Mnemosyne to obtain a significantly better prefill-decode latency trade-off, as shown in <Ref>.
This approach lays the groundwork for future extensions that could incorporate more complex scheduling objectives, such as fairness <cit.> or deadline-aware scheduling <cit.>.
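A minimal sketch of this dynamic chunk-sizing loop is given below; predict_batch_time stands in for the runtime predictor (in the spirit of the Vidur-based component mentioned above) and is an assumed interface rather than an existing API.

def pick_chunk_size(processed_tokens, remaining_tokens, tbt_slo_ms, predict_batch_time,
                    candidate_sizes=(4096, 2048, 1024, 512, 256, 128, 64, 32)):
    # Return the largest candidate chunk whose mixed batch is predicted to meet the decode SLO.
    # predict_batch_time(chunk_size, kv_len_so_far) -> predicted iteration latency in ms (assumed).
    for c in candidate_sizes:                       # largest first
        c = min(c, remaining_tokens)
        if predict_batch_time(c, processed_tokens) <= tbt_slo_ms:
            return c
    return min(candidate_sizes[-1], remaining_tokens)   # fall back to the smallest chunk

def run_prefill(total_tokens, tbt_slo_ms, predict_batch_time, schedule_chunk):
    # Chunks shrink over time: as the KV cache grows, attention dominates the iteration latency.
    done = 0
    while done < total_tokens:
        c = pick_chunk_size(done, total_tokens - done, tbt_slo_ms, predict_batch_time)
        schedule_chunk(done, done + c)              # enqueue tokens [done, done + c) for this iteration
        done += c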
§.§ Sequence Pipeline Parallelism
The quadratic increase in prefill latency with sequence length poses a significant challenge for user experience. While tensor parallelism has traditionally been used to reduce latency, it faces scaling limitations beyond a single compute node due to high communication overhead. Pipeline parallelism, though efficient in scaling across multiple GPUs, primarily improves throughput rather than latency.
We introduce a critical insight – combining chunked prefills with pipeline parallelism can substantially reduce prefill latency through an optimized pipelining schedule. Conventional PP inference systems <cit.> use interleaved micro-batches to maintain pipeline efficiency.
For a request A, chunk i+1 is typically scheduled only after chunk i completes all pipeline stages (<Ref>). While necessary for auto-regressive decoding, this approach is sub-optimal for the prefill phase, where the processing of individual prefill chunks is independent of the model output from the previous chunk.
Our key innovation lies in scheduling chunk i+1 immediately after chunk i completes the first pipeline stage (<Ref>) during prefill. We call this approach sequence pipeline parallelism (SPP). This dense pipelining schedule efficiently parallelizes prefill processing, yielding near-linear speedup with increased GPU count, as described by:
T^spp_p(n, c) ≃T_p(n, c)/p_spp + T^pp_comm(c)n/c∼T_p(n, c)/p_spp
Here, T^spp_p(n, c) represents the SPP prefill time for n tokens with chunk size c, T_p(n, c) is the standard prefill time, p_spp is the degree of sequence pipeline parallelism, and T^pp_comm(c) accounts for inter-stage communication time. The communication overhead term T^pp_comm(c)n/c becomes negligible for large n, leading to near-linear scaling.
This approach presents a distinctive advantage: the effectiveness of SPP remains independent of variations in input sequence length, unlike ring attention, where the degree of parallelism is closely tied to sequence length. Additionally, SPP supports batching and preemption, facilitating more efficient scheduling.
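The effect of this dense schedule can be seen with a toy timing model. Assuming p pipeline stages that each take roughly the same time per chunk, the sketch below contrasts the conventional schedule, in which a request's chunk i+1 starts only after chunk i drains the entire pipeline, with the SPP schedule, in which chunk i+1 enters stage 1 as soon as chunk i moves on; it is an illustration of the scheduling idea, not the system's scheduler.

def conventional_prefill_time(num_chunks, num_stages, stage_time):
    # A single request's next chunk is admitted only after the previous one exits the last stage.
    return num_chunks * num_stages * stage_time

def spp_prefill_time(num_chunks, num_stages, stage_time):
    # The pipeline stays full after the initial fill, so chunks complete once per stage_time.
    return (num_chunks + num_stages - 1) * stage_time

chunks, stages, t = 1000, 8, 0.05   # illustrative numbers: 1000 chunks, 8 stages, 50 ms per stage
print(conventional_prefill_time(chunks, stages, t))  # ~400 seconds
print(spp_prefill_time(chunks, stages, t))           # ~50 seconds, near-linear speedup in stages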
Our approach bears some resemblance to a technique proposed by Terapipe <cit.>.
This work improves pipeline efficiency in model training. While both approaches leverage pipelining across sequence chunks, they differ significantly in their primary objectives and application contexts. TeraPipe aims to optimize throughput in the training phase by minimizing pipeline bubbles. In contrast, applies a similar pipelining concept to the inference phase, specifically targeting latency reduction for online serving of long-context requests.
§.§ KV Parallelism
While SPP offers an effective mechanism to reduce prefill latency, it cannot be leveraged to optimize decode latency due to the cross-iteration dependency in auto-regressive decoding. To address this challenge, we propose KV parallelism (KVP), a novel technique that effectively reduces decode latency by parallelizing KV-cache reads.
In KVP, the KV cache is sharded across multiple GPUs along the sequence dimension. During each iteration, we replicate the query token(s) across all GPUs and compute partial attention outputs based on the local KV-cache shard. These partial outputs are then combined using online-softmax <cit.>. A critical advantage of KVP over techniques like ring-attention and its derivatives is that the communication cost for KVP (T^kvp_comm) is independent of the KV-cache length and only depends on the number of query tokens. This characteristic makes KVP extremely effective in managing decode latency for long-context requests.
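The exact merge of per-shard partial results can be written down compactly. The numpy sketch below shows one way the partial outputs and their log-sum-exp statistics could be combined for a single query token; it is an illustrative reconstruction of the online-softmax merge, not the system's kernel.

import numpy as np

def shard_attention(q, K_shard, V_shard):
    # Partial attention of one query vector against this worker's KV-cache shard.
    scores = K_shard @ q / np.sqrt(q.shape[0])            # (n_shard,)
    m = scores.max()
    w = np.exp(scores - m)
    partial_out = (w[:, None] * V_shard).sum(axis=0) / w.sum()
    lse = m + np.log(w.sum())                              # log-sum-exp of this shard's scores
    return partial_out, lse

def merge_partials(partials):
    # Combine per-shard outputs into the exact global softmax-weighted result.
    outs = np.stack([o for o, _ in partials])
    lses = np.array([l for _, l in partials])
    weights = np.exp(lses - np.logaddexp.reduce(lses))     # softmax mass carried by each shard
    return (weights[:, None] * outs).sum(axis=0)

# Quick check: the sharded result matches unsharded attention.
rng = np.random.default_rng(0)
q, K, V = rng.normal(size=64), rng.normal(size=(1024, 64)), rng.normal(size=(1024, 64))
sharded = merge_partials([shard_attention(q, K[s], V[s])
                          for s in (slice(0, 300), slice(300, 700), slice(700, 1024))])
full, _ = shard_attention(q, K, V)
assert np.allclose(sharded, full)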
The performance improvement of KVP can be modeled by the following equation:
T^kvp_d(n) ≃T^attn_d(n)/p_kvp + (T_d(n) - T^attn_d(n)) + T^kvp_comm
Where T^kvp_d(n) is the decode time with KVP, T^attn_d(n) is the attention computation time, p_kvp is the degree of KV parallelism, T_d(n) is the total decode time, and T^kvp_comm is the communication overhead.
Our experiments reveal that KV parallelism is also effective in reducing the latency impact of prefills on the decodes of other batched requests in mixed batching scenarios. For instance, when processing a 4 million context length request, the P95 decode latency (for the requests batched alongside) with even a small chunk of 128 tokens reaches almost 100ms.
The concept of KV parallelism extends naturally to chunked prefills. For long sequences, the communication cost of KV parallelism (^iT^kvp_comm(c)) becomes significantly smaller than the attention computation itself. This relationship can be expressed as:
^iT^kvp_p(n, c) ≃^iT^attn_p(n, c)/p_kvp + (^iT_p(n, c) - ^iT^attn_p(n, c)) + ^iT^kvp_comm(c)
Where ^iT^kvp_p(n, c) is the prefill time for the i-th chunk with KVP, ^iT^attn_p(n, c) is the attention computation time for the chunk, ^iT_p(n, c) is the total prefill time for the chunk, and ^iT^kvp_comm(c) is the communication overhead for the chunk.
To optimize resource utilization, we employ a dynamic growth strategy for KV parallel workers rather than pre-allocating all of them ahead of time. We define a maximum number of KV-cache tokens per request that would be managed by a single KV parallel worker. Initially, a single KV parallel worker is allocated to a request. Once we reach the limit of maximum KV tokens on a worker, we onboard a new KV parallel worker. This approach allows each KV parallel replica to independently batch other short requests while cooperatively processing the long request, ensuring efficient resource use across varied workloads.
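A few lines of bookkeeping suffice to sketch this growth policy; the per-worker cap and the worker-pool interface below are assumptions made for illustration.

MAX_KV_TOKENS_PER_WORKER = 1_000_000   # assumed cap on one request's KV tokens per KVP worker

class KvpRequestState:
    def __init__(self, acquire_worker):
        self.acquire_worker = acquire_worker        # callable returning a fresh KV-parallel worker
        self.workers = [acquire_worker()]           # start with a single worker
        self.tokens_on_last = 0

    def append_kv_tokens(self, num_new_tokens):
        # Route newly produced KV-cache tokens, onboarding another worker whenever the cap is hit.
        while num_new_tokens > 0:
            room = MAX_KV_TOKENS_PER_WORKER - self.tokens_on_last
            if room == 0:
                self.workers.append(self.acquire_worker())
                self.tokens_on_last = 0
                continue
            take = min(room, num_new_tokens)
            self.workers[-1].store(take)            # assumed worker API for storing KV entries
            self.tokens_on_last += take
            num_new_tokens -= take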
§.§ 3D Parallelism
To meet the demanding requirements of both prefill and decode latency in long-context LLM inference, we introduce a novel 3D parallelism strategy. This approach combines sequence pipeline parallelism (SPP), KV parallelism (KVP), and tensor parallelism (TP) to effectively scale performance across hundreds of GPUs. The synergy of these parallelization techniques addresses different aspects of the inference process:
* SPP accelerates prefill computation,
* KVP reduces decode latency, and
* TP enhances both prefill and decode phases through model-level parallelization.
Determining the optimal configuration for this 3D parallelism is a complex task that requires careful consideration of multiple factors, including latency SLOs for both prefill and decode phases, the mix of request lengths in the workload, and the scaling efficiency of each parallelization dimension. Our experiments reveal that each parallelization technique has distinct scaling characteristics. SPP demonstrates excellent scalability, maintaining over 80% efficiency when scaled up to 16 servers (<Ref>). This makes SPP particularly effective for reducing prefill latency in very long context scenarios. KVP, while crucial for meeting stringent decode latency targets, exhibits more limited scalability beyond 2-4 servers. This limitation stems from higher communication overhead compared to pipeline parallelism and parallelization limited to attention computation, which constrains its scaling efficacy due to Amdahl's law.
Given the complexity of optimizing this 3D parallelism strategy, existing LLM inference simulators <cit.> can be leveraged to identify optimal parallelization configuration based on the specific workload requirements.
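As an illustration of such a search (the latency estimators, scaling behavior, and parallelism options below are placeholders, not measured values), one can enumerate candidate (p_spp, p_kvp) pairs at a fixed p_tp and keep the cheapest configuration that satisfies both SLOs.

```python
from itertools import product

def search_3d_config(estimate_ttft, estimate_tbt, ttft_slo, tbt_slo,
                     p_tp=8, spp_options=(1, 2, 4, 8, 16), kvp_options=(1, 2, 4)):
    """Cheapest (gpus, p_spp, p_kvp) meeting both SLOs; estimators may wrap a simulator."""
    feasible = []
    for p_spp, p_kvp in product(spp_options, kvp_options):
        if estimate_ttft(p_spp, p_kvp) <= ttft_slo and estimate_tbt(p_spp, p_kvp) <= tbt_slo:
            feasible.append((p_tp * p_spp * p_kvp, p_spp, p_kvp))
    return min(feasible) if feasible else None

# Toy estimators standing in for a profiling-based simulator (illustrative numbers only):
# prefill time scales down with SPP depth; decode scales down with KVP but pays a small
# per-stage pipeline overhead.
best = search_3d_config(estimate_ttft=lambda spp, kvp: 240.0 / spp,
                        estimate_tbt=lambda spp, kvp: 45.0 / kvp + 2.0 * spp,
                        ttft_slo=30.0, tbt_slo=30.0)
print(best)   # (total_gpus, p_spp, p_kvp) of the cheapest SLO-compliant configuration
```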
§ PLATFORM OPTIMIZATIONS
Efficient long-context LLM inference requires not only novel parallelization strategies but also careful platform-level optimizations. We extend the Sarathi-Serve framework <cit.> to address unique challenges that emerge at scale in three key areas: inter-process communication, model execution, and page-table management. As shown in <Ref>, these optimizations yield up to a 4× reduction in decode latency compared to existing systems. In the rest of this section, we detail our targeted engineering improvements in these areas, which enable us to handle multi-million token contexts effectively.
Inter-process communication
Traditional systems like vLLM and Sarathi-Serve <cit.> rely on centralized schedulers to allocate memory and communicate page tables and sequence tokens to GPU workers in each iteration. This approach incurs significant overhead as sequence length increases. We mitigate this by replicating sequence state across the scheduler and all GPU workers, thereby reducing the communication volume. Additionally, we replace Ray <cit.> with ZeroMQ <cit.> for scheduler-worker communication, eliminating Global Interpreter Lock (GIL) <cit.> contention as we scale to hundreds of workers.
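For illustration only, and not the actual wire protocol, the following pyzmq sketch shows the kind of PUB/SUB channel a single scheduler can use to broadcast small per-iteration deltas to many workers without a centralized framework; the endpoint, hostname, and message schema are hypothetical.

```python
import zmq

def make_scheduler_publisher(endpoint="tcp://*:5555"):              # hypothetical endpoint
    ctx = zmq.Context.instance()
    pub = ctx.socket(zmq.PUB)
    pub.bind(endpoint)
    return pub

def make_worker_subscriber(endpoint="tcp://scheduler-host:5555"):   # hypothetical host
    ctx = zmq.Context.instance()
    sub = ctx.socket(zmq.SUB)
    sub.connect(endpoint)
    sub.setsockopt_string(zmq.SUBSCRIBE, "")     # subscribe to every scheduler message
    return sub

# Scheduler side, once per iteration: send only the delta, not the full sequence state.
#   pub.send_json({"request_id": 17, "new_tokens": 128, "new_block_ids": [4096, 4097]})
# Worker side: delta = sub.recv_json(); apply it to the locally replicated state.
```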
Model execution
We integrate Flashinfer <cit.> kernels, which efficiently distribute work across both query and KV tokens. This enhancement enables effective chunked prefill computation, crucial for processing extended contexts. To ensure strict latency targets even with small prefill chunks, we implement CUDA graphs for mixed batches.
Page-table management
We observe that copying large page tables from CPU to GPU memory in every iteration introduced considerable overhead. Our solution was to implement GPU-side page tables. We bootstrap the page table for a request upon initial on-boarding and subsequently transfer only delta updates to the GPU. This approach significantly reduces data movement between CPU and GPU memory and the latency impact of paged attention for long context lengths.
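A minimal sketch of the GPU-resident page table with delta updates is given below; the tensor layout, class name, and sizes are illustrative choices, not the actual data structure.

```python
import torch

class GpuPageTable:
    """Request page table kept resident in GPU memory; only deltas cross PCIe."""

    def __init__(self, max_blocks, device="cuda"):
        self.table = torch.empty(max_blocks, dtype=torch.int32, device=device)
        self.num_blocks = 0

    def bootstrap(self, block_ids):
        """One-time full upload when the request is first onboarded."""
        self.table[: len(block_ids)] = torch.as_tensor(
            block_ids, dtype=torch.int32, device=self.table.device)
        self.num_blocks = len(block_ids)

    def append_delta(self, new_block_ids):
        """Per-iteration update: copy only the newly allocated blocks."""
        if new_block_ids:
            end = self.num_blocks + len(new_block_ids)
            self.table[self.num_blocks:end] = torch.as_tensor(
                new_block_ids, dtype=torch.int32, device=self.table.device)
            self.num_blocks = end

    def view(self):
        return self.table[: self.num_blocks]     # handed to the paged-attention kernel
```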
§ EVALUATION
Our parallelism strategies aim to enable efficient end-to-end inference for multi-million token contexts. We evaluate our system by addressing the following key questions:
* Prefill Performance and Preemptibility: How does our system improve prefill latency (TTFT) and preemptibility for long contexts compared to existing approaches? (<ref>)
* Decode Performance: To what extent can our system maintain low Time Between Tokens (TBT) as context length increases? (<ref>)
* 3D Parallelism Effectiveness: How does our 3D parallelism strategy balance the trade-offs between TTFT and TBT in end-to-end inference scenarios? (<ref>)
* Throughput: What level of system throughput can be achieved while maintaining latency targets? (<ref>)
* Adaptability: How well does our system handle varying context lengths within a single deployment? (Addressed throughout <ref>-<ref>)
Through answering these questions, we demonstrate our system's ability to meet the challenges of long-context LLM inference across a range of operating conditions.
§.§ Evaluation Setup
Platform
We implement our system, with the additional optimizations and design innovations described in this work, on top of the Sarathi-Serve framework <cit.>,
including the baseline optimizations described in <Ref>.
We implement the design changes for the SPP, KVP, and 3D parallelism strategies on this optimized baseline.
Models and datasets
We use two models with RoPE <cit.> scaling to support context lengths of up to 10M tokens.
Both models have 8 KV heads, allowing up to p_tp=8.
Since ours is an exact inference system, there is no approximation or impact on model accuracy from our design. The effect is limited solely to the latency and throughput of the system.
Therefore, we do not depend on any scoring system or input datasets for our evaluation.
Hardware
We use up to 16 InfiniBand connected DGX-H100 systems <cit.>.
Each DGX-H100 server has 8 NVIDIA H100 GPUs <cit.> with 80GB of high bandwidth memory each for a total of up to 128 GPUs.
GPUs within a server are connected with NVLINK4.0 providing 900GBps bidirectional bandwidth.
GPUs across different servers are connected with InfiniBand <cit.>, offering 50GBps per GPU pair.
System configurations
We evaluate with the following configurations:
* 2D parallelism with SPP+TP, where p_tp is set to 8 (within the GPUs in a single server), and SPP is used to scale across servers for fast and pre-emptable prefill.
* 2D parallelism with KVP+TP, where p_tp = 8, and KVP is used to scale across servers for fast decode.
* 3D parallelism with KVP+SPP+TP, where p_tp = 8, and SPP and KVP are used to scale out the final design.
§.§ Fast and Pre-emptable Prefill with SPP
As discussed in <Ref>, SPP uses chunked prefills with efficient pipeline utilization to accelerate the prefill phase for long sequences.
Baseline comparison
The strongest baseline among previous work for long context prefill is Striped Attention <cit.>.
<Ref> compares the prefill latency of striped attention <cit.> with 2D SPP+TP, for 1M tokens.
We use a chunk size of 4K to show the TTFT latency without the effects from adaptive chunking due to mixed batching.
This evaluation was run using 1-16 DGX-H100 servers.
2D SPP+TP is 64% faster than striped attention using 128 GPUs (16 servers) when processing one million tokens, achieving TTFT latency under 15 seconds.
At the same time, Striped Attention cannot support chunked prefills, leading to fully blocking long-context prefills.
This exacerbates head-of-line blocking, as shown in Figure <ref>, leading to delays as high as 120s with Striped Attention, compared to just 62ms with 2D SPP+TP.
2D SPP+TP therefore achieves the best of both worlds, with faster and much more pre-emptable prefill phases than the strongest baseline.
Scaling SPP to 10M
<Ref> shows the TTFT achieved by 2D SPP+TP, as we increase the number of tokens from 1M to 10M, and vary the pipeline depth for SPP from 1 to 16 for and .
The red crosses indicate configurations that are not feasible due to insufficient memory.
2D SPP+TP achieves close to linear scaling efficiency with increasing pipeline depth, thanks to the optimizations described in <Ref>.
We meet the 30-second TTFT target for context lengths up to 2M with 16 DGX servers.
The strong scaling trendlines suggest that more DGX servers would enable even shorter TTFT latency for longer contexts.
§.§ Fast Decode with KVP
Previous works do not support the decode phase for long context inference.
Therefore, as a baseline, we show the decode phase performance (TBT) with 2D SPP+TP.
Note that the target SLO is 30ms.
Baseline TBT with 2D SPP+TP
SPP is primarily a solution to unlock throughput and lower TTFT.
<Ref> shows the TBT achieved as SPP scales out for 2M context on and .
Given the small overheads of pipeline parallelism, TBT gets worse with high SPP degree p_spp, with more visible effects on smaller models.
This is because of the constant SPP communication overhead and higher computation time per pipeline stage going from to .
Faster TBT with 3D parallelism
<Ref> shows the TBT achieved for and with 4M and 10M context length in 3D parallel KVP+SPP+TP setup, where p_tp=8, p_spp=4 for , p_spp=8 for , and p_kvp is varied.
p_spp=8 was used for , since as shown in <Ref>, longer context lengths for do not fit within p_spp=4.
<Ref> shows that increasing p_kvp brings down the TBT considerably, helping achieve interactivity targets.
The latency benefit is not linear due to Amdahl's law, but gets more pronounced with longer context length.
Increasing p_kvp from 1 to 4 (thereby using 4× the GPUs) reduces TBT by only 1.7× for 4M context length, whereas for 10M context length this benefit increases to 2.5×.
Since p_kvp = 1 can meet TBT target SLOs until 4M context for , and until 2M context for , we propose KVP scaling only if the input context is beyond those respective lengths.
§.§ End-to-end inference with 3D
Finally, we evaluate 3D parallelism with mixed batching of prefills and decodes.
The main contribution of this work is to be able to serve multi-million context lengths end-to-end, while meeting the TTFT and TBT SLOs of interactive jobs.
This requires mixed batching per iteration.
We evaluate the trade-off between TTFT and TBT with our design.
TTFT vs TBT trade-off
We sweep the space of various chunk sizes for the chunked prefill, and also vary p_kvp, while keeping p_spp=4.
<Ref> shows the results on .
For a given p_kvp, increasing the chunk size reduces TTFT (prefill latency), since it requires fewer iterations.
At the same time, it increases TBT, since each batched iteration takes longer to execute.
Therefore, for sequence length 1M with p_kvp=1, the green line shows the left-most triangle at the largest chunk size, and the right-most triangle at the smallest chunk size.
For a given chunk size, increasing p_kvp helps reduce both TTFT and TBT in most cases, thus helping reach more optimal points in this trade-off space.
Indeed, lower p_kvp achieves better TTFT latency in cases with lower arithmetic intensity (due to small chunk size), as exemplified by the right-most points for 1M context length. As we increase the arithmetic intensity (e.g., 2M context length), we see increasing p_kvp achieving the same performance for the smallest chunk size, and, finally, decreasing TTFT for 4M context length.
The main takeaway from <Ref> is that the combination of the proposed parallelization strategies creates more choices in the TBT/TTFT trade-off space, which is significant when operating under tight latency SLO specifications.
A system with a more sparsely populated set of configuration options could render the same TBT/TTFT SLO pair infeasible.
In contrast, our design increases the feasible range over a broader set of configuration options in this trade-off space.
Timeline of KVP instance addition
<Ref> shows the timeline for 3D parallel processing of 2 million tokens in one of the runs from <Ref> with p_tp=8, p_kvp=4, and p_spp=4.
The 3D parallel setup starts with p_kvp = 1, p_tp=8, and p_spp=4, using 32 GPUs.
Then, KVP instances of 32 GPUs each are added one at a time, based on the sequence length processed so far, until the deployment reaches the maximum of 128 GPUs.
Increasing context length and increasing parallelism act as opposing forces allowing consistent iteration execution time.
§.§ Enabling high throughput
The final goal is to achieve high throughput. We evaluate the throughput of the 3D parallel system in two ways:
first, Model FLOPs Utilization (MFU) and Model Bandwidth Utilization (MBU), to show that the hardware resources are well utilized, and
second, the impact of batching requests on latency (minimal impact enables high throughput).
MFU and MBU
MFU and MBU are metrics to measure the hardware utilization for compute-bound phases and memory-bound phases respectively.
These metrics were coined recently and have since been widely used to report the efficiency of LLM inference and training systems <cit.>.
In LLM inference, prefill phases are compute-bound, and decode phases are memory-bound <cit.>.
Therefore, in <Ref>, we show the MFU for the prefill phase of 2D SPP+TP and the MBU for the decode phase of 2D KVP+TP.
We achieve between 50-75% MFU, which is very high, even when compared to training-optimized systems <cit.>.
For MBU, we reach as high as 92% representing very efficient use of memory bandwidth.
As we increase the degrees of parallelism to achieve lower latency, the utilization decreases, as expected.
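For reference, both metrics are simple achieved-to-peak ratios. The sketch below uses the standard definitions; the peak numbers are indicative values for an H100-class GPU and the example inputs are hypothetical, so they should be replaced with profiled figures.

```python
def mfu(model_flops_per_token, tokens_per_second, num_gpus, peak_flops_per_gpu=989e12):
    """Model FLOPs Utilization: achieved model FLOP/s over aggregate peak FLOP/s.
    The default peak is an indicative dense BF16 figure for an H100-class GPU."""
    return model_flops_per_token * tokens_per_second / (num_gpus * peak_flops_per_gpu)

def mbu(bytes_moved_per_token, tokens_per_second, num_gpus, peak_bandwidth_per_gpu=3.35e12):
    """Model Bandwidth Utilization: achieved memory traffic over aggregate peak bandwidth.
    The default peak is an indicative HBM bandwidth for an H100-class GPU (bytes/s)."""
    return bytes_moved_per_token * tokens_per_second / (num_gpus * peak_bandwidth_per_gpu)

# Hypothetical decode example: each generated token streams the weights plus the KV cache,
# so bytes_moved_per_token is roughly model_bytes + kv_cache_bytes.
print(mbu(bytes_moved_per_token=140e9 + 80e9, tokens_per_second=100, num_gpus=8))  # ~0.82
```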
Batching efficiency
<Ref> shows the impact of batching multiple decode phases together with chunked prefill phases. The main takeaway is that increasing batch size up to 128 requests does not increase the batch execution time more than 5%. This enables large batch sizes and high throughput.
Furthermore, the figure also underscores the impact of increasing chunk size on the TBT, or, the batch execution time.
§ DESIGNING SYSTEMS WITH
Section <ref> shows that our techniques enable a system to serve varying context lengths of up to 10M tokens with low latency and high throughput.
Designing a deployment around these techniques requires a few more steps.
Scheduling policy
Our design enables preemption of prefill phases and adaptive chunking to trade off TTFT and TBT, both of which require a dynamic scheduling policy.
Additionally, it opens up another rich space for scheduling policies to help increase system throughput: independent scheduling of KVP instances.
Since each KVP instance has a full model replica within it, unless the instances are working together on a long context inference, they can be scheduled separately for shorter requests.
This presents a huge opportunity for throughput optimization.
Finding the right parallelism
The 3D parallelism introduced in this work provides a rich space for trading off TTFT, TBT, and throughput.
This calls for a profiling stage with representative traces to find the right parallelism fit.
Online vs offline inference
Despite the strong scaling of SPP and KVP parallelism, meeting interactive latency targets for TTFT (e.g., 30 seconds for 10M context length) would require a prohibitive number of GPUs.
We envision that longer context lengths will either need approximation, or a disaggregated setup with prefill and decode phases separated as proposed in previous works <cit.>.
A disaggregated setup allows offline context building with slower prefill phase, which is then sent over to a decode-optimized system to meet the interactive TBT latency SLOs.
Exact vs approximate
Even though in this paper we present a solution that serves long contexts without any approximation, this does not preclude the proposed parallelism strategies from being used with the approximate techniques described in <Ref>.
§ RELATED WORK
LLMs for long context
Recent research has focused on effectively training and efficiently serving long-context LLM models. Some propose new attention parallelism techniques as more efficient solutions to enable long context <cit.>. We discuss and compare them in detail in <ref>. Concurrently, compute kernel optimizations <cit.> have enhanced parallelism across different dimensions within a GPU, enabling efficient computation over long contexts.
In addition, techniques like LongRoPE <cit.> identify effective rescale factors and adopt a progressive extension strategy, allowing positional embeddings to be re-scaled and the context window to be extended beyond a million tokens for existing LLMs without the need for additional fine-tuning.
A similar idea to SPP without adaptive chunking, called token-parallelism, was used in TeraPipe <cit.> to parallelize the different micro-batches of a mini-batch along the token dimension in order to reduce pipeline bubbles and improve throughput during training. In , we create small mixed-batches of chunked prefill and decodes and then parallelize these mixed batches to maintain latency targets during inference.
Approximate alternatives
To reduce computational complexity and enhance serving efficiency, State Space Models (SSMs) <cit.> propose alternatives to the traditional attention-based architecture. Other approaches, such as locality-sensitive hashing <cit.>, compressive attention <cit.>, and prompt or KV cache compression <cit.>, aim to reduce both the computational and memory footprint of transformer-based LLMs.
While these approaches trade accuracy for reduced computation and memory needs, potentially enabling long-context inference, we focus on transformer-based models that maintain accuracy by retaining the full context.
LLM inference efficiency
To improve LLM inference serving efficiency, researchers have proposed various techniques including chunked prefills <cit.>, prefill-decode disaggregation <cit.>, elastic parallelism <cit.>, and distributed KV cache <cit.>. Inference requests at million-token scale introduce a new set of challenges (<Ref>) and require careful designs to incorporate these methods effectively at scale. In a concurrent work, Mooncake <cit.> proposed chunked pipeline parallelism, which is similar to SPP for long-context inference serving but focuses solely on prefill requests of up to 128K tokens.
While some other mechanisms (e.g., offloading <cit.>) address memory bottlenecks, compute remains the primary bottleneck for online long context inference serving (<ref>).
Nonetheless, these techniques can be useful for offline processing in resource constrained situations.
§ CONCLUSION
Long context LLM inference use-cases continue to grow, constantly pushing the boundary of what constitutes "long" context.
With , we propose a set of novel parallelism strategies that enable serving end-to-end inference for context lengths scaling at least up to 10M tokens, with fast prefill and decode phases, meeting the interactive latency target of TTFT up to 2M tokens, and that of TBT up to 10M tokens.
Furthermore, our design enables requests with variable context lengths on input, and mixed batching with adaptive chunking.
This allows dynamic scheduling based on deadlines, enabling higher system throughput with latency guarantees while supporting input of varying context lengths.
| Correspondence to: Amey Agrawal <[email protected]>, Esha Choukse <[email protected]>.
Emerging applications are pushing large language models (LLMs) to process contexts orders of magnitude longer than current systems support. Tasks like book summarization, movie analysis, multi-agent dialogues with extensive knowledge retrieval, and multi-modal reasoning demand models capable of retaining and reasoning over millions of tokens (<Ref>).
Recently, ring and striped attention mechanisms <cit.> have emerged as powerful tools for training LLMs with long contexts. However, these techniques do not extend well to inference, where the dynamics differ significantly due to the distinct prefill and decode phases and their respective latency metrics: Time to First Token (TTFT) and Time Between Tokens (TBT). Inference services also require handling mixed workloads with varying context lengths, and ring/striped attention falls short in such scenarios because of rigid parallelism and head-of-line (HOL) blocking, a situation where small requests are inordinately delayed behind large prefill operations. This paper addresses these gaps by introducing a system that provides scalable and efficient long-context inference.
The first challenge we tackle is HOL blocking. A naive solution is to chunk the input into smaller pieces, as suggested by previous work <cit.>. However, chunking introduces read amplification (i.e., large quadratic overheads in GPU memory reads), leading to the prevailing wisdom that the chunking overhead will increase significantly with increasing context length <cit.>. Surprisingly, we show the opposite to be true, i.e., the chunking overhead due to quadratic read amplification is high for small prefills, but this overhead reduces as context length increases! In fact, even an extremely small chunk of size 32 incurs only a marginal overhead for attention prefill computation, as shown in <Ref>.
Thus, chunking of long context prefills is a key mechanism that helps circumvent HOL blocking. However, choosing the right chunk size presents a continuous trade-off between chunking overhead and maintaining the latency targets for the ongoing prefill and decode phases.
Thus, we propose adaptive chunking, a dynamic approach to chunk sizing based on workload characteristics.
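As a simplified illustration of what dynamic chunk sizing can look like (the policy, cost model, and coefficients below are our own stand-ins, not the actual scheduler logic), one can select the largest chunk whose predicted mixed-batch execution time still fits the decode latency budget of the requests batched alongside.

```python
def pick_chunk_size(predict_batch_time_ms, tbt_budget_ms,
                    candidate_chunks=(32, 64, 128, 256, 512, 1024, 2048, 4096)):
    """Largest prefill chunk whose mixed batch still fits the decode TBT budget.
    Falls back to the smallest candidate if nothing fits."""
    best = candidate_chunks[0]
    for chunk in candidate_chunks:
        if predict_batch_time_ms(chunk) <= tbt_budget_ms:
            best = chunk
    return best

def predicted_batch_time_ms(chunk_tokens, context_so_far=2_000_000):
    # Illustrative cost model: fixed decode cost, per-token chunk cost, and an attention
    # term that grows with the context already processed (coefficients are placeholders).
    return 12.0 + 0.004 * chunk_tokens + 1e-8 * chunk_tokens * context_so_far

print(pick_chunk_size(predicted_batch_time_ms, tbt_budget_ms=30.0))   # 512 for this model
```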
The second challenge lies in minimizing TTFT. Traditional parallelism strategy for this, tensor parallelism (TP), is insufficient to meet the latency requirements, as it cannot scale beyond a single server due to the slower interconnects between GPUs across different servers. To address this, we combine another traditional solution, pipeline parallelism (PP) with adaptive chunking and a more efficient pipeline use during prefill, thereby processing the prefill chunks of a long request in parallel. We refer to this approach as sequence pipeline parallelism (SPP) and this enables significant reductions in TTFT.
The third major challenge is TBT, where neither TP, PP, nor SPP offer adequate improvements for very large context requests. To address this, introduces KV Cache Parallelism (KVP), which distributes the key-value cache across multiple workers across servers during the decode phase, effectively parallelizing and accelerating token generation.
We propose an innovative combination of TP, adaptive chunking with SPP, and KVP, resulting in a new 3D parallelism implementation. This enables exact inference with long contexts, while achieving performance scaling for context lengths at least up to 10 million tokens, a first in the field!
In summary, we make the following contributions to long context inference, without adding any approximation:
* Sequence pipeline parallelism (SPP), a novel strategy combining prefill chunking and pipeline parallelism to deal with head-of-line blocking during multi-million context prefill without compromising on the TTFT latency.
* Adaptive chunking and KV cache parallelism (KVP) to dynamically trade-off the TTFT and TBT across requests batched together.
* 3D parallelism, combining TP, SPP, and KVP. We demonstrate the first system to scale LLM inference at least up to 10 million tokens, meeting stringent latency requirements while maintaining efficiency through mixed batching across various context lengths. | LLMs for long context
Recent research has focused on effectively training and efficiently serving long-context LLM models. Some propose new attention parallelism techniques as more efficient solutions to enable long context <cit.>. We discuss and compare them in detail in <ref>. Concurrently, compute kernel optimizations <cit.> have enhanced parallelism across different dimensions within a GPU, enabling efficient computation over long contexts.
In addition, techniques like LongRoPE <cit.> identify effective rescale factors and adapt progressive extension strategy, allowing re-scaling positional embeddings and extending context window beyond million tokens for existing LLMs without the need for additional fine-tuning.
A similar idea to SPP without adaptive chunking, called token-parallelism, was used in TeraPipe <cit.> to parallelize the different micro-batches of a mini-batch along the token dimension in order to reduce pipeline bubbles and improve throughput during training. In , we create small mixed-batches of chunked prefill and decodes and then parallelize these mixed batches to maintain latency targets during inference.
Approximate alternatives
To reduce computational complexity and enhance serving efficiency, State Space Models (SSMs) <cit.> propose alternatives to the traditional attention-based architecture. Other approaches, such as locality-sensitive hashing <cit.>, compressive attention <cit.>, and prompt or KV cache compression <cit.>, aim to reduce both the computational and memory footprint of transformer-based LLMs.
While these approaches trade accuracy for reduced computation and memory needs—potentially enabling long-context inference – we focus on transformer-based models that maintain accuracy by retaining the full context.
LLM inference efficiency
To improve LLM inference serving efficiency, researchers have proposed various techniques including chunked prefills<cit.>, prefill-decode disaggregation <cit.>, elastic parallelism<cit.>, and distributed KV cache<cit.>. Inference requests at million-token scale introduce a new set of challenges (<Ref>) and require careful designs to incorporate these methods effectively at scale. In a concurrent work, Mooncake <cit.>, chunked pipeline parallelism was proposed that is similar to SPP for long context inference serving but focusing solely on prefill requests with up to 128K tokens.
While some other mechanisms (e.g., offloading <cit.>) address memory bottlenecks, compute remains the primary bottleneck for online long context inference serving (<ref>).
Nonetheless, these techniques can be useful for offline processing in resource constrained situations. | null | null | null | Long context LLM inference use-cases continue to grow, constantly pushing the boundary of what constitutes as "long" context.
With , we propose a set of novel parallelism strategies that enable serving end-to-end inference for context lengths scaling at least up to 10M tokens, with fast prefill and decode phases, meeting the interactive latency target of TTFT up to 2M tokens, and that of TBT up to 10M tokens.
Furthermore, enables variable context requests on input, and mixed batching with adaptive chunking.
This allows dynamic scheduling based on deadlines, enabling higher system throughput with latency guarantees while supporting input of varying context lengths.
|
http://arxiv.org/abs/2409.17514v1 | 20240926035019 | Characteristics of Powerful Radio Galaxies | [
"Chandra B. Singh",
"Michael Williams",
"David Garofalo",
"Luis Rojas Castillo",
"Landon Taylor",
"Eddie Harmon"
] | astro-ph.HE | [
"astro-ph.HE",
"astro-ph.GA"
] |
§ INTRODUCTION
Understanding feeding and feedback from black holes has been a cornerstone of extragalactic astronomy since radio quasars were detected and interpreted in the 1960s <cit.>. Models for accreting black holes date back to the 1970s <cit.>, with those for AGN coming into their own by the 1990s <cit.>. Despite some success, the most widely accepted paradigm for astrophysical black holes has yet to explain a number of observations: AGN with jets are the minority; powerful jetted AGN live earlier, while bright, jetless AGN peak later; jetted AGN have more massive black holes; and merger signatures in jetted/non-jetted AGN do not unambiguously point to a trigger. FRI radio galaxies have the most massive black holes while FRII radio galaxies have less massive ones. FRII high-excitation radio galaxies (HERG) are at higher redshift, FRII low-excitation radio galaxies (LERG) are at intermediate redshift, and FRI LERG are at lower redshift. The key missing element that allows a simultaneous understanding of all the above is counter-rotation between accretion disks and black holes <cit.>. With this idea, we have not only addressed the above issues but have recently been able to explain the details of the M-σ relation between black hole mass and stellar velocity dispersion <cit.> and the lifetimes of radio quasar jets. This model is compatible with the jet/wind scenario discussed in <cit.>, as explained in <cit.>. Here, we connect the relation among star formation rate, stellar velocity dispersion, core stellar brightness, excitation level, and merger signatures. In Section 2.1, we describe the data, and in Section 2.2, we describe the theory that explains it. In Section 3, we conclude.
§ DISCUSSION
§.§ Observations
In this section, we begin by deriving a proxy for the average, or overall, behavior of star formation rates as a function of galaxy-bulge stellar velocity dispersion at late times in the universe (i.e., past the peak of quasars at a redshift of 2). To accomplish this, we extract the densities of star-forming galaxies (Table 1) and of quiescent galaxies (Table 2) from the velocity dispersion functions (VDF) of <cit.>, which are given in terms of logarithms. Taking the antilogarithms of the density values in <ref> (i.e., raising 10 to the logarithmic values given in <cit.>) reproduces the VDF of <cit.>. With these densities, we construct the ratio of star-forming galaxy density to quiescent galaxy density (Table 3) and consider it as a proxy for the average rate of star formation between redshifts of 0.6 and 1. We show the anti-correlation between the Table 3 data and stellar velocity dispersion in Figure 1. This is strong evidence that galaxies with high stellar velocity dispersions tend to have low star formation rates <cit.>.
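The construction of this proxy is easy to reproduce; the sketch below uses made-up logarithmic densities purely to illustrate the computation, not the published values.

```python
import numpy as np

# Hypothetical log10 number densities per velocity-dispersion bin, standing in for the
# published star-forming and quiescent VDF values; 10**log recovers the densities.
sigma_kms          = np.array([110, 150, 190, 230, 270])
log_n_star_forming = np.array([-2.4, -2.7, -3.2, -3.9, -4.8])
log_n_quiescent    = np.array([-3.0, -2.9, -2.9, -3.1, -3.4])

ratio = 10.0**log_n_star_forming / 10.0**log_n_quiescent   # proxy for average SFR activity
for sigma, r in zip(sigma_kms, ratio):
    print(f"sigma = {sigma:3d} km/s  ->  star-forming/quiescent = {r:.2f}")
# The ratio declines with sigma: high-dispersion galaxies are predominantly quiescent.
```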
In addition to the inverse relation between star formation rate and stellar velocity dispersion, the evidence associates low star formation rates with the jetted AGN subgroup and higher star formation rates with the non-jetted AGN subgroup. <cit.> evaluated the median star formation rate for MaNGA AGN, both jetted and non-jetted, finding the non-jetted AGN to have a median star formation rate of 0.631 solar masses per year while the jetted AGN have a median of 0.025 solar masses per year. This is a factor of 25 difference in the star formation rate between the two AGN subclasses. The star formation rates for the radio-quiet quasars of <cit.> are between 20 and 300 solar masses/year. <cit.> found an average star formation rate of 30 solar masses/year for their sample of 160 radio-quiet quasars.
In addition to star formation and velocity dispersion, we show evidence for a larger stellar component in the nucleus of powerful, low-excitation FRI radio galaxies (i.e., exhibiting weak emission lines) as compared to radio-quiet quasars. We look at the data from <cit.>, pick out the elliptical galaxies in their sample with the largest Sersic indices and with jets, and report them in Table 4. The average Sersic index of these galaxies is n = 9.7, while the average Sersic index for the disk galaxies of <cit.> is 3.3. The Sersic index n appears in the intensity profile as a function of distance r as (see <cit.> for details)
I(r) = I_e exp{-b_n [(r/r_e)^{1/n} - 1]}.
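To make the comparison concrete, the short sketch below evaluates the profile for the two mean indices quoted above; it adopts the common approximation b_n ≈ 2n - 1/3, which is our choice for illustration and is not specified in the text.

```python
import numpy as np

def sersic_profile(r_over_re, n, I_e=1.0):
    """Sersic profile I(r) = I_e * exp(-b_n * ((r/r_e)**(1/n) - 1)).
    b_n uses the standard large-n approximation b_n ~ 2n - 1/3 (an assumption here)."""
    b_n = 2.0 * n - 1.0 / 3.0
    return I_e * np.exp(-b_n * (np.asarray(r_over_re) ** (1.0 / n) - 1.0))

radii = np.array([0.01, 0.1, 1.0, 5.0])          # in units of the effective radius r_e
for n in (9.7, 3.3):                             # mean indices quoted above
    print(n, np.round(sersic_profile(radii, n), 3))
# The n = 9.7 profile is far more centrally concentrated: at r = 0.01 r_e its brightness
# excess over I_e is roughly an order of magnitude larger than for n = 3.3.
```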
NGC 1399, the dominant galaxy in the Fornax Cluster, has a black hole just under a billion solar masses, an FRI jet, an estimated Bondi accretion rate of 9.1 × 10^-3 solar masses/year <cit.>, and a low rate of star formation <cit.>. NGC 4261 is an elliptical galaxy in the Virgo Cluster with a black hole of about 1.67 billion solar masses <cit.>, a visible two-sided FRI jet <cit.>, a low star formation rate <cit.>, a stellar velocity dispersion of 297 km/s <cit.>, and a low excitation nucleus <cit.>, with an estimated Bondi accretion rate of 6 × 10^-2 solar masses per year <cit.>. NGC 4374 is an elliptical galaxy in the Virgo Cluster with a black hole of 0.93 billion solar masses <cit.>, a low rate of star formation <cit.>, an estimated Bondi accretion rate of 2.4 × 10^-3 solar masses per year <cit.>, and a stellar velocity dispersion of 278 km/s <cit.>. NGC 4486 has a low star formation rate <cit.>, an estimated Bondi accretion rate of 0.13 solar masses per year <cit.>, and a stellar velocity dispersion of 323 km/s <cit.>. We do not discuss NGC 6251, with a Sersic index of 11.8, because it appears to be a giant radio galaxy, which is modeled in our paradigm as a system that experiences a second bout of counter-rotation. It also preserves its nuclear stellar component but requires further discussion beyond this work. Radio-quiet quasars, by comparison, appear to have higher star formation rates, lower velocity dispersions, higher excitation signatures, and dimmer nuclear stellar cores. The star formation rate of the radio-quiet quasar SBS 1520 + 530, for example, is measured to be 190 solar masses/year <cit.>, while its Sersic index is only 2.7 <cit.>. Finally, the merger signatures in FRI radio galaxies are fewer than in other AGN <cit.>, and even more intriguing is the finding that the signatures of mergers decrease with increasing galactic stellar velocity dispersion <cit.>. We place these objects and their properties in Table 4. In the next section, we describe the theoretical context that allows for an understanding of how these five characteristics come about in this particular subclass of AGN.
§.§ Theory
In this section, we use a simple model schematic to show how the formation and evolution of radio quasars strings together the five characteristics described in the previous section, namely,
* bright nuclear stellar cores
* low excitation accretion
* high stellar velocity dispersion
* low star formation rates
* weak merger signatures
a. bright nuclear stellar cores
Our theoretical picture is anchored to the idea of counter-rotation between a spinning black hole and an accretion disk, a configuration that results from a merger of two galaxies. Each galaxy provides a supermassive black hole, and the two eventually coalesce. In order for this to occur, the black hole binary must rid itself of its binary angular momentum such that the black holes are well within a pc of each other, where gravitational waves can complete the merger. Because a corotating disk struggles to bring the binary within 1 pc <cit.>, nearby stars are needed to further eliminate the binary angular momentum, and this leads to a depleted stellar core. If the disk is counter-rotating with respect to the binary black hole angular momentum, on the other hand, the accretion disk is effective in extracting the binary angular momentum <cit.>, and the black holes can merge without depleting their stellar cores. This is illustrated in Figure 2. As we will show, the active galaxies triggered by counter-rotating accretion will tend to become the red-and-dead FRI radio galaxies, which are therefore prescribed by theory to have bright nuclear cores.
b. low excitation accretion
The counter-rotating accretion configuration is associated with an FRII jet <cit.>, whose effect is to create a funnel-like region through the ISM by pushing the cold gas away and enhancing the density. The higher-density region is shown in purple, while the less-dense funnel region is in grey in Figure 3. The energy deposited in the hotspots circles back to affect the accretion process by feeding heated gas and altering the state of accretion to an advection-dominated accretion flow (ADAF). Such a transition requires no less than and can occur during counter-rotation, but it dominates during the longest-lasting phase of such a radio galaxy, which follows the transition through zero black-hole spin <cit.>. This is captured by the change in color for the accretion disk from yellow in Figure 3 to dirty blue in Figure 4.
c. High stellar velocity dispersion
While the black hole spin has a non-zero value, the Bardeen–Petterson effect <cit.> applies, but it disappears when the black hole spin is zero and a new plane of accretion is formed that is determined by the angular momentum of the incoming gas <cit.> and which is responsible for the high stellar velocity dispersion as we now describe. The Bardeen–Petterson effect results from Lens–Thirring precession <cit.>, a general relativistic effect that ultimately forces the disk’s angular momentum to be aligned or anti-aligned with the angular momentum of the black hole. While the Bardeen–Petterson effect is believed to operate continuously in X-ray binaries, where the donor star feeds the compact object with angular momentum that is misaligned with that of the compact object, such an effect operates in a restrictive manner in the subset of AGN that lead to tilted jets, as we now show. The timescale for alignment is given by <cit.> as
T_align = 3 a M (2 R_GR / R_BP)^{1/2} / (dM/dt)
where a is the dimensionless black hole spin, M is the black hole mass, R_GR is the gravitational radius, and R_BP is the Bardeen–Petterson radius, given as a function of black hole spin as
R_BP = A a^{2/3} R_GR
where A is a constant, and dM/dt is the accretion rate onto the black hole. While this expression was derived for corotation, it has been shown that the timescale for counter-rotation is the same <cit.>. This expression tends to zero as the black hole spin approaches zero. According to our model prescription (<cit.> and Figure 4), however, the accretion rate drops as the transition through zero spin occurs, which has the effect of increasing the alignment timescale. We model the timescale for an accreting black hole that approaches the boundary between thin disk and ADAF prior to reaching zero spin (Figure 5). As the black hole approaches zero spin in the counter-rotating regime, it initially experiences a jump in the alignment timescale due to the decrease in accretion rate (Figure 6), becoming about 25 times larger than the alignment timescale of the post-merger black hole that accretes at the Eddington rate. A counter-rotating black hole spins down to zero spin from an initially maximally spinning black hole in about 8 × 10^6 years if it accretes at the Eddington limit. Because the alignment timescale is longer than this <cit.>, counter-rotating black holes tend to have inner disks that are anti-aligned with the black hole angular momentum and outer disks beyond the Bardeen–Petterson radius that are tilted. This will last until the black hole is close to zero black hole spin. Once zero spin is reached, the Bardeen–Petterson effect disappears, the accretion disk forms in the plane determined by the angular momentum of the incoming gas from the greater galaxy, and the alignment timescale becomes irrelevant beyond that time. We therefore only show the alignment timescale as a function of spin during the counter-rotating regime. This bump in the alignment time is larger for the most powerful radio quasars, whose feedback effect rapidly transitions their thin disks into ADAFs. We therefore expect this effect to dominate in denser environments. Our model thus prescribes that FRII radio quasars/radio galaxies possess Bardeen–Petterson transition radii and that FRI radio galaxies do not. In other words, only in post-mergers will the accreting black holes have this feature. While this applies to radio-quiet quasars as well, we are here concerned with the jetted AGN subclass. Because radio quasars are modeled as counter-rotating black holes, they are relatively shorter-lived, at least compared to FRI radio galaxies. The observational consequences of this prediction remain unexplored.
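Substituting R_BP into T_align gives T_align proportional to (M/(dM/dt)) a^(2/3), which the short sketch below evaluates; the constant A and the Eddington-rate normalization (a Salpeter-like time of about 4.5 × 10^7 yr) are illustrative assumptions on our part, as is the size of the accretion-rate drop.

```python
import numpy as np

def t_align_yr(a, mdot_edd_fraction, A=100.0, salpeter_yr=4.5e7):
    """T_align = 3 a M (2 R_GR / R_BP)^(1/2) / (dM/dt) with R_BP = A a^(2/3) R_GR,
    i.e. T_align = 3 (M / Mdot) (2/A)^(1/2) a^(2/3).  M/Mdot at the Eddington rate is
    written as a Salpeter-like time; A and that normalization are illustrative choices."""
    return 3.0 * np.sqrt(2.0 / A) * a ** (2.0 / 3.0) * salpeter_yr / mdot_edd_fraction

spins = np.array([0.9, 0.5, 0.2, 0.05])
print(t_align_yr(spins, mdot_edd_fraction=1.0))    # thin-disk phase at the Eddington rate
print(t_align_yr(spins, mdot_edd_fraction=1e-2))   # after the transition to an ADAF
# The timescale shrinks as the spin decreases but jumps by the same factor by which the
# accretion rate drops, so a counter-rotating hole can reach zero spin before the outer
# disk aligns, leaving it tilted as described above.
```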
When the black hole spins up again beyond the zero black hole spin in corotation and a jet is again formed, it will be tilted with respect to the earlier FRII jet phase, as shown in Figure 4. This has a direct impact on the stars in the bulge, which are pushed around and acquire an enhancement in their velocity dispersion (shown by red arrows). Note that the jet is drawn thicker as compared to the jet shown in Figure 3. This is due to the fact that the jet in Figure 4 enters a denser region as compared to the FRII jet of Figure 3. Here we see an explanation for the different jet morphology. The FRI jet morphology is due to both an environmental effect (greater density) and an engine-based effect (its tilt).
d. low star formation rates
The absence of the Bardeen–Petterson effect and the tilted jet that develops in corotation will make it difficult to drill through the denser medium (Figure 4), and this leads to the FRI morphology, which deposits its energy in the ISM, heating the gas and impeding star formation. This effect on star formation is referred to as the Roy Conjecture <cit.>. As the star formation rate drops, the galaxy experiences increasingly fewer new stars and an aging of the existing stars, which gives the galaxy the character of an old red-and-dead elliptical galaxy. The old stars are drawn in red in Figure 4, and the hot halo is shown in green.
e. weak merger signatures
A powerful FRII jet that rapidly transitions its accretion disk to an ADAF in a few million years will accrete at 10^-2 of the Eddington limit, which means that it will spin its black hole down to zero spin in about 5 × 10^8 years. In order to become an FRI radio galaxy, the black hole spin needs to reach a spin value of about 0.2, which, at the Eddington limit, takes about 20 million years. The accretion rate is at 10^-2 of the Eddington limit, however, and continues to decrease over this time period, so the FRI jet comes into play no sooner than about 2 × 10^9 years after zero spin and lives as long as there is accretion fuel. Because of the low accretion rates at this late phase, FRI radio galaxies live billions of years in such states. Because merger signatures last about 100 million years, it is clear that for mature FRI radio galaxies like M87, no merger signatures are expected to survive. In general, the longer the FRI radio galaxy lives, the weaker the merger signatures become. Because the theory prescribes that the jet affects stellar velocity dispersion, the longer the FRI jet exists, the larger the predicted stellar velocity dispersions. Therefore, the model makes the interesting prediction of an inverse relation between stellar velocity dispersion and merger signatures. If galactic velocity dispersion increases with time, an anti-correlation is generated between merger signatures and galactic stellar velocity dispersion, as is found observationally <cit.>.
§ CONCLUSIONS
The fact that star formation is either enhanced <cit.> or suppressed <cit.> due to AGN feedback has been met with mixed evidence. Our paradigm for black hole feedback posits the enhancement of star formation for FRII radio galaxies associated with counter-rotating black holes. Jets likely tilt with respect to the radio quasar phase in the transition through zero spin to produce corotation, and this allows the jet energy to more directly couple to the interstellar medium, heating it and suppressing star formation. This tilt also affects stellar dispersions in the galactic bulge. Hence, in our paradigm, star formation suppression and the enhancement of stellar velocity dispersion go together. It is therefore of primary relevance to explore their connection over cosmic time. <cit.> have provided the data for this up to a redshift of 1. Galaxies with higher stellar dispersion values are strongly associated with lower star formation rates. Because the radio galaxies with the highest stellar dispersion values and lowest star formation rates are prescribed to be the end state of what originate as powerful radio quasars, a prediction is identified from the model that at high redshift, both suppressed star formation rates and stellar velocity dispersions should exhibit less-extreme values. While star formation rates should on average be higher than at lower redshift, the opposite would be true for stellar dispersions. Because this anti-correlation is instantiated as a result of original counter-rotation, black holes that are not triggered into counter-rotation will not experience it. Because counter-rotation is more efficient in merging black hole binaries, the AGN triggered in counter-rotation should experience non-depleted stellar cores. We have provided evidence of such correlations. In addition, both the low excitation character of a subset of radio galaxies and the weak merger signatures are a natural consequence of the late time evolution of these powerful AGNs. Finally, we have identified a prediction concerning the existence of tilted disks in a subset of AGN, whose observational features remain to be explored. Along with recent FRII jet lifetime estimates <cit.>, the evidence presented here constitutes strong support for counter-rotating black holes as a key element in our understanding of black hole feedback.
Conceptualization, C.B.S. and D.G.; methodology, D.G., L.R.C., L.T. and E.H.; software, M.W., L.R.C., L.T. and E.H.; formal analysis, M.W. and D.G.; writing—original draft, C.B.S. and D.G.; writing—review and editing, C.B.S. and D.G.; supervision, D.G. All authors have read and agreed to the published version of the manuscript.
This research was funded by the National Natural Science Foundation of China, grant number 12073021.
No new data were created or analyzed in this study. Date sharing is not applicable to this article.
The authors declare no conflicts of interest.
References
B1-universe-3078783
Hazard, C.; Mackey, M.B.; Shimmins, A.J. Investigation of the Radio Source 3C 273 by the Method of Lunar Occultations. Nature 1963, 197, 1037. [https://doi.org/10.1038/1971037a0CrossRef]
B2-universe-3078783
Schmidt, M. 3C 273: A Star-Like Object with Large Red-Shift. Nature 1963, 197, 1040. [https://doi.org/10.1038/1971040a0CrossRef]
B3-universe-3078783
Oke, J.B. Absolute Energy Distribution in the Optical Spectrum of 3C 273. Nature 1963, 197, 1040. [https://doi.org/10.1038/1971040b0CrossRef]
B4-universe-3078783
Greenstein, J.L.; Matthews, T. Red-Shift of the Unusual Radio Source: 3 C 48. Nature 1963, 197, 1041. [https://doi.org/10.1038/1971041a0CrossRef]
B5-universe-3078783
Padovani, P.; Alexander, D.M.; Assef, R.J.; De Marco, B.; Giommi, P.; Hickox, R.C.; Richards, G.T.; Smolčić, V.; Hatziminaoglou, E.; Mainieri, V. Active galactic nuclei: What’s in a name? Astron. Astrophys. Rev. 2017, 25, 2. [https://doi.org/10.1007/s00159-017-0102-9CrossRef]
B6-universe-3078783
Shakura, N.I.; Sunyaev, R.A. Black holes in binary systems. Observational appearance. Astron. Astrophys. 1973, 24, 337.
B7-universe-3078783
Blandford, R.D.; Znajek, R.L. Electromagnetic extraction of energy from Kerr black holes. Mon. Not. R. Astron. Soc. 1977, 179, 433. [https://doi.org/10.1093/mnras/179.3.433CrossRef]
B8-universe-3078783
Blandford, R.D.; Payne, D.G. Hydromagnetic flows from accretion disks and the production of radio jets. Mon. Not. R. Astron. Soc. 1982, 199, 883. [https://doi.org/10.1093/mnras/199.4.883CrossRef]
B9-universe-3078783
Urry, C.M.; Padovani, P. Unified schemes for radio-loud active galactic nuclei. Publ. Astron. Soc. Pac. 1995, 107, 803. [https://doi.org/10.1086/133630CrossRef]
B10-universe-3078783
Wilson, A.S.; Colbert, E.J.M. The difference between radio-loud and radio-quiet active galaxies. Astrophys. J. 1995, 438, 62. [https://doi.org/10.1086/175054CrossRef]
B11-universe-3078783
Sikora, M.; Stawarz, L.; Lasota, J.-P. Radio Loudness of Active Galactic Nuclei: Observational Facts and Theoretical Implications. Astrophys. J. 2007, 658, 815. [https://doi.org/10.1086/511972CrossRef]
B12-universe-3078783
Garofalo, D.; Evans, D.A.; Sambruna, R.M. The evolution of radio-loud active galactic nuclei as a function of black hole spin. Mon. Not. R. Astron. Soc. 2010, 406, 975. [https://doi.org/10.1111/j.1365-2966.2010.16797.xCrossRef]
B13-universe-3078783
Ferrarese, L.; Merritt, D. A Fundamental Relation between Supermassive Black Holes and Their Host Galaxies. Astrophys. J. 2000, 539, L9. [https://doi.org/10.1086/312838CrossRef]
B14-universe-3078783
Mehdipour, M.; Costantini, E. Relation between winds and jets in radio-loud AGN. Astron. Astrophys. 2019, 625, A25. [https://doi.org/10.1051/0004-6361/201935205CrossRef]
B15-universe-3078783
Garofalo, D. Resolving the Radio-loud/Radio-quiet Dichotomy without Thick Disks. Astrophys. J. 2019, 876, L20. [https://doi.org/10.3847/2041-8213/ab1be3CrossRef]
B16-universe-3078783
Taylor, L.; Bezanson, R.; van der Wel, A.; Pearl, A.; Bell, E.F.; D'Eugenio, F.; Franx, M.; Maseda, M.V.; Muzzin, A.; Sobral, D.; et al. The Velocity Dispersion Function for Massive Quiescent and Star-forming Galaxies at 0.6 < z ≤ 1.0. Astrophys. J. 2022, 939, 90.
B17-universe-3078783
Catalán-Torrecilla, C.; De Paz, A.G.; Castillo-Morales, A.; Méndez-Abreu, J.; Falcón-Barroso, J.; Bekeraite, S.; Costantin, L.; de Lorenzo-Cáceres, A.; Florido, E.; García-Benito, R.; et al. Star Formation in the Local Universe from the CALIFA Sample. II. Activation and Quenching Mechanisms in Bulges, Bars, and Disks. Astrophys. J. 2017, 848, 87. [https://doi.org/10.3847/1538-4357/aa8a6dCrossRef]
B18-universe-3078783
Falcón-Barroso, J.; Lyubenova, M.; Van De Ven, G.; Méndez-Abreu, J.; Aguerri, J.A.L.; García-Lorenzo, B.; Bekeraité, S.; Sánchez, S.F.; Husemann, B.; García-Benito, R.; et al. Stellar kinematics across the Hubble sequence in the CALIFA survey: General properties and aperture corrections. Astron. Astrophys. 2017, 597, A48. [https://doi.org/10.1051/0004-6361/201628625CrossRef]
B19-universe-3078783
Comerford, J.; Negus, J.; Müller-Sánchez, F.; Eracleous, M.; Wylezalek, D.; Storchi-Bergmann, T.; Greene, J.E.; Barrows, R.S.; Nevin, R.; Roy, N.; et al. A catalog of 406 AGNs in MaNGA: A connection between radio-mode AGNs and star formation quenching. Astrophys. J. 2020, 901, 159. [https://doi.org/10.3847/1538-4357/abb2aeCrossRef]
B20-universe-3078783
Stacey, H.R.; McKean, J.P.; Jackson, N.J.; Best, P.N.; Rivera, G.C.; Callingham, J.R.; Duncan, K.L.; Gürkan, G.; Hardcastle, M.J.; Iacobelli, M.; et al. LoTSS/HETDEX: Disentangling star formation and AGN activity in gravitationally lensed radio-quiet quasars. Astron. Astrophys. 2019, 622, 18. [https://doi.org/10.1051/0004-6361/201833967CrossRef]
B21-universe-3078783
Zakamska, N.; Lampayan, K.; Petric, A.; Dicken, D.; Greene, J.E.; Heckman, T.M.; Hickox, R.C.; Ho, L.C.; Krolik, J.H.; Nesvadba, N.P.H.; et al. Star formation in quasar hosts and the origin of radio emission in radio-quiet quasars. Mon. Not. R. Astron. Soc. 2016, 455, 4191. [https://doi.org/10.1093/mnras/stv2571CrossRef]
B22-universe-3078783
Graham, A.W.; Driver, S.P. A log-quadratic relation for predicting supermassive black hole masses from the host bulge Sérsic index. Astrophys. J. 2007, 655, 77. [https://doi.org/10.1086/509758CrossRef]
B23-universe-3078783
Graham, A.W.; Driver, S.P. A concise reference to (projected) Sérsic R^{1/n} quantities, including concentration, profile slopes, Petrosian indices, and Kron magnitudes. Publ. Astron. Soc. Aust. 2005, 22, 118. [https://doi.org/10.1071/AS05001CrossRef]
B24-universe-3078783
Plsek, T.; Werner, N.; Grossova, R.; Topinka, M.; Simionescu, A.; Allen, S.W. The relation between accretion rate and jet power in early-type galaxies with thermally unstable hot atmospheres. Mon. Not. R. Astron. Soc. 2022, 517, 3682. [https://doi.org/10.1093/mnras/stac2770CrossRef]
B25-universe-3078783
Di Matteo, T.; Quataert, E.; Allen, S.W.; Narayan, R.; Fabian, A.C. Low radiative efficiency accretion in the nuclei of elliptical galaxies. Mon. Not. R. Astron. Soc. 2000, 311, 507. [https://doi.org/10.1046/j.1365-8711.2000.03134.xCrossRef]
B26-universe-3078783
Birkinshaw, M.; Davies, R.L. The orientations of the rotation axes of radio galaxies. I-Radio morphologies of bright elliptical galaxies. Astrophys. J. 1985, 291, 32. [https://doi.org/10.1086/163038CrossRef]
B27-universe-3078783
Wehrle, A.E.; Jones, D.L.; Meier, D.L.; Piner, B.G. The radio jets and accretion disk in NGC 4261. New Astron. Rev. 2002, 46, 235. [https://doi.org/10.1016/S1387-6473(01)00187-7CrossRef]
B28-universe-3078783
Amblard, A.; Riguccini, L.; Temi, P.; Im, S.; Fanelli, M.; Serra, P. Star formation bimodality in early-type galaxies. Astrophys. J. 2014, 783, 135. [https://doi.org/10.1088/0004-637X/783/2/135CrossRef]
B29-universe-3078783
Zezas, A.; Birkinshaw, M.; Worrall, D.M.; Peters, A.; Fabbiano, G. Chandra observations of NGC 4261 (3C 270): Revealing the jet and hidden active galactic nucleus in a Type 2 LINER. Astrophys. J. 2005, 627, 711. [https://doi.org/10.1086/430044CrossRef]
B30-universe-3078783
Ford, A.; Bregman, J.N. Detection of Ongoing, Low-Level Star Formation in Nearby Ellipticals. In American Astronomical Society Meeting Abstracts #219; American Astronomical Society: Washington, DC, USA, 2012; Volume 219.
B31-universe-3078783
Auger, M.W.; Fassnacht, C.D.; Wong, K.C.; Thompson, D.; Matthews, K.; Soifer, B.T. Lens Galaxy Properties of SBS 1520+ 530: Insights from Keck Spectroscopy and AO Imaging. Astrophys. J. 2008, 673, 778. [https://doi.org/10.1086/524351CrossRef]
B32-universe-3078783
Gordon, Y.A.; Pimbblet, K.A.; Kaviraj, S.; Owers, M.S.; O’Dea, C.P.; Walmsley, M.; Baum, S.A.; Crossett, J.P.; Fraser-McKelvie, A.; Lintott, C.J.; et al. The effect of minor and major mergers on the evolution of low-excitation radio galaxies. Astrophys. J. 2019, 878, 88. [https://doi.org/10.3847/1538-4357/ab203fCrossRef]
B33-universe-3078783
Pearson, W.J.; Santos, D.J.D.; Goto, T.; Huang, T.-C.; Kim, S.J.; Matsuhara, H.; Pollo, A.; Ho, S.C.-C.; Hwang, H.S.; Małek, K.; et al. Effects of galaxy environment on merger fraction. arXiv 2024, arXiv:2403.11615v1. [https://doi.org/10.1051/0004-6361/202349034CrossRef]
B34-universe-3078783
Milosavljevi’c, M.; Merritt, D. Formation of galactic nuclei. Astrophys. J. 2001, 563, 34. [https://doi.org/10.1086/323830CrossRef]
B35-universe-3078783
Nixon, C.J.; Cossins, P.J.; King, A.R.; Pringle, J.E. Retrograde accretion and merging supermassive black holes. Mon. Not. R. Astron. Soc. 2011, 412, 1591. [https://doi.org/10.1111/j.1365-2966.2010.17952.xCrossRef]
B36-universe-3078783
Bardeen, J.M.; Petterson, J.A. The Lense-Thirring effect and accretion disks around Kerr black holes. Astrophys. J. Lett. 1975, 195, L65. [https://doi.org/10.1086/181711CrossRef]
B37-universe-3078783
Garofalo, D.; Joshi, R.; Yang, X.; Singh, C.B.; North, M.; Hopkins, M. A unified framework for X-shaped radio galaxies. Astrophys. J. 2020, 889, 91. [https://doi.org/10.3847/1538-4357/ab64e3CrossRef]
B38-universe-3078783
Lense, J.R.; Thirring, H. Über den Einfluß der Eigenrotation der Zentralkörper auf die Bewegung der Planeten und Monde nach der Einsteinschen Gravitationstheorie. Phys. Z. 1918, 19, 156.
B39-universe-3078783
Fragile, P.C.; Mathews, G.J.; Wilson, J.R. Bardeen-Petterson effect and quasi-periodic oscillations in x-ray binaries. Astrophys. J. 2001, 553, 955. [https://doi.org/10.1086/320990CrossRef]
B40-universe-3078783
King, A.R.; Lubow, S.H.; Ogilvie, G.I.; Pringle, J.E. Aligning spinning black holes and accretion discs. Mon. Not. R. Astron. Soc. 2005, 363, 49. [https://doi.org/10.1111/j.1365-2966.2005.09378.xCrossRef]
B41-universe-3078783
Lodato, G.; Pringle, J.E. The evolution of misaligned accretion discs and spinning black holes. Mon. Not. R. Astron. Soc. 2006, 368, 1196. [https://doi.org/10.1111/j.1365-2966.2006.10194.xCrossRef]
B42-universe-3078783
Martin, R.G.; Pringle, J.E.; Tout, C.A. Alignment and precession of a black hole with a warped accretion disc. Mon. Not. R. Astron. Soc. 2007, 381, 1617. [https://doi.org/10.1111/j.1365-2966.2007.12349.xCrossRef]
B43-universe-3078783
Garofalo, D.; Moravec, E.; Macconi, D.; Singh, C.B. Is jet re-orientation the elusive trigger for star formation suppression in radio galaxies? Publ. Astron. Soc. Pac. 2022, 134, 114101. [https://doi.org/10.1088/1538-3873/ac9714CrossRef]
B44-universe-3078783
Kalfountzou, E.; Stevens, J.A.; Jarvis, M.J.; Hardcastle, M.J.; Smith, D.J.B.; Bourne, N.; Dunne, L.; Ibar, E.; Eales, S.; Ivison, R.J.; et al. Herschel-ATLAS: Far-infrared properties of radio-loud and radio-quiet quasars. Mon. Not. R. Astron. Soc. 2014, 442, 1181. [https://doi.org/10.1093/mnras/stu782CrossRef]
B45-universe-3078783
Zinn, P.-C.; Middelberg, E.; Norris, R.P.; Dettmar, R.-J. Active galactic nucleus feedback works both ways. Astrophys. J. 2013, 774, 66. [https://doi.org/10.1088/0004-637X/774/1/66CrossRef]
B46-universe-3078783
Nesvadba, N.P.H.; Boulanger, F.; Salomé, P.; Guillard, P.; Lehnert, M.D.; Ogle, P.; Appleton, P.; Falgarone, E.; Forets, G.P.D. Energetics of the molecular gas in the H2 luminous radio galaxy 3C 326: Evidence for negative AGN feedback. Astron. Astrophys. 2010, 521, A65. [https://doi.org/10.1051/0004-6361/200913333CrossRef]
B47-universe-3078783
Nesvadba, N.P.H.; Lehnert, M.D.; Eisenhauer, F.; Gilbert, A.; Tecza, M.; Abuter, R. Extreme Gas Kinematics in the z = 2.2 Powerful Radio Galaxy MRC 1138–262: Evidence for Efficient Active Galactic Nucleus Feedback in the Early Universe? Astrophys. J. 2006, 650, 693. [https://doi.org/10.1086/507266CrossRef]
B48-universe-3078783
Pawlik, A.H.; Schaye, J. Photoheating and supernova feedback amplify each other’s effect on the cosmic star formation rate. Mon. Not. R. Astron. Soc. 2009, 396, L46. [https://doi.org/10.1111/j.1745-3933.2009.00659.xCrossRef]
B49-universe-3078783
Morganti, R. The many faces of the gas in Centaurus A (NGC 5128). Publ. Astron. Soc. Aust. 2010, 27, 463. [https://doi.org/10.1071/AS09076CrossRef]
B50-universe-3078783
Breda, I.; Papaderos, P.; Gomes, J.M.; Vílchez, J.M.; Ziegler, B.L.; Hirschmann, M.; Cardoso, L.S.M.; Lagos, P.; Buitrago, F. Stellar age gradients and inside-out star formation quenching in galaxy bulges. Astron. Astrophys. 2020, 635, A177. [https://doi.org/10.1051/0004-6361/201937193CrossRef]
B51-universe-3078783
Garofalo, D. Counter-rotating black holes from FRII lifetimes. Front. Astron. Space Sci. 2023, 10, 1123209. [https://doi.org/10.3389/fspas.2023.1123209CrossRef]
| Understanding feeding and feedback from black holes has been a cornerstone of extragalactic astronomy since radio quasars were detected and interpreted in the 1960s. Models for accreting black holes date back to the 1970s <cit.>, with those for AGN coming into their own by the 1990s <cit.>. Despite some success, the most widely accepted paradigm for astrophysical black holes has yet to explain a number of observations: AGN with jets are the minority; powerful jetted AGN live earlier, while bright, jetless AGN peak later; jetted AGN have more massive black holes; and merger signatures in jetted/non-jetted AGN do not unambiguously point to a trigger. FRI radio galaxies have the most massive black holes while FRII radio galaxies have less massive ones. FRII high-excitation radio galaxies (HERG) are at higher redshift, FRII low-excitation radio galaxies (LERG) are at intermediate redshift, and FRI LERG are at lower redshift. The key missing element that allows a simultaneous understanding of all the above is counter-rotation between accretion disks and black holes <cit.>. With this idea, we have not only addressed the above issues but have recently been able to explain the details of the M–σ relations between black hole mass and stellar velocity dispersion <cit.> and the lifetimes of radio quasar jets. This model is compatible with the jet/wind scenario discussed in <cit.>, as explained in <cit.>. Here, we connect the relation among star formation rate, stellar velocity dispersion, core stellar brightness, excitation level, and merger signatures. In sect:sec2dot1-universe-3078783, we describe the data, and in sect:sec2dot2-universe-3078783, we describe the theory that explains it. In sect:sec3-universe-3078783, we conclude.
In this section, we begin by deriving a proxy for the average, or overall, behavior of star formation rates as a function of galaxy-bulge stellar velocity dispersion at late times in the universe (i.e., past the peak of quasars at a redshift of 2). To accomplish this, we extract the densities of star-forming galaxies (tabref:universe-3078783-t001) and of quiescent galaxies (tabref:universe-3078783-t002) from the velocity dispersion functions (VDF) of <cit.>, which are given in terms of logarithms. Taking the antilogarithms of the density values in <ref>, i.e., computing 10 raised to the logarithmic densities given in <cit.>, reproduces the VDF of <cit.>. With these densities, we construct the ratio of star-forming galaxy density to quiescent galaxy density (tabref:universe-3078783-t003) and consider it as a proxy for the average rate of star formation between redshifts of 0.6 and 1. We show the anti-correlation between the tabref:universe-3078783-t003 data and stellar velocity dispersion in fig:universe-3078783-f001. This is strong evidence that galaxies with high stellar velocity dispersions tend to have low star formation rates <cit.>.
In addition to the inverse relation between star formation rate and stellar velocity dispersion, the evidence associates low star formation rates with the jetted AGN subgroup and higher star formation rates with the non-jetted AGN subgroup. <cit.> evaluated the median star formation rate for MaNGA galaxies hosting both jetted and non-jetted AGN, finding the non-jetted AGN to have a median star formation rate of 0.631 solar masses per year while the jetted AGN have a median star formation rate of 0.025 solar masses per year. This is a factor of 25 difference in the star formation rate between the two AGN subclasses. The star formation rates for the radio-quiet quasars of <cit.> are between 20 and 300 solar masses/year. <cit.> found their sample of 160 radio-quiet quasars to have an average star formation rate of 30 solar masses/year.
In addition to star formation and velocity dispersion, we show evidence for a larger stellar component in the nucleus of powerful, low-excitation FRI radio galaxies (i.e., exhibiting weak emission lines) as compared to radio-quiet quasars. We look at the data from <cit.>, pick out the elliptical galaxies in their sample with the largest Sersic indices and with jets, and report them in tabref:universe-3078783-t004. The average Sersic index is n = 9.7. The average Sersic index for the disk galaxies of <cit.> is 3.3. The Sersic index n enters the intensity profile as a function of distance r as (see <cit.> for details)
I(r) = I_e exp{-b_n[(r/r_e)^{1/n} - 1]} .
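To make the contrast between the jetted ellipticals (average n = 9.7) and the disk galaxies (average n = 3.3) concrete, the short Python sketch below evaluates the Sersic profile at a radius well inside the effective radius r_e; the approximation b_n ~ 2n - 1/3 and the chosen radius are illustrative assumptions, not values taken from the cited data.

```python
import numpy as np

def sersic(r, r_e, n, I_e=1.0):
    """Sersic surface-brightness profile, normalized so that I(r_e) = I_e.
    Uses the common approximation b_n ~ 2n - 1/3 (adequate for n > 1)."""
    b_n = 2.0 * n - 1.0 / 3.0
    return I_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

r = 0.05  # radius in units of r_e, i.e. well inside the effective radius
core_heavy = sersic(r, r_e=1.0, n=9.7)  # average jetted elliptical in Table 4
disk_like = sersic(r, r_e=1.0, n=3.3)   # average disk galaxy of the comparison sample
print(core_heavy / disk_like)           # the n = 9.7 profile is several times brighter here
```

Relative to its own effective-radius brightness, the high-n profile is far more centrally concentrated, which is the sense in which these FRI hosts have brighter nuclear stellar cores.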
NGC 1399, the dominant galaxy in the Fornax Cluster, has a black hole just under a billion solar masses, an FRI jet, an estimated Bondi accretion rate of 9.1 × 10^-3 solar masses/year <cit.>, and a low rate of star formation <cit.>. NGC 4261 is an elliptical galaxy in the Virgo Cluster with a black hole about 1.67 billion solar masses <cit.>, a visible two-sided FRI jet <cit.>, a low star formation rate <cit.>, a stellar velocity dispersion of 297 km/s <cit.>, and a low excitation nucleus <cit.>, with an estimated Bondi accretion rate of 6 × 10^-2 solar masses per year <cit.>. NGC 4374 is an elliptical galaxy in the Virgo Cluster with a black hole of 0.93 billion solar masses <cit.>, a low rate of star formation <cit.>, an estimated Bondi accretion rate of 2.4 × 10^-3 solar masses per year <cit.>, and a stellar velocity dispersion of 278 km/s <cit.>. NGC 4486 has a low star formation rate <cit.>, an estimated Bondi accretion rate of 0.13 solar masses per year <cit.>, and a stellar velocity dispersion of 323 km/s <cit.>. We do not discuss NGC 6251 with a Sersic index of 11.8 because it appears to be a giant radio galaxy, which is modeled in our paradigm as a system that experiences a second bout of counter-rotation. It also preserves its nuclear stellar component but requires further discussion beyond this work. Radio-quiet quasars, by comparison, appear to have higher star formation rates, lower velocity dispersions, higher excitation signatures, and dimmer nuclear stellar cores. The star formation rate of the radio-quiet quasar SBS 1520 + 530, for example, is measured to be 190 solar masses/year <cit.>, while its Sersic index is only 2.7 <cit.>. Finally, the merger signatures in FRI radio galaxies are fewer than in other AGN <cit.>, and even more intriguing is the finding that the signatures of mergers decrease with increasing galactic stellar velocity dispersion <cit.>. We place these objects and their properties in tabref:universe-3078783-t004. In the next section, we describe the theoretical context that allows for an understanding of how these five characteristics come about in this particular subclass of AGN.
§.§ Theory
In this section, we use a simple model schematic to show how the formation and evolution of radio quasars string together the five characteristics described in the previous section, namely,
* bright nuclear stellar cores
* low excitation accretion
* high stellar velocity dispersion
* low star formation rates
* weak merger signatures
a. bright nuclear stellar cores
Our theoretical picture is anchored to the idea of counter-rotation between a spinning black hole and an accretion disk, a configuration that results from a merger of two galaxies. Each galaxy provides a supermassive black hole that eventually coalesces. In order for this to occur, the black hole binary must rid itself of its binary angular momentum such that the black holes are well within a pc of each other, where gravitational waves can complete the merger. Because a corotating disk struggles to bring the binary within 1 pc <cit.>, nearby stars are needed to further eliminate the binary angular momentum, and this leads to a depleted stellar core. If the disk is counter-rotating with respect to the binary black hole angular momentum, on the other hand, the accretion disk is effective in extracting the binary angular momentum <cit.>, and the black holes can merge without depleting their stellar cores. This is illustrated in fig:universe-3078783-f002. As we will show, the active galaxies triggered by counter-rotating accretion will tend to become the red-and-dead FRI radio galaxies which are therefore prescribed by theory to have bright nuclear cores.
b. low excitation accretion
The counter-rotating accretion configuration is associated with an FRII jet <cit.>, whose effect is to create a funnel-like region through the ISM by pushing the cold gas away and enhancing the density. The higher-density region is shown in purple, while the less-dense funnel region is in grey in fig:universe-3078783-f003. The energy deposited in the hotspots circles back to affect the accretion process by feeding heated gas and altering the state of accretion to an advection-dominated accretion flow (ADAF). Such a transition requires a minimum timescale to develop and can occur during counter-rotation, but it dominates during the longest-lasting phase of such a radio galaxy, which follows the transition through zero black-hole spin <cit.>. This is captured by the change in color for the accretion disk from yellow in fig:universe-3078783-f003 to dirty blue in fig:universe-3078783-f004.
c. High stellar velocity dispersion
While the black hole spin has a non-zero value, the Bardeen–Petterson effect <cit.> applies, but it disappears when the black hole spin is zero and a new plane of accretion is formed that is determined by the angular momentum of the incoming gas <cit.> and which is responsible for the high stellar velocity dispersion as we now describe. The Bardeen–Petterson effect results from Lense–Thirring precession <cit.>, a general relativistic effect that ultimately forces the disk's angular momentum to be aligned or anti-aligned with the angular momentum of the black hole. While the Bardeen–Petterson effect is believed to operate continuously in X-ray binaries, where the donor star feeds the compact object with angular momentum that is misaligned with that of the compact object, such an effect operates in a restrictive manner in the subset of AGN that lead to tilted jets, as we now show. The timescale for alignment is given by <cit.> as
T_align = 3aM (2R_GR/R_BP)^{1/2} / (dM/dt)
where a is the dimensionless black hole spin, M is the black hole mass, R_GR is the gravitational radius, and R_BP is the Bardeen–Petterson radius given as a function of black hole spin as
R_BP = A a^{2/3} R_GR
where A is a constant, and dM/dt is the accretion rate onto the black hole. While this expression was derived for corotation, it has been shown that the timescale for counter-rotation is the same <cit.>. This expression tends to zero as the black hole spin approaches zero. According to our model prescription (<cit.> and fig:universe-3078783-f004), however, the accretion rate drops as the transition through zero spin occurs, which has the effect of increasing the alignment timescale. We model the timescale for an accreting black hole that approaches the boundary between thin disk and ADAF prior to reaching zero spin (fig:universe-3078783-f005). As the black hole approaches zero spin in the counter-rotating regime, it initially experiences a jump in the alignment timescale due to the decrease in accretion rate (fig:universe-3078783-f006), which is about a factor of 25 larger than the alignment timescale of the post-merger black hole that accretes at the Eddington rate. A counter-rotating black hole spins down to zero spin from an initially maximally spinning black hole in about 8 × 10^6 years if it accretes at the Eddington limit. Because the alignment timescale is longer than this <cit.>, counter-rotating black holes tend to have inner disks that are anti-aligned with the black hole angular momentum and outer disks beyond the Bardeen–Petterson radius that are tilted. This will last until the black hole is close to zero black hole spin. Once zero spin is reached, the Bardeen–Petterson effect disappears, the accretion disk forms in the plane determined by the angular momentum of the incoming gas from the greater galaxy, and the alignment timescale becomes irrelevant beyond that time. We therefore only show the alignment timescale as a function of spin during the counter-rotating regime. This bump in the alignment time is larger for the most powerful radio quasars, whose feedback effect rapidly transitions their thin disks into ADAFs. We therefore expect this effect to dominate in denser environments. Our model thus prescribes that FRII radio quasars/radio galaxies possess Bardeen–Petterson transition radii and that FRI radio galaxies do not. In other words, only in post-mergers will the accreting black holes have this feature. While this applies to radio-quiet quasars as well, we are here concerned with the jetted AGN subclass. Because radio quasars are modeled as counter-rotating black holes, they are relatively shorter-lived, at least compared to FRI radio galaxies. The observational consequences of this prediction remain unexplored.
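As a rough illustration of why the drop in accretion rate produces a jump in the alignment timescale, the sketch below combines the two expressions above, which give T_align proportional to a^{2/3}/(dM/dt) up to constants; the spin value and the post-transition accretion rate used here are illustrative assumptions rather than model outputs.

```python
def t_align_scaling(a, mdot_edd):
    """Alignment timescale up to an arbitrary constant prefactor.
    Combining T_align = 3 a M (2 R_GR / R_BP)^(1/2) / (dM/dt) with
    R_BP = A a^(2/3) R_GR gives T_align proportional to a^(2/3) / mdot."""
    return a ** (2.0 / 3.0) / mdot_edd

# A post-merger hole accreting at the Eddington rate versus the same hole after
# jet feedback has pushed the disk toward an ADAF (illustrative 4% of Eddington).
t_thin = t_align_scaling(a=0.5, mdot_edd=1.0)
t_adaf = t_align_scaling(a=0.5, mdot_edd=0.04)
print(t_adaf / t_thin)  # = 25 for this assumed drop, of the order of the jump quoted above
```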
When the black hole spins up again beyond the zero black hole spin in corotation and a jet is again formed, it will be tilted with respect to the earlier FRII jet phase, as shown in fig:universe-3078783-f004. This has a direct impact on the stars in the bulge, which are pushed around and acquire an enhancement in their velocity dispersion (shown by red arrows). Note that the jet is drawn thicker as compared to the jet shown in fig:universe-3078783-f003. This is due to the fact that the jet in fig:universe-3078783-f004 enters a denser region as compared to the FRII jet of fig:universe-3078783-f003. Here we see an explanation for the different jet morphology. The FRI jet morphology is due to both an environmental effect (greater density) and an engine-based effect (its tilt).
d. low star formation rates
The absence of the Bardeen–Petterson effect and the tilted jet that develops in corotation will make it difficult to drill through the denser medium (fig:universe-3078783-f004), and this leads to the FRI morphology, which deposits its energy in the ISM, heating the gas and impeding star formation. This effect on star formation is referred to as the Roy Conjecture <cit.>. As the star formation rate drops, the galaxy experiences increasingly fewer new stars and an aging of the existing stars, which gives the galaxy the character of an old red-and-dead elliptical galaxy. The old stars are drawn in red in fig:universe-3078783-f004, and the hot halo is shown in green.
e. weak merger signatures
A powerful FRII jet that rapidly transitions its accretion disk to an ADAF in a few million years will accrete at 10^-2 the Eddington limit, which means that it will spin its black hole down to zero spin in about 5 × 10^8 years. In order to become an FRI radio galaxy, the black hole spin needs to reach a spin value of about 0.2, which, at the Eddington limit, takes about 20 million years. The accretion rate is at 10^-2 the Eddington limit, however, and continues to decrease over this time period, so the FRI jet comes into play no sooner than about 2 × 10^9 years after zero spin and lives as long as there is accretion fuel. Because of the low accretion rates at this late phase, FRI radio galaxies live billions of years in such states. Because merger signatures last about 100 million years, it is clear that for mature FRI radio galaxies like M87, no merger signatures are expected to survive. In general, the longer the FRI radio galaxy lives, the weaker the merger signatures become. Because the theory prescribes that the jet affects the stellar velocity dispersion, the longer the FRI jet exists, the larger the predicted stellar velocity dispersions. Therefore, the model makes the interesting prediction of an inverse relation between stellar velocity dispersion and merger signatures. If galactic velocity dispersion increases with time, an anti-correlation is generated between merger signatures and galactic stellar velocity dispersion as is found observationally <cit.>.
http://arxiv.org/abs/2409.17880v1 | 20240926142755 | Self-Distilled Depth Refinement with Noisy Poisson Fusion | [
"Jiaqi Li",
"Yiran Wang",
"Jinghong Zheng",
"Zihao Huang",
"Ke Xian",
"Zhiguo Cao",
"Jianming Zhang"
] | cs.CV | [
"cs.CV"
] |
§ ABSTRACT
Depth refinement aims to infer high-resolution depth with fine-grained edges and details, refining low-resolution results of depth estimation models. The prevailing methods adopt tile-based manners by merging numerous patches, which lacks efficiency and produces inconsistency. Besides, prior arts suffer from fuzzy depth boundaries and limited generalizability. Analyzing the fundamental reasons for these limitations, we model depth refinement as a noisy Poisson fusion problem with local inconsistency and edge deformation noises. We propose the Self-Distilled Depth Refinement (SDDR) framework to enforce robustness against the noises, which mainly consists of depth edge representation and edge-based guidance. With noisy depth predictions as input, SDDR generates low-noise depth edge representations as pseudo-labels by coarse-to-fine self-distillation. Edge-based guidance with the edge-guided gradient error and the edge-based fusion error serves as the optimization objective equivalent to Poisson fusion. When depth maps are better refined, the labels also become more noise-free. Our model can acquire strong robustness to the noises, achieving significant improvements in accuracy, edge quality, efficiency, and generalizability on five different benchmarks. Moreover, directly training another model with edge labels produced by SDDR brings improvements, suggesting that our method could help with training robust refinement models in future works.
§ INTRODUCTION
Depth refinement infers high-resolution depth with accurate edges and details, refining the low-resolution counterparts from depth estimation models <cit.>. With increasing demands for high resolutions in modern applications, depth refinement becomes a prerequisite for virtual reality <cit.>, bokeh rendering <cit.>, and image generation <cit.>. The prevailing methods <cit.> adopt two-stage tile-based frameworks. Based on the one-stage refined depth of the whole image, they merge high-frequency details by fusing extensive patches with complex patch selection strategies. However, numerous patches lead to heavy computational costs. Besides, as in <ref> (a), excessive integration of local information leads to inconsistent depth structures, e.g., the disrupted billboard.
Apart from efficiency and consistency, depth refinement <cit.> is restricted by noisy and blurred depth edges. Highly accurate depth annotations with meticulous boundaries are necessary to enforce fine-grained details. For this reason, prior arts <cit.>
only use synthetic datasets <cit.> for the highly accurate depth values and edges. However, synthetic data falls short of the real world in realism and diversity, causing limited generalizability with blurred depth and degraded performance on in-the-wild scenarios.
Some attempts <cit.> simply adopt natural-scene datasets <cit.> for the problem. The varying characteristics of real-world depth annotations, e.g., sparsity <cit.>, inaccuracy <cit.>, or blurred edges <cit.>, make them infeasible for supervising refinement models. Thus, GBDF <cit.> uses depth predictions <cit.> as pseudo-labels, while Boost <cit.> leverages adversarial training <cit.> as guidance. Those inaccurate pseudo-labels and guidance still lead to blurred edges as shown in <ref> (a). The key problem is to alleviate the noise of depth boundaries by constructing accurate edge representations and guidance.
To tackle these challenges, we dig into the underlying reasons for the limitations, instead of the straightforward merging of local details. We model depth refinement as a noisy Poisson fusion problem, decoupling depth prediction errors into two degradation components: local inconsistency noise and edge deformation noise. We use regional linear transformation perturbation as the local inconsistency noise to measure inconsistent depth structures. The edge deformation noise represents fuzzy boundaries with Gaussian blur. Experiments in <ref> showcase that the noises can effectively depict general depth errors, serving as our basic principle to improve refinement results.
In pursuit of the robustness against the local inconsistency and edge deformation noises, we propose the Self-Distilled Depth Refinement (SDDR) framework, which mainly consists of depth edge representation and edge-based guidance. A refinement network is considered as the Poisson fusion operator, recovering high-resolution depth from noisy predictions of depth models <cit.>. Given the noisy input, SDDR can generate low-noise and accurate depth edge representations as pseudo-labels through coarse-to-fine self-distillation. The edge-based guidance, including the edge-guided gradient error and the edge-based fusion error, is designed as the optimization objective of Poisson fusion. When depth maps are better refined, the pseudo-labels also become more noise-free. Our approach establishes accurate depth edge representations and guidance, endowing SDDR with strong robustness to the two types of noises. Consequently, as shown in <ref> (b), SDDR significantly outperforms prior arts <cit.> in depth accuracy and edge quality. Besides, without merging numerous patches as the two-stage tile-based methods <cit.>, SDDR achieves much higher efficiency.
We conduct extensive experiments on five benchmarks. SDDR achieves state-of-the-art performance on the commonly-used Middlebury2021 <cit.>, Multiscopic <cit.>, and Hypersim <cit.>. Meanwhile, since SDDR can establish self-distillation with accurate depth edge representation and guidance on natural scenes, the evaluations on the in-the-wild DIML <cit.> and DIODE <cit.> datasets showcase our superior generalizability. Analytical experiments demonstrate that these noticeable improvements essentially arise from the strong robustness to the noises. Furthermore, the precise depth edge labels produced by SDDR can be directly used to train another model <cit.> and yield improvements, which indicates that our method could help with training robust refinement models in future works.
In summary, our main contributions can be summarized as follows:
∙ We model the depth refinement task through the noisy Poisson fusion problem, with local inconsistency noise and edge deformation noise as two types of depth degradation.
∙ We present the robust and efficient Self-Distilled Depth Refinement (SDDR) framework, which can generate accurate depth edge representations by the coarse-to-fine self-distillation paradigm.
∙ We design the edge-guided gradient error and the edge-based fusion error as the edge-based guidance to enforce the model with both consistent depth structures and meticulous depth edges.
§ RELATED WORK
Depth Refinement Models. Depth refinement refines low-resolution depth from depth estimation models <cit.>, predicting high-resolution depth with fine-grained edges and details. Existing methods <cit.> can be categorized into one-stage <cit.> and two-stage <cit.> frameworks. One-stage methods <cit.> conduct global refinement of the whole image, which could produce blurred depth edges and details. To further enhance local details, based on the globally refined results, the prevailing refinement approaches <cit.> adopt the two-stage tile-based manner by selecting and merging numerous patches. For example, Boost <cit.> proposes a complex patch-sampling strategy based on the gradients of input images. PatchFusion <cit.> improves the sampling by shifted and tidily arranged tile placement. However, the massive patches lead to low efficiency. The excessive local information produces inconsistent depth structures or even artifacts. In this paper, we propose the Self-Distilled Depth Refinement (SDDR) framework, which can predict both consistent structures and accurate details with much higher efficiency by tackling the noisy Poisson fusion problem.
Depth Refinement Datasets. Depth datasets with highly accurate annotations and edges are necessary for refinement models. Prior arts <cit.> utilize CG-rendered datasets <cit.> for accurate depth, but the realism and diversity fail to match the real world. For instance, neither the UnrealStereo4K <cit.> nor the MVS-Synth <cit.> contains people, restricting the generalizability of refinement models. A simple idea for the problem is to leverage natural-scene data <cit.>. However, different annotation methods lead to varying characteristics, e.g., sparsity of LiDAR <cit.>, inaccurate depth of structured light <cit.>, and blurred edges of stereo matching <cit.>. To address the challenge, Boost <cit.> adopts adversarial training as guidance only with a small amount of accurately annotated real-world images. GBDF <cit.> employs depth predictions <cit.> with guided filtering <cit.> as pseudo-labels. Due to the inaccurate pseudo-labels and guidance, they <cit.> produce blurred edges and details. By contrast, SDDR constructs accurate depth edge representations and edge-based guidance for self-distillation, leading to fine-grained details and strong generalizability.
§ SDDR: SELF-DISTILLED DEPTH REFINEMENT
We present a detailed illustration of our Self-Distilled Depth Refinement (SDDR) framework. In <ref>, we introduce the noisy Poisson fusion problem to model the depth refinement task and provide an overview of our approach. SDDR mainly consists of depth edge representation and edge-based guidance, which will be described in <ref> and <ref> respectively.
§.§ Noisy Poisson Fusion
Problem Statement. Based on depth maps of depth prediction models, i.e., depth predictor 𝒩_d, depth refinement recovers high-resolution depth with accurate edges and details by the refinement network 𝒩_r. Some attempts in image super-resolution <cit.> and multi-modal integration <cit.> utilize Poisson fusion to merge features and restore details. Motivated by this, we propose to model depth refinement as a noisy Poisson fusion problem. The ideal depth D^*, with completely accurate depth values and precise depth edges, is unobtainable in the real world. A general depth prediction D, whether produced by 𝒩_d or 𝒩_r for an input image I, can be expressed as a noisy approximation of D^*:
D≈ D^* + ϵ_cons+ϵ_edge .
ϵ_cons and ϵ_edge denote the local inconsistency and edge deformation noises, which decouple depth prediction errors. Local inconsistency noise ϵ_cons represents inconsistent depth structures through regional linear transformation perturbation. Based on masked Gaussian blur, ϵ_edge showcases degradation and blurring of depth edges. Refer to Appendix <ref> for details of the noises. As in <ref>, depth errors can be depicted by combinations of ϵ_cons and ϵ_edge. Thus, considering 𝒩_r as a Poisson fusion operator, depth refinement can be defined as a noisy Poisson fusion problem:
D_0 = 𝒩_r (𝒩_d(L),𝒩_d(H)) ,
s.t. min_D_0,Ω∬ _Ω| ∇ D_0 - ∇ D^* | ∂Ω
+ ∬ _I-Ω| D_0 - D^*|∂Ω .
The refined depth of 𝒩_r is denoted as D_0. ∇ refers to the gradient operator. Typically for the depth refinement <cit.> task, the input image I is resized to a low-resolution L and a high-resolution H for 𝒩_d. Ω represents high-frequency areas, while I-Ω showcases low-frequency regions.
Motivation Elaboration. In practice, due to the inaccessibility of truly ideal depth, an approximation of D^* is required for training 𝒩_r. For this reason, the optimization objective in <ref> is divided into Ω and I-Ω. For the low-frequency I-Ω, D^* can be simply represented by the ground truth D^*_gt of training data. However, as illustrated in <ref>, depth annotations inevitably suffer from imperfect edge quality for the high-frequency Ω. It is essential to generate accurate approximations of ideal depth boundaries as training labels, which are robust to ϵ_cons and ϵ_edge. Some prior arts adopt synthetic depth <cit.> for its higher edge quality, which leads to limited generalization capability with blurred predictions in real-world scenes. To leverage real depth data <cit.>, GBDF <cit.> employs depth predictions <cit.> with a guided filter as pseudo-labels, which still contain significant noises and result in blurred depth. Besides, the optimization of Ω is also ignored. Kim et al. <cit.> relies on manually annotated Ω regions as input. GBDF <cit.> omits the selection of Ω and supervises depth gradients on the whole image. Inaccurate approximations of ∇ D^* and inappropriate division of Ω lead to limited robustness to the local inconsistency and edge deformation noises.
Method Overview. To address the challenges, as shown in <ref>, we propose our framework with two main components: depth edge representation and edge-based guidance. To achieve low-noise approximations of ∇ D^*, we construct the depth edge representation G_s through coarse-to-fine self-distillation, where s∈{1,2,⋯ ,S} refers to iteration numbers. The input image is divided into several windows with overlaps from coarse to fine. For instance, we denote the high-frequency area of a certain window w in iteration s as Ω_s^w, and the refined depth of 𝒩_r as D_s^w. In this way, the self-distilled optimization of depth edge representation G_s can be expressed as follows:
D_s^w ≈ D^* +ϵ_cons+ϵ_edge ,
min_G_s∑_w∬ _Ω_s^w| G_s^w - ∇ D_s^w | ∂Ω_s^w .
During training, the depth edge representation G_s^w is further optimized based on the gradient of the current refined depth D_s^w. The final edge representation G_S of the whole image will be utilized as the pseudo-label to supervise the refinement network 𝒩_r after S iterations. SDDR can generate low-noise and robust edge representations, mitigating the impact of ϵ_cons and ϵ_edge (more results in Appendix <ref>).
With G_S as the training label, the next step is to enforce 𝒩_r with robustness to the noises, achieving consistent structures and meticulous boundaries. To optimize 𝒩_r, we propose edge-based guidance as an optimization objective equivalent to the noisy Poisson fusion problem, which is presented by:
min_D0,Ω∬ _Ω| ∇ D_0 - G_S | ∂Ω + ∬ _I-Ω| D_0 - D^*_gt|∂Ω .
For the second term over I-Ω, we adopt depth annotations D^*_gt as the approximation of D^*. For the first term, with the generated G_S as pseudo-labels of ∇ D^*, we propose the edge-guided gradient error and the edge-based fusion error to optimize D_0 and Ω predicted by 𝒩_r. The edge-guided gradient error supervises the model to consistently refine depth edges with local scale and shift alignment. The edge-based fusion error guides 𝒩_r to adaptively fuse low- and high-frequency features based on the learned soft region mask Ω, achieving balanced consistency and details by quantile sampling.
Overall, when depth maps are better refined under the edge-based guidance, the edge representation also becomes more accurate and noise-free with the carefully designed coarse-to-fine manner. The self-distillation paradigm can thus be naturally conducted based on the noisy Poisson fusion modeling, enforcing our model with strong robustness against the local inconsistency and edge deformation noises.
§.§ Depth Edge Representation
To build the self-distilled training paradigm, the prerequisite is to construct accurate and low-noise depth edge representations as pseudo-labels. Meticulous steps are designed to generate the representations with both consistent structures and accurate details.
Initial Depth Edge Representation. We generate an initial depth edge representation based on the global refinement results of the whole image. For the input image I, we obtain the refined depth results D_0 from 𝒩_r as in <ref>.
Depth gradient G_0=∇ D_0 is calculated as the initial representation. An edge-preserving filter <cit.> is applied on G_0 to reduce noises in low-frequency area I-Ω. With global information of the whole image, G_0 can preserve spatial structures and depth consistency. It also incorporates certain detailed information from the high-resolution input H. To enhance edges and details in high-frequency region Ω, we conduct coarse-to-fine edge refinement in the next step.
Coarse-to-fine Edge Refinement. The initial D_0 is then refined from coarse to fine over S iterations to generate the final depth edge representation. For a specific iteration s∈{1,2,⋯ ,S}, we uniformly divide the input image I into (s+1)^2 windows with overlaps. We denote a certain window w in iteration s of the input image I as I_s^w. The high-resolution H_s^w is then fed to the depth predictor 𝒩_d. D_s-1^w represents the depth refinement result of the corresponding window w in the previous iteration s-1. The refined depth D_s^w of window w in the current iteration s can be obtained by 𝒩_d and 𝒩_r as in <ref>:
D_s^w = 𝒩_r(D_s-1^w,𝒩_d(H_s^w)),
s ∈{1,2,⋯ ,S} ,
After that, the depth gradient ∇ D_s^w is used to update the depth edge representation. The coarse-to-fine manner achieves consistent spatial structures and accurate depth details with balanced global and regional information. In the refinement process, only limited iterations and windows are needed. Thus, SDDR achieves much higher efficiency than tile-based methods <cit.>, as shown in <ref>.
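A minimal sketch of this coarse-to-fine loop, assuming a partitioning into (s+1)^2 overlapping windows per iteration; the per-window refinement call stands in for the 𝒩_d and 𝒩_r passes, and the gradient bookkeeping and the alignment step described next are omitted here.

```python
import numpy as np

def split_windows(height, width, s, overlap=0.5):
    """Divide an image into (s+1)^2 overlapping windows for iteration s.
    Returns (top, left, win_h, win_w) tuples; overlap is a fraction of the window size."""
    n = s + 1
    win_h = min(height, int(np.ceil(height / n * (1 + overlap))))
    win_w = min(width, int(np.ceil(width / n * (1 + overlap))))
    tops = np.linspace(0, height - win_h, n).astype(int)
    lefts = np.linspace(0, width - win_w, n).astype(int)
    return [(t, l, win_h, win_w) for t in tops for l in lefts]

def coarse_to_fine_refine(depth0, refine_window, S=3):
    """depth0: globally refined depth D_0; refine_window: placeholder for one
    depth-predictor plus refinement-network pass on a window (returns refined depth)."""
    depth = depth0.copy()
    for s in range(1, S + 1):
        for (t, l, h, w) in split_windows(*depth.shape, s):
            depth[t:t + h, l:l + w] = refine_window(depth[t:t + h, l:l + w], s)
    return depth

# Toy usage with an identity "refiner" that leaves the depth unchanged.
refined = coarse_to_fine_refine(np.random.rand(256, 256), lambda patch, s: patch, S=3)
```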
Scale and Shift Alignment.
The windows are different among varied iterations. Depth results and edge labels on corresponding window w of consecutive iterations could be inconsistent in depth scale and shift. Therefore, alignment is required before updating the depth edge representation:
(β_1, β_0) = min_β_1, β_0‖ (β_1∇D_s^w + β_0) - G_s-1^w‖_2^2 ,
G_s^w = β_1∇D_s^w + β_0 ,
where β_1 and β_0 are affine transformation coefficients as scale and shift respectively. The aligned G_s^w represents the depth edge pseudo-labels for image patch I_s^w generated from the refined depth D_s^w. At last, after S iterations, we can obtain the pseudo-label G_S as the final depth edge representation for self-distillation. For better understanding, we showcase visualization of D_0, D_S, and G_S in <ref>.
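The alignment in the equation above is an ordinary least-squares fit with a closed-form solution; a small sketch, assuming the gradient maps are given as arrays:

```python
import numpy as np

def align_scale_shift(grad_new, grad_ref):
    """Solve (b1, b0) = argmin || b1 * grad_new + b0 - grad_ref ||_2^2 and
    return the aligned map b1 * grad_new + b0."""
    x = grad_new.reshape(-1)
    A = np.stack([x, np.ones_like(x)], axis=1)            # design matrix [grad_new, 1]
    b1, b0 = np.linalg.lstsq(A, grad_ref.reshape(-1), rcond=None)[0]
    return b1 * grad_new + b0

# Toy usage: a gradient map offset by a known scale and shift is mapped back onto the reference.
ref = np.random.rand(64, 64)
aligned = align_scale_shift(0.5 * ref - 0.1, ref)          # aligned is numerically close to ref
```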
Robustness to Noises. In each window, we merge high-resolution 𝒩_d(H_s^w) to enhance details and suppress ϵ_edge. Meanwhile, coarse-to-fine window partitioning and scale alignment mitigate ϵ_cons and bring consistency. Thus, G_S exhibits strong robustness to the two types of noises by self-distillation.
§.§ Edge-based Guidance
With the depth edge representation G_S as the pseudo-label for self-distillation, we propose the edge-based guidance, including the edge-guided gradient error and the edge-based fusion error, to supervise 𝒩_r.
Edge-guided Gradient Error.
We aim for fine-grained depth by one-stage refinement, while the two-stage coarse-to-fine manner can further improve the results. Thus, the edge-guided gradient error instructs the initial D_0 with the accurate G_S. Some problems need to be tackled for this purpose.
As 𝒩_r has not converged in the early training phase, G_S is not sufficiently reliable, with inconsistent scales and high-level noises between local areas. Therefore, we extract several non-overlapping regions P_n, n∈{1,2,⋯,N_g} with high gradient density by clustering <cit.>, where N_g represents the number of clustering centroids. The edge-guided gradient error is only calculated inside P_n with scale and shift alignment. By doing so, the model can focus on improving details in high-frequency regions and preserving depth structures in flat areas. The training process can also be more stable. The edge-guided gradient error can be calculated by:
ℒ_grad=1/N_g∑_n=1^N_g||(β_1G_0[P_n]+β_0)-G_S[P_n]||_1 ,
where β_1 and β_0 are the scale and shift coefficients, similar to <ref>. We use [·] to depict mask fetching operations, i.e., extracting the local area P_n from G_0 and G_S. With the edge-guided gradient error, SDDR predicts refined depth with meticulous edges and consistent structures.
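A sketch of how the edge-guided gradient error of the equation above could be evaluated, assuming the edge-dense region masks P_n have already been extracted (e.g., by the clustering procedure detailed in the appendix); this is an illustration rather than the exact training code.

```python
import numpy as np

def edge_guided_gradient_error(g0, gS, region_masks):
    """g0: gradient of the one-stage refined depth D_0; gS: edge pseudo-label G_S.
    region_masks: list of boolean masks, one per edge-dense region P_n."""
    losses = []
    for mask in region_masks:
        x, y = g0[mask], gS[mask]
        A = np.stack([x, np.ones_like(x)], axis=1)
        b1, b0 = np.linalg.lstsq(A, y, rcond=None)[0]    # per-region scale/shift alignment
        losses.append(np.mean(np.abs(b1 * x + b0 - y)))  # L1 error inside P_n
    return float(np.mean(losses))

# Toy usage with two square regions of a random 64x64 gradient map.
g0, gS = np.random.rand(64, 64), np.random.rand(64, 64)
masks = [np.zeros((64, 64), bool), np.zeros((64, 64), bool)]
masks[0][:32, :32], masks[1][32:, 32:] = True, True
print(edge_guided_gradient_error(g0, gS, masks))
```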
Edge-based Fusion Error. The high-resolution feature F_H extracted from H brings finer details but could lead to inconsistency, while the low-resolution feature F_L from L can better maintain depth structures. 𝒩_r should primarily rely on F_L for consistent spatial structures within the low-frequency I-Ω, while it should preferentially fuse F_H for edges and details in high-frequency areas Ω. The fusion of F_L and F_H noticeably influences the refined depth. However, prior arts <cit.> adopt manually-annotated Ω regions as fixed masks or even omit Ω as the whole image, leading to inconsistency and blurring. To this end, we implement Ω as a learnable soft mask, with a quantile sampling strategy to guide the adaptive fusion of F_L and F_H. The fusion process is expressed by:
F = (1-Ω) ⊙ F_L+Ω⊙ F_H ,
where ⊙ refers to the Hadamard product. Ω is the learnable mask ranging from zero to one. Larger values in Ω showcase higher frequency with denser edges, requiring more detailed information from the high-resolution feature F_H. Thus, Ω can naturally serve as the fusion weight of F_L and F_H.
To be specific, we denote the lower quantile of G_S as t_a, i.e., P(X<t_a|X ∈ G_S)=a. {G_S < t_a} indicates flat areas with low gradient magnitude, while {G_S > t_1-a} represents high-frequency regions. Ω should be larger in those high-frequency areas {G_S > t_1-a} and smaller in the flat regions {G_S < t_a}. This suggests that G_S and Ω should be synchronized with similar data distributions. Thus, if we define the lower quantile of Ω as T_a, i.e., P(X<T_a|X ∈Ω)=a, an arbitrary pixel i∈{G_S<t_a} in flat regions should also belong to {Ω<T_a} with a lower weight for F_H, while the pixel i∈{G_S>t_1-a} in high-frequency areas should be contained in {Ω>T_1-a} for more detailed information. The edge-based fusion error can be depicted as follows:
ℒ_fusion=1/N_wN_p∑_n=1^N_w∑_i=1^N_p{ max(0,Ω_i-T_n*a) if i∈{G_S<t_n*a} ; max(0,T_1-n*a-Ω_i) if i∈{G_S>t_1-n*a} } ,
where N_p is the pixel number. We supervise the distribution of Ω with lower quantiles T_n*a and T_1-n*a, n∈{1,2,⋯,N_w}. Therefore, pixels with larger deviations between G_S and Ω will be penalized more heavily. Taking the worst case as an example, if i∈{G_S<t_N_w*a} but i∉{Ω<T_N_w*a}, the error for the pixel will be accumulated N_w times, from a to N_w*a. ℒ_fusion enforces 𝒩_r with consistent structures (low ϵ_cons noise) in I-Ω and accurate edges (low ϵ_edge noise) in Ω. The visualizations of the quantile-sampled G_S and Ω are presented in <ref>.
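A sketch of this quantile-based supervision; the thresholds t and T are computed as quantiles of G_S and Ω respectively, the hyper-parameters follow the values given in the training recipe (a = 0.02, N_w = 4), and the exact normalization is a simplification of the equation above.

```python
import numpy as np

def edge_based_fusion_error(omega, gS, a=0.02, n_w=4):
    """omega: learnable soft mask in [0, 1]; gS: edge pseudo-label G_S (same shape).
    Penalizes pixels whose rank in omega disagrees with their rank in gS."""
    omega, gS = omega.reshape(-1), gS.reshape(-1)
    loss = 0.0
    for n in range(1, n_w + 1):
        q = n * a
        t_low, t_high = np.quantile(gS, q), np.quantile(gS, 1 - q)        # thresholds on G_S
        T_low, T_high = np.quantile(omega, q), np.quantile(omega, 1 - q)  # thresholds on omega
        flat, edgy = gS < t_low, gS > t_high
        loss += np.maximum(0.0, omega[flat] - T_low).sum()   # flat pixels should have small omega
        loss += np.maximum(0.0, T_high - omega[edgy]).sum()  # edge pixels should have large omega
    return loss / (n_w * omega.size)

print(edge_based_fusion_error(np.random.rand(64, 64), np.random.rand(64, 64)))
```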
Finally, combining ℒ_grad and ℒ_fusion as edge-based guidance for self-distillation, the overall loss ℒ for training 𝒩_r is calculated as <ref>. ℒ_gt supervises the discrepancy between D_0 and ground truth D^*_gt with affinity-invariant loss <cit.>. See Appendix <ref> for implementation details of .
ℒ=ℒ_gt+λ_1ℒ_grad+λ_2ℒ_fusion .
§ EXPERIMENTS
To prove the efficacy of the SDDR framework, we conduct extensive experiments on five benchmarks <cit.> covering indoor and outdoor, synthetic and real-world scenes.
Experiments and Datasets. Firstly, we follow prior arts <cit.> to conduct zero-shot evaluations on Middlebury2021 <cit.>, Multiscopic <cit.>, and Hypersim <cit.>. To showcase our superior generalizability, we compare different methods on DIML <cit.> and DIODE <cit.> with diverse natural scenes. Moreover, we prove the higher efficiency of SDDR and undertake ablations on our specific designs.
Evaluation Metrics.
Evaluations of depth accuracy and edge quality are necessary for depth refinement models. For edge quality, we adopt
the ORD and D^3R metrics following Boost <cit.>. For depth accuracy, we adopt the widely-used REL and δ_i (i=1,2,3). See Appendix <ref> for details.
§.§ Comparisons with Other Depth Refinement Approaches
Comparisons with One-stage Methods. For fair comparisons, we evaluate one-stage <cit.> and two-stage tile-based <cit.> approaches separately. The one-stage methods predict refined depth based on the whole image. SDDR conducts one-stage refinement without the coarse-to-fine manner during inference. Comparisons on Middlebury2021 <cit.>, Multiscopic <cit.>, and Hypersim <cit.> are shown in <ref>. Following prior arts <cit.>, we use three depth predictors: MiDaS <cit.>, LeReS <cit.>, and ZoeDepth <cit.>. Regardless of which depth predictor is adopted, SDDR outperforms the previous one-stage methods <cit.> in depth accuracy and edge quality on the three datasets <cit.>. For instance, our method shows 6.6% and 20.7% improvements over Kim et al. <cit.> for REL and ORD with MiDaS <cit.> on Middlebury2021 <cit.>, showing the efficacy of our self-distillation paradigm.
Comparisons with Two-stage Tile-based Methods. Two-stage tile-based methods <cit.> conduct local refinement on numerous patches based on the globally refined depth. SDDR moves away from the tile-based manner and utilizes coarse-to-fine edge refinement to further improve edges and details. As in <ref>, SDDR with the coarse-to-fine manner shows obvious advantages. For example, compared with the recent advanced PatchFusion <cit.>, SDDR achieves 5.2% and 26.7% improvements for δ_1 and ORD with ZoeDepth <cit.> on Hypersim <cit.>. Notably, PatchFusion <cit.> uses ZoeDepth <cit.> as the fixed baseline, whereas SDDR is readily pluggable for various depth predictors <cit.>.
Generalization Capability on Natural Scenes.
We prove the superior generalization capability of SDDR. In this experiment, we adopt LeReS <cit.> as the depth predictor. The DIML <cit.> and DIODE <cit.> datasets are used for zero-shot evaluations, considering their diverse in-the-wild indoor and outdoor scenarios. As in <ref>, SDDR shows at least 5.7% and 9.0% improvements for REL and ORD on DIODE <cit.>. On the DIML <cit.> dataset, our approach improves D^3R, ORD, and δ_1 by over 17.6%, 7.5%, and 2.0%. The convincing performance proves our strong robustness and generalizability, indicating the efficacy of our noisy Poisson fusion modeling and self-distilled training paradigm.
Qualitative Comparisons. We present visual comparisons of one-stage methods <cit.> on natural scenes in <ref>. With our low-noise depth edge representation and edge-based guidance, SDDR predicts sharper depth edges and details, e.g., the fine-grained predictions of intricate branches.
The visual results of two-stage approaches <cit.> are shown in <ref>. Due to the excessive fusion of detailed information, tile-based methods <cit.> produce structure disruption, depth inconsistency, or even noticeable artifacts, e.g., disrupted and fuzzy structures of the snow-covered branches. By contrast, SDDR can predict more accurate depth edges and more consistent spatial structures.
Robustness against noises. As in <ref>, we evaluate SDDR and GBDF <cit.> with different levels of input noises. As the noise level increases, our method presents less degradation. The stronger robustness against the ϵ_cons and ϵ_edge noises is the essential reason for all our superior performance.
Model Efficiency. SDDR achieves higher efficiency. Two-stage tile-based methods <cit.> rely on complex fusion of extensive patches with heavy computational overhead. Our coarse-to-fine manner noticeably reduces the FLOPs per patch and the number of patches, as in <ref>. Compared with one-stage methods <cit.>, SDDR adopts a more lightweight 𝒩_r with fewer parameters and faster inference speed than the previous GBDF <cit.> and Kim et al. <cit.>. See Appendix <ref> for detailed comparisons of model efficiency.
§.§ Ablation Studies
Coarse-to-fine Edge Refinement.
In <ref>, we adopt the coarse-to-fine manner with varied numbers of iterations. S=0 represents one-stage inference. Coarse-to-fine refinement brings more fine-grained edge representations and refined depth. We set S=3 for SDDR with two-stage inference.
Edge-based Guidance. In <ref>, we evaluate the effectiveness of edge-based guidance. ℒ_grad focuses on consistent refinement of depth edges. ℒ_fusion guides the adaptive feature fusion of low- and high-frequency information. With ℒ_gt as the basic supervision of ground truth, adding ℒ_grad and ℒ_fusion improves D^3R by 10.0% and REL by 3.2%, showing the efficacy of edge-based guidance.
Effectiveness of SDDR Framework. As in <ref>, we train SDDR with the same HRWSI <cit.> data as GBDF <cit.> for a fair comparison. Without the combined training data in Appendix <ref>, SDDR still improves D^3R and ORD by 13.9% and 2.2% over GBDF <cit.>, proving our superiority convincingly.
Transferability. We hope our depth edge representation G_S can be applicable to other depth refinement models. Therefore, in <ref>, we directly train GBDF <cit.> combining the depth edge representation produced by the trained SDDR. The depth accuracy and edge quality are improved over the original GBDF <cit.>, indicating the transferability of G_S in training robust refinement models.
§ CONCLUSION
In this paper, we model the depth refinement task as a noisy Poisson fusion problem. To enhance the robustness against local inconsistency and edge deformation noises, we propose the Self-Distilled Depth Refinement (SDDR) framework. With the low-noise depth edge representation and edge-based guidance, SDDR achieves both consistent spatial structures and meticulous depth edges. Experiments showcase our stronger generalizability and higher efficiency over prior arts. The noisy Poisson fusion modeling provides a new perspective for depth refinement in future works. Limitations and broader impact are discussed in Appendix <ref>.
Acknowledgement This work is supported by the National Natural Science Foundation of China under Grant No. 62406120.
§ MORE DETAILS ON THE SDDR FRAMEWORK
§.§ Depth Edge Representation
Coarse-to-fine Edge Refinement.
In Sec. 3.2, line 169 of main paper, we propose the coarse-to-fine edge refinement to generate accurate and fine-grained depth edge representation G_S. Here, we provide visualizations of the refinement process in <ref>. For the initial global refinement stage s=0, we showcase the results of the depth predictor at low and high inference resolutions, i.e., 𝒩_d(L) and 𝒩_d(H). Our refined depth D_0 presents both depth consistency and details. For s=1, 2, 3, the refined depth maps and edge representations are noticeably improved with finer edges and details. The final depth edge representation G_S (S=3) with lower and is utilized as pseudo-label for the self-distillation training process.
Adaptive Resolution Adjustment. Adaptive resolution adjustment is applied to the low and high-resolution input L and H. We denote the resolutions of L and H as l and h, which play a crucial role in refined depth and need to be chosen carefully. Higher resolutions will bring finer details but could lead to inconsistent depth structures due to the limited receptive field of 𝒩_d. Previous works <cit.> upscale images or patches to excessively high resolutions for more details, resulting in evident artifacts in their refined depth maps with higher levels of inconsistency noises ϵ_cons. On the other hand, if h is too low, edge and detailed information cannot be sufficiently preserved in 𝒩_d(H), leading to exacerbation of edge deformation noise ϵ_edge with blurred details in the refined depth. Such errors and artifacts are unacceptable in depth edge representations for training models. Therefore, we adaptively adjust resolutions l and h, considering both the density of depth edges and the training resolution of depth predictor 𝒩_d.
For an image window I_s^w, we generally set the low-resolution input L_s^w to the training resolution r̂ of 𝒩_d. If we denote the original resolution of I_s^w as r_s^w, SDDR adaptively adjusts the high resolution h_s^w for that window as follows:
h_s^w = mean(r̂,r_s^w) * mean(|∇𝒩_d(L_s^w)|)/α * mean(|∇D_s-1^w|)/mean(|∇D_s-1|) ,
where α is an a priori parameter for the depth predictor 𝒩_d, obtained by averaging the gradient magnitude of the depth annotations over its sampled training data. The second term embodies adjustments according to depth edges. If mean(|∇𝒩_d(L_s^w)|)<α, the current window area contains lower edge intensity or density than the training data of 𝒩_d. In this case, we appropriately decrease h_s^w from mean(r̂,r_s^w) to maintain a density of detailed information similar to that seen during the training stage of the depth predictor. The third term portrays adjustments based on the discrepancy of edge intensity between the window area and the whole image. Note that, for the generation of the initial edge representation G_0, the third term is set to one and thus has no effect. L_0^w is equivalent to L, with the whole image as the initial window w.
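A small sketch of this resolution rule, treating resolutions as scalars (e.g., the long-side pixel count); the prior α and the gradient statistics below are placeholder numbers, since the real values come from the depth predictor's training data and the previous-iteration depth.

```python
import numpy as np

def adaptive_high_resolution(r_hat, r_window, grad_low_pred, grad_prev_window, grad_prev_full, alpha):
    """h_s^w = mean(r_hat, r_window) * mean(|grad N_d(L)|)/alpha
                                     * mean(|grad D_{s-1}^w|) / mean(|grad D_{s-1}|)."""
    base = 0.5 * (r_hat + r_window)
    edge_term = np.mean(np.abs(grad_low_pred)) / alpha
    window_term = np.mean(np.abs(grad_prev_window)) / np.mean(np.abs(grad_prev_full))
    return int(base * edge_term * window_term)

# Placeholder statistics: a window whose edge density is below the prior (edge_term < 1)
# receives a high-resolution input smaller than mean(r_hat, r_window).
h = adaptive_high_resolution(r_hat=384, r_window=1024,
                             grad_low_pred=np.full((8, 8), 0.03),
                             grad_prev_window=np.full((8, 8), 0.04),
                             grad_prev_full=np.full((8, 8), 0.05),
                             alpha=0.05)
print(h)  # 337 here, well below the unadjusted 704
```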
We present visual results with different resolutions to prove the effectiveness of our design. As shown in <ref>, considering the training data distribution and the edge density, the inference resolution is adaptively adjusted to a smaller one compared to Boost <cit.> (1024 versus 1568). In this way, our SDDR achieves better depth consistency and alleviates the artifacts produced by prior arts <cit.>.
§.§ Edge-based Guidance
Edge-guided Gradient Error.
In line 192, Sec 3.3 of the main paper, we mention that we use clustering to obtain several high-frequency local regions to compute our edge-guided gradient error. Here, we elaborate on the details. K-means clustering <cit.> is utilized to obtain the edge-dense areas. Specifically, we binarize the edge pseudo-label, setting the top 5% of pixels to one and the rest to zero. Next, we employ k-means clustering on the binarized labels to get several edge-dense areas with the centroid value as one. The clustered areas are shown in the fourth column of <ref>. Our edge-guided gradient error supervises these high-frequency regions to improve depth details. The depth consistency in flat areas can be preserved without the constraints of depth edges.
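A minimal sketch of this region extraction, assuming that the clustering is run on the coordinates of the binarized edge pixels (one plausible reading of the description above); scikit-learn's KMeans is used for brevity, and the number of centroids is a free hyper-parameter.

```python
import numpy as np
from sklearn.cluster import KMeans

def edge_dense_regions(gS, top_ratio=0.05, n_clusters=4):
    """Binarize the top 5% of the edge pseudo-label and cluster the edge-pixel coordinates."""
    threshold = np.quantile(gS, 1.0 - top_ratio)
    ys, xs = np.nonzero(gS >= threshold)                      # top 5% of edge magnitudes
    coords = np.stack([ys, xs], axis=1).astype(float)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(coords)
    return [coords[labels == k] for k in range(n_clusters)]   # one pixel set per edge-dense area

regions = edge_dense_regions(np.random.rand(64, 64))
print([len(r) for r in regions])
```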
Edge-based Fusion Error.
The proposed edge-based fusion error aligns the data distribution of the learnable region mask Ω and the pseudo-label G_S by quantile sampling (Sec 3.3, line 205, main paper). Here, we provide additional visualizations for intuitive understanding. As shown in <ref>, we visualize the soft region mask Ω of high-frequency areas and the pseudo-label G_S with the same color map in the second and third columns. The regions highlighted in G_S with stronger depth edges and more detailed information naturally correspond to larger values in Ω to emphasize features from high-resolution inputs. We perform quantile sampling on Ω and G_S, as depicted in the fourth and fifth columns. The legends on the right indicate the percentile ranking of the pixel values in the whole image. Our edge-based fusion error supervises Ω and G_S to have consistent distributions for each color. In this way, Ω tends to have smaller values in flat regions for more information from the low-resolution input, while the opposite is true in high-frequency regions. This is advantageous for the model to balance the depth details and spatial structures.
§.§ Refinement Network
We provide the detailed model architecture of the refinement network 𝒩_r. As shown in <ref>, 𝒩_r adopts a U-Net architecture similar to prior arts <cit.>. The depth maps from the depth predictor 𝒩_d predicted at different resolutions are up-sampled to a unified input size. A shared Mit-b0 <cit.> serves as the encoder to extract feature maps of different resolutions. The decoder gradually outputs the refined depth map with feature fusion modules (FFM) <cit.> and skip connections. We make two technical improvements to the refinement network, including attention-based feature interaction and adaptive weight allocation.
Attention-based Feature Interaction.
To predict refined depth maps at high resolution (e.g., 2048×2048), prior arts <cit.> adopt a U-Net with numerous layers (e.g., 10 layers or more) as the refinement network to obtain a sufficient receptive field. This leads to heavy computational overhead. In our case, we leverage the self-attention mechanism <cit.> to address this issue.
The features of low- and high-resolution inputs extracted by the encoder <cit.> are denoted as F^attn_l and F^attn_h. We stack F^attn_l and F^attn_h to obtain F^in for attention calculation. Positional embeddings <cit.> PE_x, PE_y are added to F^in for the height and width dimensions. An additional PE_f is used to distinguish the low- and high-resolution inputs. The attention-based feature interaction process can be expressed as follows:
F^in=Stack(F^attn_l, F^attn_h)+PE_x+PE_y+PE_f ,
K=W^k · F^in, Q=W^q · F^in, V=W^v · F^in,
F^out=Softmax(K^T Q / √(d))V + F^in .
Four attention layers are included in 𝒩_r. The interacted feature F^out is fed to the decoder to predict refined depth. Attention-based feature interaction achieves large receptive field with fewer layers, reducing model parameters and improving efficiency.
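A condensed PyTorch-style sketch of this interaction; the embedding dimension, token count, head number, and the exact form of the positional embeddings are simplified assumptions rather than the actual architecture.

```python
import torch
import torch.nn as nn

class FeatureInteraction(nn.Module):
    """Stack low-/high-resolution tokens, add positional embeddings, apply self-attention."""
    def __init__(self, dim=64, tokens=256, heads=4):
        super().__init__()
        self.pos_xy = nn.Parameter(torch.zeros(1, tokens, dim))  # spatial positional embedding
        self.pos_f = nn.Parameter(torch.zeros(1, 2, 1, dim))     # low- vs. high-resolution flag
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, f_low, f_high):                  # each: (batch, tokens, dim)
        f = torch.stack([f_low, f_high], dim=1) + self.pos_xy.unsqueeze(1) + self.pos_f
        f = f.flatten(1, 2)                            # joint sequence of 2 * tokens
        out, _ = self.attn(f, f, f)                    # K, Q, V all come from the same sequence
        return out + f                                 # residual connection, as in F_out

f_l, f_h = torch.randn(2, 256, 64), torch.randn(2, 256, 64)
print(FeatureInteraction()(f_l, f_h).shape)            # torch.Size([2, 512, 64])
```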
Adaptive Weight Allocation. The refinement network adopts adaptive weight allocation for the fusion of low- and high-resolution features with the learnable mask Ω. In each decoder layer, the feature goes through a convolutional block to generate a single-channel Ω. The fused feature F (line 212, main paper) and the feature from the previous layer are then merged by the FFM module <cit.>.
§.§ Noise Implementation.
For our local inconsistency noise, we segment the ideal depth D^* into regular patches of size 64×64, with an overlap of half the patch size. Considering the depth discontinuities on the edges, instead of applying a linear transformation to the entire patch, we extract the edges from D^* and apply a linear transformation to each connected domain to simulate the local depth inconsistency. For the edge deformation noise, we first down-sample D^* to the inference resolution and then restore it to the original resolution. Subsequently, we optimize a certain number of Gaussian distributions around the edges of D^* to fit the edge deformation and blurring.
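A simplified sketch of the two degradations; the per-patch linear transform stands in for the local inconsistency noise and a masked Gaussian blur near edges stands in for the edge deformation noise. The perturbation ranges are illustrative, and the connected-domain handling and Gaussian fitting described above are omitted for brevity.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def local_inconsistency_noise(depth, patch=64, scale_jitter=0.05, shift_jitter=0.02):
    """Apply a random linear transform per patch (simplified: no overlap, no connected domains)."""
    noisy = depth.copy()
    for top in range(0, depth.shape[0], patch):
        for left in range(0, depth.shape[1], patch):
            s = 1.0 + np.random.uniform(-scale_jitter, scale_jitter)
            b = np.random.uniform(-shift_jitter, shift_jitter)
            noisy[top:top + patch, left:left + patch] = s * noisy[top:top + patch, left:left + patch] + b
    return noisy

def edge_deformation_noise(depth, sigma=2.0, edge_quantile=0.9):
    """Blur the depth only near its own edges (a masked Gaussian blur)."""
    grad = np.hypot(sobel(depth, axis=0), sobel(depth, axis=1))
    mask = grad > np.quantile(grad, edge_quantile)
    return np.where(mask, gaussian_filter(depth, sigma), depth)

noisy = edge_deformation_noise(local_inconsistency_noise(np.random.rand(256, 256)))
```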
The local inconsistency noise and edge deformation noise can effectively model the degradation of network prediction results compared to ideal depth maps. An additional experiment on the Middlebury2021 <cit.> dataset also proves this point. We optimize the local inconsistency noise with the least squares method, and fit 50,000 position-constrained Gaussian distributions as the edge deformation noise by gradient descent. The PSNR between the noisy depth (D^*+ϵ_cons+ϵ_edge) and the model-predicted depth D is over 40 dB, which indicates that the difference between D and (D^*+ϵ_cons+ϵ_edge) is very small. The result further demonstrates that the noises can accurately model depth prediction errors (<ref>, main paper), similar to the visualizations in <ref> of the main text.
§.§ Broader Impacts and Limitations
Although SDDR works well in general, it still has limitations. For example, more advanced mechanisms and structures can be explored for the refinement network in future work. For inputs under conditions with specular surfaces, low light, or weak textures, the depth predictor tends to yield sub-optimal results. Although SDDR improves upon these results, the outcomes are still not perfect. Our approach exclusively utilizes publicly available datasets during the training process, thereby posing no broad negative societal impact and not involving AI ethics issues or any privacy-sensitive data.
§ DETAILED EXPERIMENTAL SETTINGS
§.§ Datasets
Evaluation Datasets. We use five different benchmarks with diverse scenarios for comparisons. The descriptions of our evaluation datasets are as follows:
∙ Middlebury2021 <cit.> comprises 48 RGB-D pairs from 24 real indoor scenes for evaluating stereo matching and depth refinement models. Each image in the dataset is annotated with dense 1920 × 1080 disparity maps. We use the whole set of Middlebury2021 <cit.> for testing.
∙ Multiscopic <cit.> includes a test set with 100 synthetically generated indoor scenes. Each scene consists of RGB images captured from 5 different viewpoints, along with corresponding disparity annotations. The resolution of images is 1280 × 1080. We adopt its official test set for testing.
∙ Hypersim <cit.> lacks an official split for training and testing. In our experiment, we follow the test set defined by GBDF <cit.>, utilizing tone-mapped 286 images generated by their released code. Evaluation is performed using the corresponding 1024 × 768 depth annotations.
∙ DIML <cit.> contains RGB-D frames from both Kinect v2 <cit.> and Zed stereo camera with different resolutions. We conduct the generalization evaluation using the official test set, which includes real indoor and outdoor scene images along with corresponding high-resolution depth annotations.
∙ DIODE <cit.> contains high-quality 1024 × 768 LiDAR-generated depth maps of both indoor and outdoor scenes. We use the whole validation set (771 images) for generalization testing.
Training Datasets.
Our training data is sampled from diverse datasets, which can be categorized into synthetic and natural-scene datasets. The synthetic datasets consist of TartanAir <cit.>, Irs <cit.>, UnrealStereo4K <cit.> and MVS-Synth <cit.>. Among these, the resolutions of TartanAir <cit.> and Irs <cit.> are below 1080p, while MVS-Synth <cit.> and UnrealStereo4K <cit.> reach resolutions of 1080p and 4k, respectively. Irs <cit.> and MVS-Synth <cit.> contain limited types of scenes, whereas others include both indoor and outdoor scenes, some of which <cit.> present challenging conditions like poor lighting. To enhance the generalization to natural scenes, we also sample from four high-resolution real-world datasets, Holopix50K <cit.>, iBims-1 <cit.>, WSVD <cit.>, and VDW <cit.>. IBims-1 <cit.> contains a small number of indoor scenes but provides high-precision depth annotations from the capturing device. The remaining three datasets include large-scale diverse scenes, but their depth annotations, obtained from stereo images <cit.>, lack ideal edge precision.
§.§ Training Recipe
We leverage diverse training data to achieve strong generalizability. For each epoch, we randomly choose 20,000 images from natural-scene data <cit.> and 20,000 images from synthetic datasets <cit.>. For each sample, we adopt similar data processing and augmentation as GBDF <cit.>. To enhance training stability, we first train 𝒩_r for one epoch only with ℒ_gt. In the next two epochs, we involve ℒ_grad and ℒ_fusion for self-distillation. The a and N_w in ℒ_fusion are set to 0.02 and 4. The learning rate is 1e-4. λ_1 and λ_2 in <ref> are 0.5 and 0.1. All training and inference are conducted on a single NVIDIA A6000 GPU.
§.§ Evaluation Metrics
Depth Accuracy.
M denotes the set of pixels with valid depth annotations (with |M| its size), while d_i and d_i^* are the estimated and ground-truth depths of pixel i. We adopt the widely used depth metrics as follows:
∙ Absolute relative error (Abs Rel): 1/|M| ∑_i ∈ M |d_i - d_i^*| / d_i^* ;
∙ Square relative error (Sq Rel): 1/|M| ∑_i ∈ M (d_i - d_i^*)^2 / d_i^* ;
∙ Root mean square error (RMSE): √( 1/|M| ∑_i ∈ M (d_i - d_i^*)^2 ) ;
∙ Mean absolute logarithmic error (log_10): 1/|M| ∑_i ∈ M |log(d_i) - log(d_i^*)| ;
∙ Accuracy with threshold t: percentage of d_i such that δ = max(d_i/d_i^*, d_i^*/d_i) < t, for t ∈ {1.25, 1.25^2, 1.25^3}.
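For concreteness, the following Python/NumPy sketch shows how these metrics are typically computed from a predicted depth map, a ground-truth map, and a validity mask. The function and variable names are ours and the snippet is illustrative, not the evaluation script used in this work.

```python
import numpy as np

def depth_metrics(pred, gt, valid):
    """Standard depth-accuracy metrics over pixels with valid ground truth.
    pred, gt: arrays of predicted / ground-truth depth; valid: boolean mask M."""
    d, d_star = pred[valid], gt[valid]
    abs_rel = np.mean(np.abs(d - d_star) / d_star)
    sq_rel = np.mean((d - d_star) ** 2 / d_star)
    rmse = np.sqrt(np.mean((d - d_star) ** 2))
    log10_err = np.mean(np.abs(np.log10(d) - np.log10(d_star)))
    ratio = np.maximum(d / d_star, d_star / d)
    deltas = {f"delta_{i+1}": np.mean(ratio < 1.25 ** (i + 1)) for i in range(3)}
    return {"abs_rel": abs_rel, "sq_rel": sq_rel, "rmse": rmse,
            "log10": log10_err, **deltas}
```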
Edge Quality.
For the edge quality, we follow prior arts <cit.> to employ the ordinal error (ORD) and depth discontinuity disagreement ratio (D^3R). The ORD metric is defined as:
ORD = 1/N ∑_i ϕ(p_i,0 - p_i,1) ,

ϕ(p_i,0 - p_i,1) =
  { log(1 + exp(-l (p_i,0 - p_i,1))),   if l ≠ 0 ,
    (p_i,0 - p_i,1)^2,                  if l = 0 ,

l =
  { +1,   if p_i,0^* / p_i,1^* ≥ 1 + τ ,
    -1,   if p_i,0^* / p_i,1^* ≤ 1/(1+τ) ,
     0,   otherwise ,
where p_i,0 and p_i,1 represent pairs of edge-guided sampling points. p_i,0^* and p_i,1^* are the ground truth values at corresponding positions. l is used to represent the relative ordinal relationship between pairs of points. ORD characterizes the quality of depth edges by sampling pairs of points near extracted edges using a ranking loss <cit.>. On the other hand, D^3R <cit.> uses the centers of super-pixels computed with the ground truth depth and compares neighboring super-pixel centroids across depth discontinuities. It directly focuses on the accuracy of depth boundaries.
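Below is a small Python sketch of the ORD computation, assuming the edge-guided point pairs have already been sampled following the cited protocol; the array names, the default τ, and the function name are illustrative assumptions rather than the exact implementation.

```python
import numpy as np

def ord_error(p0, p1, g0, g1, tau=0.03):
    """Ordinal error over pre-sampled point pairs near depth edges.
    p0, p1: predicted depth at the two points of each pair; g0, g1: ground truth."""
    ratio = g0 / g1
    l = np.where(ratio >= 1.0 + tau, 1.0,
                 np.where(ratio <= 1.0 / (1.0 + tau), -1.0, 0.0))
    diff = p0 - p1
    phi = np.where(l != 0, np.log1p(np.exp(-l * diff)), diff ** 2)
    return phi.mean()
```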
§ MORE EXPERIMENTAL RESULTS
§.§ Model Efficiency Comparisons.
In line 277 of the main paper, we mention that our method achieves higher model efficiency than prior arts <cit.>. Here, we provide detailed comparisons of model efficiency in <ref>. Compared with one-stage methods <cit.>, our method adopts a more lightweight design, reducing model parameters by a factor of 12.5 relative to GBDF <cit.> and improving inference speed by a factor of 3.6 relative to Kim et al. <cit.>. Compared with two-stage tile-based methods <cit.>, our coarse-to-fine edge refinement reduces the FLOPs per patch by a factor of 50.6 and the number of patches by a factor of 5.9 relative to PatchFusion <cit.>.
§.§ More Quantitative and Qualitative Results
Training Iterations of Self-distillation
We investigate the iteration numbers of self-distillation in <ref>. The iteration number of zero indicates the model after the training of the first epoch only with ground truth for supervision, i.e., before self-distillation. Clearly, with the proposed self-distillation paradigm, both the depth accuracy and edge quality are improved until convergence.
Formats of Pseudo-labels
We compare the refined depth D_S and the proposed depth edge representation G_S as pseudo-labels. Using the accurate and meticulous depth D_S could be a straightforward idea. However, with depth maps as the supervision, the model cannot precisely focus on improving edges and details. Thus, G_S achieves stronger efficacy than D_S, proving the necessity of our designs.
Quantitative Comparisons. In the main paper, only δ_1, REL, ORD, and D^3R are reported. Here, we present the additional metrics of all the compared methods <cit.> on Middlebury2021 <cit.>, DIML <cit.>, and DIODE <cit.> datasets in <ref>, <ref>, and <ref>. Our method outperforms previous approaches on most evaluation metrics, showing the effectiveness of our framework.
Qualitative Comparison
We provide more qualitative comparisons with one-stage <cit.> and two-stage <cit.> methods in <ref> and <ref>. These visual results further demonstrate the excellent performance and generalization capability of our method on diverse scenes <cit.>.
| Depth refinement infers high-resolution depth with accurate edges and details, refining the low-resolution counterparts from depth estimation models <cit.>. With increasing demands for high resolutions in modern applications, depth refinement becomes a prerequisite for virtual reality <cit.>, bokeh rendering <cit.>, and image generation <cit.>. The prevailing methods <cit.> adopt two-stage tile-based frameworks. Based on the one-stage refined depth of the whole image, they merge high-frequency details by fusing extensive patches with complex patch selection strategies. However, numerous patches lead to heavy computational costs. Besides, as in <ref> (a), excessive integration of local information leads to inconsistent depth structures, e.g., the disrupted billboard.
Apart from efficiency and consistency, depth refinement <cit.> is restricted by noisy and blurred depth edges. Highly accurate depth annotations with meticulous boundaries are necessary to enforce fine-grained details. For this reason, prior arts <cit.>
only use synthetic datasets <cit.> for the highly accurate depth values and edges. However, synthetic data falls short of the real world in realism and diversity, causing limited generalizability with blurred depth and degraded performance on in-the-wild scenarios.
Some attempts <cit.> simply adopt natural-scene datasets <cit.> for the problem. The varying characteristics of real-world depth annotations, e.g., sparsity <cit.>, inaccuracy <cit.>, or blurred edges <cit.>, make them infeasible for supervising refinement models. Thus, GBDF <cit.> uses depth predictions <cit.> as pseudo-labels, while Boost <cit.> leverages adversarial training <cit.> as guidance. Those inaccurate pseudo-labels and guidance still lead to blurred edges as shown in <ref> (a). The key problem is to alleviate the noise of depth boundaries by constructing accurate edge representations and guidance.
To tackle these challenges, we dig into the underlying reasons for the limitations, instead of the straightforward merging of local details. We model depth refinement as a problem, decoupling depth prediction errors into two degradation components: and . We use regional linear transformation perturbation as the to measure inconsistent depth structures. The represents fuzzy boundaries with Gaussian blur. Experiments in <ref> showcase that the noises can effectively depict general depth errors, serving as our basic principle to improve refinement results.
In pursuit of the robustness against the and , we propose the () framework, which mainly consists of depth edge representation and edge-based guidance. A refinement network is considered as the Poisson fusion operator, recovering high-resolution depth from noisy predictions of depth models <cit.>. Given the noisy input, can generate low-noise and accurate depth edge representation as pseudo-labels through coarse-to-fine self-distillation. The edge-based guidance including and is designed as the optimization objective of Poisson fusion. When depth maps are better refined, the pseudo-labels also become more noise-free. Our approach establishes accurate depth edge representations and guidance, endowing with strong robustness to the two types of noises. Consequently, as shown in <ref> (b), significantly outperforms prior arts <cit.> in depth accuracy and edge quality. Besides, without merging numerous patches as the two-stage tile-based methods <cit.>, achieves much higher efficiency.
We conduct extensive experiments on five benchmarks. achieves performance on the commonly-used Middlebury2021 <cit.>, Multiscopic <cit.>, and Hypersim <cit.>. Meanwhile, since can establish self-distillation with accurate depth edge representation and guidance on natural scenes, the evaluations on in-the-wild DIML <cit.> and DIODE <cit.> datasets showcase our superior generalizability. Analytical experiments demonstrate that these noticeable improvements essentially arise from the strong robustness to the noises. Furthermore, the precise depth edge labels produced by can be directly used to train another model <cit.> and yield improvements, which indicates that our method could help with training robust refinement models in future works.
In summary, our main contributions can be summarized as follows:
∙We model the depth refinement task through the problem with and as two types of depth degradation.
∙ We present the robust and efficient () framework, which can generate accurate depth edge representation by the coarse-to-fine self-distillation paradigm.
∙ We design the and , as the edge-based guidance to enforce the model with both consistent depth structures and meticulous depth edges. | Depth Refinement Models. Depth refinement refines low-resolution depth from depth estimation models <cit.>, predicting high-resolution depth with fine-grained edges and details. Existing methods <cit.> can be categorized into one-stage <cit.> and two-stage <cit.> frameworks. One-stage methods <cit.> conduct global refinement of the whole image, which could produce blurred depth edges and details. To further enhance local details, based on the globally refined results, the prevailing refinement approaches <cit.> adopt the two-stage tile-based manner by selecting and merging numerous patches. For example, Boost <cit.> proposes a complex patch-sampling strategy based on the gradients of input images. PatchFusion <cit.> improves the sampling by shifted and tidily arranged tile placement. However, the massive patches lead to low efficiency. The excessive local information produces inconsistent depth structures or even artifacts. In this paper, we propose the () framework, which can predict both consistent structures and accurate details with much higher efficiency by tackling the problem.
Depth Refinement Datasets. Depth datasets with highly accurate annotations and edges are necessary for refinement models. Prior arts <cit.> utilize CG-rendered datasets <cit.> for accurate depth, but the realism and diversity fail to match the real world. For instance, neither the UnrealStereo4K <cit.> nor the MVS-Synth <cit.> contain people, restricting the generalizability of refinement models. A simple idea for the problem is to leverage natural-scene data <cit.>. However, different annotation methods lead to varying characteristics, e.g., sparsity of LiDAR <cit.>, inaccurate depth of structured light <cit.>, and blurred edges of stereo matching <cit.>. To address the challenge, Boost <cit.> adopts adversarial training as guidance only with a small amount of accurately annotated real-world images. GBDF <cit.> employs depth predictions <cit.> with guided filtering <cit.> as pseudo-labels. Due to the inaccurate pseudo-labels and guidance, they <cit.> produce blurred edges and details. By contrast, constructs accurate depth edge representation and edge-based guidance for self-distillation, leading to fine-grained details and strong generalizability. | null | null | null | In this paper, we model the depth refinement task as a problem. To enhance the robustness against local inconsistency and , we propose () framework. With the low-noise depth edge representation and guidance, achieves both consistent spatial structures and meticulous depth edges. Experiments showcase our stronger generalizability and higher efficiency over prior arts. The provides a new perspective for depth refinement in future works. Limitations and broader impact are discussed in Appendix <ref>.
Acknowledgement This work is supported by the National Natural Science Foundation of China under Grant No. 62406120.
|
http://arxiv.org/abs/2409.18005v1 | 20240926161341 | Collapsible Kernel Machine Regression for Exposomic Analyses | [
"Glen McGee",
"Brent A. Coull",
"Ander Wilson"
] | stat.ME | [
"stat.ME"
] |
1]Glen McGee
2]Brent A. Coull
3]Ander Wilson
[1]Department of Statistics and Actuarial Science, University of Waterloo. ([email protected])
[2]Department of Biostatistics, Harvard T. H. Chan School of Public Health
[3]Department of Statistics, Colorado State University
Collapsible Kernel Machine Regression for Exposomic Analyses
[
September 28, 2024
============================================================
§ ABSTRACT
An important goal of environmental epidemiology is to quantify the complex health risks posed by a wide array of environmental exposures. In analyses focusing on a smaller number of exposures within a mixture, flexible models like Bayesian kernel machine regression (BKMR) are appealing because they allow for non-linear and non-additive associations among mixture components. However, this flexibility comes at the cost of low power and difficult interpretation, particularly in exposomic analyses when the number of exposures is large. We propose a flexible framework that allows for separate selection of additive and non-additive effects, unifying additive models and kernel machine regression. The proposed approach yields increased power and simpler interpretation when there is little evidence of interaction. Further, it allows users to specify separate priors for additive and non-additive effects, and allows for tests of non-additive interaction.
We extend the approach to the class of multiple index models, in which the special case of kernel machine - distributed lag models are nested. We apply the method to motivating data from a subcohort of the Human Early Life Exposome (HELIX) study containing 65 mixture components grouped into 13 distinct exposure classes.
§ INTRODUCTION
Characterizing the relationships between health outcomes and a wide array of environmental exposures—or exposomic analysis—is a major priority of the current and future strategic plan of the National Institute of Environmental Health Sciences <cit.>. This marks an ongoing shift from simpler, single-pollutant analyses and low dimensional multi-pollutant mixture analyses to large scale analyses that attempt to quantify the full burden of a person's environment (their exposome) <cit.>. Among the many challenges in these environmental analyses are the facts that the functional form of exposure-response relationships is unknown and exposures may interact in complex ways <cit.>. A popular non-parametric approach proposed for mixtures analyses, Bayesian kernel machine regression (BKMR), addresses both issues, allowing non-linear exposure-response relationships as well as high-order interactions in a parsimonious model <cit.>.
In practice, however, it is often too flexible. Coupling non-linearity with non-additivity in a kernel framework makes identifying either type of relationship challenging, particularly in small samples with small effect sizes common in environmental health research. Even when there is enough power to detect these effects, interpretation can be prohibitively difficult in exposomic analyses, where there is a large number of exposures (p), requiring investigation of each (p) individual exposure-response curve and each of p× (p-1)/2 two-way interaction plots.
In contrast to BKMR, some simpler approaches like exposome wide association studies (EWAS) have been proposed to scale to higher numbers of exposures <cit.>, but these screening approaches are restrictive, not allowing for interactions or non-linearity. Instead, recent work has extended BKMR to reduce its flexibility. Adapting a projection approach from the spatial literature,
<cit.> improved efficiency by projecting out linear effects and two-way (linear) interactions. This improved efficiency when most effects were linear but did not improve interpretability, and the nonidentifiability precluded hypothesis testing. <cit.> circumvented the kernel framework and assigned hierarchical variable selection priors to a natural cubic spline basis expansion, but computation is challenging when there are a large number of exposures. Another approach suited to exposomic analyses is the Bayesian multiple index model (BMIM), which exploits group structure in the exposures. The BMIM reduces the dimension of the inputs of BKMR by constructing linear multi-exposure indices defined as weighted sums of exposures with weights estimated from the data <cit.>.
However, when the number of exposures, or the number of indices in a multiple index model, is large, these approaches can be overly flexible and challenging to interpret.
We propose a collapsible kernel machine regression (CKMR) framework that decouples non-linear additive relationships from non-additive interactions. A novel adaptive projection approach decomposes the non-parametric surface into an additive function space modeled by basis expansions and its orthogonal complement that contains interactions.
Paired with a hierarchical variable selection prior, the model can collapse to a generalized additive model (GAM) when there is no evidence of interactions. At the same time, the model framework allows for interactions among a subset of exposures or all exposures through the BKMR framework when this level of complexity is supported by the data.
This strategy has three advantages over non-parametric approaches like BKMR. First, it improves efficiency when there is little evidence of interactions. Second, it improves interpretability by obviating the need to investigate a large number of interaction plots. Third, thanks to a novel adaptive projection approach, it allows one to test for non-additive interaction separately from overall tests of association between an exposure and the outcome.
In this paper we analyze a motivating dataset from the Human Early Life Exposome (HELIX) project <cit.>. In particular, interest lies in characterizing the complex relationships between youth BMI and 65 different environmental and chemical exposures, grouped into 13 distinct exposure classes. To that end we extend the collapsible kernel framework to multiple index models, and apply the method to the HELIX data, treating each exposure class as a separate index.
The rest of the paper proceeds as follows. In Section <ref>, we briefly review existing methods for mixtures analyses. In Section <ref> we present the novel collapsible kernel machine regression framework and collapsible index models. We then evaluate the performance of the collapsed methods in simulations in Section <ref>, and apply it to motivating data on associations between youth BMI and 13 different classes of exposures from the Human Early Life Exposome (HELIX) study in Section <ref>. We conclude with a discussion in Section <ref>.
§ BACKGROUND
§.§ BKMR
Let y_i be a continuous outcome of interest, 𝐱_i=(x_i1,⋯,x_ip)^T be a vector of p exposures and z_i be a vector of covariates for the i^th observation (i=1,⋯,N). The BKMR model is
y_i = h(𝐱_i)+𝐳_i^Tα +ϵ_i, ϵ_i∼ N(0,σ^2),
where h(·): ℝ^p →ℝ is an unknown and potentially non-linear function of 𝐱_i, and α is a vector of regression coefficients.
The exposure-response function h(·) is defined by a kernel function K:ℝ^p ×ℝ^p →ℝ <cit.>.
By default we use a Gaussian kernel, K(𝐱_i,𝐱_i')=exp[-∑_j=1^p ρ_j (x_ij-x_i'j)^2], where ρ_j ≥ 0 is an unknown component weight; this corresponds to a radial basis function representation of h(·).
The model can be conveniently represented as a linear mixed effects model <cit.>:
y_i|h_i ∼ N(h_i+𝐳_i^Tα,σ^2),
(h_1,⋯,h_N)^T ∼ N(0,ν^2σ^2 𝐊),
where 𝐊 is the kernel matrix with elements 𝐊_ij=K(𝐱_i,𝐱_j), and ν^2>0 is a penalty term that controls smoothness. We then base estimation and inference on the marginal likelihood for 𝐲. The model is completed by specifying priors for {ρ,γ,σ^2,ν^2}, and we adopt default priors from <cit.> throughout.
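As a rough illustration of this marginal formulation, the sketch below builds the Gaussian kernel matrix and evaluates the marginal likelihood of 𝐲. The bkmr R package is the reference implementation; this is only a Python/NumPy illustration with function names of our choosing.

```python
import numpy as np
from scipy.stats import multivariate_normal

def gaussian_kernel(X, rho):
    """K[i, i'] = exp(-sum_j rho_j * (x_ij - x_i'j)^2)."""
    sq = (X[:, None, :] - X[None, :, :]) ** 2          # N x N x p squared differences
    return np.exp(-(sq * rho).sum(axis=-1))

def bkmr_marginal_loglik(y, X, Z, alpha, rho, sigma2, nu2):
    """Marginal log-likelihood after integrating out h ~ N(0, nu2 * sigma2 * K):
    y ~ N(Z alpha, sigma2 * (I + nu2 * K))."""
    K = gaussian_kernel(X, rho)
    Sigma = sigma2 * (np.eye(len(y)) + nu2 * K)
    return multivariate_normal.logpdf(y, mean=Z @ alpha, cov=Sigma)
```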
This non-parametric formulation has the advantage of allowing non-linearity as well as high-order interactions without needing to explicitly include all desired interaction terms a priori. This flexibility has two drawbacks, however. First, it makes interpretation difficult because one must manually investigate interactions by, say, plotting exposure response curves while holding other exposures at different levels. Second, the model may be too flexible when there are few or no higher order interactions; in such a case BKMR is likely less efficient than a GAM, which assumes additivity among the exposure effects a priori.
§.§ MixSelect
<cit.> augmented BKMR by including linear main effects and two-way interactions in an approach called MixSelect. The MixSelect model is
y_i|h_i ∼ N(𝐱_i^Tβ+∑_j=1^p ∑_k>jλ_jkx_ijx_ik+ h_i^*+𝐳_i^Tα,σ^2).
Here h^*_i is the i^th element of [𝐈-𝐇]𝐡, where 𝐇 is the usual hat matrix, i.e. the projection matrix onto the column space of the exposures, which includes only the linear main effects. This serves to reduce so-called confounding between the linear main effects and the non-linear 𝐡, as in the spatial literature <cit.>. Placing separate selection priors on the linear main and interaction coefficients {β, λ} and on the non-linear kernel components ρ then allows non-linear components to be selected out of the model while still retaining linear main and interaction effects. This proved to be more accurate than BKMR in the absence of non-linearity. That being said, if there are in fact non-linear relationships, they are captured only by the kernel function, which is again overly flexible, allowing for high-order non-additive interactions and making interpretation challenging.
What's more, the 𝐇 adjustment does not maintain identifiability due to shared components in the linear two-way interactions and the kernel function. Interestingly, the authors wrote that although one could in theory use the orthogonal projection onto the column space of both linear main and interaction effects, they “noticed in [their] simulations that this would make the resulting nonparametric surface too restrictive.” The lack of identifiability makes it difficult to characterize evidence in favour of non-linearity and non-additivity.
§.§ BMIMs
In exposomic analyses, the number of exposures p is large, and interpreting the fully non-parametric BKMR can be prohibitive. Researchers often have knowledge of different classes of exposures; for example in the HELIX data (see Section <ref>), 65 exposures are grouped naturally into 13 different groups, including pollutant classes like phthalates and organochlorines as well as indoor air measurements and traffic density variables. The BMIM leverages this group structure to reduce the dimensionality of the non-parametric function in BKMR and make more interpretable inferences.
Suppose the p exposures {x_i1,⋯,x_ip} are partitioned into M, M∈{1,…,p}, mutually exclusive groups denoted 𝐱_im=(x_im1,⋯,x_imL_m)^T for m=1,…,M. The BMIM <cit.> is
y_i = h(𝐄_i)+𝐳_i^Tα +ϵ_i, ϵ_i∼ N(0,σ^2),
where 𝐄_i=(E_i1,…,E_iM)^T
is a vector of multi-exposure indices E_im=𝐱_im^T θ_m with L_m-vector of unknown index weights θ_m=(θ_m1,…,θ_mL_m)^T, where θ_ml quantifies the contribution of the l^th component to the m^th index. Note that h(·) is now an unknown M-dimensional function of an index vector, whereas in Section <ref> it was a p-dimensional function of exposures (M≤ p). The weights appear in the kernel function, K(𝐄_i,𝐄_i')=exp[-∑_m=1^M ρ_m ([𝐱_im -𝐱_i'm]^T θ_m)^2], and need to be constrained for identifiability due to the unknown ρ_j: (i) 1_L_m^Tθ_m≥ 0, where 1_L_m is the unit vector of length L_m, and (ii) θ_m^Tθ_m =1.
Instead of estimating parameters in this constrained space, we follow <cit.> and reparameterize the model in terms of unconstrained θ_m^*=ρ_m^1/2θ_m, on which one places priors directly. Despite this dimension reduction, the BMIM suffers from the same drawbacks as BKMR, inextricably linking non-linearity and non-additivity, albeit in lower dimension.
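A minimal sketch of the index construction and the induced index-level kernel is given below, assuming the exposures are already split into the M groups; the function names are illustrative only.

```python
import numpy as np

def build_indices(X_groups, thetas):
    """E[i, m] = x_im^T theta_m for each of the M exposure groups.
    X_groups: list of N x L_m arrays; thetas: list of length-L_m weight vectors."""
    return np.column_stack([Xm @ th for Xm, th in zip(X_groups, thetas)])

def index_kernel(E, rho):
    """K(E_i, E_i') = exp(-sum_m rho_m * (E_im - E_i'm)^2)."""
    sq = (E[:, None, :] - E[None, :, :]) ** 2
    return np.exp(-(sq * rho).sum(axis=-1))

# Reparameterization used for sampling: theta*_m = sqrt(rho_m) * theta_m,
# so rho_m = theta*_m' theta*_m and theta_m = theta*_m / ||theta*_m||.
```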
§ PROPOSED METHODS
§.§ Collapsible Kernel Machine Regression
We propose a model that addresses these problems by collapsing to an additive structure when suggested by the data, while retaining the flexibility of BKMR when necessary. The collapsible kernel machine regression (CKMR) model is
y_i = ∑_j=1^p f_j(x_ij)+h^*(𝐱_i)+𝐳_i^Tα +ϵ_i, ϵ_i∼ N(0,σ^2),
where f_j(·) is a smooth unknown function of the j^th exposure, and h^*(·) is a p-dimensional non-parametric function of 𝐱_i, subject to an identifiability constraint.
We represent the f_j(·) via basis expansion: f_j(·)≈𝐁^T_ijβ_j, where 𝐁_ij=[b_j1(x_ij),… b_jd_j(x_ij)]^T, and b_jl(·), l=1,…,d_j, are known basis functions, and β_j is a d_j-vector of coefficients. We use B-splines, though alternative basis functions could be used. We encourage smoothness via a quadratic roughness penalty 1/2τ_j^2β_j^T S_j β_j, where S_j is a known d_j × d_j matrix of second derivatives of basis functions, and τ_j^2 is an unknown smoothing parameter estimated from the data <cit.>.
This is equivalent to (the log of) a mean-zero Gaussian prior with precision 1/τ_j^2S_j.
To ensure identifiability, h^*(𝐱_i) must be restricted in some way. We define 𝐡^*=[h^*(𝐱_1),…,h^*(𝐱_n)]^T as 𝐡^*=P𝐡, where 𝐡∼ N(0,ν^2σ^2𝐊 )
as in the mixed effects representation in (<ref>), P is the projection matrix P:=𝐈-𝐁(𝐁^T𝐁)^-1𝐁^T using a generalized inverse as appropriate, and 𝐁 is the N×∑_k=1^p d_k design matrix with i^th row equal to 𝐁_i^T. Equivalently, 𝐡^*∼ N(0, ν^2σ^2 P𝐊P). Thus 𝐡^* is the projection of the usual non-parametric component 𝐡 onto the orthogonal complement of the column space of 𝐁. The columns of 𝐁 span the space of smooth additive functions of 𝐱_i, so 𝐡^*(·) captures non-additive deviations. Intuitively, this decomposes the surface 𝐡(·) into: (i) additive effects, captured by ∑_j=1^p f_j(·), and (ii) non-additive interactions, captured by 𝐡^*(·). We note that although 𝐡^*(·) could technically capture non-linear main effects that are more flexible than allowed by the spline basis, this is negligible when using a rich enough basis; in the penalized splines approach, one generally sets the number of basis functions to be fairly large and encourages smoothness via penalization.
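This decomposition can be illustrated with a short NumPy sketch; here B stands for the stacked spline design matrix (any basis matrix serves for the illustration) and the function names are ours.

```python
import numpy as np

def orthogonal_complement_projection(B):
    """P = I - B (B'B)^- B', the projector onto the orthogonal complement of col(B)."""
    BtB_pinv = np.linalg.pinv(B.T @ B)     # generalized inverse, as in the text
    return np.eye(B.shape[0]) - B @ BtB_pinv @ B.T

def hstar_cov(P, K, nu2, sigma2):
    """Induced prior covariance of the non-additive component h* = P h:
    Cov(h*) = nu2 * sigma2 * P K P."""
    return nu2 * sigma2 * P @ K @ P
```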
§.§ Variable Selection & Hierarchical Constraints
We propose a hierarchical variable selection approach that uses two sets of indicators for (i) selection of the main effect of each component and for (ii) selection of each component into the interaction kernel.
To first allow inclusion or exclusion of the j^th additive component, we place spike-and-slab priors on the vector β_j:
β_j ∼γ_j g(β_j)+(1-γ_j) δ_0
γ_j ∼Bernoulli(π)
π ∼Beta(a_π,b_π),
where g(β_j) is a multivariate zero-mean Gaussian density with precision 1/τ_j^2S_j, and
δ_0 represents a point mass at the zero vector. To include or exclude non-additive effects, we place hierarchically constrained spike-and-slab-priors on the non-additive components ρ_j. The prior is
ρ_j ∼γ_j^ρGamma(a^ρ,b^ρ)+(1-γ_j^ρ)δ_0
γ_j^ρ ∼γ_jBernoulli(π^ρ)+(1-γ_j)δ_0
π^ρ ∼Beta(a_π^ρ,b_π^ρ).
This formulation ensures that if the j^th component is included in the kernel (i.e. ρ_j≠ 0), then its additive component must also appear (γ_j=1).
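A small sketch of a single draw from this hierarchical prior makes the constraint explicit; we assume the Gamma(a^ρ, b^ρ) slab is parameterized by shape and rate, and the default hyperparameter values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_hierarchical_indicators(p, a_pi=1.0, b_pi=1.0, a_pi_rho=1.0, b_pi_rho=1.0,
                                 a_rho=1.0, b_rho=1.0):
    """One draw from the hierarchical selection prior: gamma_j^rho can equal 1
    only when gamma_j = 1, so every kernel component also has an additive term."""
    pi = rng.beta(a_pi, b_pi)
    pi_rho = rng.beta(a_pi_rho, b_pi_rho)
    gamma = rng.binomial(1, pi, size=p)
    gamma_rho = gamma * rng.binomial(1, pi_rho, size=p)   # hierarchical constraint
    rho = np.where(gamma_rho == 1,
                   rng.gamma(a_rho, 1.0 / b_rho, size=p),  # shape / rate parameterization
                   0.0)
    return gamma, gamma_rho, rho
```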
The hierarchical construction, paired with the model decomposition, has several advantages. First, it allows one to test both for any effect of an exposure and for non-additive effects of an exposure using posterior inclusion probabilities. Specifically, γ_j=0 indicates no effect, γ_j=1 and γ_j^ρ=0 indicates an additive effect only, and γ_j=1 and γ_j^ρ=1 indicates a non-additive effect. The remaining combination, γ_j=0 and γ_j^ρ=1, is infeasible under the hierarchical structure.
Second, it simplifies the interpretation task because one need not investigate all p(p-1)/2 two-way exposure-response plots when one or more non-additive components are excluded from the model; rather, one need only investigate potential two-way interactions for pairs of components jointly selected into the kernel component (i.e. for which γ_j^ργ_k^ρ=1 with non-negligible posterior probability).
Third, it improves efficiency when there is little evidence of interaction among exposures. In particular, users can tune the prior probability of non-additivity via hyperparameters {a_π^ρ,b_π^ρ}.
§.§ Adaptive Projection
The construction 𝐡^*=𝐏𝐡 guarantees identifiability, but we find it overly restrictive in practice. When p is even moderately large, the number of columns of 𝐁, ∑_j=1^p d_j, is large, and the column space of 𝐁 may be rich enough to capture some of the variability otherwise attributable to non-additive effects. This poses no problem for identifiability, which is addressed by the projection matrix 𝐏, but it inhibits interpretation and inference. Consider, for example, a sparse, high dimensional setting in which there is a non-additive interaction between two exposures, and there are many other exposures with no effect at all. If the columns of 𝐁 are multi-collinear with an interaction, then, after projection, 𝐡^* only captures a fraction of the variability due to the interaction. As a consequence we may erroneously include additive effects of irrelevant exposures and exclude legitimate non-additive effects of others.
To circumvent this, we propose an adaptive projection matrix, replacing 𝐏 by 𝐏_γ:=𝐈-𝐁_γ(𝐁_γ^T𝐁_γ)^-1𝐁_γ^T, where 𝐁_γ is constructed by deleting from 𝐁 the columns corresponding to components not included in the model. That is, 𝐁_γ has i^th row equal to [𝐁_i,j_1,…, 𝐁_i,j_D] where γ_j_1=⋯=γ_j_D=1 and all other γ_j=0, j ∉{j_1,…,j_D }.
§.§ Collapsible Multiple Index Models
We extend the collapsible kernel framework to the multiple index model setting. The proposed collapsible multiple index model (CMIM) is
y_i = ∑_j=1^M f_j(E_ij)+h^*(𝐄_i)+𝐳_i^Tα +ϵ_i, ϵ_i∼ N(0,σ^2),
where h^*(·) is an M-dimensional non-parametric function of a multi-exposure index vector subject to an identifiability constraint, and f_j(·) is a smooth unknown function of the j^th index.
We again approximate the f_j(·) via basis expansions, f_j(·)≈𝐁^T_ijβ_j, where 𝐁_ij=[b_j1(E_ij),…, b_jd_j(E_ij)]^T, β_j is a d_j-vector of coefficients and the b_jl(·), l=1,…,d_j, are known basis functions. However, a key difference and complicating factor for the CMIM is that these basis functions are now functions of unknown weights θ_m. We impose smoothness via the same quadratic penalty as before, which is equivalent to adopting a mean-zero Gaussian prior with precision 1/τ_j^2S_j.
Unlike the previous framework, 𝐁_ij now depends on unknown parameters θ_m; hence, we must update 𝐁_ij within the MCMC. The penalty matrices S_j are no more complicated than before, since they are functions of the known basis functions and not the underlying indices, but we need to update 𝐁_ij for every new value for E_ij=𝐱_ij^T θ_j (i.e. whenever θ_j is updated). This is not computationally intensive, as the basis functions b_jl(·) are known and can be easily evaluated at any new value for E_ij=𝐱_ij^T θ_j, for fixed knots.
Second, θ_j now appears in both the non-additive and the additive components of the model, and both components need to be included in Metropolis-Hastings steps.
Reparameterizing the kernel in terms of unconstrained θ^*_m as in the standard BMIM <cit.> is no longer possible because the basis functions b_jd(·) are non-linear functions of E_ij=𝐱_ij^T θ_j. Instead we build on the approach of <cit.>, which used Fisher von-Mises priors (and proposals) for weights in single index models. We augment the hierarchical prior structure in Section <ref> to be
θ_j ∼γ_j vonMisesFisher(κ,μ) + (1-γ_j) δ_μ_L_m,
where δ_μ_L_m is a point mass at L_m^-1/21_L_m, and κ and μ are hyperparameters representing the concentration and the mean direction of the distribution. Absent prior knowledge of relative weights, we set μ=L_m^-1/21_L_m.
We again employ the adaptive projection approach of Section <ref>. Paired with the hierarchical variable selection, the proposed model collapses to an additive index structure where possible while still allowing for non-additive interactions among indices where appropriate.
Posterior inference follows via MCMC sampling; see supplementary Appendix A.2 for complete details on the sampler. In particular, we implement the Gaussian predictive process approximation of <cit.> and <cit.> to facilitate faster computation when N is large (see supplementary Appendix A.3).
§.§ Extensions
Extending to the proposed collapsible framework to BMIMs yields several convenient models. First, it extends naturally to identifying windows of susceptibility given time-varying exposures. Suppose M exposures are measured longitudinally, and 𝐱_im=(x_im1,⋯,x_imT)^T corresponds to the m^th exposure measured at T times. Following <cit.>, we take a functional approach and represent 𝐱_im and weights θ_m using the same orthonormal basis: θ_m=Ψ_m θ̃_m and 𝐱_im=Ψ_mξ_im, where Ψ_m is a matrix whose columns define an orthogonal basis expansion, and θ̃_m and ξ_im are unknown coefficients. A least squares approximation then yields E_im=𝐱_im^T Ψ_m θ̃_m=𝐱̃_im^T θ̃_m, which takes the form of a linear index with transformed exposure vector 𝐱̃_im=Ψ_m^T 𝐱_im. The upshot is that we can apply a known linear transformation Ψ_m^T to 𝐱_i and place priors directly on θ̃_m. Estimation then follows as above, and posterior draws of θ_m—indicating times at which exposure effects are strongest—are recovered by transforming posterior draws of θ̃_m.
Second, a user can incorporate prior knowledge about the exposures into the index structure in a way that would be challenging in a fully non-parametric approach <cit.>. For example, when exposures are believed to act in the same direction (i.e., either all protective or all detrimental), one can incorporate this information to improve efficiency by adopting a non-informative Dirichlet prior on the index weights θ_m (under the slightly different constraint that 1^Tθ_m=1). When one further has knowledge about the relative magnitudes of exposure effects, for example based on toxic equivalency factors from toxicology research, one can incorporate this via informative Dirichlet priors or hard constraints on index weight orderings (see for details).
§ SIMULATIONS
We conducted a series of simulations to investigate the performance of the proposed methods. The goals of the simulations were to: (i) compare accuracy and efficiency relative to existing methods; (ii) verify that the proposed approaches yield valid inference; and (iii) show that the proposed methods can quantify evidence of non-additive interactions. We considered both standard (non-index) models as well as multiple index models, and we investigated two scenarios, corresponding to: (A) no interactions and (B) interactions between some exposures/indices.
§.§ Setup
We consider 4 scenarios. For each scenario we generated R=500 data sets of N=200 observations. Scenarios A and B are non-index scenarios: we first drew ten standardized uniformly-distributed exposures, {x_1,…,x_10}, as well as two independent continuous covariates, {z_1,z_2}, treated linearly. We then generated outcomes according to one of two exposure-response functions. In both, six of the ten exposures had non-null effects, four of which were non-linear. In Scenario A, there were no interactions. We generated outcomes as y=μ_A +0.5 z_1+ϵ, where
μ_A=2cos(2 x_1) + x_2 +4 f_t(2x_3) +sin(2x_4)+x_5^2 -x_6,
f_t(·) is the density function of the Student's t-distribution with ten degrees of freedom, and ϵ∼ N(0,σ^2). In Scenario B, we included an interaction between x_1 and x_5. We generated outcomes as above, replacing μ_A with μ_B=μ_A + cos(2 x_1) x_5^2. In the main simulations we set σ^2=1, and we additionally considered a higher-noise setting in the supplementary material where σ^2=2.
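For reference, a sketch of the Scenario A/B data-generating process in NumPy is given below. We take "standardized uniform" to mean Uniform(-√3, √3), which has mean zero and unit variance; this range is an assumption, as is giving the second covariate no effect on the outcome.

```python
import numpy as np
from scipy.stats import t as student_t

rng = np.random.default_rng(1)

def simulate_scenario(n=200, sigma2=1.0, interaction=False):
    """Generate one data set for Scenario A (interaction=False) or B (interaction=True)."""
    X = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(n, 10))   # assumed standardized uniforms
    Z = rng.normal(size=(n, 2))
    mu = (2 * np.cos(2 * X[:, 0]) + X[:, 1] + 4 * student_t.pdf(2 * X[:, 2], df=10)
          + np.sin(2 * X[:, 3]) + X[:, 4] ** 2 - X[:, 5])
    if interaction:                                           # Scenario B adds cos(2 x1) * x5^2
        mu = mu + np.cos(2 * X[:, 0]) * X[:, 4] ** 2
    y = mu + 0.5 * Z[:, 0] + rng.normal(scale=np.sqrt(sigma2), size=n)
    return y, X, Z
```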
Scenarios C and D are multiple index scenarios. We replaced the first two exposures with multipollutant indices containing four exposures each. We first generated sixteen exposures {x_1,…,x_16}. We then created two indices each consisting of four components. Those indices are E_1=3 x_1 +2 x_11 +1 x_12 +0 x_13 and E_2=x_2+x_14+x_15+x_16. The remainder of the exposures are each in their own index, E_j=x_j for j=3,…,10. Thus we generated from a 10-index model with (L_1,…,L_10)^T=(4,4,1,…,1)^T. Scenario C is the same as Scenario A except that the exposure-response function is a function of the indices E_1,…,E_10. Scenario D is the same as Scenario B with index exposures as the input and includes an interaction between the multi-exposure index E_1 and the single-component index E_5=x_5.
In each dataset we estimated the unknown exposure-response surface with several methods. In the non-index setting, we fit the proposed CKMR (with 9 spline degrees of freedom), standard BKMR (via the bkmr package; <cit.>), and two competing methods. As a gold standard when there are no interactions, we fit GAM with spike-and-slab priors (ssGAM) via the spikeslabGAM package in R <cit.>.
We also fit the non-linear interaction (NLinter) approach of <cit.>, which allows for separate selection of additive and non-additive effects via natural cubic splines with sparsity inducing priors (and we used 6 spline degrees of freedom), and the MixSelect approach of <cit.>. In the index setting, we fit the proposed CMIM and the standard BMIM. We also fit a version of CKMR & CMIM that does not use the adaptive projection approach of <ref> (that is, 𝐏 is computed once as the orthogonal projection matrix for the column space of all basis functions regardless of whether some main effects are selected out of the model).
To quantify accuracy and efficiency, we report mean squared error (MSE) and 95% credible interval width for the true exposure-response surface, h(·), averaged over observed exposure levels. To assess validity we then report bias and interval coverage. To show the proposed methods can identify evidence of non-additivity, we compute average posterior inclusion probabilities (PIPs) of main effects as well as of interactions (i.e. inclusion in the kernel). In particular, we report joint posterior probabilities that γ^ρ_j γ^ρ_k =1, indicating joint selection into the kernel and thus potential pairwise interaction. Other than these proposed interaction PIPs, only NLinter offers PIPs for interactions.
§.§ Results
We report average MSE and interval width in Table <ref> (and boxplots can be found in the supplementary material). Consider first the non-index setting.
When there was no interaction (Scenario A), BKMR performed worst, with an average MSE over 50% higher than all others, as it did not exploit the lack of interactions. All other methods yielded comparable MSE and good coverage; in particular, CKMR performed nearly as well as the GAM, which would be the optimal approach in this scenario.
When there was interaction (Scenario B), ssGAM naturally yielded very bad coverage, because it erroneously assumed additivity. By contrast, the proposed CKMR performed best, yielding the lowest MSE and nominal coverage.
Though we expected the greatest gains in the absence of interactions, BKMR was still outperformed by CKMR in Scenario B.
This is because even though there is indeed an important interaction between two exposures, the other exposures did not interact; CKMR was able to exploit this where BKMR could not.
We found similar results in the multiple index setting. When there was no interaction (Scenario C), BMIM was unable to exploit the absence of interactions, and yielded 50% higher MSE and 40% wider intervals than CMIM. When there was an interaction (Scenario D), CMIM still performed better than BMIM but the difference was smaller (BMIM had 30% higher MSE), again because CMIM exploited the fact that not all exposures had interactions.
We plot average main effect and interaction PIPs in Figure <ref>. For the componentwise setting, all methods assigned high posterior probability to the six non-null effects and low probability to the four null effects most of the time. Only CKMR and NLinter provide PIPs for interactions, and we see that the proposed approach has much higher power to detect an interaction than NLinter (92% vs 22%). In the higher noise setting we found similar results but lower power overall (41% vs 8%; see Supplementary Material). In the multiple-index models, the BMIM had lower power to detect the x_3 main effect, with PIPs of 66% and 53% with and without interaction, respectively, and had slightly elevated PIPs for null effects. By contrast the proposed CMIM correctly captured all non-null main and interaction effects with high PIPs, and yielded low PIPs for null effects.
Moreover we find
that unlike the proposed approach, the non-adaptive version failed to accurately pick up interactions, with average PIPs of 69% rather than 92% (and 24% rather than 41% in the high noise setting) in the componentwise setting, with similar results in the indexwise setting (see Figures <ref> and <ref>). This led to higher MSEs when data were generated with interactions: e.g., 0.65 vs 0.56 in scenario D.
§ CASE STUDY: HELIX
§.§ Setup
We analyze data from the Human Early Life Exposome (HELIX) project <cit.>, a large study of environmental and chemical exposures among of 30,000 mother-child pairs in six European countries. In particular, we are interested in early life (postnatal) exposures and their relationship with childhood BMI, so we analyzed data from a subcohort of children for which childhood BMI was measured, made available by The Barcelona Institute for Global Health (ISGlobal). To protect participant privacy while making data publicly available, ISGlobal used a combination of real and simulated data; details on the data anonymization process can be found in <cit.>. Data can be downloaded at github/isglobal-exposomeHubhttps://github.com/isglobal-exposomeHub/ExposomeDataChallenge2021.
We observed age- and sex-standardized BMI for N=1,301 children aged 6 to 11. For each subject we also observed p=65 postnatal exposures, which were naturally pre-grouped into 13 classes: air pollution variables, indoor air measurements, lifestyle factors, metals, meteorological variables, natural spaces, organochlorines, organophosphates, PBDEs, PFAS, phenols, phthalates, and traffic density variables. Exposures were strongly correlated within classes, with correlations as high as 0.9 among phthalates and 0.8 among organochlorines (see Figure <ref>). Cross correlations were lower between different classes, with a maximum of 0.5. Due to the high dimensionality, the strong within-class correlations, and expected similarity behaviour of exposures within a class, we used these classes as index groupings and thus fit a M=13-index model via the proposed CMIM approach in addition to the existing BMIM approach. These allowed for 13 distinct non-linear indexwise curves, in which interactions were permitted among different indices but not within indices. That is, metals and phenols may interact, but two different metals may not. We also adjusted (linearly) for several maternal covariates consisting of maternal age, pre-pregnancy BMI, pregnancy weight gain, education level, parity before pregnancy, and cohort, as well as several child-level covariates consisting of gestational age, year of birth, and number of parents native to the country.
§.§ Results
The BMIM yielded high indexwise PIPs for metals, meteorological variables and organochlorines (see Table <ref>). Estimated indexwise curves (when holding all other exposures at their medians) indicated moderate linear associations with metals and meteorological variables, and a strong non-linear association with organochlorines (see indexwise curves in Figure <ref>). Further investigation of two-way interaction plots—plotting indexwise curves while holding another index at 10th, 50th and 90th percentiles—suggests some indication of modest interaction between meteorological variables and organochlorines, but it is difficult to tell given the uncertainty.
The CMIM only yielded high PIPs for metals and organochlorines; all others had PIPs below 25%. Moreover, the CMIM approach quantifies evidence in favour of interaction, and we found no evidence of interaction (two-way interaction PIPs near 0). This allows us to focus the interpretation task on the main effects of the non-null indices.
We plot the estimated indexwise exposure-response curves from the CMIM in the top row of Figure <ref>. Metals had a moderate linear association with BMI; we note that the association appears negative whereas it looked positive in the BMIM approach, but closer investigation of the index weights indicate that the underlying component-wise associations remain in the same direction. Organochlorines had a stronger non-linear association, with dramatic reductions in mean BMI at low levels of the index, and smaller reductions at higher levels of the index.
In addition to reducing the dimension of the problem, the index modelling strategy allows us to quantify the contribution of each exposure component to the index. In Figure <ref> (bottom row), we plot the posterior distributions for the index weights θ_ml. Among metals, the strongest contributions were from cesium (posterior median -0.52; 95% credible interval [-0.73, -0.24]) and copper (-0.49; 95% CI [-0.70, -0.22]); in particular they had negative index weights, indicating positive linear associations with BMI (because the slope of the indexwise curve is also negative). Molybdenum and lead both had smaller but non-null associations in the opposite direction (0.39 95% CI [0.18, 0.61] and 0.26 95% CI [0.00, 0.49]). Among organochlorines, HCB (hexachlorobenzene) was the dominant mixture component (0.89; 95%CI [0.78, 0.94]), and DDE (dichlorodiphenyldichloroethylene) and PCB170 (0.30; 95%CI [0.18, 0.45]) also had significant contributions. All other organochlorines had small weights and their intervals covered zero.
§ DISCUSSION
The proposed CKMR approach is a scalable method for exposome health studies. The work extends the popular BKMR method to the exposomic setting by adding efficiency and interpretability. In addition, we extend the approach to multiple index models and successfully apply it to an exposomic analysis in which p=65 exposures are grouped into 13 classes.
This work is related to previous projection schemes in the spatial <cit.> and environmental literature <cit.>. <cit.> modelled linear main effects and two-way interaction effects outside of a kernel component and projected out the linear main effects to reduce "confounding" between the kernel component and the main effects. By splitting the linear from the non-linear, this improved efficiency when there were few non-linear effects. In contrast, our approach splits the non-linear from the non-additive, which improves efficiency when there is a lack of interactions. Moreover, it streamlines the interpretation task, as one need not investigate all p(p-1)/2 two-way interaction plots. Interestingly, <cit.> did not enforce identifiability, projecting out only the linear effects and not the interactions, because doing so would overly restrict the kernel component. A goal of our work is to quantify evidence of non-additive interaction, necessitating identifiability. We found that using a full projection matrix led to poor performance, exacerbated perhaps by the large design matrix 𝐁 whose ∑_j^p d_j columns span the rich space of smooth additive functions. The issue was due to multicollinearity between the additive effects and non-additive interactions, and we circumvented it via a novel adaptive projection scheme that maintained identifiability and correctly distinguished non-linear main and interaction effects.
§ CODE AND DATA AVAILABILITY
R code to replicate simulations and data analysis will be available on Github upon publication.
The data were created as part of the ISGlobal Exposome data challenge 2021, presented in this publication <cit.>. The HELIX study <cit.> represents a collaborative project across six established and ongoing longitudinal population-based birth cohort studies in six European countries (France, Greece, Lithuania, Norway, Spain, and the United Kingdom). The research leading to these results has received funding from the European Community’s Seventh Framework Programme (FP7/2007-2013) under grant agreement no 308333 — the HELIX project and the H2020-EU.3.1.2. — Preventing Disease Programme under grant agreement no 874583 (ATHLETE project). The data used for the analyses described in this manuscript were obtained from: https://figshare.com/account/home#/projects/98813Figshare (project number 98813) and https://github.com/isglobal-exposomeHub/ExposomeDataChallenge2021/Github (accessed on 09/10/2024).
§ ACKNOWLEDGEMENT
We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), DGECR-2022-00433 and RGPIN-2022-03068. This research was also supported by NIH grants ES000002,ES030990 and ES035735.
§ SUPPLEMENTARY MATERIAL FOR “COLLAPSIBLE KERNEL MACHINE REGRESSION FOR EXPOSOMIC ANALYSES”
§ TECHNICAL DETAILS
§.§ Full Hierarchical Specification
For simplicity we let 𝐁_i=[𝐁_i1^T,…,𝐁_ip^T]^T and β=[β_1^T,…,β_p^T]^T, and we rewrite ∑_j=1^p f_j(x_ij) ≈𝐁_i^T β, subject to the quadratic penalty 1/2β^T(∑_j=1^p 1/τ_j^2S^*_j)β, where S^*_j is the expanded block diagonal matrix with j^th block S_j and all other elements zero.
Also let 𝐁_θ=[𝐁_1^T,…,𝐁_N^T], where the subscript θ indicates that this may be a function of the index weights θ in the CMIM setting.
Likelihood
𝐲 | 𝐡 ∼ N(𝐁_θβ + P_θ𝐡 + 𝐙α, σ^2),   where P_θ = 𝐈 - 𝐁_θ(𝐁_θ^T𝐁_θ)^-1𝐁_θ^T ,
𝐡 ∼ N(0, ν^2σ^2 𝐊),   where 𝐊_ij = K(𝐄_i, 𝐄_j) ,
𝐲 ∼ N(𝐁_θβ + 𝐙α, σ^2[𝐈 + ν^2 P_θ𝐊P_θ]) .
Priors
β_j ∼ γ_j N(0, τ^2_j S_j^-1) + (1-γ_j) δ_0,   for j=1,⋯,p
γ_j ∼ Bernoulli(π)
π ∼ Beta(a_π, b_π)
τ_j^2 ∼ Inv-Gamma(a_τ, b_τ)
θ_j | max(γ_j, γ_j^ρ)=1 ∼ vMF(κ, μ)
ρ_j ∼ γ_j^ρ Gamma(a^ρ, b^ρ) + (1-γ_j^ρ) δ_0
γ_j^ρ ∼ γ_j Bernoulli(π^ρ) + (1-γ_j) δ_0   (hierarchical constraint)
π^ρ ∼ Beta(a_π^ρ, b_π^ρ)
ν^2 ∼ Inv-Gamma(a_*, b_*)
α ∼ N(0, 𝐈)
σ^2 ∼ Inv-Gamma(a_σ, b_σ)
Posterior
|Σ|^-1/2 exp( -1/2 [ 𝐲 - (𝐁_θβ + 𝐙α) ]^T Σ^-1 [ 𝐲 - (𝐁_θβ + 𝐙α) ] ),   where Σ = σ^2[𝐈_n + ν^2 P_θ𝐊P_θ]   ⟶ ℒ(·)
× ∏_j=1^p ( γ_j (2π)^-d_j/2 (τ_j^2)^-d_j/2 |S_j|^1/2 exp(-β_j^T S_j β_j / 2τ_j^2) + [1-γ_j] δ_0 )   ⟶ f(β)
× π^[a_π + ∑_j=1^p γ_j - 1] (1-π)^[b_π + p - ∑_j=1^p γ_j - 1]   ⟶ f(γ_j, π)
× b_τ^a_τ / Γ(a_τ) (τ_j^2)^-a_τ-1 exp(-b_τ / τ_j^2)   ⟶ f(τ_j^2)
× ∏_j=1^p exp[ {1-(1-γ_j)(1-γ_j^ρ)} κ θ_j^T μ ]   ⟶ f(θ_j)
× ∏_j=1^p ( γ_j^ρ (b^ρ)^a^ρ / Γ(a^ρ) ρ_j^a^ρ-1 exp(-b^ρ ρ_j) + [1-γ_j^ρ] δ_0 )   ⟶ f(ρ_j)
× (π^ρ)^[a^ρ_π - 1 + ∑_j=1^p γ_j γ^ρ_j] (1-π^ρ)^[b^ρ_π - 1 + ∑_j=1^p γ_j(1-γ^ρ_j)]   ⟶ f(γ_j^ρ, π^ρ)
× b_*^a_* / Γ(a_*) (ν^2)^-a_*-1 exp(-b_* / ν^2)   ⟶ f(ν^2)
× exp(-1/2 α^T α)   ⟶ f(α)
× b_σ^a_σ / Γ(a_σ) (σ^2)^-a_σ-1 exp(-b_σ / σ^2)   ⟶ f(σ^2)
§.§ MCMC Sampler
* Between models move. For j=1,…,p, there are now 3 possible states for (γ_j,γ^ρ_j): (0,0), (1,0), (1,1). Given the current state of (γ_j,γ^ρ_j), randomly propose a move to one of the other two states. [Note that we randomly select from the two available moves at any state, so the probability of selecting the m^th move type (which includes the forward and reverse move) cancels out and we can then ignore it in the acceptance ratio. ]
If the current state is (γ_j,γ^ρ_j)=(0,0), this move involves increasing model dimensions because of the index θ_j; if the proposed state is (γ_j,γ^ρ_j)=(0,0), then this move involves decreasing model dimensions, since the proposed model no longer includes θ_j. In both cases we need to use a formal MHG type move:
* Increasing dimension: If γ_j=0 and γ_j^prop=1, then propose β_j^prop from the slab prior, N(0,τ_j^2 𝐒_j^-1). If γ_j^ρ, prop=0, ρ_j^prop=0. If γ_j^ρ=0 and γ_j^ρ,prop=1, then propose ρ_j^prop from the slab prior, Gamma(a^ρ,b^ρ). And θ_j^prop=𝐮∼ vMF (κ, μ).
Then draw U∼ Unif(0,1) and accept (γ_j,β_j,γ_j^ρ,ρ_j,θ_j)=(γ_j^ρ,prop,β_j^prop,γ_j^ρ,prop,ρ_j^prop,θ_j^prop) if
log U < log( P(γ_j^ρ,prop,β_j^prop,γ_j^ρ,prop,ρ_j^prop,θ_j^prop|𝐲,…)q(γ_j,β_j,γ_j^ρ,ρ_j|γ_j^ρ,prop,β_j^prop,γ_j^ρ,prop,ρ_j^prop,θ_j^prop) /
P(γ_j,β_j,γ_j^ρ,ρ_j|𝐲,…) q(γ_j^prop,β_j^prop,γ_j^ρ,prop,ρ_j^prop,θ_j^prop|γ_j,β_j,γ_j^ρ,ρ_j)×|J| )
where |J|=1 here.
Decreasing Dimension: Or if (γ_j^prop,γ_j^ρ,prop)=(0,0), set β_j^prop=0, ρ_j=0, and 𝐮'=θ_j. Then draw U∼ Unif(0,1) and accept (γ_j,β_j,γ_j^ρ,ρ_j)=(γ_j^ρ,prop,β_j^prop,γ_j^ρ,prop,ρ_j^prop) if
log U <log( P(γ_j^ρ,prop,β_j^prop,γ_j^ρ,prop,ρ_j^prop|𝐲,…)q(γ_j,β_j,γ_j^ρ,ρ_j,θ_j|γ_j^ρ,prop,β_j^prop,γ_j^ρ,prop,ρ_j^prop) /
P(γ_j,β_j,γ_j^ρ,ρ_j,θ_j|𝐲,…) q(γ_j^prop,β_j^prop,γ_j^ρ,prop,ρ_j^prop|γ_j,β_j,γ_j^ρ,ρ_j,θ_j)×|J| )
where |J|=1 here.
Alternatively if the current state and the proposed state is one of (1,0), or (1,1), then the dimension is maintained, and we can use a standard MH step:
* If γ_j^ρ, prop=0, set ρ_j^prop=0. If γ_j^ρ=0 and γ_j^ρ,prop=1, then propose ρ_j^prop from the slab prior, Gamma(a^ρ,b^ρ). Then draw U∼ Unif(0,1) and accept (γ_j,β_j,γ_j^ρ,ρ_j)=(γ_j^ρ,prop,β_j^prop,γ_j^ρ,prop,ρ_j^prop) if
log U <log( P(γ_j^ρ,prop,β_j^prop,γ_j^ρ,prop,ρ_j^prop|γ_-j,β_-j,γ^ρ_-j,ρ_-j,𝐲,…)q(γ_j,β_j,γ_j^ρ,ρ_j|γ_j^ρ,prop,β_j^prop,γ_j^ρ,prop,ρ_j^prop) /
P(γ_j,β_j,γ_j^ρ,ρ_j|γ_-j,β_-j,γ^ρ_-j,ρ_-j,𝐲,…) q(γ_j^ρ,prop,β_j^prop,γ_j^ρ,prop,ρ_j^prop|γ_j,β_j,γ_j^ρ,ρ_j))
(Here β_j^prop=β_j, so this term can be ignored in the ratio above; we retain it in case this changes.)
* Note: when we update β_j^prop≠ 0, we cannot directly draw from the prior, since S_j has a column/row of 0s (due to the unpenalized component). So we draw the final element of β_j^prop from N(0,1).
* Within model moves:
* Index weights:
* Refinement step to improve mixing: For all j=1,…,p such that max(γ_j,γ_j^ρ)=1, propose θ_j∼ vMF(κ_prop,μ_prop), and we set μ_prop to the current value, and set κ_prop=1000 (which is quite large, but worked well in Antoniadis et al. 2004)
* Additive effects:
* Refinement step to improve mixing: draw β from full conditional
β_γ ∼N([𝐁_γ^TΣ^-1𝐁_γ+𝐕_γ]^-1𝐁_γ^TΣ^-1[𝐲-𝐙α], [𝐁_γ^TΣ^-1𝐁_γ+𝐕_γ]^-1),
where Σ =σ^2[𝐈_n+ν^2 P𝐊P],
𝐕_γ = ∑_k=1^p γ_k/τ_k^2S^*_k,
and 𝐁_γ contains columns corresponding to x_k only if γ_k=1
* Draw τ_j^2 from inverse gamma (full conditional) for j=1,…,p
τ_j^2 ∼Inv-Gamma(a_τ+γ_j d_j/2 , b_τ +γ_j 1/2β^TS^*_jβ)
(where β^TS^*_jβ= β_j^TS_jβ_j)
* Draw π from Beta (full conditional):
π∼Beta(a_π+∑_k=1^p γ_k, b_π+p-∑_k=1^p γ_k)
* Non-additive effects:
* Refinement step to improve mixing:
* For all j=1,…,p such that γ_j^ρ=1 propose ρ_j^prop∼Gamma(ρ_j^2/s^2,rate=ρ_j/s^2) where s is some jump size (i.e. sample from a Gamma centered at the current value with some user-specified sd/jump size; default s=0.1). Then draw U∼ Unif(0,1) and accept ρ_j=ρ_j^prop if log U <log( P(ρ_j^prop|γ^ρ,ρ_-j,𝐲,…)q(ρ_j|ρ_j^prop) /
P(ρ_j|γ^ρ,ρ_-j,𝐲,…) q(ρ_j^prop|ρ_j))
* Draw π^ρ from Beta (full conditional)
π^ρ∼Beta(a_π^ρ+∑_j=1^p γ_j γ_j^ρ, b_π^ρ+∑_j=1^p γ_j (1-γ_j^ρ))
* Draw ν^2 from full conditional via Metropolis-Hastings step: propose ν^2,prop∼Gamma([ν^2]^2/s^2, ν^2/s^2), i.e. a Gamma centered at the current value with default sd/jump size s=0.1. Then draw U∼ Unif(0,1) and accept ν^2=ν^2,prop if log U <log( P(ν^2,prop|𝐲,…)q(ν^2|ν^2,prop) /
P(ν^2|𝐲,…) q(ν^2,prop|ν^2))
* Draw α from Gaussian (full conditional)
α ∼N([𝐙^T Σ^-1𝐙+𝐈_d ]^-1𝐙^T Σ^-1[𝐲-𝐁β], [𝐙^T Σ^-1𝐙+𝐈_d ]^-1)
where Σ=σ^2[𝐈_n+ν^2 P𝐊P]
* Draw σ^2 from inverse gamma (full conditional)
σ^2 ∼Inv-Gamma(a_σ+n/2 , b_σ +1/2[ 𝐲- (𝐁β+𝐙α) ]^T [𝐈_n+ν^2 P𝐊P] ^-1[ 𝐲- (𝐁β+𝐙α) ])
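To make the last two refinement steps concrete, here is a hedged NumPy/SciPy sketch of the conjugate draws for α and σ^2; the function name, default hyperparameter values, and the direct matrix inversion (rather than a more careful solver) are our illustrative choices.

```python
import numpy as np
from scipy.stats import invgamma

rng = np.random.default_rng(2)

def draw_alpha_sigma2(y, B, beta, Z, K, P, nu2, sigma2, a_sigma=0.001, b_sigma=0.001):
    """Conjugate full-conditional updates for alpha and sigma^2 given everything else."""
    n, d = Z.shape
    M = np.eye(n) + nu2 * P @ K @ P                    # Sigma = sigma2 * M
    M_inv = np.linalg.inv(M)
    Sigma_inv = M_inv / sigma2
    # alpha | . ~ N(V Z' Sigma^{-1} (y - B beta), V), with V = (Z' Sigma^{-1} Z + I_d)^{-1}
    V = np.linalg.inv(Z.T @ Sigma_inv @ Z + np.eye(d))
    alpha = rng.multivariate_normal(V @ Z.T @ Sigma_inv @ (y - B @ beta), V)
    # sigma^2 | . ~ Inv-Gamma(a_sigma + n/2, b_sigma + r' M^{-1} r / 2), r = y - (B beta + Z alpha)
    r = y - (B @ beta + Z @ alpha)
    sigma2_new = invgamma.rvs(a_sigma + n / 2, scale=b_sigma + 0.5 * r @ M_inv @ r)
    return alpha, sigma2_new
```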
§.§ Gaussian Predictive Process
Inverting the N × N matrix K can be slow when N is large. Instead we follow the Gaussian predictive process approach of <cit.> and <cit.>, by projecting onto a set of N_1<N knots. Omitting the subscript k here for simplicity, the projection approach involves first approximating K by K_10^T K_11^-1K_10, where the joint kernel matrix for the user defined knots and the observed exposures is
[[ K_11 K_10; K_10^T K ]].
Then we apply the Woodbury identity to obtain the inverse matrix:
[I+τ^2 𝐏K𝐏]^-1 ≈ [I+τ^2 𝐏K_10^T K_11^-1K_10𝐏]^-1
=I-τ^2 𝐏K_10^T [K_11+τ^2 K_10𝐏K_10^T]^-1K_10𝐏
and the matrix determinant lemma:
logdet[I+τ^2 𝐏K𝐏]^-1 ≈logdet[I+τ^2 𝐏K_10^T K_11^-1K_10𝐏]^-1
=2tr log(chol[K_11] )-2tr log(chol[K_11+τ^2 K_10𝐏K_10^T])
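These identities are easy to verify numerically; the toy dimensions, kernel pieces, and projection matrix below are arbitrary and serve only as a sanity check of the approximation algebra (they are not the quantities used in the analysis).

```python
import numpy as np

rng = np.random.default_rng(3)
N, m, tau2 = 200, 30, 0.7

# Toy kernel pieces: K11 (knots x knots), K10 (knots x N), and an orthogonal projection P.
A = rng.normal(size=(m, m)); K11 = A @ A.T + m * np.eye(m)
K10 = rng.normal(size=(m, N))
Bmat = rng.normal(size=(N, 5))
P = np.eye(N) - Bmat @ np.linalg.pinv(Bmat.T @ Bmat) @ Bmat.T

M = np.eye(N) + tau2 * P @ K10.T @ np.linalg.inv(K11) @ K10 @ P

# Woodbury identity for the inverse
inner = np.linalg.inv(K11 + tau2 * K10 @ P @ K10.T)
M_inv_wb = np.eye(N) - tau2 * P @ K10.T @ inner @ K10 @ P
print(np.allclose(M_inv_wb, np.linalg.inv(M)))            # True

# Matrix determinant lemma for log det of the inverse
logdet_wb = (2 * np.log(np.diag(np.linalg.cholesky(K11))).sum()
             - 2 * np.log(np.diag(np.linalg.cholesky(K11 + tau2 * K10 @ P @ K10.T))).sum())
print(np.isclose(logdet_wb, -np.linalg.slogdet(M)[1]))     # True
```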
§.§ Sign Flipping and Polar Transformation
The main proposed approach we only constrains θ_j to have L2-norm one, following <cit.>. This can sometimes lead to sign flipping without further constraint. One option is to constrain θ_j1>0. Here we instead place uniform priors on the half unit-(hyper)-sphere. Sampling is somewhat more challenging in this case, so we transform the original weight vectors θ_j to polar coordinates as in <cit.>:
θ_j1 = sin(ϕ_j1)
θ_j2 = sin(ϕ_j2)cos(ϕ_j1)
⋮
θ_jL-1 = sin(ϕ_jL-1)∏_l=1^L-2 cos(ϕ_jl)
θ_jL = ∏_l=1^L-1 cos(ϕ_jl)
where ϕ_j1∈[0,π/2] and ϕ_jl∈[-π/2,π/2] for l=2,…,L-1. This parameterization ensures θ_j1≥ 0 in addition to the L2 constraint. Using this parameterization we can easily modify the sampler in two ways:
* Between-model moves: θ_j^prop is again drawn from the prior when increasing dimension. The prior is now uniform on the unit half-sphere, so we draw from the uniform vMF and set θ_j1^prop= |θ_j1^prop|.
* Within model-move
* Index weights: Refinement step: For all j=1,…,p such that max(γ_j,γ^ρ_j)=1, draw θ^prop_j as follows: For l=1,…,L_j-1 draw ϕ^prop_jl using scaled/shifted Beta proposals: Beta(a_ϕ,b_ϕ) with the mode set to be the previous value; i.e. b_ϕ=((1-[ϕ_cj+π/2/π])a_ϕ+2 [ϕ_cj+π/2/π]-1)/[ϕ_cj+π/2/π] for j>1 b_ϕ=((1-[ϕ_cj/π/2])a_ϕ+2 [ϕ_cj/π/2]-1)/[ϕ_cj/π/2] for j=1. The acceptance ratio must then be corrected by adjusting for the change of variable with factor | ∏_l=1^L-1cos(ϕ_jl)^L-j |.
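A short sketch of the polar-coordinate map defined at the start of this subsection is given below, with arbitrary example angles; it confirms the unit L2 norm and the sign constraint θ_j1 ≥ 0.

```python
import numpy as np

def polar_to_weights(phi):
    """Map angles (phi_1 in [0, pi/2], others in [-pi/2, pi/2]) to a unit-norm
    weight vector theta of length len(phi) + 1 with theta[0] >= 0."""
    L = len(phi) + 1
    theta = np.empty(L)
    cos_prod = 1.0
    for l in range(L - 1):
        theta[l] = np.sin(phi[l]) * cos_prod
        cos_prod *= np.cos(phi[l])
    theta[L - 1] = cos_prod
    return theta

phi = np.array([0.4, -0.8, 1.1])        # arbitrary example angles
theta = polar_to_weights(phi)
print(theta, np.sum(theta ** 2))        # unit L2 norm; theta[0] = sin(phi_1) >= 0
```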
§ ADDITIONAL SIMULATION RESULTS
§.§ Non-Adaptive Results
§ ADDITIONAL HELIX RESULTS
| Characterizing the relationships between health outcomes and a wide array of environmental exposures—or exposomic analysis—is a major priority of the current and future strategic plan of the National Institute of Environmental Health Sciences <cit.>. This marks an ongoing shift from simpler, single-pollutant analyses and low dimensional multi-pollutant mixture analyses to large scale analyses that attempt to quantify the full burden of a person's environment (their exposome) <cit.>. Among the many challenges in these environmental analyses are the facts that the functional form of exposure-reponse relationships is unknown and exposures may interact in complex ways <cit.>. A popular non-parametric approach proposed for mixtures analyses, Bayesian kernel machine regression (BKMR), addresses both issues, allowing non-linear exposure response relationships as well as high-order interactions in a parsimonious model <cit.>.
In practice, however, it is often too flexible. Coupling non-linearity with non-additivity in a kernel framework makes identifying either type of relationship challenging, particularly in small samples with small effect sizes, as are common in environmental health research. Even when there is enough power to detect these effects, interpretation can be prohibitively difficult in exposomic analyses, where there is a large number of exposures (p), requiring investigation of each of the p individual exposure-response curves and each of the p×(p-1)/2 two-way interaction plots.
In contrast to BKMR, some simpler approaches like exposome wide association studies (EWAS) have been proposed to scale to higher numbers of exposures <cit.>, but these screening approaches are restrictive, not allowing for interactions or non-linearity. Instead, recent work has extended BKMR to reduce its flexibility. Adapting a projection approach from the spatial literature,
<cit.> improved efficiency by projecting out linear effects and two-way (linear) interactions. This improved efficiency when most effects were linear but did not improve interpretability, and the nonidentifiability precluded hypothesis testing. <cit.> circumvented the kernel framework and assigned hierarchical variable selection priors to a natural cubic spline basis expansion, but computation is challenging when there are a large number of exposures. Another approach suited to exposomic analyses is the Bayesian multiple index model (BMIM), which exploits group structure in the exposures. The BMIM reduces the dimension of the inputs of BKMR by constructing linear multi-exposure indices defined as weighted sums of exposures with weights estimated from the data <cit.>.
However, when the number of exposures, or the number of indices in a multiple index model, is large, these approaches can be overly flexible and challenging to interpret.
We propose a collapsible kernel machine regression (CKMR) framework that decouples non-linear additive relationships from non-additive interactions. A novel adaptive projection approach decomposes the non-parametric surface into an additive function space modeled by basis expansions and its orthogonal complement that contains interactions.
Paired with a hierarchical variable selection prior, the model can collapse to a generalized additive model (GAM) when there is no evidence of interactions. At the same time, the model framework allows for interactions among a subset of exposures or all exposures through the BKMR framework when this level of complexity is supported by the data.
This strategy has three advantages over non-parametric approaches like BKMR. First, it improves efficiency when there is little evidence of interactions. Second, it improves interpretability by obviating the need to investigate a large number of interaction plots. Third, thanks to a novel adaptive projection approach, it allows one to test for non-additive interaction separately from overall tests of association between an exposure and the outcome.
In this paper we analyze a motivating dataset from the Human Early Life Exposome (HELIX) project <cit.>. In particular, interest lies in characterizing the complex relationships between youth BMI and 65 different environmental and chemical exposures, grouped into 13 distinct exposure classes. To that end we extend the collapsible kernel framework to multiple index models, and apply the method to the HELIX data, treating each exposure class as a separate index.
The rest of the paper proceeds as follows. In Section <ref>, we briefly review existing methods for mixtures analyses. In Section <ref> we present the novel collapsible kernel machine regression framework and collapsible index models. We then evaluate the performance of the collapsed methods in simulations in Section <ref>, and apply it to motivating data on associations between youth BMI and 13 different classes of exposures from the Human Early Life Exposome (HELIX) study in Section <ref>. We conclude with a discussion in Section <ref>. | §.§ BKMR
Let y_i be a continuous outcome of interest, 𝐱_i=(x_i1,⋯,x_ip)^T be a vector of p exposures and z_i be a vector of covariates for the i^th observation (i=1,⋯,N). The BKMR model is
y_i = h(𝐱_i)+𝐳_i^Tα +ϵ_i, ϵ_i∼ N(0,σ^2),
where h(·): ℝ^p →ℝ is an unknown and potentially non-linear function of 𝐱_i, and α is a vector of regression coefficients.
The exposure-response function h(·) is defined by a kernel function K:ℝ^p ×ℝ^p →ℝ <cit.>.
By default we use a Gaussian kernel, K(𝐱_i,𝐱_i')=exp[-∑_j=1^p ρ_j (x_ij-x_i'j)^2], where ρ_j ≥ 0 is an unknown component weight; this corresponds to a radial basis function representation of h(·).
The model can be conveniently represented as a linear mixed effects model <cit.>:
y_i|h_i ∼ N(h_i+𝐳_i^Tα,σ^2),
(h_1,⋯,h_N)^T ∼ N(0,ν^2σ^2 𝐊),
where 𝐊 is the kernel matrix with elements 𝐊_ij=K(𝐱_i,𝐱_j), and ν^2>0 is a penalty term that controls smoothness. We then base estimation and inference on the marginal likelihood for 𝐲. The model is completed by specifying priors for {ρ,γ,σ^2,ν^2}, and we adopt default priors from <cit.> throughout.
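For concreteness, the Gaussian kernel matrix and the marginal covariance σ^2(𝐈_n+ν^2𝐊) implied by the mixed-model representation above can be assembled as follows (a schematic sketch; names are illustrative).

```python
import numpy as np

def gaussian_kernel(X, rho):
    """BKMR Gaussian kernel: K_ii' = exp(-sum_j rho_j (x_ij - x_i'j)^2)."""
    rho = np.asarray(rho, dtype=float)
    diff = X[:, None, :] - X[None, :, :]          # N x N x p
    return np.exp(-np.sum(rho * diff**2, axis=-1))

def marginal_cov(X, rho, sigma2, nu2):
    """Marginal covariance of y after integrating out h: sigma^2 (I + nu^2 K)."""
    K = gaussian_kernel(X, rho)
    return sigma2 * (np.eye(X.shape[0]) + nu2 * K)
```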
This non-parametric formulation has the advantage of allowing non-linearity as well as high-order interactions without needing to explicitly include all desired interaction terms a priori. This flexibility has two drawbacks, however. First, it makes interpretation difficult because one must manually investigate interactions by, say, plotting exposure response curves while holding other exposures at different levels. Second, the model may be too flexible when there are few or no higher order interactions; in such a case BKMR is likely less efficient than a GAM, which assumes additivity among the exposure effects a priori.
§.§ MixSelect
<cit.> augmented BKMR by including linear main effects and two-way interactions in an approach called MixSelect. The MixSelect model is
y_i|h_i ∼ N(𝐱_i^Tβ+∑_j=1^p ∑_k>jλ_jkx_ijx_ik+ h_i^*+𝐳_i^Tα,σ^2).
Here h^*_i is the i^th element of [𝐈-𝐇]𝐡, where 𝐇 is the usual hat matrix, i.e. the projection matrix onto the column space of the exposures, which here includes only the linear main effects. This serves to reduce so-called confounding between the linear main effects and the non-linear 𝐡, as in the spatial literature <cit.>. Then, placing separate selection priors on the linear main and interaction coefficients {β, λ} and on the non-linear kernel components ρ allows non-linear components to be selected out of the model while still allowing linear main and interaction effects. This proved to be more accurate than BKMR in the absence of non-linearity. That being said, if there are in fact non-linear relationships, they are only captured by the kernel function, which is again overly flexible, allowing for high-order non-additive interactions and making interpretation challenging.
What's more, the 𝐇 adjustment does not maintain identifiability due to shared components in the linear two-way interactions and the kernel function. Interestingly, the authors wrote that although one could in theory use the orthogonal projection onto the column space of both linear main and interaction effects, they “noticed in [their] simulations that this would make the resulting nonparametric surface too restrictive.” The lack of identifiability makes it difficult to characterize evidence in favour of non-linearity and non-additivity.
§.§ BMIMs
In exposomic analyses, the number of exposures p is large, and interpreting the fully non-parametric BKMR can be prohibitive. Researchers often have knowledge of different classes of exposures; for example in the HELIX data (see Section <ref>), 65 exposures are grouped naturally into 13 different groups, including pollutant classes like phthalates and organochlorines as well as indoor air measurements and traffic density variables. The BMIM leverages this group structure to reduce the dimensionality of the non-parametric function in BKMR and make more interpretable inferences.
Suppose the p exposures {x_i1,⋯,x_ip} are partitioned into M, M∈{1,…,p}, mutually exclusive groups denoted 𝐱_im=(x_im1,⋯,x_imL_m)^T for m=1,…,M. The BMIM <cit.> is
y_i = h(𝐄_i)+𝐳_i^Tα +ϵ_i, ϵ_i∼ N(0,σ^2),
where 𝐄_i=(E_i1,…,E_iM)^T
is a vector of multi-exposure indices E_im=𝐱_im^T θ_m with L_m-vector of unknown index weights θ_m=(θ_m1,…,θ_mL_m)^T, where θ_ml quantifies the contribution of the l^th component to the m^th index. Note that h(·) is now an unknown M-dimensional function of an index vector, whereas in Section <ref> it was a p-dimensional function of exposures (M≤ p). The weights appear in the kernel function, K(𝐄_i,𝐄_i')=exp[-∑_m=1^M ρ_m ([𝐱_im -𝐱_i'm]^T θ_m)^2], and need to be constrained for identifiability due to the unknown ρ_j: (i) 1_L_m^Tθ_m≥ 0, where 1_L_m is the unit vector of length L_m, and (ii) θ_m^Tθ_m =1.
Instead of estimating parameters in this constrained space, we follow <cit.> and reparameterize the model in terms of unconstrained θ_m^*=ρ_m^1/2θ_m, on which one places priors directly. Despite this dimension reduction, the BMIM suffers from the same drawbacks as BKMR, inextricably linking non-linearity and non-additivity, albeit in lower dimension. | null | null | The proposed CKMR approach is a scalable method for exposome health studies. The work extends the popular BKMR method to the exposure by adding efficiency and interpretability. In addition, we extend the approach to multiple index models, and successfully applied it to an exposome analysis, in which p=65 exposures are grouped into 13 classes.
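A sketch of the BMIM kernel in the unconstrained parameterization θ^*_m=ρ_m^1/2θ_m described above, in which the component weights ρ_m are absorbed into the index weights (illustrative code, not the authors' implementation).

```python
import numpy as np

def bmim_kernel(X_groups, theta_star):
    """BMIM kernel on multi-exposure indices, using the unconstrained weights
    theta*_m = rho_m^{1/2} theta_m so that rho_m is absorbed into the index."""
    N = X_groups[0].shape[0]
    log_K = np.zeros((N, N))
    for Xm, tm in zip(X_groups, theta_star):
        Em = Xm @ np.asarray(tm, dtype=float)     # N-vector of index values
        log_K -= (Em[:, None] - Em[None, :])**2
    return np.exp(log_K)
```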
This work is related to previous projection schemes in the spatial <cit.> and environmental literature <cit.>. <cit.> modelled linear main effects and two-way interaction effects outside of a kernel component and projected out the linear main effects to reduce “confounding” between the kernel component and the main effects. By splitting the linear from the non-linear, this improved efficiency when there were few non linear effects. In contrast, our approach splits the non-linear from the non-additive, which improves efficiency when there is a lack of interactions. Moreover, it streamlines the interpretation task, as one need not investigate all p(p-1)/2 two-way interaction plots. Interestingly, <cit.> did not enforce identifiability, projecting out only the linear effects and not the interactions, because doing so would overly restrict the kernel component. A goal of our work is to quantify evidence of non-additive interaction, necessitating identifiability. We found that using a full projection matrix led to poor performance, exacerbated perhaps by the large design matrix 𝐁 whose ∑_j^p d_j columns span the rich space of smooth additive functions. We found that the issue was due to multicollinearity between the additive effects and non-additive interactions and circumvented it via a novel adaptive projection scheme that maintined identifiability and correctly distinguished non-linear main and interaction effects. | null |
http://arxiv.org/abs/2409.17960v1 | 20240926153733 | Relativistic diffusion model for hadron production in p-Pb collisions at the LHC | [
"Philipp Schulz",
"Georg Wolschin"
] | hep-ph | [
"hep-ph",
"nucl-th"
] |
[email protected]
Institute for Theoretical Physics, Heidelberg University, Philosophenweg 16, 69120 Heidelberg, Germany, EU
§ ABSTRACT
We investigate charged-hadron production in relativistic heavy-ion collisions of asymmetric systems within a nonequilibrium-statistical framework.
Calculated centrality-dependent pseudorapidity distributions for p-Pb collisions at √(s_NN)=5.02 and 8.16 TeV are compared with data from the Large Hadron Collider (LHC). Our approach combines a relativistic diffusion model with formulations based on quantum chromodynamics while utilizing numerical solutions of a Fokker–Planck equation to account for the shift
and broadening of the fragmentation sources for particle-production with respect to the stopping (net-baryon) rapidity distributions.
To represent the centrality dependence of charged-hadron production in asymmetric systems over a broad region of pseudorapidities,
the consideration and precise modelling of the fragmentation sources – along with the central gluon-gluon source – is found to be essential.
Specifically, this results in an inversion of the particle-production amplitude from backward- to forward-dominance when transitioning from central to peripheral collisions, in agreement with recent ATLAS and ALICE p-Pb data at √(s_NN)=5.02 TeV.
§ INTRODUCTION
In relativistic heavy-ion collisions involving asymmetric systems such as d-Au at the BNL Relativistic Heavy Ion Collider (RHIC) or p-Pb at the CERN Large Hadron Collider (LHC),
hadron production can be accounted for as occuring from three sources: a forward-going source arising from the leading particles and the interactions of their partons with those of the backward-going nucleus, a central-rapidity source <cit.> mainly attributable to gluon-gluon interactions, and a backward-going source caused by the excited fragment participants of the backward-going nucleus. The relative contributions of the three sources change substantially as function of centrality. In particular, the role of the fragmentation sources becomes more pronounced in peripheral collisions, where they can produce an inversion of the maximum of the particle-production amplitude from backward (Pb-going) to forward (p-going). This effect has recently been observed experimentally by the ALICE collaboration in √(s_NN)=5.02 TeV p-Pb collisions at the LHC <cit.>, and it calls for a theoretical explanation.
In relativistic collisions of symmetric systems, the role of the fragmentation sources in particle production is less pronounced, but still relevant. In Pb-Pb or Au-Au, a spatially extended fireball is formed and provides the dominant central-rapidity source for particle production. The contribution of the fragmentation sources in symmetric systems had been discussed in a phenomenological three-sources diffusion model in rapidity space <cit.>, using the proper Jacobian transformation for a mean transverse momentum to pseudorapidity (η) space. Agreement with centrality-dependent ALICE dN/dη data has been achieved. We have recently extended the model to simultaneously treat transverse-momentum and rapidity variables <cit.>.
In this three-dimensional model with cylindrical symmetry, the contribution of the fragmentation sources was found to be smaller than in the one dimensional model when compared to d^3N/(dηdp_T^2) ATLAS and ALICE data given the more stringent model constraints, but it remains non-negligible.
Whereas in particle production, fragmentation and fireball sources cannot be disentangled experimentally, there is no central-rapidity source in stopping (net-baryon transport)
<cit.>, because particles and antiparticles are produced in equal amounts in the fireball and therefore, do not contribute to net-baryon distributions. Hence, the existing experimental net-proton (proton minus antiproton) data at SPS <cit.> and RHIC <cit.> energies directly document the physical reality and significance of the fragmentation sources. This suggests that their role in hadron production cannot be neglected, although it is difficult to exactly pin down their contribution in symmetric systems.
In this work, we concentrate on a three-sources model for charged-hadron production in asymmetric systems, where it is easier to access the contribution of the fragmentation sources through the centrality-dependence of charged-hadron production than in symmetric systems. We focus on pseudorapidity distributions in p-Pb collisions at LHC energies of √(s_NN)=5.02 and 8.16 TeV, with special emphasis on the relative role of the three sources for particle production as function of centrality.
We had already performed related, but more phenomenologically-oriented investigations of d-Au at √(s_NN)=200 <cit.> in comparison with PHOBOS data <cit.>
from RHIC, and p-Pb at 5.02 TeV <cit.> in comparison with ALICE data <cit.>.
These works are nonequilibrium-statistical model calculations without explicit connection to the underlying nonperturbative quantum-chromodynamical (QCD) physics. The initial rapidity distributions at t=0 were assumed to be delta-functions or Gaussians, thus providing an analytical solution of the time-dependent problem, and the time evolution for constant diffusion coefficients and linear drift was obtained from analytical solutions of a Fokker–Planck equation (FPE) in rapidity space. The transport parameters were determined in χ^2-minimizations with respect to the available data, allowing for predictions at higher energies through extrapolations of the parameters.
As a substantial refinement, we now aim to connect the relativistic diffusion model (RDM) with elements from nonperturbative QCD. The initial states in the three sources are modeled as color-glass condensate (CGC) states, corresponding to valence-quark – soft-gluon interactions as accounted for in the fragmentation sources for stopping, and gluon-gluon interactions for the central source. The subsequent nonequilibrium-statistical time evolution of the fragmentation sources is then calculated from a linear Fokker–Planck equation as before, but now numerical methods must be used for its solution because of the more involved initial conditions. The transport parameters governing the time evolution are determined in χ^2-minimizations with respect to the available ATLAS and ALICE data of p-Pb collisions for charged-hadron production as functions of pseudorapidity and centrality dN/dη. For the central gluon source, we take the color-glass distribution since the partial thermalization is less pronounced as compared to the fragmentation sources, where – as it will turn out – sizable drift and diffusion is actually observed. With this model, we aim to disentangle the respective role of the forward- and backward fragmentation sources, and the central-rapidity gluon source as functions of centrality in the full pseudorapidity range.
The CGC states for the central-rapidity source based on k_T-factorization, and the CGC initial states for the two fragmentation sources based on hybrid factorization are reviewed in the next section. The diffusion-model approach to the subsequent time evolution of the fragmentation distribution functions in rapidity space and the numerical solution of the corresponding FPE with the CGC initial conditions is considered in Sec. III. Results for charged-hadron production in pseudorapidity space for p-Pb collisions at √(s_NN)=5.02 and 8.16 TeV are presented in Sec. IV, and compared with ATLAS and ALICE data at various centralities. The role of the fragmentation sources is emphasized and the unexpected dominance of the proton-going source in very peripheral collisions is explained. Conclusions are given in Sec. V.
§ CGC INITIAL STATES
Whereas in our earlier applications of the three-sources relativistic diffusion model to charged-hadron production in symmetric <cit.> and asymmetric <cit.> systems, simplified initial conditions such as delta-functions or Gaussians were used to provide analytical solutions of the Fokker–Planck equation, we now determine the form of the initial state from a QCD-inspired model. For the fragmentation sources, this is analogous to the stationary state that is attained at the end of the baryon-transport (stopping) process <cit.>. The central gluon source cancels out in stopping of symmetric systems, but is relevant in particle production.
We expect that most of the produced hadrons – at least, in the fragmentation sources – participate in the subsequent time-dependent thermalization process which is modelled through numerical solutions of the FPE for a constant diffusion coefficient as in our recent work on charged-hadron production in symmetric systems such as Pb-Pb <cit.>. We choose a linear dependence of the drift coefficient on the rapidity as in our earlier phenomenological results <cit.>, see the next section.
In addition to the two fragmentation sources, the model incorporates a central-rapidity source arising mostly from gluon collisions. In collisions of symmetric systems at RHIC and LHC energies, this is the dominant source for particle production from the hot fireball. For asymmetric collisions such as p-Pb, it is expected to be less pronounced, in particular, for more peripheral collisions. It is presently still a matter of discussion whether the mid-rapidity source (“QGP-droplet") exhibits collective phenomena similar to the ones in the spatially extended fireball that is formed in symmetric collisions such as Pb-Pb, but it certainly contributes to particle production.
For the asymmetric p-Pb system, we take the p-going (projectile) direction to define positive rapidities, while the Pb-going (target) direction corresponds to negative rapidities. If necessary, data are mirrored to agree with this convention. The theoretical results as calculated in the center-of-mass system are converted to the laboratory system when comparing to data, taking the centrality- and energy-independent rapidity shift of Δ y = (1/2)ln [Z_1 A_2/(Z_2 A_1)] with Δ y=0.465 for p-Pb between the two systems in the direction of the proton beam into account. Charged-hadron production from the forward- and the backward-going fragmentation sources are calculated independently, and finally – due to the linearity of the FPE – added incoherently to the contribution of the central source.
For the initial particle distributions, we use a QCD-inspired framework <cit.> that is based on gluon saturation in the scattering of participant partons which we have already employed in our previous studies on baryon stopping <cit.>, and on charged-hadron production in central Pb-Pb collisions at LHC energies <cit.>.
§.§ Central gluon-gluon source
Charged-hadron production from the central gluon-gluon source in relativistic collisions has commonly been described by using k_T-factorization. The inclusive cross-section <cit.> must then be adapted to asymmetric collisions with distinct forward and backward processes. The result
d^3N^h_gg/(dy dp_T^2) = (2/C_F) α_s(m_T^2)/m_T^2 ∫_0^p_T dk_T^2 [ N_1 φ_1(x_1,k^2_T) φ_2(x_2,|p_T-k_T|^2) + N_2 φ_2(x_2,k^2_T) φ_1(x_1,|p_T-k_T|^2) ]
is a function of rapidity y and transverse momentum p_T of the produced hadron.
The unintegrated gluon distributions φ_1,2(x,k_T^2) depend on the transverse momentum of the gluon k_T^2 and Bjorken-x, with x_1=(m_T/√(s_NN)) e^y and x_2=(m_T/√(s_NN)) e^-y, φ_1 referring to the gluons in the proton, and φ_2 to the ones in lead, A=Pb=208. The transverse mass is m_T=√(m^2+p_T^2), the color factor C_F = (N_c^2-1)/(2N_c),
such that C_F=4/3 for three colors as in our following calculations.
We initially scale the cross-section by the number of participants N_1,N_2 to account for the centrality dependence, rather than considering impact-parameter-dependent unintegrated gluon distributions. However, when comparing with data, discrepancies remain, which are likely due to the centrality-dependence of nuclear suppression. We shall later account for this effect by considering an impact-parameter dependence of the gluon saturation scale, Q_s^2(x)→ Q_s^2(x,b).
In Eq. (<ref>), the presence of a large intrinsic momentum scale Q_s for gluon saturation is taken to justify the use of a perturbative QCD expansion for non-perturbative observables such as hadronic cross sections, including pions <cit.>. The unintegrated gluon distributions adhere to the different gluon saturation scales in the proton, and in Pb, causing strong nuclear suppression at low p_T on the p-going (forward) side.
For φ(x,k^2) we use the Kharzeev-Levin-Nardi (KLN) model
<cit.>
φ_KLN(x,k^2) :=
2C_F/(3π^2 α_s(Q^2)), k^2≤ Q^2_s(x)
2C_F/(3π^2 α_s(Q^2)) · Q_s^2(x)/k^2, k^2>Q^2_s(x) ,
where k^2 defines the internal momentum transfer scale.
The gluon saturation scale is
Q_s^2(x)=A^1/3Q_0^2(x/x_0)^-λ,
in agreement with the CGC literature.
Specific parameters in our p-Pb calculations for the forward-rapidity central source are x_0=1, A=1, and λ = 0.288.
The latter value is taken from fits to deep-inelastic scattering e-p data from HERA <cit.>, where Q_0^2x_0^λ≃ 0.097 GeV^2 was determined together with λ.
These (λ,Q_0^2)-values were, within experimental uncertainties, found to be consistent with the ones needed for stopping in A-A collisions at SPS and RHIC energies <cit.>, and
charged-hadron production in Pb-Pb collisions at LHC energies <cit.>. For the backward-rapidity central source in p-Pb, we use A=208.
Since the applicability of this approach is limited to small values of x,
the unintegrated gluon distributions are usually modified according to <cit.>
φ(x,k^2) → (1-x)^4 φ(x,k^2)
in order to avoid overcounting at large values of x, thus adhering to quark counting rules. Accordingly, we have introduced this
modification into the subsequent calculations.
The running of the strong coupling is accounted for as in Ref. <cit.>. We use the parametrization
α_s(k^2) = 4π/[β ln( 4k^2/Λ^2_QCD +μ)] ,
where β = 11-(2/3) N_f = 9 with N_f=3, Λ_QCD = 0.241 GeV, and
μ regulating the strong coupling for large dipole sizes. It is determined by the condition α_s(∞) = 0.5, resulting in μ = 16.322.
A slightly different parametrization of the strong-coupling constant has been used in <cit.>.
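To make the above concrete, a minimal numerical sketch of the running coupling, the saturation scale and the KLN unintegrated gluon distribution (including the (1-x)^4 large-x correction) is given below. We evaluate α_s at the saturation scale Q_s^2(x), which is one plausible reading of α_s(Q^2) in the definition above; all parameter values are the ones quoted in the text, and the function names are ours.

```python
import numpy as np

# parameter values quoted in the text
LAMBDA_QCD = 0.241          # GeV
BETA, MU = 9.0, 16.322      # beta = 11 - (2/3) N_f with N_f = 3
LAM, X0, Q0_SQ = 0.288, 1.0, 0.097   # Q0^2 x0^lambda ~ 0.097 GeV^2
C_F = 4.0 / 3.0

def alpha_s(k2):
    """Running coupling alpha_s(k^2) = 4 pi / [beta ln(4 k^2/Lambda^2 + mu)]."""
    return 4.0 * np.pi / (BETA * np.log(4.0 * k2 / LAMBDA_QCD**2 + MU))

def Qs_sq(x, A=1.0):
    """Gluon saturation scale Q_s^2(x) = A^{1/3} Q_0^2 (x/x_0)^{-lambda}."""
    return A**(1.0 / 3.0) * Q0_SQ * (x / X0)**(-LAM)

def phi_kln(x, k2, A=1.0):
    """KLN unintegrated gluon distribution with the (1-x)^4 large-x correction;
    alpha_s evaluated at the saturation scale (an assumption, see text)."""
    qs2 = Qs_sq(x, A)
    base = 2.0 * C_F / (3.0 * np.pi**2 * alpha_s(qs2))
    return (1.0 - np.clip(x, 0.0, 1.0))**4 * np.where(
        k2 <= qs2, base, base * qs2 / np.maximum(k2, 1e-12))
```

The central-source yield of the k_T-factorization formula above then follows by numerical integration of these distributions over k_T^2 and, for dN/dy, over p_T.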
§.§ Fragmentation sources
The fragmentation sources for charged-hadron production are made of valence-quark – soft-gluon interactions in the forward direction, and vice-versa in the backward direction.
In addition, asymmetric collisions require separate calculations for both fragmentation sources by interchanging the roles of the projectile and target, and then adding the contributions incoherently.
In the backward direction, we adjust the valence-quark distribution functions f_q/A of the nucleus by scaling them with the number of participants N_part (with x_1≡ x_p),
x_A f_q/A = N_part x_p f_q/p ,
which is relevant for the correct calculation of the centrality dependence, see Sec. IV.
We obtain the initial distributions <cit.>
as in our
previous stopping calculations for symmetric systems within the CGC model <cit.>, but now consider the significant forward-backward difference.
For valence quark-gluon scattering <cit.>, we express the CGC initial condition for single-inclusive hadron production at rapidity y and transverse momentum p_T in the fragmentation sources of asymmetric proton-nucleus scattering as
d^3N^h_qg/(dy d^2 p_T) = K/(2π)^2 · 1/m^2_T ∫_x_F^1 (dz/z^2) D_h/q(z,μ_f^2) x_1 f_q/p(x_1,Q_f^2) φ(x_2,q_T^2),
where x_1 is the Bjorken-x of the valence quark in the proton, x_2 the one of a soft gluon in the nucleus A≡ Pb, and h≡π, K, p stands for the produced charged hadrons and their antiparticles.
The quark distribution function in a proton is f_q/p(x_1,Q^2_f) with the factorization scale Q_f^2, the gluon distribution function in the nucleus is φ(x_2,q_T^2). We distinguish the produced hadrons that are considered explictly in our calculations by their masses and their Feynman-x_F, which is defined as
x_F = xp_T/k_T,
where p_T is the transverse momentum of the produced hadron and k_T the transverse momentum of the parton <cit.>.
Using the methodology of <cit.>, the fraction z of quark energy carried by the produced hadrons is
z(x):= x_F/x,
where z(1) = x_F and z(x_F) = 1 are the boundary conditions.
The differentials are then related by
dx/x_F = -(x^2/x_F^2) dz = -dz/z^2 .
Since massive constituents contribute an additional portion to the transverse momentum, we define an effective transverse momentum
q_T := m_T/z = √(p_T^2 + m^2)/z.
For an effective impact parameter <b>, this expression corresponds to the minimum-bias cross section, with a fragmentation function D_h/q of quarks into hadrons, using
Q_f^2= p_T^2 for the factorization scale, and also μ_f^2 =p_T^2 for the factorization scale of the fragmentation function.
The factor K accounts for higher-order corrections and additional dynamical effects that are not considered within the hybrid framework <cit.>.
We set K=1 and include these effects into Q_0^2.
To obtain the full result for the initial distribution of the fragmentation sources in our charged-hadron production calculation, the role of projectile and target must be interchanged.
§ RELATIVISTIC DIFFUSION MODEL
Whereas the CGC distribution functions for the quark-gluon and gluon-quark fragmentation sources already provide an excellent description <cit.> of the measured stopping distributions at SPS and RHIC energies, they do not properly account for produced hadrons: As initial-state functions, they cannot consider the subsequent partial thermalization process and hence, their maxima occur at rapidity values too close to the beam rapidity when compared to final-state data. At LHC energies, this turns out to be more than two units of rapidity too large for a proper reproduction of the measured charged-hadron yields in p-Pb collisions.
In order to take time-dependent partial thermalization through drift and diffusion into account, we apply the relativistic diffusion model to the initial CGC distribution functions.
We have recently given a rigorous derivation of the relativistic diffusion model in a framework for Markovian stochastic processes <cit.>. This allows to calculate the time evolution of particle-number distribution functions from solutions of a Fokker–Planck equation with respect to transverse and longitudinal rapidity, which are then transformed to transverse-momentum and pseudorapidity space and compared to data. In our previous Ref. <cit.>, we have confined the model to central collisions of heavy symmetric systems using cylindrical symmetry.
Here, we focus on the centrality-dependence of charged-hadron production in asymmetric systems, where the role of the fragmentation sources is substantially more pronounced compared to the central source, and the centrality dependence reveals interesting effects. For a correct description of the centrality dependence, the time evolution of the fragmentation sources towards equilibrium is essential. Integrating over the transverse coordinates, one obtains a one-dimensional Fokker–Planck equation for the time evolution of the distribution function R (y,t) in rapidity space <cit.>
∂R(y,t)/∂t = -∂/∂y[J(y) R(y,t)] + D_y ∂^2R(y,t)/∂y^2 .
For a constant rapidity diffusion coefficient D and
the conjecture that the stationary solution must be equal to a Maxwell-Jüttner equilibrium distribution, the drift term J(y) can be derived as <cit.>
J(y) = -(m_T D/T) sinh(y) ,
with the transverse mass m_T=√(m^2+p_T^2), the proton mass m≡ m_p and the equilibrium temperature T.
In leading order, this assumes the form of a linear relaxation term <cit.>
J(y) = y_eq-y/τ_y ,
with the rapidity relaxation time τ_y that governs the speed of the approach to thermal equilibrium. It is reached
at y_eq, which is equal to zero for symmetric systems, or can be calculated from energy-momentum conservation for asymmetric systems <cit.>.
For constant diffusion and linear drift as in the Uhlenbeck-Ornstein case <cit.>, the FPE can be solved analytically in closed form in case of simple δ-function or gaussian initial conditions. This approach has been used to calculate and predict charged-hadron distributions in symmetric systems, but also in d-Au and p-Pb <cit.>. We have shown <cit.>
that the differences to a numerical solution with the full drift term Eq. (<ref>) are small at RHIC energies, but may become more pronounced at LHC energies. In the present work, we use the CGC distribution as initial condition together with constant diffusion and linear drift. With the more sophisticated initial conditions that are provided by the CGC stopping distribution functions, only numerical solutions are possible. A corresponding C++ code using the finite-element method has been written for this purpose.
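For illustration, a minimal explicit finite-difference evolution of the Fokker–Planck equation with constant diffusion coefficient and linear drift is sketched below. The actual solver described above uses finite elements; this sketch only illustrates the structure of the problem, and the names are ours.

```python
import numpy as np

def evolve_fpe(R0, y, D, tau_y, y_eq, t_end, dt=1.0e-4):
    """Explicit finite-difference evolution of
    dR/dt = -d/dy[ J(y) R ] + D d^2R/dy^2  with linear drift J(y) = (y_eq - y)/tau_y.
    Stability of this simple scheme requires roughly dt < dy^2 / (2 D)."""
    R = np.array(R0, dtype=float)
    dy = y[1] - y[0]                    # y is a uniform rapidity grid
    J = (y_eq - y) / tau_y
    for _ in range(int(t_end / dt)):
        drift_flux = np.gradient(J * R, dy)
        diffusion = np.gradient(np.gradient(R, dy), dy)
        R += dt * (-drift_flux + D * diffusion)
        R[0] = R[-1] = 0.0              # absorbing boundaries far from the peaks
    return R
```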
The equilibrium distribution that would be approached in the course of the system's time evolution is taken to be the Maxwell-Jüttner distribution
Ed^3N/dp^3 ∝ Eexp(-E/T)
= m_T cosh(y)exp(-m_Tcosh(y)/T) ,
with the transverse mass m_T of the produced particle.
Accordingly, we express the time-dependent longitudinal production as
dN/dy(y,t) = c∫_m^∞ m^2_T cosh(y) R(y,t) dm_T .
The underlying assumption is that the production yield can be described as the incoherent superposition of three distinct sources <cit.>
dN^ch/dy(τ_int) = N_ch^1 R_1(y,τ_int) + N_ch^2 R_2(y,τ_int) + N_ch^gg R_gg(y,τ_int),
where N^1,2_ch correspond to the produced charged hadrons in the fragmentation sources, while N^gg_ch accounts for charged-hadron production in the mid-rapidity gluon source.
The interaction or freezeout time t=τ_int is the time span between first nuclear contact until separation, it is characteristic for the incomplete thermalization, and will be determined from the comparison of the calculated nonequilibrium distribution with the available data.
In asymmetric collisions, the mean value of the equilibrium rapidity y_eq(b) differs from zero <cit.>.
When the beam rapidity y_beam is sufficiently large, it can be expressed as
y_eq(b) = (1/2) ln[ <m^(2)_T(b)>/<m^(1)_T(b)> ] ,
with the average centrality-dependent transverse mass
<m^1,2_T(b)> = √(m^2_1,2(b)+<p_T>^2),
where <p_T> is the average transverse momentum and m_1,2(b) = m_p N_part^1,2(b) the participant masses.
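The centrality-dependent equilibrium rapidity can be evaluated directly from the participant numbers; a short sketch follows, with m_p=0.938 GeV and a common ⟨p_T⟩ as an input (function name is ours).

```python
import numpy as np

M_PROTON = 0.938   # GeV

def y_equilibrium(n_part_1, n_part_2, mean_pT=0.5):
    """Equilibrium rapidity y_eq = (1/2) ln(<m_T^(2)>/<m_T^(1)>) with
    participant masses m_{1,2} = m_p * N_part^{1,2} and a common <p_T> in GeV."""
    mT1 = np.sqrt((M_PROTON * n_part_1)**2 + mean_pT**2)
    mT2 = np.sqrt((M_PROTON * n_part_2)**2 + mean_pT**2)
    return 0.5 * np.log(mT2 / mT1)
```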
With the initial conditions derived from microscopic quark-gluon interactions for the fragmentation source, as well as for the mid-rapidity source obtained from gluon-gluon interactions, we compute the corresponding rapidity distributions. Since data for charged-hadron distributions are obtained as functions of transverse momentum and pseudorapidity
η=-ln(tan(θ/2)) with the scattering angle θ, we transform <cit.> using the Jacobian transformation
d^3N(η,p_T)/dη dp_T^2 =J(η,p_T)d^3N(y,p_T)/dy dp_T^2 ,
J(η,p_T) ≡cosh(η)[1+(m/p_T)^2+sinh^2(η)]^-1/2.
For the gluonic source, we calculate the distribution as function of transverse momentum and rapidity with the above full Jacobian and subsequently perform a p_T-integration, whereas we use the transformation with an
average ⟨ p_T⟩ and an effective J(m/⟨ p_T⟩) for the fragmentation sources, such that
dN/dη = J(⟨p_T⟩) dN(y)/dy .
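A compact sketch of the Jacobian defined above and of the effective transformation used for the fragmentation sources; the pion mass and ⟨p_T⟩=0.5 GeV are the values quoted in Sec. IV, and the function names are ours.

```python
import numpy as np

def jacobian(eta, pT, m):
    """J(eta, p_T) = cosh(eta) [1 + (m/p_T)^2 + sinh^2(eta)]^{-1/2}."""
    return np.cosh(eta) / np.sqrt(1.0 + (m / pT)**2 + np.sinh(eta)**2)

def dN_deta_fragmentation(dN_dy, eta, mean_pT=0.5, m=0.140):
    """Effective transformation dN/deta = J(<p_T>) dN/dy used for the
    fragmentation sources, evaluated at y ~ eta in this effective treatment."""
    return jacobian(eta, mean_pT, m) * dN_dy
```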
A representative result for the pseudorapidity distributions solely from the fragmentation sources as function of time is shown in
Fig. <ref>: The calculated diffusion process for the distribution functions originating from both fragmentation sources is displayed in a central (0-5%) p-Pb collision at √(s_NN)=5.02 TeV. The parameters will be discussed in the next section.
Each fragmentation peak experiences a shift towards the equilibrium rapidity y_eq, which is situated close to midrapidity. Concurrently, the distributions undergo a diffusion that enlarges their widths in rapidity space. The time interval is spanning from initial nuclear contact to the freezeout or interaction time τ_int. To obtain the full distribution function that is to be compared to data, the central gluon source will have to be added as described in the following section.
§ MODEL RESULTS COMPARED TO LHC DATA
To compare our model results with the available experimental dN/dη centrality-dependent data for produced charged hadrons in p-Pb at √(s_NN)= 5.02 and 8.16 TeV from the ATLAS and ALICE collaborations, we integrate our results for the central source over p_T, add the diffusion-model result for the fragmentation sources that are obtained with an average ⟨ p_T⟩, and perform a χ^2-minimization to determine the values of the diffusion-model parameters. The CGC parameters of the initial state for the RDM time evolution are kept fixed with λ=0.288, but to properly account for the centrality dependence, we allow for an impact-parameter dependent Q_0≡ Q_0(b).
For asymmetric collisions, distinct calculations are required for forward and backward rapidities.
In case of p-Pb collisions at 5.02 TeV, the beam momenta are 4.0 TeV for the proton and 1.577 TeV per nucleon for lead.
These beam momenta correspond to beam rapidities y^p_beam = 9.051 and y^Pb_beam = 8.120, respectively.
The energy per nucleon-nucleon pair is thus √(s_NN)=5.023 TeV, which corresponds to a beam rapidity in the nucleon-nucleon frame of reference of y_beam = 8.586.
The rapidity shift between the laboratory frame and the nucleon-nucleon center-of-mass frame is Δ y = 0.465.
In accordance with our convention, the ALICE data are transformed from the laboratory frame Pb-p to p-Pb. This involves transforming the experimental data to the center of mass frame, performing a mirroring operation, and then shifting it to the other laboratory frame.
The results for √(s_NN)= 5.02 TeV are shown in Fig. <ref>, with the ATLAS data from 2016 <cit.> in eight centrality classes up to 90% (upper frame), and the ALICE data from 2015 and 2023 in seven centrality classes plus minimum bias (|η|<2 from <cit.>, |η|>2 from <cit.>, middle frame). Parameters and results of the χ^2-minimization results will be given in the next subsection.
In the lower frame, we show for comparison a corresponding centrality-dependent calculation for Pb-Pb at √(s_NN)= 5.02 TeV, where we use the same diffusion coefficient as on the Pb-going side in p-Pb, thus achieving reasonable agreement with the ALICE Pb-Pb data <cit.>.
§.§ Charged-hadron distributions in √(s_NN)=5.02 TeV p-Pb
We proceed to discuss charged-hadron distributions for p-Pb collisions at √(s_NN)=5.02 TeV in more detail, presenting the individual contributions from both fragmentation sources, and the central source including their centrality dependence. The result is shown in a comparison with the ATLAS data <cit.> for eight centrality regions up to 60-90% in Fig. <ref>.
The largest fraction of the charged-hadron yield is due to the gluon-gluon source at all centralities, but the relative contribution of the fragmentation sources – which consist of charged pions, kaons and protons in our model calculation, and their antiparticles – increases toward more peripheral collisions.
Although this calculation does not extend to the most peripheral collisions, the model results clearly show that the amplitude of the forward-going fragmentation source becomes larger than the one of the backward-going source for more peripheral collisions. The consequence of this effect is already seen in the 60-90% ATLAS data, although it is covered by the substantial asymmetry of the gluon-gluon source towards the Pb-going side. The effect will, however, more clearly be displayed when comparing to the more recent very peripheral 80-100% ALICE data.
The origin of this new effect – which is obviously not present in peripheral collisions of symmetric systems – is the strong gluon field in the Pb nucleus that the valence quarks in the forward-going proton experience.
The yields of produced charged hadrons (π, K, p) in the three sources are summarized in Tab. <ref>. The number of produced charged hadrons falls monotonically towards peripheral collisions, with central collisions yielding approximately six times more hadrons compared to the most peripheral collisions. The ratio R_p^Pb gives the particle yield produced in the Pb-going relative to the one from the p-going fragmentation source, and R^gg_qg is the proportion of charged hadrons originating from the mid-rapidity source relative to the one from both fragmentation sources. It has a scaling behavior of
R^gg_qg ∼ A^1/3 N_part/(A^1/3+N_part) .
The diffusion-model parameters and the average numbers of participants for each centrality window are listed in Tab. <ref>.
The numbers of participants N_part^Pb are obtained from the ATLAS collaboration, where they were calculated using Glauber Monte Carlo simulations <cit.>.
The respective interaction times τ_in^p,Pb on the p- and the Pb-going side are given with respect to the rapidity relaxation time τ_y, thus avoiding the determination of an absolute timescale, which is not an observable. Values of the relative time scales are t_p/τ_y = 0.6, t_Pb/τ_y = 0.6 and the diffusion coefficient is D^Pb = 12τ_y^-1. We use ⟨ p_T⟩=0.5 GeV and ⟨ m ⟩=m_π in the calculation of the Jacobian. Results from minimizations of our model results with respect to the ATLAS data at each centrality show reasonable χ^2/N_dof values.
The results of our corresponding calculations in seven slightly different centrality classes and minimum bias compared to the ALICE p-Pb data at √(s_NN)=5.02 are displayed in Fig. <ref>. The earlier data set <cit.> extends up to |η_lab|≤2, the later dataset is for |η_lab|>2 <cit.>. In particular, the most peripheral region 80-100% is included in these data, and with the new data set, a larger region in pseudorapidity space is covered as compared to the previous ATLAS data.
Regarding the ratio of produced particles in the midrapidity source relative to the one in the fragmentation sources shown in Tab. III, the results from the analysis of the ATLAS data are confirmed. Diffusion-model parameters and χ^2 results are given in Tab. IV. Values of the relative time scales are t_p/τ_y = 0.6 (except for 80-100%), t_Pb/τ_y = 0.6 and the diffusion coefficient is D^Pb = 12τ_y^-1. We use ⟨ p_T⟩=0.5 GeV and ⟨ m ⟩=m_π in the calculation of the Jacobian.
As a significant outcome of this investigation, an effect that had already shown up in the comparison with the ATLAS data is now clearly confirmed: In very peripheral collisions
(60-80% and, in particular, 80-100%), the maximum of the particle-production amplitude moves from the Pb-going (backward) to the p-going (forward) pseudorapidity region.
Although the midrapidity source still has a slight preference toward the backward region, the fragmentation source in the forward region becomes much stronger than the backward one, thus causing the observed particle-production dominance on the p-going side in the most peripheral collisions. The origin of this effect is the strong gluon field that the valence quarks in the forward-going proton experience in the target, whereas the backward-going proton(s) in peripheral collisions feel only the relatively weak gluon field in a single proton.
With R_p^Pb≃ N_part^Pb A^-1/3 the ratio of charged hadrons produced in the Pb-going relative to the p-going fragmentation source,
the p-going fragmentation source yields more hadrons than the Pb-going one for A^1/3>N_part^Pb, which is the case for peripheral collisions, see Tabs. III, IV.
For very peripheral collisions, this is not compensated for by the asymmetry of the central source anymore (dot-dashed curve in Fig. <ref>), and therefore, it becomes visible in the data. As a consequence, model calculations that do not explicitly include the effect of the fragmentation sources may have difficulties to account for the data in peripheral collisions of asymmetric systems.
The diffusion-model coefficients obtained by our comparisons with ATLAS
(Tab. II)
and ALICE (Tab. IV) data at 5.02 TeV are consistent with each other, underlining their universality within the applied model. Small deviations occur because the mean numbers of participants from the ATLAS Glauber Monte Carlo calculations for different centrality classes are slightly different from the ALICE results (e.g., 13.6 vs. 13.0 for 5-10%), but the results for the transport coefficients are almost identical.
§.§ Charged-hadron distributions in √(s_NN)=8.16 TeV p-Pb
At the highest currently available energy at the LHC, we compare our calculation for the same centrality classes as before with data for charged-hadron production in √(s_NN)=8.16 TeV p-Pb from the ALICE collaboration, which are so far confined to a smaller pseudorapidity interval η_lab<1.8 <cit.>.
The beam momenta are 6.5 TeV for the proton and 2.563 TeV per nucleon for the lead beam.
The corresponding beam rapidities are y^p_beam = 9.536 and y^Pb_beam = 8.606.
This configuration results in an energy per nucleon-nucleon pair of √(s_NN)=8.162 TeV, with a beam rapidity in the nucleon-nucleon frame of reference of y_beam = 9.071.
(The rapidity shift of Δ y= 0.465 between the laboratory frame and the nucleon-nucleon pair reference frame is independent of energy).
The centrality-dependent results are displayed in Fig. <ref> and are found in agreement with the data – which are, however, not yet available in the most interesting pseudorapidity region where the inversion of the particle-production amplitude in peripheral collisions occurs, as is evident in the calculation for the 80-100% centrality class.
The numbers of produced charged hadrons are given in Tab. V, the diffusion-model parameters and χ^2-values in Tab. VI. Values of the relative time scales are t_p/τ_y = 0.4 (except for 80-100%), t_Pb/τ_y = 0.5 and the diffusion coefficient is D^Pb = 24τ_y^-1. We use ⟨ p_T⟩=0.5 GeV and ⟨ m ⟩=m_π in the calculation of the Jacobian.
As was already evident at 5.02 TeV, the effect of the Jacobian transformation from rapidity to pseudorapidity on the midrapidity source is very small, such that the midrapidity minimum that is seen in the data at all centralities must be attributed to the smallness of the fragmentation sources at midrapidity, although the dominant overall contribution to particle production arises from the gluon-gluon source. The number of produced charged hadrons depends monotonically on centrality, with central collisions generating about 15 times more hadrons compared to the most peripheral collisions.
§.§ Centrality dependence of the initial saturation scale
The initial gluon saturation-scale momentum Q_s^2 is defined by Q_0^2 and the exponent, for which we take the literature values from fitting HERA e-p data as Q_0^2=0.09 GeV^2 and λ=0.288 <cit.>. For heavy-ion collisions, however, a possible centrality-dependence of the gluon saturation scale becomes relevant. Sometimes,
the geometry of the collision is considered through the dipole cross section, mainly involving adjustments to the thickness function <cit.>.
The dependence of Q^2_s on the mass number A can be regarded as an approximation applicable primarily to very large collision systems.
However, the behaviour of this dependence when the mass number decreases, or when only a limited number of participants are involved remains largely unexplored.
In accordance with calculations presented in <cit.>, we have adjusted
Q_0^2 depending on the number of participants N_part in our centrality-dependent model calculations, as shown in Tabs. II, IV, VI.
Using a simple two-parameter expression, Q_0^2 as determined from the data is found to rise strongly from central to peripheral p-Pb collisions at LHC energies by up to a factor of five, whereas the rise is much weaker in Pb-Pb at the same incident energy.
Fig. <ref> shows a double-logarithmic plot of the values of Q_0^2(N_part) that are required from χ^2-minimizations of the calculated centrality-dependent pseudorapidity distributions in p-Pb collisions with respect to the ATLAS and ALICE data.
§ CONCLUSIONS
We have refined the three-sources relativistic diffusion model to include color-glass condensate initial conditions in an investigation of centrality-dependent p-Pb collisions at LHC energies of √(s_NN)=5.02 and 8.16 TeV. Whereas the largest contribution to charged-hadron production at all energies and centralities arises from the midrapidity gluon-gluon source, the relative contribution of the two fragmentation sources is found to be much more significant than in Pb-Pb collisions at LHC energies. Glauber-based calculations provided by the experimental collaborations are used for the numbers of participants in the centrality-dependent p-Pb calculations, with N_part<18.
Using the k_T- and hybrid-factorization schemes for small-x gluon-gluon and valence-quark – gluon interaction, respectively, we obtain the initial conditions, which we introduce into the relativistic diffusion model to account for the partial thermalization of the fragmentation sources in the time evolution of the collision. For both asymmetric fragmentation sources, the Fokker–Planck equation is solved numerically for constant diffusion coefficients and a linear dependence of the drift on rapidity. Due to the linearity of the FPE, we can add the three sources incoherently. We perform Jacobian transformations to pseudorapidity distributions for charged hadrons produced at various centralities, and compare to recent data from the ATLAS and ALICE collaborations.
The computed pseudorapidity distributions accurately match the experimental p-Pb data from the ATLAS and ALICE collaborations at 5.02 TeV, as well as the one from ALICE at 8.16 TeV across a wide range of pseudorapidity values. The parameters of the initial-state CGC distributions are kept fixed except for an impact-parameter dependence of Q_0^2 that sets the scale for the gluon saturation momentum Q_s^2. The diffusion-model parameters are obtained in χ^2-minimizations of the calculated dN/dη distribution functions
with respect to the data, corresponding values at each energy and centrality are listed in the tables together with χ^2/N_dof.
In particular, we have presented a first comparison with the new p-Pb ALICE data at 5.02 TeV in a large pseudorapidity range extending up to η_lab=5. The results include very peripheral events up to 100% centrality, whereas ATLAS has provided results up to η_lab≃ 3 and 90% centrality. The significant role of the fragmentation sources becomes obvious in very peripheral collisions: Whereas in more central collisions, the maxima of charged-hadron production are on the Pb-going side due
to the asymmetric gluon-gluon source that peaks in the backward region, in peripheral collisions the maxima shift towards the p-going side, as was already indicated in the early 60-90% ATLAS data, and confirmed in the 80-100% ALICE results.
This observation arises naturally in our three-sources relativistic diffusion model because the p-going fragmentation source in peripheral collisions is much larger than the Pb-going source due to the strong gluon field in the Pb nucleus that interacts with the few valence quarks in the forward direction. The effect is counteracted, but not overcome by the inherent bias of the central source towards the Pb-going side. The central source is often assumed to be the only one for particle production. In relativistic hydrodynamical calculations, it is unlikely that the observed amplitude inversion in particle production for peripheral collisions can be accounted for – unless a three-fluid model is used.
At the higher energy of 8.16 TeV, our RDM-calculations exhibit similar trends, but the presently available data are still confined to the much smaller pseudorapidity region |η_lab|<1.8. As in case of the 5.02 TeV results, the dip observed
around η=0 in charged-hadron production cannot be explained solely by the Jacobian transformation.
Instead, the smallness of the fragmentation sources in the central-rapidity region plays a significant role in the suppression around η=0, as these sources essentially peak at the same pseudorapidity where the distribution of charged-hadron production reaches its two local maxima.
This observation can be exploited to ascertain both the initial saturation scale and its dependence on centrality, which we have determined at both incident energies.
The model can be used for predictions with suitably adapted transport coefficients. This may be of particular interest for the planned √(s_NN)=9.9 TeV p-O pilot run in 2025. Our approach can also be
improved in various ways should future data in a larger pseudorapidity range become available to compare with. In particular, our simple assumption of a constant diffusion coefficient and a drift that depends linearly on the rapidity is easy to upgrade since we have solved the transport equation numerically already. The partial thermalization of the central source could also explicitly be considered, but would not change significantly the general outcome of the present investigation that puts the main emphasis on the transport properties of the fragmentation sources in the diffusion model once the initial conditions are prepared in the CGC model.
We acknowledge discussions with Klaus Reygers about the ALICE p-Pb LHC data. GW is grateful to Farid Salazar for a discussion at LBNL Berkeley.
| In relativistic heavy-ion collisions involving asymmetric systems such as d-Au at the BNL Relativistic Heavy Ion Collider (RHIC) or p-Pb at the CERN Large Hadron Collider (LHC),
hadron production can be accounted for as occurring from three sources: a forward-going source arising from the leading particles and the interactions of their partons with those of the backward-going nucleus, a central-rapidity source <cit.> mainly attributable to gluon-gluon interactions, and a backward-going source caused by the excited fragment participants of the backward-going nucleus. The relative contributions of the three sources change substantially as a function of centrality. In particular, the role of the fragmentation sources becomes more pronounced in peripheral collisions, where they can produce an inversion of the maximum of the particle-production amplitude from backward (Pb-going) to forward (p-going). This effect has recently been observed experimentally by the ALICE collaboration in √(s_NN)=5.02 TeV p-Pb collisions at the LHC <cit.>, and it calls for a theoretical explanation.
In relativistic collisions of symmetric systems, the role of the fragmentation sources in particle production is less pronounced, but still relevant. In Pb-Pb or Au-Au, a spatially extended fireball is formed and provides the dominant central-rapidity source for particle production. The contribution of the fragmentation sources in symmetric systems had been discussed in a phenomenological three-sources diffusion model in rapidity space <cit.>, using the proper Jacobian transformation for a mean transverse momentum to pseudorapidity (η) space. Agreement with centrality-dependent ALICE dN/dη data has been achieved. We have recently extended the model to simultaneously treat transverse-momentum and rapidity variables <cit.>.
In this three-dimensional model with cylindrical symmetry, the contribution of the fragmentation sources was found to be smaller than in the one dimensional model when compared to d^3N/(dηdp_T^2) ATLAS and ALICE data given the more stringent model constraints, but it remains non-negligible.
Whereas in particle production, fragmentation and fireball sources cannot be disentangled experimentally, there is no central-rapidity source in stopping (net-baryon transport)
<cit.>, because particles and antiparticles are produced in equal amounts in the fireball and therefore, do not contribute to net-baryon distributions. Hence, the existing experimental net-proton (proton minus antiproton) data at SPS <cit.> and RHIC <cit.> energies directly document the physical reality and significance of the fragmentation sources. This suggests that their role in hadron production cannot be neglected, although it is difficult to exactly pin down their contribution in symmetric systems.
In this work, we concentrate on a three-sources model for charged-hadron production in asymmetric systems, where it is easier to access the contribution of the fragmentation sources through the centrality-dependence of charged-hadron production than in symmetric systems. We focus on pseudorapidity distributions in p-Pb collisions at LHC energies of √(s_NN)=5.02 and 8.16 TeV, with special emphasis on the relative role of the three sources for particle production as function of centrality.
We had already performed related, but more phenomenologically-oriented investigations of d-Au at √(s_NN)=200 <cit.> in comparison with PHOBOS data <cit.>
from RHIC, and p-Pb at 5.02 TeV <cit.> in comparison with ALICE data <cit.>.
These works are nonequilibrium-statistical model calculations without explicit connection to the underlying nonperturbative quantum-chromodynamical (QCD) physics. The initial rapidity distributions at t=0 were assumed to be delta-functions or Gaussians, thus providing an analytical solution of the time-dependent problem, and the time evolution for constant diffusion coefficients and linear drift was obtained from analytical solutions of a Fokker–Planck equation (FPE) in rapidity space. The transport parameters were determined in χ^2-minimizations with respect to the available data, allowing for predictions at higher energies through extrapolations of the parameters.
As a substantial refinement, we now aim to connect the relativistic diffusion model (RDM) with elements from nonperturbative QCD. The initial states in the three sources are modeled as color-glass condensate (CGC) states, corresponding to valence-quark – soft-gluon interactions as accounted for in the fragmentation sources for stopping, and gluon-gluon interactions for the central source. The subsequent nonequilibrium-statistical time evolution of the fragmentation sources is then calculated from a linear Fokker–Planck equation as before, but now numerical methods must be used for its solution because of the more involved initial conditions. The transport parameters governing the time evolution are determined in χ^2-minimizations with respect to the available ATLAS and ALICE data of p-Pb collisions for charged-hadron production as functions of pseudorapidity and centrality dN/dη. For the central gluon source, we take the color-glass distribution since the partial thermalization is less pronounced as compared to the fragmentation sources, where – as it will turn out – sizable drift and diffusion is actually observed. With this model, we aim to disentangle the respective role of the forward- and backward fragmentation sources, and the central-rapidity gluon source as functions of centrality in the full pseudorapidity range.
The CGC states for the central-rapidity source based on k_T-factorization, and the CGC initial states for the two fragmentation sources based on hybrid factorization are reviewed in the next section. The diffusion-model approach to the subsequent time evolution of the fragmentation distribution functions in rapidity space and the numerical solution of the corresponding FPE with the CGC initial conditions is considered in Sec. III. Results for charged-hadron production in pseudorapidity space for p-Pb collisions at √(s_NN)=5.02 and 8.16 TeV are presented in Sec. IV, and compared with ATLAS and ALICE data at various centralities. The role of the fragmentation sources is emphasized and the unexpected dominance of the proton-going source in very peripheral collisions is explained. Conclusions are given in Sec. V. | null | null | null | null | null |
http://arxiv.org/abs/2409.17295v1 | 20240925191229 | Electromagnetically Consistent Optimization Algorithms for the Global Design of RIS | [
"M. W. Shabir",
"M. Di Renzo",
"A. Zappone",
"M. Debbah"
] | cs.IT | [
"cs.IT",
"math.IT"
] |
Electromagnetically Consistent Optimization Algorithms for the Global Design of RIS
====================================================================================
§ ABSTRACT
The reconfigurable intelligent surface is an emerging technology for wireless communications. We model it as an inhomogeneous boundary of surface impedance, and consider various optimization problems that offer different tradeoffs in terms of performance and implementation complexity. The considered non-convex optimization problems are reformulated as a sequence of approximating linear quadratically constrained or semidefinite programs, which are proved to have a polynomial complexity and to converge monotonically in the objective value.
Reconfigurable intelligent surface, optimization.
§ INTRODUCTION
The reconfigurable intelligent surface (RIS) is a physical layer technology that allows information and communication providers to optimize the propagation of electromagnetic waves, hence sculpting favorable communication channels for the efficient transmission and processing of information. Several optimization algorithms for RIS-aided systems are available, but most of them rely on simple (often simplistic) communication models <cit.>. The development of electromagnetically consistent and tractable communication models for evaluating the performance and optimizing RIS-aided wireless networks, from a signal-level and system-level perspective, is, on the other hand, an important subject open to research <cit.>.
In communication engineering, in addition, current optimization criteria for RIS-aided channels are usually based on the so-called local design, i.e., a specified constraint is imposed to each reconfigurable element of the RIS <cit.>. In the electromagnetic community, however, the local design criterion is known to be sub-optimal in terms of scattered power <cit.>, and to possibly result in scattered electromagnetic waves towards directions different from the intended one <cit.>, <cit.>, <cit.>. To overcome these limitations, an RIS needs to be optimized based on the so-called global design <cit.>, <cit.>, which ensures that the total power reflected towards the direction of interest is as close as possible (ideally equal) to the total incident power. This is ensured by imposing a single reflection constraint that encompasses all the reconfigurable elements of the RIS simultaneously. Existing optimization algorithms for the global design of an RIS are, however, based on communication models that are either not electromagnetically consistent <cit.> or rely on general-purpose optimization functions with no performance guarantee in terms of computational complexity, convergence properties, and optimality of the solution <cit.>, <cit.>.
In this context, we embrace the electromagnetically consistent communication model introduced in <cit.>, which models an RIS as an inhomogeneous boundary of surface impedance. Accordingly, we formulate several optimization problems with the following distinguishing features: (i) electromagnetic consistency of the solution; (ii) specified power efficiency towards the intended direction of reflection; (iii) specified maximum power towards unwanted directions of reflection; and (iv) specified physical implementation constraints, which are all imposed by design in the problem formulation. The considered optimization problems are shown not to be convex. To tackle them efficiently, we reformulate them as a sequence of approximating linear quadratically constrained or semidefinite programs, which are proved to have a polynomial complexity and to converge monotonically in the objective value <cit.>.
The significance of the proposed optimization algorithms is that the global design solution for anomalous reflectors is known only under ideal assumptions, i.e., unitary power efficiency, no parasitic scattering, and no physical constraints on the surface impedance, whose real part needs to take both positive and negative values <cit.>. To the best of our knowledge, there exist no optimization algorithms for the design of RISs in which the power efficiency, undesired reradiations, and implementation constraints (e.g., the real part of the surface impedance shall not be negative) are specified as optimization constraints.
Notation: Matrices and column vectors are denoted by bold uppercase and lowercase fonts. |·|, (·)^*, Re(·) denote the absolute value, conjugate, and real part. (·)^H, (·)^T, tr(·) denote the Hermitian transpose, transpose, and trace operators. ‖·‖, ‖·‖_*, ‖·‖_F denote the spectral, nuclear, and Frobenius norms. ≽ denotes positive semidefinite. O(·) stands for the big-O notation. j is the imaginary unit. 1 is the all-ones vector. ∇( f( x)) denotes the gradient of f( x) with respect to x^*. ∇_n ( f( x)) is the nth entry of ∇( f( x)). f( x ) ≈ f( x̅) + 2 Re( ∇^T( f( x̅)) ( x - x̅)^*) is the first-order Taylor approximation of f( x ) at the point x̅.
§ ELECTROMAGNETIC MODEL
§.§ RIS Model
We consider the same system model as in <cit.>, which encompasses a single-antenna transmitter, a single-antenna receiver, and an RIS (a flat surface 𝒮) that is modeled as an inhomogeneous boundary of surface impedance with negligible thickness with respect to the considered wavelength. The RIS is modeled as a rectangle that lies in the xy-plane (i.e., z = 0) with its center located at the origin. Specifically, 𝒮 is defined as 𝒮 = {(x,y) : |x|≤ L_x,|y|≤ L_y}, with 2L_x and 2L_y being the lengths of 𝒮 along the x-axis and y-axis, respectively. We consider a reflecting RIS, i.e., the transmitter and receiver are located on the same side of 𝒮.
The transmitter and receiver are located in the Fraunhofer far-field region of each other and of the RIS. Thus, the incident and reflected signals are modeled as plane waves, whose angles of incidence and reflection, with respect to the normal (i.e., the z-axis) to 𝒮, are denoted by θ_i and θ_r, respectively. As in <cit.>, the incident and reflected signals propagate in the yz-plane, so that the dependence on the azimuth angle is ignored. We consider only the signal reflected by the RIS and ignore the transmitter-receiver direct link due to the presence of blocking objects. A free space propagation environment is considered.
Since the RIS is modeled as an inhomogeneous boundary of surface impedance, it is characterized by a surface impedance Z(x,y) or, equivalently, by a surface reflection coefficient Γ(x,y) for (x,y) ∈𝒮, according to the definitions in <cit.>. Because of the independence from the azimuth angle, Z(x,y) and Γ(x,y) are constant functions along the x-axis. For system optimization, we can hence consider Z(x,y) = Z(y) and Γ(x,y) = Γ(y). For ease of writing and numerical implementation when solving the optimization problems, the surface impedance and reflection coefficient are discretized with spatial sampling Δ_x and Δ_y along the x-axis and y-axis, respectively, as detailed in <cit.>. Accordingly, Z(y) and Γ(y) are represented by two column vectors 𝐳 = [z_1,z_2,…,z_N]^T and γ = [γ_1,γ_2,…,γ_N]^T, respectively. Specifically, z_n = Z(y_n) with y_n = -L_y - Δ_y/2 + nΔ_y, n=1,2,…,N, and N=2L_y/Δ_y. The same applies to γ.
The entries of 𝐳 and γ are related to one another as follows:
z_n = η_0 (1 + γ_n)/(cosθ_i - γ_ncosθ_r), γ_n = (z_ncosθ_i-η_0)/(z_ncosθ_r+η_0)
where η_0 is the free space impedance and n=1,2,…,N.
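As an illustrative sketch (not part of the original paper), the element-wise mapping above between the discretized surface impedance z and the reflection coefficient γ can be implemented directly; the function names are placeholders chosen here.

```python
import numpy as np

ETA0 = 377.0  # free-space impedance (Ohm)

def gamma_from_z(z, theta_i, theta_r, eta0=ETA0):
    """Reflection coefficient from surface impedance (element-wise)."""
    return (z * np.cos(theta_i) - eta0) / (z * np.cos(theta_r) + eta0)

def z_from_gamma(gamma, theta_i, theta_r, eta0=ETA0):
    """Surface impedance from reflection coefficient (element-wise)."""
    return eta0 * (1 + gamma) / (np.cos(theta_i) - gamma * np.cos(theta_r))
```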
§.§ Electromagnetic Consistency
The entries of 𝐳 and γ are arbitrary complex values, with the only requirement that they need to produce reflected electric and magnetic fields, given the incident electric and magnetic fields, that are electromagnetically consistent, i.e., that fulfill Maxwell's equations <cit.>. This implies that the entries of γ need to satisfy the constraint <cit.>
H_n(γ)=|f_n^''-2j κ f_n^'sinθ_r|/(κ^2|γ_n|) = 0, n=1, …, N-2
where f_n=γ_ne^jκ(sinθ_r-sinθ_i)y_n, f_n^'=(f_n+1-f_n)/Δ_y, f_n^''=(f_n+1^'-f_n^')/Δ_y, κ=2π/λ, and λ is the considered wavelength. The corresponding constraint that the entries of 𝐳 need to fulfill can be found by inserting γ_n in (<ref>) into (<ref>).
As detailed in <cit.>, the condition H_n = 0 results in the optimal solution, at the highest implementation complexity, as the surface impedance is characterized by large variations, and by positive and negative values of its real part, which make it difficult to implement the resulting RIS in practice <cit.>, <cit.>. For this reason, the constraint H_n = 0 that ensures the electromagnetic consistency of the solution is usually relaxed, by replacing it with the constraint ε_ L≤ H_n≤ε_ U, where 0 ≤ε_ L≤ε_ U are small positive constants that control the tradeoff between the optimality and electromagnetic consistency (i.e., ε_ U→ 0) of the obtained design against the implementation complexity (i.e., ε_ L > 0) of the RIS.
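For concreteness, the consistency residual can be evaluated on a discretized profile with the forward finite differences defined above; the following sketch is illustrative only and the function name is a placeholder.

```python
import numpy as np

def helmholtz_residual(gamma, y, theta_i, theta_r, lam):
    """H_n(gamma) for n = 0..N-3, using forward finite differences in y."""
    kappa = 2 * np.pi / lam
    dy = y[1] - y[0]
    f = gamma * np.exp(1j * kappa * (np.sin(theta_r) - np.sin(theta_i)) * y)
    f1 = (f[1:] - f[:-1]) / dy           # f'_n, length N-1
    f2 = (f1[1:] - f1[:-1]) / dy         # f''_n, length N-2
    return np.abs(f2 - 2j * kappa * f1[:-1] * np.sin(theta_r)) / (kappa**2 * np.abs(gamma[:-2]))
```

A profile is then kept (quasi-)consistent by requiring ε_L ≤ H_n ≤ ε_U for every n.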
§.§ Performance Metrics
As detailed in <cit.>, the performance of an RIS is completely characterized by two performance metrics.
Surface Net Power Flow – The surface net power flow is defined as the difference between the power reradiated by the whole RIS towards the intended direction of reflection and the total incident power. If the power reradiated towards the intended direction of reflection is hence equal to the total incident power, the surface net power flow is zero. An RIS for which the surface net power flow is equal to zero is defined as globally optimum. To optimize an RIS according to the global design criterion, the surface net power flow needs then to be zero. Considering typical performance versus implementation tradeoffs, the global design criterion is hence tantamount to either minimizing the surface net power flow or ensuring that it is as close as possible to zero within a specified tolerance.
Based on the considered electromagnetic model, the surface net power flow can be formulated, in terms of γ, as follows:
P_𝒮(γ) = a_XΔ _y( c_i + α _rγ^Hγ + 0.5α _ir( 1^Tγ + γ^H1))
where a_X=|E_0|^2L_x/η_0≥ 0, α_i=cosθ_i≥ 0, α_r=cosθ_r≥ 0, c_i=-2L_yα_i/Δ _y≤ 0, α_ir=α_r-α_i∈ [-1,+1], E_0 is the amplitude of the incident electric field (a plane wave).
Power Flux – The power flux characterizes the amount of power reradiated by an RIS at a specified point of observation. We focus on observation points located in the far-field region of the RIS. The power flux provides information on the angular response of the RIS, i.e., how the total incident power is reradiated towards different directions. In the far-field, the power flux is proportional to the radiation pattern of the RIS. The power flux is an essential performance indicator in wireless communications since it determines the amount of received power and hence the signal-to-interference ratio.
Based on the considered electromagnetic model, the power flux evaluated towards a generic direction of reradiation θ_k can be formulated, as a function of γ, as follows:
P_θ _k( γ) = a_kΔ _y^2χ _ik| γ^Tu_ik|^2
where the nth entry of vector u_ik is u_ik,n=e^j κ(sinθ_k-sinθ_i)y_n for n=1,2, …, N, a_k=κ^2|E_0|^2L_x^2/(8π^2η_0R_k^2), R_k is the distance from the RIS towards the direction θ_k, and χ_ik=cos^2θ_r+cos^2θ_k+2cosθ_rcosθ_k≥ 0. If the direction of observation coincides with the intended direction of reflection, i.e., θ_k = θ_r, then R_k = R_r, χ_ik = χ_ir= 4cos^2θ_r, and P_θ_k(γ) = P_θ_r(γ).
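Both metrics can be evaluated numerically as in the following sketch (illustrative only; the function and argument names are chosen here for readability and are not from the paper), which transcribes the two expressions above.

```python
import numpy as np

def surface_net_power_flow(gamma, theta_i, theta_r, Dy, Lx, Ly, E0, eta0=377.0):
    a_i, a_r = np.cos(theta_i), np.cos(theta_r)
    a_X = abs(E0)**2 * Lx / eta0
    c_i = -2 * Ly * a_i / Dy
    a_ir = a_r - a_i
    # 0.5*a_ir*(1^T gamma + gamma^H 1) = a_ir * Re(sum(gamma))
    return a_X * Dy * (c_i + a_r * np.vdot(gamma, gamma).real
                       + a_ir * np.sum(gamma).real)

def power_flux(gamma, y, theta_i, theta_k, theta_r, Dy, Lx, E0, R_k, lam, eta0=377.0):
    kappa = 2 * np.pi / lam
    u_ik = np.exp(1j * kappa * (np.sin(theta_k) - np.sin(theta_i)) * y)
    a_k = kappa**2 * abs(E0)**2 * Lx**2 / (8 * np.pi**2 * eta0 * R_k**2)
    chi_ik = np.cos(theta_r)**2 + np.cos(theta_k)**2 + 2 * np.cos(theta_r) * np.cos(theta_k)
    return a_k * Dy**2 * chi_ik * abs(gamma @ u_ik)**2
```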
Next, we formulate and solve optimization problems that aim to minimize P_𝒮(γ) in (<ref>) (Sec. III) and to maximize P_θ_r(γ) in (<ref>) (Sec. IV) subject to specified design constraints.
§.§ Mathematical Preliminaries
Given the complex-valued vector γ and the real-valued function f( γ), the gradient of f( γ) is denoted by ∇( f( γ)) = ∇ _γ^*f( γ) and its entries are the first-order derivatives of f( γ) with respect to γ^*. The first-order Taylor approximation of f( γ) at the point γ̅ is hence given by
f̅( γ) = f( γ̅) + 2 Re( ∇^T( f( γ̅)) ( γ - γ̅)^*)
If f( γ) is a convex or concave function, then f̅( γ) is a lower-bound or an upper-bound of f( γ), respectively.
For ease of writing, the following functions are introduced:
δ( | γ _n|) = δ _γ _n^*( | γ _n|) = d| γ _n|/dγ _n^* = 0.5γ _n/| γ _n|
γ^Hγ = ‖γ‖^2, ∇( γ^Hγ) = γ, δ( γ _nγ_n^*) = γ _n
§ OPTIMIZATION: SURFACE NET POWER FLOW
In <cit.>, it is shown that minimizing P_𝒮(γ) in (<ref>) results in an optimal anomalous reflector that steers the total incident power towards the intended direction of reflection with no undesired scattering towards other directions. The optimal surface impedance z is known in a closed-form expression only if the constraint in (<ref>) is strictly fulfilled, leading to a high implementation complexity <cit.>, <cit.>. To the best of our knowledge, there exists no general and efficient optimization framework that aims to minimize P_𝒮(γ) in (<ref>) by imposing specified design constraints. This is tackled in this section.
§.§ Global Design – Helmholtz Constraint
We commence by generalizing <cit.>, relaxing the constraint in (<ref>) within the specified tolerances ε_HC_L≥ 0 and ε_HC_U≥ 0. This keeps the variations of the surface impedance z along 𝒮 under control, facilitating the implementation of the resulting RIS, while ensuring a quasi-Maxwellian solution <cit.>.
The considered problem can be stated as follows:
(S-HC) γmin | P_𝒮(γ) |
s.t. ℋ_n(γ) ≤ε_HC_U, n=1, 2, …, N-2 (a)
ℋ_n(γ) ≥ε_HC_L, n=1, 2, …, N-2 (b)
The problem S-HC is not convex. To tackle it efficiently, the objective function in (<ref>) is rewritten in epigraph form <cit.>. Also, the function ℋ_n(γ) in (<ref>) is formulated explicitly in terms of the optimization variable γ, as follows:
ℋ_n(γ) = | g_n( γ _n,γ _n + 1,γ _n + 2)|/(κ ^2Δ _y^2| γ _n|) = | g_n( γ_n )|/(κ ^2Δ _y^2| γ _n|)
where β_1=1+2j κsinθ_rΔ_y, β_2=-(2+2j κsinθ_rΔ_y), and g_n( γ_n ) = g_n( γ _n,γ _n + 1,γ _n + 2) = u_n + 2γ _n + 2 + β _2u_n + 1γ _n + 1 + β _1u_nγ _n with γ_n = ( γ _n,γ _n + 1,γ _n + 2)^T.
The problem S-HC can then be rewritten as follows:
(S-HC-a) γ,tmin t s.t. -t ≤ 0 (a)
P_𝒮(γ) - t ≤ 0 (b), |g_n( γ_n )| - ε̃_HC_U| γ _n| ≤ 0 (c)
- P_𝒮(γ) - t ≤ 0 (d), -|g_n( γ_n )| + ε̃_HC_L| γ _n| ≤ 0 (e)
where ε̃_HC_L,U = ε _HC_L,Uκ ^2Δ _y^2 and n=1, 2, …, N-2.
Problem S-HC-a is still not convex due to the constraints (<ref>c)-(<ref>e). To tackle it, we embrace the iterative inner approximation framework <cit.>. Specifically, we consider convex upper bounds for the concave functions in (<ref>c)-(<ref>e), replacing them with their first-order Taylor approximation. As for the function γ^Hγ, we note that ∇( γ^Hγ) = γ and introduce the following lower bound a_XΔ _yα _rγ^Hγ≥p_𝒮( γ, γ̅) at the point γ̅:
p_𝒮( γ, γ̅) = a_XΔ _yα _r(γ̅^Hγ̅ + 2 Re( ∇ ^T( γ̅^Hγ̅)( γ - γ̅)^*) )
As for the function |g_n( γ_n )|, we note that ∇ _n( | g_n( γ_n)|) = 0.5β _1^*u_n^*g_n( γ_n)/| g_n( γ_n)| and introduce the following lower bound | g_n( γ_n)| ≥| g_n^L( γ_n)| evaluated at the point γ̅_n:
| g_n^L( γ_n)| = | g_n( γ̅_n)| + 2 Re( ∇ _n( | g_n( γ̅_n)|)( γ _n - γ̅_n)^*)
Therefore, the following reformulation for S-HC-a is obtained at the generic iteration of the inner algorithm:
(S-HC-b) γ,tmin t s.t. (<ref>a), (<ref>b)
- a_XΔ _y( c_i + 0.5α _ir( 1^Tγ + γ^H1)) - p_𝒮( γ, γ̅) - t ≤ 0 (a)
|g_n( γ_n)| - ε̃_HC_U( | γ̅_n| + 2 Re(∇_n ( | γ̅_n|)( γ _n - γ̅_n)^*) ) ≤ 0 (b)
-|g_n^ L( γ_n )| + ε̃_HC_L| γ _n| ≤ 0 (c), ‖γ - γ̅‖≤ε _TR (d)
where γ̅ is the point at which Taylor's approximation is made (the solution of the preceding iteration of the inner algorithm), γ̅_n = ( γ̅_n,γ̅_n + 1, γ̅_n + 2)^T, and ∇ _n( | γ _n|) = 0.5γ _n/| γ _n|. Eq. (<ref>d) is the trust region, which ensures that, at each iteration, the set of feasible solutions is limited to the points for which the approximation is sufficiently accurate <cit.>. The radius of the trust region is controlled via the small positive constant ε _TR, which is updated at each iteration as detailed in Sec. V.
S-HC-b is convex and is solved as detailed in Sec. III-D.
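A minimal sketch of one inner iteration of S-HC-b is given below, written with CVXPY purely for illustration (the paper's implementation uses CVX, see Sec. V); for simplicity the lower Helmholtz bound is dropped (ε_HC_L = 0), all names are placeholders, and the previous iterate gamma_bar is assumed to have nonzero entries.

```python
import numpy as np
import cvxpy as cp

def shc_b_inner(gamma_bar, y, theta_i, theta_r, Dy, Lx, Ly, E0, eta0, lam,
                eps_U, eps_TR):
    N = gamma_bar.size
    kappa = 2 * np.pi / lam
    a_i, a_r = np.cos(theta_i), np.cos(theta_r)
    a_X = abs(E0) ** 2 * Lx / eta0
    c_i = -2 * Ly * a_i / Dy
    a_ir = a_r - a_i
    beta1 = 1 + 2j * kappa * np.sin(theta_r) * Dy
    beta2 = -(2 + 2j * kappa * np.sin(theta_r) * Dy)
    u = np.exp(1j * kappa * (np.sin(theta_r) - np.sin(theta_i)) * y)

    g = cp.Variable(N, complex=True)           # gamma
    t = cp.Variable(nonneg=True)               # epigraph variable

    P_S = a_X * Dy * (c_i + a_r * cp.sum_squares(g) + a_ir * cp.real(cp.sum(g)))
    # Linear lower bound of a_X*Dy*a_r*gamma^H gamma at gamma_bar (p_S above)
    p_lin = a_X * Dy * a_r * (np.linalg.norm(gamma_bar) ** 2
                              + 2 * cp.real(np.conj(gamma_bar) @ (g - gamma_bar)))
    cons = [P_S <= t,
            -a_X * Dy * (c_i + a_ir * cp.real(cp.sum(g))) - p_lin <= t,
            cp.norm(g - gamma_bar, 2) <= eps_TR]          # trust region
    eps_u = eps_U * kappa ** 2 * Dy ** 2
    for n in range(N - 2):
        g_n = u[n + 2] * g[n + 2] + beta2 * u[n + 1] * g[n + 1] + beta1 * u[n] * g[n]
        grad_abs = 0.5 * gamma_bar[n] / abs(gamma_bar[n])  # gradient of |gamma_n|
        rhs = abs(gamma_bar[n]) + 2 * cp.real(np.conj(grad_abs) * (g[n] - gamma_bar[n]))
        cons.append(cp.abs(g_n) <= eps_u * rhs)            # relaxed Helmholtz bound
    cp.Problem(cp.Minimize(t), cons).solve()
    return g.value, t.value
```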
§.§ Global Design – Reradiation Mask
Fulfilling the constraint in (<ref>) ensures the electromagnetic consistency of the obtained solution. The optimization problem has, however, a number of constraints that is equal to the number of optimization variables N. Motivated by recent results <cit.>, <cit.>, <cit.>, <cit.>, we show that the constraint in (<ref>) can be replaced by keeping the power scattered towards specified (undesired) directions of reradiation below a given maximum level. The advantage of this formulation is that the unwanted directions of reradiation may be known a priori <cit.>, <cit.>, and their number may be much less than the number of optimization variables N. The resulting optimization constraint is referred to as reradiation mask constraint <cit.>, <cit.>.
The considered problem can be stated as follows:
(S-RM) γmin | P_𝒮(γ) |
s.t. P_θ _k( γ) ≤ε _RM, θ _k∈𝒦
where 𝒦 is the set of undesired directions of radiation and ε _RM is the maximum amount of radiated power towards them.
The problem S-RM can be tackled efficiently by rewriting the objective function in epigraph form <cit.>, and by making explicit the constraint in (<ref>) by using (<ref>), as follows:
(S-RM-a) γ,tmin t s.t. (<ref>a), (<ref>b), (<ref>a), (<ref>d)
| γ^Tu_ik| - √(ε̃_RM)≤ 0, θ _k∈𝒦 (a)
where ε̃_RM = ε _RM/a_kΔ _y^2χ _ik. S-RM-a is convex and can be solved efficiently as detailed in Sec. III-D.
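The reradiation-mask constraints can be assembled, for instance, as in the following sketch (illustrative only; variable names are placeholders), where each unwanted angle contributes one second-order-cone constraint of the form |γ^T u_ik| ≤ √(ε̃_RM).

```python
import numpy as np
import cvxpy as cp

def reradiation_mask_constraints(g, y, unwanted_angles, theta_i, theta_r,
                                 Dy, Lx, E0, R_k, lam, eps_RM, eta0=377.0):
    kappa = 2 * np.pi / lam
    cons = []
    for theta_k in unwanted_angles:
        u_ik = np.exp(1j * kappa * (np.sin(theta_k) - np.sin(theta_i)) * y)
        a_k = kappa**2 * abs(E0)**2 * Lx**2 / (8 * np.pi**2 * eta0 * R_k**2)
        chi = np.cos(theta_r)**2 + np.cos(theta_k)**2 + 2 * np.cos(theta_r) * np.cos(theta_k)
        cons.append(cp.abs(u_ik @ g) <= np.sqrt(eps_RM / (a_k * Dy**2 * chi)))
    return cons
```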
§.§ Approximated Global Design – Reactive Impedance
The optimization problems S-HC and S-RM aim to maximize the radiation efficiency of the RIS, by simultaneously maximizing and minimizing the power reradiated towards the intended and undesired directions of reradiation, respectively. S-HC and S-RM impose, however, mild or no implementation constraint to the feasible set of solutions, respectively. As illustrated in <cit.>, <cit.>, the surface impedance z obtained by solving S-HC and S-RM has a negative real part, which makes the obtained design difficult to implement <cit.>. Thus, we consider an optimization problem that imposes specified implementation constraints by design. A sought-after implementation requirement is that the real part of z is not negative and as small as possible, ideally equal to zero, in order to minimize the power losses. Since the real part of z cannot take negative values, the obtained design cannot be deemed globally optimum but only approximately globally optimum.
The considered problem can be stated as follows:
(S-RI) γmin | P_𝒮(γ) |
s.t. P_θ _k( γ) ≤ε _RM, θ _k∈𝒦(a)
Re( z_n ) ≥ 0, n=1, 2, …, N (b)
Re( z_n ) ≤ε _RI, n=1, 2, …, N (c)
where ε _RI≥ 0 is used to adjust the tradeoff between the power losses and implementation complexity. If ε _RI =0, the losses are zero but the implementation complexity is the highest.
The objective function in (<ref>) and the constraint in (<ref>a) can be tackled as in S-RM-a. The constraints in (<ref>b) and (<ref>c) can be reformulated in terms of γ from (<ref>). For each pair (z_n,γ_n), (<ref>b) and (<ref>c) are equivalent to the following:
(<ref>b): α _r| γ _n|^2 - ( α _i - α _r) Re( γ _n) - α _i≤ 0
(<ref>c): - ( 1 + ε̃_RIα _r) α _r| γ _n|^2 + α _i( 1 - ε̃_RIα _i)
+ ( α _i - α _r + 2ε̃_RIα _iα _r) Re( γ _n) ≤ 0
where ε̃_RI = ε _RI/ . -η _0. The constraint in (<ref>) is convex. The constraint in (<ref>) is concave, and it is tackled through an upper bound obtained from Taylor's approximation applied to | γ _n|^2.
By introducing the function ψ( γ _n) = α _i( 1 - ε̃_RIα _i) + ( α _i - α _r + 2ε̃_RIα _iα _r) Re( γ _n) and considering ∇_n ( γ^Hγ) = γ_n, S-RI can be reformulated as follows (for n=1,2,…,N):
(S-RI-a) γ,tmin t s.t. (<ref>a), (<ref>b), (<ref>a), (<ref>d), (<ref>a), (<ref>)
ψ( γ _n) - α̃_r( | γ̅_n|^2 + 2 Re( ∇_n ( γ̅^Hγ̅) ( γ _n - γ̅_n)^*) ) ≤ 0
where α̃_r= ( 1 + ε̃_RIα _r) α _r. The problem S-RI-a is convex and can be solved efficiently as detailed in Sec. III-D.
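A quick numerical sanity check of this reformulation (illustrative only, with arbitrarily chosen parameters) confirms that the two quadratic conditions in γ_n reproduce 0 ≤ Re(z_n) ≤ ε_RI through the impedance–reflection relation given earlier.

```python
import numpy as np

rng = np.random.default_rng(0)
eta0, theta_i, theta_r, eps_RI = 377.0, 0.0, np.pi / 3, 1e-2
a_i, a_r = np.cos(theta_i), np.cos(theta_r)
eps_t = eps_RI / eta0
for gamma_n in rng.standard_normal(1000) + 1j * rng.standard_normal(1000):
    z_n = eta0 * (1 + gamma_n) / (a_i - gamma_n * a_r)
    lhs_b = a_r * abs(gamma_n) ** 2 - (a_i - a_r) * gamma_n.real - a_i
    lhs_c = (-(1 + eps_t * a_r) * a_r * abs(gamma_n) ** 2
             + a_i * (1 - eps_t * a_i)
             + (a_i - a_r + 2 * eps_t * a_i * a_r) * gamma_n.real)
    assert (lhs_b <= 0) == (z_n.real >= 0)        # constraint (b) <=> Re(z) >= 0
    assert (lhs_c <= 0) == (z_n.real <= eps_RI)   # constraint (c) <=> Re(z) <= eps_RI
```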
§.§ Convergence and Complexity
The complete algorithm to solve S-HC-b, S-RM-a, and S-RI-a is an instance of sequential programming. The trust region is, in fact, applied in the neighborhood of γ̅, which is, by design, a feasible point at any iteration. Thus, the proposed algorithm generates a monotonically decreasing sequence of objective values that converges in the objective <cit.>. The convex problem solved at each iteration has a linear objective and quadratic constraints, which can be tackled using the interior-point method. Thus, the total arithmetic cost per iteration is 𝒪( c^1/2( cv^2 + v^3) ) <cit.>, with v and c being the numbers of optimization variables and constraints, respectively.
§ OPTIMIZATION: POWER FLUX
In electromagnetic theory, the surface net power flow is the optimality criterion usually used to design perfect anomalous reflectors <cit.>. In communication theory, the optimization problem is usually formulated by maximizing the power scattered towards the intended direction of reflection (known as the power flux), while imposing specified constraints to the power efficiency of the RIS <cit.>. The two problem formulations are naturally related to one another, but the corresponding solutions and algorithms are different. Often, the problem formulation considered in communication theory is more general, as (i) it can be extended to different objective functions, besides the reflected power, and scenarios, and (ii) it can account for the surface net power flow as an optimization constraint, resulting in specified performance versus implementation complexity tradeoffs by design <cit.>. However, the computational complexity and memory requirements of the power flux optimization problem are usually higher.
In this section, therefore, we analyze the globally optimum design of an RIS under the lenses of communication theory, by considering alternative formulations for S-RM and S-RI. S-HC is not considered for brevity and because it is de facto equivalent to S-RM from a communication perspective.
§.§ Global Design – Reradiation Mask
By considering the power scattered towards the intended direction of reflection as the objective function and the surface net power flow as an optimization constraint, a problem de facto equivalent to S-RM can be formulated as follows:
(P-RM) γmax P_θ _r( γ)
s.t. P_θ _k( γ) ≤ε _RM, θ _k∈𝒦 (a), | P_𝒮(γ) | ≤ε_SP (b)
where ε_SP≥ 0 is used to adjust the tradeoff between the amount of power scattered towards the intended direction of reflection and the implementation complexity of the RIS.
P-RM is not convex. To tackle it, we introduce Γ = γγ^H, U_ik = u_iku_ik^H. Since γ^Hγ = tr( Γ) in (<ref>), we define
P̂_𝒮( Γ,γ) = a_XΔ _y( c_i + α _r tr( Γ) + 0.5α _ir( 1^Tγ + γ^H1))
Then, P-RM can be equivalently formulated as follows:
(P-RM-a) Γ, γmax a_rΔ _y^2χ _ir tr( ΓU_ir^*)
s.t. a_kΔ _y^2χ _ik tr( ΓU_ik^*) - ε _RM≤ 0, θ _k∈𝒦(a)
|P̂_𝒮(Γ, γ) | - ε_SP≤ 0 (b), Γ = γγ^H (c)
The only non-convex constraint in P-RM-a is (<ref>c). To tackle it, we reformulate it equivalently as follows:
(<ref>c): ‖Γ‖_* - ‖Γ‖_F≤ 0 (a)
(<ref>c): [ [ Γ γ; γ^H 1 ]] ≽ 0 (b), ‖Γ‖ - ‖γ‖^2≤ 0 (c)
where (<ref>a) ensures that Γ has rank one and (<ref>b) ensures that Γ is positive semidefinite. Also, (<ref>b) and (<ref>c) ensure that ‖Γ‖ - ‖γ‖^2 =0, i.e., the only singular value of Γ is equal to ‖γ‖^2. This is because the spectral norm is monotone, i.e., Γ≽γγ^H in (<ref>b) implies ‖Γ‖≥‖γγ^H‖ = ‖γ‖^2.
The constraints in (<ref>a) and (<ref>d) are not convex, since they are given by the difference of two convex functions. We tackle them by applying the iterative inner approximation framework <cit.>. Specifically, we consider the following convex lower bounds for the functions ‖γ‖^2 and ‖Γ‖_F:
f_V( γ, γ̅) = ‖γ̅‖^2 + 2 Re( ∇ ^T( ‖γ̅‖^2)( γ - γ̅)^*)
f_F( Γ, Γ̅) = ‖Γ̅‖_F + 2 Re( ∑_n,mδ _n,m( Γ̅)( Γ _n,m - Γ̅_n,m)^*)
with ‖γ‖^2≥f_V( γ, γ̅), ‖Γ‖_F≥f_F( Γ, Γ̅), Γ _n,m is the (n,m)th entry of Γ, ∇( ‖γ‖^2) = γ, δ _n,m( Γ) = 0.5Γ _n,m/‖Γ‖_F, and Γ̅ and γ̅ are the points at which Taylor's approximation is made (the solution of the preceding iteration).
At the generic iteration of the inner algorithm, therefore, P-RM-a is tackled by solving the following problem:
(P-RM-b) Γ, γmax a_rΔ _y^2χ _ir tr( ΓU_ir^*)
s.t. (<ref>a), (<ref>b), (<ref>b) (a)
‖Γ‖_* - f_F( Γ,Γ̅) ≤ 0 (b), ‖Γ‖ - f_V( γ,γ̅) ≤ 0 (c)
‖γ - γ̅‖≤ε _TR,γ (d), ‖Γ - Γ̅‖≤ε _TR,Γ (e)
where ε _TR,γ≥ 0, ε _TR,Γ≥ 0, and (<ref>e) and (<ref>d) are trust region constraints, similar to the constraint in (<ref>d).
P-RM-b is convex and is solved as detailed in Sec. IV-C.
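The following CVXPY sketch of one iteration of P-RM-b is illustrative only (the paper's implementation uses CVX); Gamma_bar and gamma_bar denote the previous iterates, U_ir and U_iks the precomputed rank-one matrices u u^H for the intended and unwanted directions, and the availability of complex-valued nuclear/spectral-norm atoms in the modeling tool is assumed. The Schur-complement block (<ref>b) is enforced through an auxiliary Hermitian variable linked to Γ and γ by equality constraints.

```python
import numpy as np
import cvxpy as cp

def prmb_iteration(Gamma_bar, gamma_bar, U_ir, U_iks, coef_r, coef_ks,
                   aX_Dy, c_i, a_r, a_ir, eps_RM, eps_SP, eps_g, eps_G):
    N = gamma_bar.size
    Gamma = cp.Variable((N, N), hermitian=True)
    g = cp.Variable(N, complex=True)
    M = cp.Variable((N + 1, N + 1), hermitian=True)   # Schur-complement block

    # Linearized surrogates of ||gamma||^2 and ||Gamma||_F at the previous iterate
    f_V = (np.linalg.norm(gamma_bar) ** 2
           + 2 * cp.real(np.conj(gamma_bar) @ (g - gamma_bar)))
    nF = np.linalg.norm(Gamma_bar, "fro")
    f_F = nF + cp.real(cp.sum(cp.multiply(np.conj(Gamma_bar), Gamma - Gamma_bar))) / nF

    P_S = aX_Dy * (c_i + a_r * cp.real(cp.trace(Gamma)) + a_ir * cp.real(cp.sum(g)))
    cons = [M >> 0, M[:N, :N] == Gamma, M[:N, N] == g, M[N, N] == 1,
            cp.normNuc(Gamma) - f_F <= 0,        # nuclear vs. Frobenius (rank-one surrogate)
            cp.sigma_max(Gamma) - f_V <= 0,      # spectral norm vs. ||gamma||^2
            cp.abs(P_S) <= eps_SP,               # surface net power flow tolerance
            cp.norm(g - gamma_bar, 2) <= eps_g,          # trust region on gamma
            cp.norm(Gamma - Gamma_bar, "fro") <= eps_G]  # trust region on Gamma
    cons += [ck * cp.real(cp.trace(Gamma @ np.conj(Uk))) <= eps_RM
             for ck, Uk in zip(coef_ks, U_iks)]  # reradiation mask
    obj = cp.Maximize(coef_r * cp.real(cp.trace(Gamma @ np.conj(U_ir))))
    cp.Problem(obj, cons).solve()
    return Gamma.value, g.value
```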
§.§ Approximated Global Design – Reactive Impedance
By considering the power scattered towards the intended direction of reflection as the objective function and the surface net power flow as an optimization constraint, a problem de facto equivalent to S-RI can be formulated by adding the constraints in (<ref>b) and (<ref>c), or, equivalently, the constraints in (<ref>) and (<ref>), to the problem P-RM-b in (<ref>).
By setting | γ _n|^2 = Γ _n,n, we then obtain (n=1,2, …,N):
(P-RI) Γ, γmax a_rΔ _y^2χ _ir tr( ΓU_ir^*)
s.t. (<ref>a), (<ref>b), (<ref>b), (<ref>b)-(<ref>e)
α _rΓ _n,n - ( α _i - α _r) Re( γ _n) - α _i≤ 0
- ( 1 + ε̃_RIα _r)α _rΓ _n,n + α _i( 1 - ε̃_RIα _i)
+ ( α _i - α _r + 2ε̃_RIα _iα _r) Re( γ _n) ≤ 0
P-RI is convex and is solved as detailed in Sec. IV-C.
§.§ Convergence and Complexity
Problems P-RM-b and P-RI fulfill the same convergence properties as the problems analyzed in Sec. III-C. The only difference is that the convex problem solved at each iteration is a semidefinite program <cit.>, which can be tackled by using the interior-point method. By utilizing the same notation as that in Sec. III-C, the total arithmetic cost per iteration is 𝒪( c^1/2( c^3v + c^2v^2 + v^3) ) <cit.>.
§ NUMERICAL RESULTS
In this section, we focus on the problems S-RI-a and P-RI, as they encompass all the others, are the most challenging to solve, and analytical solutions are not known. The algorithms are implemented in CVX. The simulation parameters are f = 28 GHz, θ_i = 0^∘, θ_r = 60^∘, R_r=R_k = 100 m, η_0 = 377 Ω, E_0 = 1 Watt/m^2, L_x = 0.5 m, L_y = 4.9652 λ m, Δ_y = λ/6.0420, N=60. Also, ε_RM = 2 · 10^-8, ε_RI = 10^-2, ε_SP = 10^-9, and the reradiation mask comprises the sets of angles [-2^∘, 2^∘] and [-62^∘, -58^∘] with angular resolution 0.1^∘.
As for the trust region, the radii in (<ref>d), (<ref>d), (<ref>e) are set to large values at the first iteration and are progressively reduced at each iteration by a factor of 1.2 for S-RI-a and by a factor of 1.1 for P-RI. If the objective does not decrease due to numerical inaccuracies, the following approach is used: the algorithm steps back to the solution and setup attained three iterations earlier, while the radii of the trust region are reduced by a factor of 1.2 for S-RI-a and by a factor of 1.1 for P-RI. At convergence, we attained ε_TR = 8.79 · 10^-13, ε _TR,γ = 8.53 · 10^-6, ε _TR,Γ= 1.02 · 10^-4.
To ease the convergence of P-RI, (<ref>b) and (<ref>c) are rewritten as ‖Γ‖_* - f_F( Γ,Γ̅) ≤ε _rk1-b and ‖Γ‖ - f_V( γ,γ̅) ≤ε _rk1-c, with ε _rk1-b≥ 0 and ε _rk1-c≥ 0 small positive values. The impact is minor, as it only implies that the difference between the single singular value of Γ and ‖γ‖^2 is less than ε _rk1-c≥ 0. Again, ε _rk1-b and ε _rk1-c are set to large values and are progressively reduced by a factor of 5 at each iteration, attaining ε _rk1-b = 1.25 · 10^-9, ε _rk1-c = 1.27 · 10^-7 at convergence.
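The outer sequential loop with the trust-region schedule just described can be sketched as follows (illustrative only; solve_inner stands for one convex inner iteration such as S-RI-a, and the default values are placeholders).

```python
def sequential_loop(x0, solve_inner, eps_tr0, shrink=1.2, back=3, max_iter=200):
    """Outer loop: accept monotone decreases, shrink the trust region,
    and step back a few iterations when the objective stalls numerically."""
    history = [(x0, eps_tr0, float("inf"))]
    x_bar, eps_tr, best = x0, eps_tr0, float("inf")
    for _ in range(max_iter):
        x_new, obj = solve_inner(x_bar, eps_tr)
        if obj < best:                               # monotone decrease: accept the step
            x_bar, best = x_new, obj
            history.append((x_bar, eps_tr, best))
        else:                                        # numerical stall: step back `back` iterations
            x_bar, eps_tr, best = history[max(len(history) - 1 - back, 0)]
        eps_tr /= shrink                             # progressively tighten the radius
    return x_bar, best
```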
As performance metric, we consider the reradiation pattern of the RIS, i.e., we plot the power flux P_θ(γ) in (<ref>) as a function of the angle of observation θ, with γ obtained from the proposed optimization algorithms. Also, three benchmark schemes are considered: (i) GO is the geometric optics solution in <cit.>, with non-negative values of the real part of the surface impedance in (<ref>); (ii) GD is the global design solution in <cit.>, with positive and negative values of the real part of the surface impedance in (<ref>); and (iii) GO-RI is the GO solution in <cit.>, obtained by setting the real part of the surface impedance equal to zero. The initial values of the Taylor series approximations in S-RI-a and P-RI are set to GO-RI to ensure the feasibility of the initial point.
The results are illustrated in Fig. <ref>. According to theory <cit.>, GO and GD provide reradiation patterns with no beams towards undesired directions of radiation, with GD offering the best beamforming gain at the highest implementation complexity (local amplifications due to the negative values of Re(z) in (<ref>)). It is worth noting that the difference in beamforming gain (at θ_r = 60^∘) between GO and GD is 3 dB. GO-RI, which is often utilized as a simple solution <cit.>, offers a good beamforming gain towards the direction of interest, but strong beams towards two undesired directions <cit.>. The proposed algorithms ensure a good, close to optimal, beamforming gain, no beams towards unwanted directions, and a real part of the surface impedance almost equal to zero, i.e., positive and less than ε_RI. The problem P-RI usually needs higher memory requirements due to the SDP formulation (in matrix form), but it is more stable than the problem S-RI-a from the numerical point of view. Therefore, the proposed algorithms provide an efficient approach for optimization, ensuring close to optimal performance while fulfilling specified design constraints.
§ CONCLUSION
We have introduced a suite of algorithms for optimizing reconfigurable anomalous reflectors based on the global design criterion. Several extensions of this work can be envisioned, including multi-beam anomalous reflecting and refracting surfaces, multiple antenna systems, near-field RIS-aided channels.
IEEEtran
| The reconfigurable intelligent surface (RIS) is a physical layer technology that allows information and communication providers to optimize the propagation of electromagnetic waves, hence sculpting favorable communication channels for the efficient transmission and processing of information. Several optimization algorithms for RIS-aided systems are available, but most of them rely on simple (often simplistic) communication models <cit.>. The development of electromagnetically consistent and tractable communication models for evaluating the performance and optimizing RIS-aided wireless networks, from a signal-level and system-level perspective, is, on the other hand, an important subject open to research <cit.>.
In communication engineering, in addition, current optimization criteria for RIS-aided channels are usually based on the so-called local design, i.e., a specified constraint is imposed to each reconfigurable element of the RIS <cit.>. In the electromagnetic community, however, the local design criterion is known to be sub-optimal in terms of scattered power <cit.>, and to possibly result in scattered electromagnetic waves towards directions different from the intended one <cit.>, <cit.>, <cit.>. To overcome these limitations, an RIS needs to be optimized based on the so-called global design <cit.>, <cit.>, which ensures that the total power reflected towards the direction of interest is as close as possible (ideally equal) to the total incident power. This is ensured by imposing a single reflection constraint that encompasses all the reconfigurable elements of the RIS simultaneously. Existing optimization algorithms for the global design of an RIS are, however, based on communication models that are either not electromagnetically consistent <cit.> or rely on general-purpose optimization functions with no performance guarantee in terms of computational complexity, convergence properties, and optimality of the solution <cit.>, <cit.>.
In this context, we embrace the electromagnetically consistent communication model introduced in <cit.>, which models an RIS as an inhomogeneous boundary of surface impedance. Accordingly, we formulate several optimization problems with the following distinguishing features: (i) electromagnetic consistency of the solution; (ii) specified power efficiency towards the intended direction of reflection; (iii) specified maximum power towards unwanted directions of reflection; and (iv) specified physical implementation constraints, which are all imposed by design in the problem formulation. The considered optimization problems are shown not to be convex. To tackle them efficiently, we reformulate them as a sequence of approximating linear quadratically constrained or semidefinite programs, which are proved to have a polynomial complexity and to converge monotonically in the objective value <cit.>.
The significance of the proposed optimization algorithms is that the global design solution for anomalous reflectors is known only under ideal assumptions, i.e., unitary power efficiency, no parasitic scattering, and no physical constraints on the surface impedance, whose real part needs to take both positive and negative values <cit.>. To the best of our knowledge, there exist no optimization algorithms for the design of RISs in which the power efficiency, undesired reradiations, and implementation constraints (e.g., the real part of the surface impedance shall not be negative) are specified as optimization constraints.
Notation: Matrices and column vectors are denoted by bold uppercase and lowercase fonts. |·|, (·)^*, Re(·) denote the absolute value, conjugate, and real part. (·)^H, (·)^T, tr(·) denote the Hermitian transpose, transpose, and trace operators. ‖·‖, ‖·‖_*, ‖·‖_F denote the spectral, nuclear, and Frobenius norms. ≽ denotes positive semidefinite. O(·) stands for the big-O notation. j is the imaginary unit. 1 is the all-ones vector. ∇( f( x)) denotes the gradient of f( x) with respect to x^*. ∇_n ( f( x)) is the nth entry of ∇( f( x)). f( x ) ≈ f( x̅) + 2 Re( ∇^T( f( x̅)) ( x - x̅)^*) is the first-order Taylor approximation of f( x ) at the point x̅. | null | null | null | null | We have introduced a suite of algorithms for optimizing reconfigurable anomalous reflectors based on the global design criterion. Several extensions of this work can be envisioned, including multi-beam anomalous reflecting and refracting surfaces, multiple antenna systems, near-field RIS-aided channels.
IEEEtran |
http://arxiv.org/abs/2409.17827v1 | 20240926132646 | BeanCounter: A low-toxicity, large-scale, and open dataset of business-oriented text | [
"Siyan Wang",
"Bradford Levy"
] | cs.CL | [
"cs.CL"
] |
BeanCounter: A low-toxicity, large-scale, and open dataset of business-oriented text
Siyan Wang, Bradford Levy
=====================================================================================
§ ABSTRACT
Many of the recent breakthroughs in language modeling have resulted from scaling effectively the same model architecture to larger datasets. In this vein, recent work has highlighted performance gains from increasing training dataset size and quality, suggesting a need for novel sources of large-scale datasets. In this work, we introduce BeanCounter, a public dataset consisting of more than tokens extracted from businesses' disclosures. We show that this data is indeed novel: less than 0.1% of BeanCounter appears in Common Crawl-based datasets and it is an order of magnitude larger than datasets relying on similar sources. Given the data's provenance, we hypothesize that BeanCounter is comparatively more factual and less toxic than web-based datasets. Exploring this hypothesis, we find that many demographic identities occur with similar prevalence in BeanCounter but with significantly less toxic context relative to other datasets. To demonstrate the utility of BeanCounter, we evaluate and compare two LLMs continually pre-trained on BeanCounter with their base models. We find an 18-33% reduction in toxic generation and improved performance within the finance domain for the continually pretrained models. Collectively, our work suggests that BeanCounter is a novel source of low-toxicity and high-quality domain-specific data with sufficient scale to train multi-billion parameter LLMs.
§ INTRODUCTION
A key ingredient to the recent breakthroughs in language modeling has been the availability of large-scale datasets. Along these lines, recent work has shown that training incrementally larger language models demands a similar increase in training data <cit.>, and that data quality is a significant determinant of a model's ultimate performance <cit.>. However, the creation and scaling of text datasets sourced from the public domain raises a variety of concerns such as inclusion of personally identifiable information <cit.>, degrading quality <cit.>, and biases or false information <cit.>.
In this work, we contribute to overcoming these challenges of scaling text datasets. Specifically, we introduce BeanCounter, a dataset comprised of more than tokens extracted from public domain business-oriented disclosures. The introduction of BeanCounter makes four primary contributions:
1. Novel large-scale dataset of domain specific content. BeanCounter consists of content produced by businesses to communicate with a variety of stakeholders such as investors and regulators. This content is not easily accessible via web scraping and we find that little of BeanCounter is included in other commonly used datasets. For example, less than 0.1% of BeanCounter is in C4 <cit.>. In considering the scale of BeanCounter, we define “large-scale” as sufficient size to pre-train a multi-billion parameter model <cit.>, and of similar order of magnitude as other web-based datasets (i.e., 100B+ tokens) <cit.>. Along these lines, BeanCounter consists of more than tokens of cleaned text and more than tokens of deduplicated text (see Table <ref>). Thus, to the best of our knowledge, BeanCounter is the largest and most comprehensive public dataset of business-oriented text.
2. Timely, factual, and high-quality content. Considering timeliness, <cit.> show that incorporating the concept of time into LLMs can enhance their ability to recall time-dependent facts, e.g., the current president of the United States. In contrast to web-based datasets which usually do not have a precise timestamp of when the web page was created or updated, every observation in BeanCounter has a timestamp–accurate to the second–of when the content became available. Considering factuality and quality, there are at least two reasons BeanCounter is comparatively more factual and of higher-quality than web-based datasets. First, the principal executive and financial officers, e.g., the CEO and CFO, must certify the disclosures from which BeanCounter is sourced <cit.>. Second, these disclosures are the primary channel used by businesses to communicate with stakeholders. Thus, the businesses are incentivized to produce effective communications. In this sense, BeanCounter supports future research on the role of time in LLMs, factuality, and quality.
3. Toxicity and demographic identity analysis. As access to LLMs has proliferated, a common concern is the extent to which LLMs produce toxic or otherwise harmful content. Along these lines, a large literature has examined the toxicity of various web-based datasets. In this vein, we examine the prevalence of a variety of demographic identity descriptors and the toxicity of text in which they are mentioned <cit.>. We contrast our findings to text from C4 <cit.> and find that many descriptors are similarly prevalent in BeanCounter but the context in which they are mentioned is significantly less toxic. For instance, the descriptor “Asian” is mentioned in 4.67% and 9.26% of documents in C4 and filings of BeanCounter, respectively, yet the text surrounding “Asian” in BeanCounter is 72.4% less toxic, on average. Similarly, “LGBTQ” is mentioned 0.37% and 0.17% in C4 and BeanCounter, respectively, yet the toxicity of its surrounding content is 88.8% lower in BeanCounter [A table with the most prevalent descriptors in C4-en is shown in Appendix <ref>.]. We find this general trend across nearly all descriptors.
4. Model Evaluation. Given the influence of training data on model generation, our analyses suggest that models trained on BeanCounter may exhibit less toxic generation. We explore this possibility by continuing the pre-training of models on BeanCounter and evaluate their performance on domain-specific tasks and toxic generation. We find that models trained on BeanCounter exhibit better performance within financial applications while exhibiting 18-33% less toxic generation.
In summary, this paper introduces a novel business-oriented text dataset comprised of more than tokens which are comparatively more factual, of higher-quality, and less toxic than commonly used web-based datasets. We have open-sourced BeanCounter and made it available via Hugging Face Hub. We hope that BeanCounter will enable the creation of new foundation models which are less likely to generate toxic content and are better suited to business-oriented tasks.
§ RELATED WORK
Foundation models and large-scale text datasets. The current scale of Transformer-based language models has grown more than 1000-fold since early examples of such models, e.g., from GPT-1 <cit.> to OPT-175B <cit.>. While the scale of these models has grown dramatically, the size of the data has grown comparatively more slowly, e.g., even though OPT-175B is 100 times larger than GPT-2 <cit.> it was trained on only 10 times as much data (180B versus roughly 20B tokens). In this vein, recent work has explored how to best allocate a fixed training compute budget and found results which suggest earlier models were significantly under-trained, i.e., a 10-fold increase in model size should correspond to a 10-fold increase in training data size <cit.>. Alternatively, one can view this resource allocation problem as desiring the best possible inference performance for a fixed budget <cit.>. In which case models should be trained on even more data than that suggested by the scaling laws of <cit.>. The common theme throughout this literature is that an increase in model size requires at least a similar increase in training data as well. This begs the question of where to source the additional data.
Early models such as BERT <cit.> and GPT-1 <cit.> sourced training data from books <cit.> and/or Wikipedia. As models scaled, researchers began collecting larger datasets by scraping text from the web, e.g., WebText consists of upvoted links from Reddit <cit.>, and this approach is now the dominant method for assembling large-scale text datasets <cit.>. While this is now the dominant approach, it is not without concerns. These concerns range from socially benign, e.g., low-quality content that leads to gibberish generations <cit.> or large amounts of duplicate content <cit.>, to more socially harmful outcomes such as generation which is: biased and/or toxic <cit.>, revealing of personally identifiable information <cit.>, or factually inaccurate <cit.>. In this paper, we present a novel dataset sourced from business-oriented content which is more factually accurate, less toxic, and paired with the time the content became public.
Business-oriented text datasets. Early business-oriented corpuses for NLP research were generally of small scale, i.e., tens of millions of tokens <cit.>, and therefore more suitable for fine-tuning or downstream evaluation than pre-training. Recently, datasets on the scale of billions of tokens have been developed using proprietary data, e.g., Bloomberg's catalog of content <cit.>, and publicly available data, e.g., content from the Securities and Exchange Commission's (SEC) Electronic Data Gathering and Retrieval (EDGAR) system. Notably, prior work using EDGAR data generally considers only a subset of the disclosures posted to EDGAR. For example, <cit.> extract text solely from annual reports to create a dataset of tokens. <cit.> extract text from annual and quarterly reports to create a non-public dataset of tokens. Other datasets relying on EDGAR are of significantly smaller size, i.e., less than 1B tokens <cit.>. We extend this literature by considering all disclosures on EDGAR and find that 83% of BeanCounter comes from disclosure types not included in prior work. Additionally, we consider not only the main content of EDGAR disclosures but attachments as well and find that 29% of tokens are sourced from attachments.[For example, consider American Airlines' 2016 Annual Report available from EDGAR https://www.sec.gov/Archives/edgar/data/1193125/0001193125-17-051216-index.htmhere and in the dataset of <cit.> under the filename “6201_2016.htm.” In addition to the main document (Type 10-K on EDGAR), American Airlines also attaches a credit agreement (Type EX-10.1 on EDGAR). The credit agreement contains a similar amount of text as the main 10-K document but does not appear in <cit.>. We find that more than 89% of annual reports include at least one such attachment.]
§ DATASET CONSTRUCTION AND ANALYSIS
BeanCounter is constructed from all public filing submissions to EDGAR–the main mechanism through which entities satisfy their disclosure obligations under the Securities Acts of 1933 and 1934, the Trust Indenture Act of 1939, and the Investment Company Act of 1940 <cit.>. The SEC allows unrestricted copying and redistribution of the content on EDGAR <cit.>. Notably, entities which file false or misleading information to EDGAR are subject to legal jeopardy since harmed individuals can seek recourse against them. Along these lines, when a firm does engage in fraud and is caught, the SEC generally publishes an enforcement action against the firm. In creating BeanCounter, we leverage these enforcement actions to filter content produced by known fraudulent firms into a distinct split of the dataset. In this sense, EDGAR serves as one of the largest public repositories of business-oriented content that is more likely to be factual than general web content or literary works. While BeanCounter includes personally identifiable information such as names and phone numbers, the entities have consented to making this information publicly available and the SEC allows for confidential treatment requests if they would prefer the information be withheld.
A flowchart representing our dataset pipeline is presented in Figure <ref>. At a high-level, our pipeline involves four steps: (1) collection of all filings on EDGAR, (2) extraction of text from those filings, (3) cleaning of extracted text, and (4) deduplication of cleaned text. For each step, we follow the relevant best practices developed in recent years. The full details of dataset construction can be found in Appendix <ref>. The resulting cleaned, fraud, and deduplicated datasets contain roughly , and tokens, respectively (see Table <ref> for more details).
Prior literature has highlighted the sensitivity of downstream model performance based on dataset content, e.g., <cit.> finds that existing biases in the model can be amplified by fine-tuning. Following previous work <cit.>, we analyze the content of BeanCounter along a variety of dimensions including content source (e.g., industry, firm, and form type), demographic representation, and toxicity to understand the implications of either pre-training or fine-tuning on BeanCounter.
§.§ Content Analysis
Year and Form Type. Figure <ref> presents the volume of content in BeanCounter by year and form-type. As illustrated in Figure <ref>, there are roughly tokens per year on average and there is a general increase in number of tokens filed throughout time. Considering the source of content by form type, Figure <ref> shows that four form types contribute 43.7% of all tokens in BeanCounter: 485BPOS (separate account registration statement for investment companies), 8-K (report of unscheduled material events or corporate changes), 10-Q (quarterly report) and 10-K (annual report). In Appendix <ref> we present examples of content from the ten most common form types in BeanCounter. The content is surprisingly diverse, ranging from discussions of financial performance (see Figure <ref>) to Amazon.com's plans to use IPO funds for the implementation of collaborative filtering <cit.> (see Figure <ref>).
Compared to prior literature, we find that our approach appears to extract nearly twice as much content: vs. tokens from annual reports <cit.> and vs. tokens from both annual and quarterly reports <cit.>. Based on manual inspection of <cit.>, we attribute this difference to our processing of not only the main filing but also any attachments.
Industry and Firm. To better understand group representation in BeanCounter, we explore which industries and firms contribute the most content. We adopt the commonly used industry classification of <cit.> to assign each firm to one of 48 high-level industries. Figure <ref> shows that Banks and Financial Services are the two most represented industries followed by Business Services, Pharmaceuticals, and Chip Fabrication.
Next, we consider which specific firms are most prevalent in BeanCounter. Given that the financial services industries are very highly represented, we present representation separately for the top 20 firms and the top 20 non-financial firms, i.e. all firms not in the Finance, Insurance, Banks nor Real Estate industries. Figure <ref> shows that many well-known financial firms, e.g., Goldman Sachs, contribute large volumes of content to BeanCounter. Figure <ref> shows that the most represented non-financial firms tend to be older, well-established firms such as AT&T and United Airlines.
§.§ Exploration of Gender and Pronouns
Numerous studies have shown that NLP systems tend to propagate gender bias found in their training corpora <cit.>. Along these lines, we explore gender biases in BeanCounter by computing the percentage of filings which contain at least one instance of a specific type of pronoun. Table <ref> presents results. Similar to prior work, we observe that He pronouns are over-represented relative to She pronouns <cit.>. This could mean that models trained on BeanCounter will generate He pronouns at a higher rate than She pronouns, as well as falling back to a male default in the absence of explicit mentions of gender. We also observe that 2nd person pronouns are used roughly 20% less frequently–on an absolute basis–than in the dataset of <cit.>.
§.§ Exploration of Demographic Identities
Similar to <cit.>, we measure demographic representation in BeanCounter by aggregating observations of descriptors from the HolisticBias dataset <cit.>. Specifically, we compute the number of filings that contain at least one specific demographic descriptor across five axes: Gender and Sex, Sexual Orientation, Nationality, Race and Ethnicity, and Religion. We remove several demographic terms such as “straight”, “white”, “Black”, “bi”, “pan”, “ace” and “poly” because these terms can have multiple meanings and after manual inspection we found these are generally used in ways other than as demographic descriptors. We also include expanded versions of abbreviated descriptors (e.g., “AFAB” and “Assigned-Female-At-Birth”) as well as terms that are often hyphenated or capitalized (e.g. “African-American” and “African American”; “Male-to-Female” and “male-to-female”) to capture as many mentions of these descriptors as possible. We note that while most of the terms are used in the intended context, some terms may be used in contexts outside of demographic identity, e.g., “Christian” is also a common English name, and we do not exclude these from the analysis.
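As an illustration of this prevalence measurement (a sketch rather than the exact implementation, with placeholder function names), the share of filings containing at least one descriptor on a given axis can be computed with a word-boundary regex that also covers hyphenated variants.

```python
import re

def build_pattern(descriptors):
    """Compile a word-boundary pattern covering space- and hyphen-separated variants."""
    variants = set()
    for d in descriptors:
        variants.add(d)
        variants.add(d.replace(" ", "-"))
    joined = "|".join(re.escape(v) for v in sorted(variants, key=len, reverse=True))
    return re.compile(r"\b(" + joined + r")\b")

def prevalence(filings, descriptors):
    """Fraction of filings containing at least one descriptor."""
    pattern = build_pattern(descriptors)
    hits = sum(1 for text in filings if pattern.search(text))
    return hits / max(len(filings), 1)
```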
In comparison to the pretraining corpuses of <cit.> and <cit.>, Table <ref> shows that BeanCounter has significantly more representation of content along the Nationality and Race and Ethnicity axes, whereas it has a significantly lower representation under the Sexual Orientation axis.[<cit.> perform a similar analysis in their demographic representations analysis. To compare BeanCounter's demographic composition to these datasets, we perform the same analysis on C4-en, and its demographic composition is availabe in Appendix <ref>.] Similar to the two aforementioned datasets, we observe that BeanCounter is also skewed towards over-representation of Western identities such as “American”, “European” and “Christian.”
§.§ Toxicity Analysis of Content Associated with Demographic Identities
To our knowledge, toxicity has not been explored in business-oriented corpuses. We extend this literature by exploring toxic language surrounding the demographic descriptors identified in the previous prevalence analysis (Section <ref>). Prior work on toxicity analysis typically focuses on assigning a toxicity score to randomly sampled spans of 100 tokens <cit.> or computing document-level average toxicity by scoring each line of a document separately <cit.>. Our approach improves upon previous approaches in several ways.
First, prior literature has identified that toxic language is relatively rare, e.g., 0.2% of documents in the corpus of <cit.> are labeled as toxic. Thus, randomly sampling text spans could omit highly toxic samples and hence underestimate the toxicity of the dataset. Further, even though highly toxic text may be rare, it does not mean such text is harmless or has limited downstream impact. Rather than randomly sampling parts of BeanCounter to measure toxicity, we focus on instances where toxicity is likely to occur: around demographic descriptors<cit.>. As a result, one can view our analysis as providing evidence on the question “conditional on the existence of a demographic descriptor, how likely is the surrounding content to be toxic?”
Second, most SOTA toxicity classifiers are trained on comments from various web-based sources <cit.> or machine-generated sentences about minority groups <cit.>. As a result, they are best equipped to evaluate toxicity on sentence-length texts and are likely to make erroneous predictions for larger text spans such as those obtained by breaking a document on new-line characters. We address this issue by identifying sentences containing these descriptors and passing them into the Perspective API <cit.>. Manual inspection of classification results suggests that this leads to more accurate results relative to attempting to classify longer snippets or using other models such as ToxiGen-HateBERT <cit.>.
Specifically, for every sentence in BeanCounter, we use a regex statement to determine whether the sentence contains any demographic descriptors; if there are multiple descriptors in a sentence, the sentence is assigned to the longest descriptor, i.e., if a sentence contains the descriptor “Latin American”, it would be assigned to “Latin American” instead of “Latin” or “American”. This is a generally reliable heuristic since most of the extracted sentences contain one descriptor; however, when sentences contain multiple distinct descriptors, it becomes unclear which descriptor is the primary “target.” We follow the same procedure to extract all sentences with descriptors from the C4-en dataset. Since C4-en has a different frequency of descriptors than BeanCounter, we generate a balanced sample from C4-en which matches the frequency in BeanCounter. In particular, for each descriptor d_i, we count the number of sentences n_i in BeanCounter which contain d_i. We then randomly sample n_i sentences from C4-en which contain d_i. We find that the average sentence length is 54 tokens, which is of comparable length to the training data of toxicity classifiers <cit.>.
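The longest-descriptor assignment and the frequency-matched C4-en sample can be sketched as follows (illustrative only; the helper names and the random seed are placeholders), reusing the compiled descriptor pattern from the prevalence analysis.

```python
import random
from collections import Counter, defaultdict

def assign_descriptor(sentence, pattern):
    """Return the longest matching descriptor, or None if there is no match."""
    matches = pattern.findall(sentence)
    return max(matches, key=len) if matches else None

def matched_sample(bc_sentences, c4_sentences, pattern, seed=0):
    """Draw a C4-en sample whose per-descriptor counts match BeanCounter's."""
    rng = random.Random(seed)
    bc_counts = Counter(d for s in bc_sentences
                        if (d := assign_descriptor(s, pattern)))
    c4_by_desc = defaultdict(list)
    for s in c4_sentences:
        d = assign_descriptor(s, pattern)
        if d:
            c4_by_desc[d].append(s)
    sample = []
    for d, n in bc_counts.items():
        pool = c4_by_desc.get(d, [])
        sample.extend(rng.sample(pool, min(n, len(pool))))
    return sample
```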
Next, we assign each sentence a toxicity score between 0 and 1 using the Perspective API <cit.>. Table <ref> presents the difference in average toxicity between C4-en and BeanCounter on the most prevalent descriptors in each demographic axis. We find that the average toxicity of sentences containing a given descriptor in BeanCounter is at least 59% lower than those from C4-en. In addition to quantitatively exploring differences in toxicity, we also read and compare examples of toxic content from C4-en and BeanCounter. We find that toxic content in BeanCounter tends to come from the discussion of business models which involve breeding animals, medical ailments which affect certain portions of the population at different rates, firms providing examples of actions that would violate their ethics policies, and entities raising money to produce certain artistic works, e.g., movies, which contain content labeled as toxic. In contrast, the toxic content in C4-en tends to be directed at specific groups. Appendix <ref> presents a sample of the most toxic examples.
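A scoring call to the Perspective API can be sketched as below; the endpoint and payload follow the public API documentation, but the key handling, error handling, and rate limiting shown here are illustrative assumptions rather than the exact pipeline used for the analysis.

```python
import requests

PERSPECTIVE_URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
                   "comments:analyze")

def toxicity_score(sentence, api_key):
    """Return the TOXICITY summary score (0-1) for a single sentence."""
    payload = {
        "comment": {"text": sentence},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(PERSPECTIVE_URL, params={"key": api_key},
                         json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
```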
Considering other datasets, <cit.> also uses Perspective API and reports mean and median toxicity scores of 0.10 and 0.07, respectively, for their random sample of text spans. Even when explicitly sampling text likely to contain toxic language, toxicity of BeanCounter is an order of magnitude lower: the mean and median toxicity score of BeanCounter's descriptor sentences are 0.009 and 0.006, respectively. This suggests that BeanCounter can serve as a large-scale corpus of very low toxicity text.
§ EVALUATION
Next, we explore the potential utility of BeanCounter along two dimensions: (i) domain-specific applications within finance and (ii) generation toxicity. To facilitate this analysis, we continue pre-training Pythia-1.4B <cit.> and Phi-1.5 <cit.> for 1B tokens sampled from BeanCounter stratified by year. We choose these models because Pythia-1.4B is a well-studied model which is known to generate toxic content, and Phi-1.5, while newer, has demonstrated excellent performance across a number of domains and exhibits comparatively low toxicity. We use the same optimization parameters used for initial pre-training of these models, except that we reduce the maximum learning rate by 50% and cosine decay to 10% of the maximum without any warm-up phase <cit.>. Details of each aforementioned evaluation task and additional evaluation on general LLM benchmarks can be found in Appendix <ref>.
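As an illustration of the learning-rate schedule described above, the sketch below halves a hypothetical original maximum learning rate and decays it with a cosine schedule to 10% of that maximum, with no warm-up; the concrete learning rate and step count are assumptions for illustration only, not the values used in the experiments.
```python
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import CosineAnnealingLR

original_max_lr = 2e-4            # assumption: LR used for the model's initial pre-training
max_lr = 0.5 * original_max_lr    # reduce the maximum learning rate by 50%
total_steps = 2_000               # assumption: optimizer steps covering ~1B tokens

params = [torch.nn.Parameter(torch.zeros(1))]   # stand-in for model.parameters()
optimizer = AdamW(params, lr=max_lr)
# Cosine decay from max_lr down to 10% of max_lr, with no warm-up phase.
scheduler = CosineAnnealingLR(optimizer, T_max=total_steps, eta_min=0.1 * max_lr)

for step in range(total_steps):
    optimizer.step()              # forward/backward pass of the LM would go here
    scheduler.step()

print(optimizer.param_groups[0]["lr"] / max_lr)   # ~0.1 at the end of training
```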
§.§ Financial Domain
To understand whether BeanCounter can improve a model’s finance domain knowledge, we evaluated the models on two widely used benchmarks: Financial Phrasebank (FPB) <cit.> and Fin NER<cit.>. FPB is a sentiment classification task on 4840 sentences sampled from financial news<cit.>. The sentences are assigned “positive”, “neutral” and “negative” labels according to how it will impact the mentioned company’s stock prices. Fin NER is a named entity recognition task consisting of 1467 labeled sentences extracted from financial agreements filed on the SEC <cit.>.
As seen in Table <ref>, we find larger improvements in the Phi-1.5 model after continued pretraining on BeanCounter; there is a 4.3% improvement on the Fin NER task and 2.5% improvement on FPB. The improvements for Pythia-1.4B are 1.4% and 1.1% for Fin NER and FPB respectively.
§.§ Generation Toxicity
To quantify the models' propensity for toxic generation, we evaluate them on RealToxicityPrompts <cit.> and SafeNLP <cit.>. RealToxicityPrompts is a set of 100K prompts extracted from sentences in the OpenWebText corpus intended to elicit toxic generation <cit.>. The generation is assigned a toxicity score of 0 if the Perspective API <cit.> score is < 0.5 and 1 otherwise. SafeNLP uses a subset of the ToxiGen dataset <cit.> to compute safety scores across 13 marginalized demographics for a pre-trained language model <cit.>.
Table <ref> presents results. We find a reduction of 18-33% in Toxicity score after continued pretraining on BeanCounter. In terms of safety scores in Figure <ref>, these increase by 6-21% across demographics for Pythia-1.4B and by 3-7% for Phi-1.5 except for “jewish,” “latino,” “mexican,” and “physical disability.” It is important to note that although higher safety scores indicate less propensity to generate toxic content, it does not mean that the models are immune to harmful generations.
We hypothesize that the reduction in toxic generation results from (i) multi-turn conversations between firms and shareholders discussing topics associated with toxicity and/or (ii) firms presenting and explaining examples of violations of their ethics policies. For example, Section <ref> presents a discussion between the CEO of PepsiCo and a shareholder. While the discussion is some of the more toxic content in BeanCounter, it is still comparatively civil and each side explains their point of view. If LLMs can learn from such multi-turn conversations then this may be one reason we observe an improvement in safety scores.
§ LIMITATIONS
Ablation. We evaluated the impact of continued pre-training with BeanCounter on two small LLMs (1.3B and 1.4B parameters for Phi-1.5 and Pythia-1.4B, respectively). It is unclear whether BeanCounter would have the same, lesser, or greater impact on model performance with different baseline model architectures, larger sizes, or training data mixes, e.g., pre-training solely on BeanCounter. Future work could explore additional models and incorporating BeanCounter into the training data mix at various percentages.
Deceptive Content. The content in BeanCounter is produced by businesses with economic incentives to communicate a specific image of their business within the confines of the regulatory environment. As a result, the content may comply with regulations while being subtly misleading. Training on such data could lead to models which are misaligned with users' preferences, e.g., models which intentionally deceive the user in ways that are difficult to detect. In more extreme cases, the content may be entirely false, e.g., in the case of frauds such as ENRON and Wirecard. While this type of deceptive content is likely present in web-based datasets, researchers should be aware that the more regulated environment in which the contents of BeanCounter were produced does not guarantee that the data is free of deceptive content. The "BeanCounter.fraud" split of the data, containing content from known frauds, could enable future research on detecting deceptive content.
Generalizability. While this work suggests that BeanCounter is a novel source of low-toxicity data, the data is sourced from an environment that differs substantially in content and style from the web. Along these lines, we find that some descriptors have different meanings within the business domain, and the prevalence of certain demographics differs from prior datasets. As a result, models trained solely on BeanCounter may generate text which lacks imagination, is not desirable for a broad audience, fails to capture the meaning of certain words, and/or exhibits biases. Related to biases, there are limitations to our exploration of toxicity and demographic identities. Specifically, we only explored a subset of demographic identity descriptors from five axes. Future work could explore additional axes, e.g., Ability, Socioeconomic Status and Age, to understand how these identities are represented in NLP datasets.
Out-of-Domain Performance. While our work suggests that continued pre-training on BeanCounter can improve performance on finance-related tasks, this may come at the expense of reduced performance on tasks unrelated to BeanCounter's content. Results for the two small LLMs we consider are mixed. Table <ref> illustrates the models' performance on a suite of common general LLM comprehension and reasoning tasks. Pythia-1.4B models seem to perform similarly or slightly better on these benchmarks after continued pre-training, whereas Phi-1.5 models seem to experience decreases in performance in some tasks. Hence, the impact of continued pre-training on BeanCounter may be model-specific or there may be interactions between the datasets used to train the models (Phi-1.5 is trained on a novel mix of data <cit.>).
§ CONCLUSION
In this work, we present BeanCounter, a token dataset of business-oriented text extracted from publicly available financial disclosures. To our knowledge, BeanCounter is the largest public corpus in the business domain. Relative to datasets derived from the web, the content in BeanCounter is more likely to be truthful and of high quality because the entities producing the content face civil and criminal penalties if it is not. Additionally, each piece of content in BeanCounter is associated with a timestamp representing when the content was made available thereby facilitating future research on the role of time in language models. Lastly, we explore biases and toxicity in the data using a novel evaluation scheme which focuses on locations where biased and/or toxic content is most likely to be present. We find that BeanCounter is significantly less toxic compared to another widely used corpus, and our preliminary exploration of using BeanCounter for continued pre-training shows improvements in finance domain knowledge and reduced toxic generation.
We acknowledge generous financial support from the Booth School of Business and the Center for Applied AI. The authors have no competing interests to disclose.
§ CHECKLIST
* For all authors...
* Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
* Did you describe the limitations of your work?
See conclusion <ref> as well as Section <ref> for discussion of limitations on dataset applicability and toxicity analysis.
* Did you discuss any potential negative societal impacts of your work?
Discussion of negative societal impacts is included in the discussions of limitations. See previous checklist item <ref> for specific locations.
* Have you read the ethics review guidelines and ensured that your paper conforms to them?
* If you are including theoretical results...
* Did you state the full set of assumptions of all theoretical results?
* Did you include complete proofs of all theoretical results?
* If you ran experiments (e.g. for benchmarks)...
* Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?
Reviewers should see the supplementary materials for details on accessing the dataset and experiment code. Long term, we will make these artifacts available via Github and Hugging Face Hub.
* Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?
See appendix <ref>.
* Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?
We did not train models multiple times with different random seeds since it is prohibitively expensive to do so given our resources.
* Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?
See Appendix <ref>.
* If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
* If your work uses existing assets, did you cite the creators?
* Did you mention the license of the assets?
See beginning of Section <ref> and reference <cit.>.
* Did you include any new assets either in the supplemental material or as a URL?
The full dataset will be made available via Hugging Face Hub. Reviewers should see the supplementary materials for details on interim access.
* Did you discuss whether and how consent was obtained from people whose data you're using/curating?
See beginning of Section <ref> and reference <cit.>.
* Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content?
See first paragraph of Section <ref> for discussion of personally identifiable information and Section <ref> for discussion of toxicity analysis.
* If you used crowdsourcing or conducted research with human subjects...
* Did you include the full text of instructions given to participants and screenshots, if applicable?
* Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable?
* Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?
§ ADDITIONAL DETAILS OF DATASET CONSTRUCTION
While our approach generally follows the best practices for data curation, filtering, and deduplication which have been developed in recent years, this appendix provides details specific to the construction of BeanCounter. An overview of the four high-level steps involved in constructing BeanCounter is presented in Figure <ref> and each step is detailed below. Figure <ref> presents an example disclosure which highlights our approach to constructing BeanCounter relative to the source document and prior literature.
§.§ Collection
The pipeline begins with downloading daily archives of all submissions to EDGAR. These archives contain the full contents of each submission, i.e., the main filing as well as any and all attachments, and are produced daily by the SEC. Availability of archives begins on 1996-01-12 and continues through to the present, although BeanCounter's coverage is only through the end of 2023. The submissions in these archives also contain metadata on the submission such as the time the submission was accepted by EDGAR. This metadata is parsed and included in BeanCounter. In total, these archive files contain 3TB (compressed) of content.
§.§ Extraction
After downloading all filings, the pipeline extracts text from the main filing and all attachments. While EDGAR supports a variety of attachments such as plain text, HTML, and images, the pipeline only extracts content from text and HTML-based documents since these documents represent the majority of the content on EDGAR. Historically, EDGAR accepted text-based filings which had been formatted in a fixed-width style. This is in contrast to modern HTML files where the browser is largely responsible for formatting the content to match a particular device. For this reason, the pipeline extracts content from text and HTML-based filings using distinct logic.
Text-based filings consist of both text and standard generalized markup language (SGML). These SGML tags are used to denote pages, tables, and indicate other formatting details for older devices. The first step of processing text-based filings is to remove all numeric tables from the filing. SGML tags indicating page breaks are then used to identify sentences spanning multiple pages and “unbreak” them. Lastly, since these filings were prepared for fixed-width screens, e.g., lines were wrapped after 80 characters, we unwrap all lines and replace sequences of more than two newline characters with just two.
HTML-based filings are processed using the Beautiful Soup <cit.> library. As with text-based filings, the pipeline first removes all numeric tables from the filing. Since table tags are frequently used in HTML to structure content, we cannot rely on the mere presence of a table tag as indicia of a numeric table. Instead, for each table identified, we compute a measure of text-density, characters per tag (CPT), using only alphabetical characters <cit.>. Any table with a CPT less than 10 is removed. Via manual inspection, we find that this approach is high precision in the sense that there are few false positives.
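A minimal sketch of the CPT filter follows: count alphabetical characters per descendant tag for each table and drop tables below the threshold of 10. The exact tag-counting convention used in the pipeline may differ slightly.
```python
from bs4 import BeautifulSoup

def drop_numeric_tables(html, cpt_threshold=10.0):
    """Remove tables whose alphabetical characters-per-tag (CPT) is below the threshold."""
    soup = BeautifulSoup(html, "html.parser")
    for table in soup.find_all("table"):
        n_tags = len(table.find_all(True)) or 1            # descendant tags of the table
        n_alpha = sum(ch.isalpha() for ch in table.get_text())
        if n_alpha / n_tags < cpt_threshold:
            table.decompose()                               # likely a numeric table
    return str(soup)

html = ("<p>Revenues increased due to higher volumes.</p>"
        "<table><tr><td>2019</td><td>1,204</td></tr></table>")
print(drop_numeric_tables(html))                            # numeric table removed
```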
After removing tables, we extract text from the remaining content while taking care to preserve formatting such as indentation and spacing. Where there are page breaks, we again “unbreak” sentences spanning multiple pages and remove any “header” content. The effects of our approach can be seen in Figure <ref>: Panel <ref> presents the disclosure as rendered by a web browser, Panel <ref> presents the extracted text in EDGAR-CORPUS <cit.>, and Panel <ref> presents the extracted text in BeanCounter. The first notable difference between our approach and prior literature is that we preserve white-space according to HTML and CSS directives.
The second notable difference is our removal of header content. Note that in Panels <ref> and <ref>, the text “Item 1. Business (Continued)” appears at the top of the page. Similar text is repeated at the top of each page to indicate the current section. Simply extracting the text without any consideration of this header content would lead to situations where a sentence spanning multiple pages has header content injected mid-sentence. Our approach identifies such content and removes it, as can be seen in Figure <ref>.
§.§ Cleaning
Prior work has used heuristics such as line length and terminal punctuation to identify and remove low-quality content, e.g., <cit.>. While there is such content in EDGAR filings, we find that in many instances this content is relevant to the long-form text surrounding such lines which would be removed under these heuristics. For example, many filings contain lists enumerating risks facing the business or on-going activities. The elements of these lists often do not end in punctuation and/or are relatively short–as can be seen in Figure <ref>. Since the content on EDGAR is meant to convey important value-relevant information, the entities producing content for EDGAR are more likely to dedicate significant resources to ensuring it is high-quality. Along these lines, we adopt a lighter approach to cleaning than some of the prior literature.
Specifically, we exclude (i) all filings which are of a type known to be standardized forms containing little long-form narrative text, (ii) all documents with fewer than 200 words, and (iii) all documents where the proportion of whitespace is more than the 99-th percentile for the whole dataset (41%). We refer to the resulting dataset as “BeanCounter.clean” and find that it contains more than tokens.
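The three filters can be expressed as a single predicate, sketched below; the set of excluded form types is a hypothetical example, not the actual exclusion list.
```python
EXCLUDED_FORM_TYPES = {"3", "4", "5", "13F-HR"}   # assumption: standardized, low-narrative forms

def keep_document(text, form_type, min_words=200, max_whitespace=0.41):
    """Return True if the document survives the three cleaning filters."""
    if form_type in EXCLUDED_FORM_TYPES:
        return False
    if len(text.split()) < min_words:
        return False
    if text and sum(ch.isspace() for ch in text) / len(text) > max_whitespace:
        return False
    return True
```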
§.§ Deduplication
Following prior literature, e.g., <cit.>, we deduplicate BeanCounter at the document level using MinHash and LSH. Specifically, we convert each attachment into 5-grams which is then hashed using 260 permutations (20x13). Near duplicates are identified using an LSH index with a similarity threshold set to 80%. In cases where the set of near duplicates is too large to compute the true duplicates, e.g., a set of a million attachments, we assume that all near duplicates are in fact true duplicates. While this is not accurate, it is conservative in the sense that we remove some attachments which are not actually duplicates. For each set of attachments identified as duplicates, we keep the attachment which was first accepted by EDGAR, i.e., the attachment with the earliest acceptance timestamp. We refer to the resulting dataset as “BeanCounter.final” and find that it contains more than tokens.
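A simplified sketch of this document-level deduplication using the datasketch library is shown below; it keeps the earliest-accepted attachment in each near-duplicate set but omits the special handling of very large duplicate sets described above.
```python
from datasketch import MinHash, MinHashLSH

NUM_PERM = 260                       # 260 permutations (20 bands x 13 rows)
lsh = MinHashLSH(threshold=0.8, num_perm=NUM_PERM)

def minhash_of(text):
    """MinHash signature over word 5-grams of a document."""
    words = text.split()
    m = MinHash(num_perm=NUM_PERM)
    for i in range(max(len(words) - 4, 1)):
        m.update(" ".join(words[i:i + 5]).encode("utf-8"))
    return m

def deduplicate(attachments):
    """attachments: iterable of (key, text) sorted by EDGAR acceptance time, earliest first."""
    kept = []
    for key, text in attachments:
        sig = minhash_of(text)
        if not lsh.query(sig):       # no near duplicate seen so far -> keep it
            lsh.insert(key, sig)
            kept.append(key)
    return kept
```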
§ ADDITIONAL DETAILS OF MODEL EVALUATION
This section details the various financial, toxicity and general benchmarks used to evaluate the LLMs and their continually pre-trained variants. To ensure replicability of our results, we use libraries and harnesses that are publicly available. For continued pre-training and fine-tuning we use a combination of local and cloud hosted NVIDIA RTX 4090, A100, and H100 GPUs for a total of roughly 400 GPU-hours.
Financial Phrasebank (FPB) <cit.>: This task requires the model to classify the sentiment of text from the financial domain. Specifically, <cit.> create a dataset consisting of 4844 sentences that mentions a specific company randomly selected from financial news articles on LexisNexis database. The sentences are annotated by 16 annotators with finance-related background (e.g. 3 researchers and 13 Master’s students majoring in finance, accounting and economics) and they assigned “positive”, “neutral” or “negative” labels according to how they perceive the information in the sentences will impact the mentioned company’s stock price. The sentence labels have varying levels of agreement across the annotators, and we evaluated the models on the subset (2264 sentences) of the original dataset with 100% agreement amongst the annotators. We finetuned the model, with an AdamW optimizer with learning rate of 5e-5, for 3 epochs on the training set and used the model with the highest weighted F-1 score for evaluation on the validation set.
Fin NER <cit.>: In this task, the model is supposed to perform NER labeling on sentences extracted from financial agreements. To construct the dataset, <cit.> randomly selected 8 financial agreements filed on the SEC's EDGAR for manual annotation, based on the named entity types in the CoNLL-2003 dataset: location (LOC), organization (ORG), person (PER), and miscellaneous (MISC). To evaluate the models, we used the T-NER library <cit.>; the models are fine-tuned for 10 epochs with grid search on the training and validation set to find the set of hyper-parameters that yields the highest F-1 score. The model with the highest F-1 score is then evaluated on the test split of Fin NER.
RealToxicityPrompts <cit.>: The purpose of this task is to understand how likely it is for the model to produce toxic content when the selected prompts are prone to eliciting toxic generation. This benchmark consists of 99k prompts extracted from sentences in the OpenWebText corpus <cit.>, where 22% of the prompts are classified as toxic, i.e., being assigned a toxicity score ≥ 0.5 by Perspective API <cit.>, and the average toxicity score of the prompts is 0.29. The models are given a prompt and generation is run until a newline character is produced. The generation is assigned a toxicity score of 0 if the Perspective API score is < 0.5 and 1 otherwise. In Table <ref>, the "Toxicity score" column indicates the percentage of generations that are classified as toxic by the aforementioned threshold, whereas the "Perspective score" column is an average of the raw output score of Perspective API, which ranges from 0 to 1 where 1 is highly toxic. We used the lm-evaluation-harness <cit.> from EleutherAI to evaluate the models on RealToxicityPrompts.
SafeNLP <cit.>: This evaluation task uses a subset of the ToxiGen dataset <cit.> to compute safety scores across 13 marginalized demographics for a pre-trained language model. The scaled perplexity of each statement is computed by the model's perplexity divided by the pre-computed toxicity score of the statement. Lastly, the scaled perplexity values of harmful and benign sentences of each target group is fed into the Mann-Whitney U-test to compute a final safety score for that demographic. A non-toxic pre-trained model should have high scaled perplexity for implicitly harmful sentences and low scaled perplexity for benign sentences. We used the evaluation package provided by <cit.> to compute the safety scores for our models.
Big Bench Hard (BBH) <cit.>: This benchmark includes a suite of 23 challenging BIG-Bench tasks where prior LLMs have yet to outperform the average human rater. Some example tasks include evaluating Boolean expressions, logical deduction of object position and other "riddle-like" tasks. This evaluation and the following evaluations are performed with lm-evaluation-harness <cit.> from EleutherAI on the leaderboard setting, which is the setting used for evaluations on the Huggingface Open LLM Leaderboard. For ease of comparison and presentation, the accuracy we present in Table <ref> is an average across the 23 tasks.
Massive Multitask Language Understanding (MMLU-Pro) <cit.>: This evaluation is an extension of MMLU, a multiple choice benchmark focused on language comprehension and reasoning across varying domains such as Math, Law, Health, Business, etc. Compared to MMLU, the MMLU-Pro benchmark contains more challenging, reasoning-focused questions and includes 10 choices instead of the original 4 choices. We ran the evaluation on a 5-shot setting via lm-evaluation-harness <cit.>, which is the setting used in the Huggingface Open LLM Leaderboard.
Multistep Soft Reasoning (MuSR) <cit.>: This reasoning evaluation focuses on evaluating the model's ability to perform multistep soft reasoning tasks in a natural language narrative context. One of the most challenging subtasks is murder mysteries, which includes an approximately 1,000 word story where the model is prompted to identify the killer. MuSR combines commonsense and multistep reasoning, which is lacking in prior LLM benchmarks. The reported accuracy in Table <ref> is an average over 3 sub-tasks: murder mysteries, object placements and team allocation.
Instruction-Following Eval (IFeval) <cit.>: This instruction following benchmark focuses on evaluating models on a set of easily “verifiable instructions” such as “mention the keyword of AI at least 3 times”. Each prompt contains one or more verifiable instruction, and the accuracy presented in Table <ref> is an average of “prompt-level” and “instruction-level” accuracies.
Graduate-Level Google-Proof Q&A (GPQA) <cit.>: This evaluation task consists of 448 challenging multi-choice questions written by domain-experts who have or are pursuing PhDs in biology, physics and chemistry. For highly skilled human non-experts (i.e. experts in another field but not the tested fields) with access to internet resources, they reach 34% accuracy after spending an average of 37 minutes trying to answer each question. The accuracy reported in Table <ref> is an average over the three sets of questions: GPQA main, GPQA extended and GPQA diamond.
§ DESCRIPTOR PREVALENCE OF C4-EN AND AVERAGE TOXICITY SCORE OF BEANCOUNTER AND C4-EN
§ EXAMPLES OF CONTENT FROM MOST FREQUENT FORM TYPES
In this section we present examples of content from the most common form types in BeanCounter along with descriptions of how these forms are typically used.
§.§ Forms 485APOS and 485BPOS
Forms 485APOS and 485BPOS are frequently used by investment management companies to satisfy their reporting obligations under Rules 485(a) and 485(b). These forms generally contain information such as past performance of the fund, risks, investment strategies used by the fund, and expenses.
§.§ Form 8-K
Form 8-K, or the “Current Report,” is frequently used by businesses to report certain types of events in a timely manner. Examples of events which trigger required filing of Form 8-K include reporting of financial results, delisting of a business's stock, departure or appointment of principal officers, and, more recently, cybersecurity incidents.
| A key ingredient to the recent breakthroughs in language modeling has been the availability of large-scale datasets. Along these lines, recent work has shown that training incrementally larger language models demands a similar increase in training data <cit.>, and that data quality is a significant determinant of a model's ultimate performance <cit.>. However, the creation and scaling of text datasets sourced from the public domain raises a variety of concerns such as inclusion of personally identifiable information <cit.>, degrading quality <cit.>, and biases or false information <cit.>.
In this work, we contribute to overcoming these challenges of scaling text datasets. Specifically, we introduce BeanCounter, a dataset comprised of more than tokens extracted from public domain business-oriented disclosures. The introduction of BeanCounter makes four primary contributions:
1. Novel large-scale dataset of domain specific content. BeanCounter consists of content produced by businesses to communicate with a variety of stakeholders such as investors and regulators. This content is not easily accessible via web scraping and we find that little of BeanCounter is included in other commonly used datasets. For example, less than 0.1% of BeanCounter is in C4 <cit.>. In considering the scale of BeanCounter, we define “large-scale” as sufficient size to pre-train a multi-billion parameter model <cit.>, and of similar order of magnitude as other web-based datasets (i.e., 100B+ tokens) <cit.>. Along these lines, BeanCounter consists of more than tokens of cleaned text and more than tokens of deduplicated text (see Table <ref>). Thus, to the best of our knowledge, BeanCounter is the largest and most comprehensive public dataset of business-oriented text.
2. Timely, factual, and high-quality content. Considering timeliness, <cit.> show that incorporating the concept of time into LLMs can enhance their ability to recall time-dependent facts, e.g., the current president of the United States. In contrast to web-based datasets which usually do not have a precise timestamp of when the web page was created or updated, every observation in BeanCounter has a timestamp–accurate to the second–of when the content became available. Considering factuality and quality, there are at least two reasons BeanCounter is comparatively more factual and of higher-quality than web-based datasets. First, the principal executive and financial officers, e.g., the CEO and CFO, must certify the disclosures from which BeanCounter is sourced <cit.>. Second, these disclosures are the primary channel used by businesses to communicate with stakeholders. Thus, the businesses are incentivized to produce effective communications. In this sense, BeanCounter supports future research on the role of time in LLMs, factuality, and quality.
3. Toxicity and demographic identity analysis. As access to LLMs has proliferated, a common concern is the extent to which LLMs produce toxic or otherwise harmful content. Along these lines, a large literature has examined the toxicity of various web-based datasets. In this vein, we examine the prevalence of a variety of demographic identity descriptors and the toxicity of text in which they are mentioned <cit.>. We contrast our findings to text from C4 <cit.> and find that many descriptors are similarly prevalent in BeanCounter but the context in which they are mentioned is significantly less toxic. For instance, the descriptor “Asian” is mentioned in 4.67% and 9.26% of documents in C4 and filings of BeanCounter, respectively, yet the text surrounding “Asian” in BeanCounter is 72.4% less toxic, on average. Similarly, “LGBTQ” is mentioned 0.37% and 0.17% in C4 and BeanCounter, respectively, yet the toxicity of its surrounding content is 88.8% lower in BeanCounter [A table with the most prevalent descriptors in C4-en is shown in Appendix <ref>.]. We find this general trend across nearly all descriptors.
4. Model Evaluation. Given the influence of training data on model generation, our analyses suggest that models trained on BeanCounter may exhibit less toxic generation. We explore this possibility by continuing the pre-training of models on BeanCounter and evaluate their performance on domain-specific tasks and toxic generation. We find that models trained on BeanCounter exhibit better performance within financial applications while exhibiting 18-33% less toxic generation.
In summary, this paper introduces a novel business-oriented text dataset comprised of more than tokens which are comparatively more factual, of higher-quality, and less toxic than commonly used web-based datasets. We have open-sourced BeanCounter and made it available via Hugging Face Hub. We hope that BeanCounter will enable the creation of new foundation models which are less likely to generate toxic content and are better suited to business-oriented tasks. | Foundation models and large-scale text datasets. The current scale of Transformer-based language models has grown more than 1000-fold since early examples of such models, e.g., from GPT-1 <cit.> to OPT-175B <cit.>. While the scale of these models has grown dramatically, the size of the data has grown comparatively more slowly, e.g., even though OPT-175B is 100 times larger than GPT-2 <cit.> it was trained on only 10 times as much data (180B versus roughly 20B tokens). In this vein, recent work has explored how to best allocate a fixed training compute budget and found results which suggest earlier models were significantly under-trained, i.e., a 10-fold increase in model size should correspond to a 10-fold increase in training data size <cit.>. Alternatively, one can view this resource allocation problem as desiring the best possible inference performance for a fixed budget <cit.>. In which case models should be trained on even more data than that suggested by the scaling laws of <cit.>. The common theme throughout this literature is that an increase in model size requires at least a similar increase in training data as well. This begs the question of where to source the additional data.
Early models such as BERT <cit.> and GPT-1 <cit.> sourced training data from books <cit.> and/or Wikipedia. As models scaled, researchers began collecting larger datasets by scraping text from the web, e.g., WebText consists of upvoted links from Reddit <cit.>, and this approach is now the dominant method for assembling large-scale text datasets <cit.>. While this is now the dominant approach, it is not without concerns. These concerns range from socially benign, e.g., low-quality content that leads to gibberish generations <cit.> or large amounts of duplicate content <cit.>, to more socially harmful outcomes such as generation which is: biased and/or toxic <cit.>, revealing of personally identifiable information <cit.>, or factually inaccurate <cit.>. In this paper, we present a novel dataset sourced from business-oriented content which is more factually accurate, less toxic, and paired with the time the content became public.
Business-oriented text datasets. Early business-oriented corpuses for NLP research were generally of small scale, i.e., tens of millions of tokens <cit.>, and therefore more suitable for fine-tuning or downstream evaluation than pre-training. Recently, datasets on the scale of billions of tokens have been developed using proprietary data, e.g., Bloomberg's catalog of content <cit.>, and publicly available data, e.g., content from the Securities and Exchange Commission's (SEC) Electronic Data Gathering and Retrieval (EDGAR) system. Notably, prior work using EDGAR data generally considers only a subset of the disclosures posted to EDGAR. For example, <cit.> extract text solely from annual reports to create a dataset of tokens. <cit.> extract text from annual and quarterly reports to create a non-public dataset of tokens. Other datasets relying on EDGAR are of significantly smaller size, i.e., less than 1B tokens <cit.>. We extend this literature by considering all disclosures on EDGAR and find that 83% of BeanCounter comes from disclosure types not included in prior work. Additionally, we consider not only the main content of EDGAR disclosures but attachments as well and find that 29% of tokens are sourced from attachments.[For example, consider American Airlines' 2016 Annual Report available from EDGAR and in the dataset of <cit.> under the filename “6201_2016.htm.” In addition to the main document (Type 10-K on EDGAR), American Airlines also attaches a credit agreement (Type EX-10.1 on EDGAR). The credit agreement contains a similar amount of text as the main 10-K document but does not appear in <cit.>. We find that more than 89% of annual reports include at least one such attachment.] | null | null | null | In this work, we present BeanCounter, a token dataset of business-oriented text extracted from publicly available financial disclosures. To our knowledge, BeanCounter is the largest public corpus in the business domain. Relative to datasets derived from the web, the content in BeanCounter is more likely to be truthful and of high quality because the entities producing the content face civil and criminal penalties if it is not. Additionally, each piece of content in BeanCounter is associated with a timestamp representing when the content was made available thereby facilitating future research on the role of time in language models. Lastly, we explore biases and toxicity in the data using a novel evaluation scheme which focuses on locations where biased and/or toxic content is most likely to be present. We find that BeanCounter is significantly less toxic compared to another widely used corpus, and our preliminary exploration of using BeanCounter for continued pre-training shows improvements in finance domain knowledge and reduced toxic generation.
We acknowledge generous financial support from the Booth School of Business and the Center for Applied AI. The authors have no competing interests to disclose. |
http://arxiv.org/abs/2409.17585v1 | 20240926070634 | Anomalous Conformations and Dynamics of Active Block Copolymers | [
"Suman Majumder",
"Subhajit Paul"
] | cond-mat.soft | [
"cond-mat.soft",
"cond-mat.stat-mech"
] |
Institute of Applied Sciences, Amity University Uttar Pradesh, Noida 201313, India
Department of Physics and Astrophysics, University of Delhi, Delhi 110007, India
§ ABSTRACT
Heterogeneous distribution of passive and active domains in the chromosome plays a crucial role in its dynamic organization within the cell nucleus. Motivated by this, here we investigate the steady-state conformation and dynamics of a model active-block copolymer using numerical simulations. Our results show that depending on the relative arrangements of the active and passive blocks, the polymer shows an unusual swelling, even larger than the corresponding fully active polymer. On the one hand, the dynamics of the full polymer show the usual enhanced diffusion and Rouse-like scaling behavior. On the other hand, individual passive and active blocks show anomalous transient super- and sub-diffusive dynamics. We characterize this anomalous dynamics in terms of the dependence of a generalized diffusion constant on the polymer length and activity strength.
Anomalous Conformations and Dynamics of Active Block Copolymers
Suman Majumder and Subhajit Paul
September 28, 2024
===============================================================
§ INTRODUCTION
During cell division the genomic DNA combines with proteins, viz., histone, to form a complex called chromatin. This helps in compaction of the long DNA (≈ 1 m) to a relatively smaller nucleosome (typically 6-10 μm in diameter) that fits within the cell nucleus, and thereby allows the cell to divide <cit.>. The nucleosome folds to form chromatin fibres which further condense to create chromosomes, which eventually get replicated and separated during cell division. Naturally, chromatin plays a crucial role in the dynamic positioning of the chromosome within the cell nucleus. Hence, understanding the structure and dynamics of chromatin and chromosome has eluded physicists over the years <cit.>.
Apart from the usual thermal fluctuations, it is known that athermal stochastic forces, arising from local ATP-dependent energy consumption, are inhomogeneously distributed within the chromosome <cit.>. The presence of such athermal kicks renders the chromosome out of equilibrium, and thus it can be investigated using the framework of active matter <cit.>. A more precise physical description can be given by polymers comprised of monomers that are themselves active or that have activity induced by some external force. Considering the topology of many living entities, a number of recent studies have been dedicated to the physics of active polymers <cit.>. All these studies considered a fully active polymer, i.e., all the monomers are active. However, the chromosome has a heterogeneous distribution of inactive and active domains <cit.>. Chromatin remodeling with this consideration translates the transcription-coupled enzymatic activity into differing levels of stochastic forces on each monomer of a polymer model of the chromosome.
The conformation and dynamics of a passive polymer are characterized by specific scaling laws <cit.>. For example the size of the polymer measured using the radius of gyration R_g and the polymer length N measured as the number of monomers, are related via the scaling R_g∼ N^ν, where the critical exponent ν≈ 0.588 in a good solvent <cit.>. Similarly, the dynamics in absence of hydrodynamics, is highlighted by the Rouse scaling of the diffusion constant D ∼ N^-1<cit.>. The Zimm scaling law D∼ N^-ν describes the corresponding behavior in presence of hydrodynamics <cit.>. For active polymers too, the focus has always been on understanding these scaling laws. Theoretical and computational models of active polymers rely on introducing the activity by applying a local
force tangential to the polymer backbone <cit.> or
by choosing the monomers to be active <cit.>. Depending on the kind of activity, the polymer may exhibit a coil-globule transition in a good solvent <cit.> or a globule to coil transition in a poor solvent <cit.>, which is in contrast with the behavior of a passive polymer in the respective solvents. In almost all these studies, irrespective of the model the dynamics of an active polymer is highlighted by the remarkable enhancement of the effective long-time diffusion constant D_ eff<cit.>. For active Brownian polymer, one also realizes a universal Rouse-like scaling, that gets hardly affected by hydrodynamic interactions <cit.>.
In this work, motivated by the partially active nature of chromosomes, using computer simulations we investigate the static and dynamic properties of active copolymers where blocks of passive monomers are connected with blocks of active monomers along the contour of the polymer (see Fig. <ref>). Our results considering three different sequences of relative arrangements of the passive and active blocks (as described in Fig. <ref>) reveal unusual conformational behavior when compared to a fully active polymer (FAP). Our novel strategy of monitoring the dynamics of the individual passive and active blocks separately reveals anomalous behavior characterized by the simultaneous existence of super-diffusion of the passive block and sub-diffusion of the active block. In spite of this diverse dynamics, the long-time diffusivity of the full polymer still obeys a universal Rouse-like scaling <cit.>.
The rest of the paper is organized in the following way. Next in Sec. <ref>, we provide a detail description of the model used and the method of simulations. Following that in Sec. <ref> we present the results. It also includes description of the calculations of relevant observables. Finally in Sec. <ref> we summarize the results and also provide an outlook to the future work.
§ MODEL AND METHOD
We use a flexible bead-spring copolymer consisting of blocks of active and passive beads arranged according to the sequences shown in Fig. <ref>. Position r⃗_i of each bead follows the over-damped Langevin equation
∂_t r⃗_i = (D_tr/k_BT)[f_a n̂_i - ∇⃗ U_i] + √(2D_tr) Λ⃗_i^tr.
While for an active bead the stochastic self-propulsion force of strength f_ a>0 acts along the unit vector n̂_i, using f_ a≡ 0 imposes no activity for a passive bead. In Eq. (<ref>)U_i=V_ B+V_ NB is the total energy, which consists of the bond energy
V_B(r_i,i+1) = -0.5 K(r_i,i+1-r_0)^2,
with K=100 and the the non bonded interaction V_ NB, given by the standard Lennard-Jones potential
V_LJ(r_ij) = 4ϵ[(σ/r_ij)^12- (σ/r_ij)^6],
where ϵ is the interaction strength and σ≡ r_0≡1 denotes the diameter of the monomers. For convenience during simulations, instead of the full V_ LJ, we use the truncated and shifted LJ potential so that the effective non-bonded interaction has the form
V_NB(r) = V_LJ(r) - V_LJ(r_c) - (r - r_c) dV_LJ/dr|_r=r_c   for r < r_c,
V_NB(r) = 0   otherwise,
where the cut-off distance r_c=2^1/6σ. The orientation of the particles are updated as
∂_t n̂_i = √(2D_rot) (n̂_i × Λ⃗_i^rot).
We set the ratio between the translational and rotational diffusion constants to D_tr/(D_rot σ^2) = 1/3. The vectors Λ⃗_i^tr and Λ⃗_i^rot are white Gaussian noises with zero mean and unit variance, and are delta-correlated over time and space. We choose the friction coefficient γ≡ 1 and the unit of time as τ_0=σ^2γ/ϵ (∝ 1/D_rot=Δσ^2 γ /k_BT at a fixed k_BT/ϵ, where k_B is the Boltzmann constant).
Pe = f_a σ/(k_B T).
We perform simulations for different Pe at a fixed temperature T=0.1ϵ/k_B, using the velocity-Verlet integration scheme with a time step of 10^-4τ_0<cit.>.
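To make the update rule concrete, the sketch below writes one Euler–Maruyama step for the overdamped equation of motion given above; the actual simulations use a velocity-Verlet scheme with a time step of 10^-4 τ_0, so this is only an illustrative discretization, and the initial configuration below is a stand-in.
```python
import numpy as np

def langevin_step(r, n_hat, conservative_force, is_active, f_a, D_tr, kBT, dt, rng):
    """One Euler-Maruyama position update for all beads.

    r, n_hat, conservative_force: (N, 3) arrays; conservative_force stands for -grad U.
    is_active: boolean array of length N; the propulsion f_a is zero for passive beads.
    """
    active_force = f_a * n_hat * is_active[:, None]
    noise = rng.normal(size=r.shape)
    return (r + (D_tr / kBT) * (active_force + conservative_force) * dt
              + np.sqrt(2.0 * D_tr * dt) * noise)

def peclet(f_a, sigma=1.0, kBT=0.1):
    """Dimensionless activity strength Pe = f_a * sigma / (k_B T)."""
    return f_a * sigma / kBT

rng = np.random.default_rng(1)
N = 128
r = np.cumsum(rng.normal(scale=0.3, size=(N, 3)), axis=0)    # stand-in initial chain
n_hat = rng.normal(size=(N, 3))
n_hat /= np.linalg.norm(n_hat, axis=1, keepdims=True)
is_active = np.zeros(N, dtype=bool)
is_active[N // 4: 3 * N // 4] = True                          # PAP-like block pattern
r = langevin_step(r, n_hat, np.zeros((N, 3)), is_active,
                  f_a=5.0, D_tr=0.1, kBT=0.1, dt=1e-4, rng=rng)
print(peclet(5.0))                                            # Pe = 50
```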
§ RESULTS
We start with the conformations of active block copolymers. Fig. <ref> shows typical steady-state conformations for the three types of copolymers of length N=128 at Pe=50. As expected, with increasing Pe all the copolymers get extended. On comparing with a fully active polymer (FAP) at the same Pe, surprisingly copolymers APA and PA look more extended, while PAP appears to be shorter. To confirm this observation we measure the size of the polymer in terms of the radius of gyration
R_g = ⟨√( (1/(2N^2)) ∑_i,j (r⃗_i - r⃗_j)^2 )⟩,
where ⟨…⟩ denotes averaging over steady state and independent simulation runs. The distributions P(R_g) at Pe=50 for N=128 are presented in Fig. <ref>(a). While for PAP the peak is at a smaller value (R_g ≈ 7) than for FAP (R_g ≈ 10), for both PA and APA the corresponding peaks are at larger R_g, confirming their anomalously larger swelling than a FAP. For APA, the peak is at a value (R_g ≈ 18) almost twice that for FAP. The variation of R_g with Pe presented in Fig. <ref>(d) reveals a more concrete picture. For Pe< 10, R_g for different copolymers show marginal differences. Once the activity is of considerable strength, i.e., for Pe>10, differences between them show up. FAP and PAP show linear increase in R_g with Pe for the full range, as illustrated by the corresponding fitted dashed lines. The data for PA show an initial steeper but linear increase until Pe=50, following which they almost catch up with the data for FAP. Interestingly, the data for APA show an even steeper linear increase until Pe=50, beyond which they almost saturate.
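For reference, the quantity inside the steady-state average can be evaluated for a single conformation directly from the double-sum definition above; a minimal sketch, with a random-walk configuration standing in for simulation data, is:
```python
import numpy as np

def radius_of_gyration(r):
    """Single-configuration R_g from the pairwise double-sum definition; r is (N, 3)."""
    N = len(r)
    diff = r[:, None, :] - r[None, :, :]           # all pairwise displacement vectors
    return np.sqrt((diff ** 2).sum() / (2.0 * N ** 2))

rng = np.random.default_rng(0)
conf = np.cumsum(rng.normal(size=(128, 3)), axis=0)   # stand-in conformation
print(radius_of_gyration(conf))
```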
The unexpectedly larger R_g of the APA copolymer than a FAP can be phenomenologically understood from a careful observation of the steady-state trajectory. In an APA copolymer the two active blocks pull the passive block from both sides, resulting in forces that propagate via the bonds connecting a passive monomer with an active monomer on either end of the passive block. The thermal fluctuations of the passive monomers oppose the pulling forces, but are comparatively much weaker at T=0.1. For the FAP case, the active forces of the monomers away from the ends can easily cancel out the pulling force generated by the end monomers, and thus it behaves like a self-avoiding polymer. For the PA copolymer the pulling occurs from one end only, hence, the extension of the passive block is not as large, resulting in a size comparable to that of the FAP. To consolidate this speculation we calculate R_g of individual active and passive blocks. Note that if there are two blocks of the same type of monomer then the presented R_g is an average of the two blocks. The corresponding distributions are presented in Figs. <ref>(b) and (c). There the x-axes are scaled respectively by the number of monomers present in each of the blocks. Indeed, for all the copolymers the swellings for the passive blocks are greater than those of the active blocks. Among them, the APA copolymer shows the maximum swelling for both active and passive blocks, whereas data for PA and PAP copolymers are comparable. The corresponding variations with Pe presented in Figs. <ref>(e) and (f) also reveal the same fact of larger swelling for the passive blocks than the active ones. Interestingly, the variations are quite non-monotonic with increasing Pe. At smaller Pe, PA and APA have comparable sizes before they start deviating from each other. For larger Pe, however, PA and PAP have comparable sizes. In the case of APA and PA, the similarity in the behavior of R_g for the full polymer and the passive block with increasing Pe suggests that indeed the overall size of the polymer is mostly guided by the behavior of the passive block. Likewise, for PAP the full polymer behavior is similar to the behavior of the active block. The same set of similarities can also be spotted for the N dependence of R_g, as depicted in Figs. <ref>(g)-(i). Except for APA, the data for R_g of the full polymer [Fig. <ref>(g)] more or less obey R_g∼ N^0.588 scaling. The same is obeyed by the R_g of the active blocks [Fig. <ref>(h)] for all the copolymers. However, for the passive blocks in all cases the data show significant deviation from self-avoiding scaling behavior [see Fig. <ref>(i)]. Again for APA, the behavior of the passive block is similar to what is observed for the full polymer.
The unusual conformational behavior naturally prompts us to probe the dynamics of the center of mass (cm) of the full polymer and the individual blocks. From their respective trajectories we calculate the corresponding mean square displacements
MSD(t) = ⟨[r⃗_ cm(t)-r⃗_ cm(0)]^2⟩,
where r⃗_ cm is the position of the cm. Typically, MSD at short time captures a ballistic motion followed by normal diffusion with
MSD(t)=6D_ efft,
where D_ eff is the effective diffusion constant. Unlike typical diffusion, anomalous diffusion is described by
MSD(t) = D_ gt^α,
where D_g is the generalized diffusion constant <cit.>. The exponent α determines whether the motion is super-diffusive (α > 1) or sub-diffusive (α < 1). A typical diffusive behavior is obeyed by the data for FAP with N=64 in Fig. <ref>, where, for the convenience of identifying the different regimes, we plot the time dependence of MSD/t. The data show an initial linear increase followed by a plateau highlighting the diffusive regime.
To disentangle the dynamics of the cm of the passive and active blocks we present their respective MSDs in Fig. <ref>. Although in the long-time limit (t > 10^4) MSD data of both blocks for all cases merge with the data of the full polymer, the preceding transient period shows anomalous diffusion. There, the data for the passive block show a super-diffusive behavior with α∈ [1.35,1.37], while the active block shows sub-diffusion with α≈ 0.77. Such anomalous diffusive behavior is a signature of intra-cellular transport phenomena <cit.>. In most cases this has been attributed to the crowded environment of the cell experienced by the probed bio-entity <cit.>. In the present case, however, this is simply a consequence of the tug-of-war between the passive and the active blocks.
In Fig. <ref> we present the chain-length dependence of the anomalous behavior of the MSD(t) for passive and active blocks. Clearly, data for all chain lengths show similar behavior as presented in Fig. <ref>. The data for the passive blocks for all the copolymers show pronounced super-diffusive behavior. The corresponding sub-diffusive behavior of the active blocks is comparatively less pronounced for PA, though still deviating significantly from normal diffusion. One also notices that for all the copolymers the data of the active blocks for the shortest chain length (N=32) merge with the data for the corresponding passive block at late times, when the full chain starts diffusing. This merging occurs even later for longer chains. Importantly, both the passive and active blocks show a monotonic decrease of the amplitude of MSD(t) with increasing chain length. This urges us to check for any scaling of the generalized diffusion constant D_g as a function of N, which will be presented subsequently.
To dig deep into the anomalous diffusion we analyze the power-law behavior of MSD(t) in the transient regime by calculating the exponent as
α=⟨d ln MSD(t)/d ln t⟩,
where the ⟨…⟩ indicates an average over different times within the transient regime as well as different trajectories. In Fig. <ref>(a) we present the variation of α with Pe, for a fixed N for both the blocks of different copolymers. It shows how starting from α=1 the super- and sub-diffusion emerge, respectively, for the passive and active blocks as Pe increases. In the large Pe limit, α for the super-diffusion settles at a slightly smaller value for PAP compared to the other copolymers. The sub-diffusive behavior of the active blocks does not show the same trend, and roughly settles around 0.8 for all of them. On the other hand, one can notice from Fig. <ref>(b) that for large N the super-diffusive α is almost the same for all the copolymers, as also is the case for α for the sub-diffusion of the active blocks.
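Both the MSD and the transient exponent defined above can be estimated directly from a stored center-of-mass trajectory; a minimal sketch, with a synthetic trajectory standing in for simulation data and no ensemble averaging, is shown below.
```python
import numpy as np

def msd(cm_traj):
    """MSD(t) of the centre of mass relative to t = 0; cm_traj is a (T, 3) array."""
    disp = cm_traj - cm_traj[0]
    return (disp ** 2).sum(axis=1)

def transient_exponent(times, msd_vals, window):
    """Average logarithmic derivative d ln MSD / d ln t over a transient time window."""
    mask = (times >= window[0]) & (times <= window[1])
    slopes = np.gradient(np.log(msd_vals[mask]), np.log(times[mask]))
    return slopes.mean()

rng = np.random.default_rng(2)
t = np.arange(1, 10_001, dtype=float)
traj = np.cumsum(rng.normal(size=(len(t), 3)), axis=0)   # stand-in diffusive trajectory
alpha = transient_exponent(t, msd(traj) + 1e-12, window=(10, 1_000))
print(alpha)   # roughly 1 for normal diffusion (noisy for a single trajectory)
```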
Having established the evidence of anomalous diffusion in the transient regime, we calculate the corresponding generalized diffusion constant as
D_g = ⟨ exp[ (ln t ln MSD(t^') - ln t^' ln MSD(t)) / (ln t - ln t^') ] ⟩,
where the times t and t^' are within the transient regime, and ⟨…⟩ denotes averaging over different (t,t^') pairs and independent trajectories. The corresponding plots for D_g as a function of Pe are shown in Fig. <ref>(c) for all the copolymers. For smaller Pe, D_g remains almost invariant for all cases. At around Pe≈ 10, the active blocks show a transition to a Pe-dependent behavior. The passive blocks show a similar transition at larger Pe≈ 25. Even though the active blocks show the usual D_g∼ Pe^2 dependence <cit.>, the behavior of the passive block is even more enhanced with D_g∼ Pe^3. The corresponding scaling behaviors with respect to the chain length N are shown in Fig. <ref>(d). The data for the passive blocks show a lot of diversity, with only the behavior for PA being roughly consistent with a Rouse-like scaling D_g∼ N^-1 <cit.>. Both PAP and APA show anomalous behavior with even slower dynamics. In contrast, the data for the active blocks of all the copolymers are more or less consistent with the Rouse-like scaling <cit.>. Of course, a proper theoretical treatment is required to confirm the validity of the observed scaling. Nevertheless, it can be inferred that the dynamics in the transient regime is rich, and is highlighted by the anomalous diffusion and related scaling.
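The estimator for the generalized diffusion constant defined above can be transcribed directly; the sketch below checks it on a synthetic power-law MSD, where it recovers the prefactor exactly.
```python
import numpy as np

def generalized_diffusion_constant(times, msd_vals, pairs):
    """D_g averaged over (t, t') index pairs taken from the transient regime.

    Implements exp[(ln t * ln MSD(t') - ln t' * ln MSD(t)) / (ln t - ln t')].
    """
    vals = []
    for i, j in pairs:                       # i -> t, j -> t'
        lt, ltp = np.log(times[i]), np.log(times[j])
        lm, lmp = np.log(msd_vals[i]), np.log(msd_vals[j])
        vals.append(np.exp((lt * lmp - ltp * lm) / (lt - ltp)))
    return float(np.mean(vals))

# consistency check on a synthetic power law MSD = D_g * t^alpha
t = np.array([20.0, 50.0, 200.0, 800.0])
msd_vals = 0.5 * t ** 1.35                   # D_g = 0.5, alpha = 1.35
print(generalized_diffusion_constant(t, msd_vals, pairs=[(0, 2), (1, 3)]))   # ~0.5
```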
Finally, to check if the anomalous transient dynamics of the individual blocks have any effect on the long-time diffusion of the full polymer, we calculate the effective diffusion constant of the cm of polymer as
D_eff = ⟨ (1/6) lim_t→∞ d MSD(t)/dt ⟩.
The variation of D_eff with Pe is shown in Fig. <ref>(a), which also includes the data for a FAP. All the copolymers show behavior similar to that of a FAP. For Pe < 10, D_eff varies marginally, and beyond that it starts showing a quadratic dependence <cit.>. The crossover occurs almost at the same value of Pe where D_g of the active blocks shows a transition to a quadratic behavior as shown in Fig. <ref>(c). The corresponding scaling with respect to the chain length N is presented in Fig. <ref>(b), showing again a behavior proportional to the data for a FAP. In other words, the enhanced diffusion constant obeys a Rouse-like scaling D_eff∼ N^-1 for all the copolymers <cit.>.
§ CONCLUSION
To summarize, using numerical simulations we have presented results for the conformation and dynamics of linear active block copolymers where blocks of passive and active monomers reside along their contour. Our results show that depending on the respective positions of passive and active blocks one can tune the amount of swelling achieved by the polymer due to the exerted activities. For example, the copolymer APA, where two active blocks are present at the two ends with a passive block in between, exhibits more swelling than a fully active polymer of the same length. This is attributed to the pulling of the passive block by the two connected active blocks leading to an expansion of the passive block. The behavior of the other copolymers can also be argued on the basis of the tug-of-war between the passive and active blocks. This fact can possibly explain similar peculiarities in conformations and dynamics, and thereby the functionality of many bio-polymers. Even though the long-time dynamics of the cm of the copolymers exhibit the usual diffusive Rouse-like scaling, the long-lived transient regimes of the passive and active blocks show anomalous super- and sub-diffusion, respectively. The corresponding generalized diffusion constant also exhibits non-trivial scaling behaviors with the chain length. Considering the recent progress in developing synthetic active polymers <cit.>, the results presented here should motivate the design of new polymeric materials with tailored static and dynamic properties depending on the relative position of the passive and active blocks.
In the future, it would be interesting to explore other patterns of relative arrangements of the passive and active blocks. Sequences mimicking real chromosomal patterns would reveal the relevance of the anomalous features observed here to the dynamics pertinent to the motion of cell organelles within the cell nucleus. From a technical point of view it would also be intriguing to include the role of inertia <cit.>, which for particle systems triggers orientational ordering <cit.>.
The work was funded by the Science and Engineering Research Board (SERB), Govt. of India for a Ramanujan Fellowship (file no. RJF/2021/000044). S.P. acknowledges University of Delhi for providing financial assistance through the Faculty Research Programme Grant-IOE (ref. no. /IOE/2024-25/12/FRP).
[Sneppen and Zocchi(2005)] K. Sneppen and G. Zocchi, Physics in Molecular Biology (Cambridge University Press, Cambridge, 2005).
[Pollard et al.(2022)] T. D. Pollard, W. C. Earnshaw, J. Lippincott-Schwartz, and G. Johnson, Cell Biology (Elsevier, Philadelphia, 2022).
[Van den Engh et al.(1992)] G. Van den Engh, R. Sachs, and B. J. Trask, Estimating genomic distance from DNA sequence location in cell nuclei by a random walk model, Science 257, 1410–1412 (1992).
[Sachs et al.(1995)] R. K. Sachs, G. Van Den Engh, B. Trask, H. Yokota, and J. E. Hearst, A random-walk/giant-loop model for interphase chromosomes, Proc. Nat. Acad. Sci. U. S. A. 92, 2710–2714 (1995).
[Marko and Siggia(1997)] J. F. Marko and E. D. Siggia, Polymer models of meiotic and mitotic chromosomes, Mol. Biol. Cell 8, 2217–2231 (1997).
[Münkel and Langowski(1998)] C. Münkel and J. Langowski, Chromosome structure predicted by a polymer model, Phys. Rev. E 57, 5888 (1998).
[Mateos-Langerak et al.(2009)] J. Mateos-Langerak, M. Bohn, W. De Leeuw, O. Giromus, E. M. M. Manders, P. J. Verschure, M. H. G. Indemans, H. J. Gierman, D. W. Heermann, R. Van Driel, et al., Spatially confined folding of chromatin in the interphase nucleus, Proc. Nat. Acad. Sci. U. S. A. 106, 3812–3817 (2009).
[Barbieri et al.(2012)] M. Barbieri, M. Chotalia, J. Fraser, L.-M. Lavitas, J. Dostie, A. Pombo, and M. Nicodemi, Complexity of chromatin folding is captured by the strings and binders switch model, Proc. Nat. Acad. Sci. U. S. A. 109, 16173–16178 (2012).
[Jost et al.(2014)] D. Jost, P. Carrivain, G. Cavalli, and C. Vaillant, Modeling epigenome folding: formation and dynamics of topologically associated chromatin domains, Nucleic Acids Res. 42, 9553–9561 (2014).
[Ganai et al.(2014)] N. Ganai, S. Sengupta, and G. I. Menon, Chromosome positioning from activity-based segregation, Nucleic Acids Res. 42, 4145–4159 (2014).
[Di Pierro et al.(2016)] M. Di Pierro, B. Zhang, E. L. Aiden, P. G. Wolynes, and J. N. Onuchic, Transferable model for chromosome architecture, Proc. Nat. Acad. Sci. U. S. A. 113, 12168–12173 (2016).
[Agrawal et al.(2017)] A. Agrawal, N. Ganai, S. Sengupta, and G. I. Menon, Chromatin as active matter, J. Stat. Mech.: Theor. Exp. 2017, 014001 (2017).
[Shi et al.(2018)] G. Shi, L. Liu, C. Hyeon, and D. Thirumalai, Interphase human chromosome exhibits out of equilibrium glassy dynamics, Nat. Commun. 9, 3161 (2018).
[Menon(2020)] G. I. Menon, Chromatin as an active polymeric material, Emerg. Top. Life Sci. 4, 111–118 (2020).
[Weber et al.(2012)] S. C. Weber, A. J. Spakowitz, and J. A. Theriot, Nonthermal ATP-dependent fluctuations contribute to the in vivo motion of chromosomal loci, Proc. Nat. Acad. Sci. U. S. A. 109, 7338–7343 (2012).
[Ramaswamy(2010)] S. Ramaswamy, The mechanics and statistics of active matter, Annu. Rev. Condens. Matter Phys. 1, 323–345 (2010).
[Marchetti et al.(2013)] M. C. Marchetti, J. F. Joanny, S. Ramaswamy, T. B. Liverpool, J. Prost, M. Rao, and R. A. Simha, Hydrodynamics of soft active matter, Rev. Mod. Phys. 85, 1143–1189 (2013).
[Elgeti et al.(2015)] J. Elgeti, R. G. Winkler, and G. Gompper, Physics of microswimmers–single particle motion and collective behavior: A review, Rep. Prog. Phys. 78, 056601 (2015).
[Shaebani et al.(2020)] M. R. Shaebani, A. Wysocki, R. G. Winkler, G. Gompper, and H. Rieger, Computational models for active matter, Nat. Rev. Phys. 2, 181–199 (2020).
[Winkler and Gompper(2020)] R. G. Winkler and G. Gompper, The physics of active polymers and filaments, J. Chem. Phys. 153, 040901 (2020).
[Kaiser and Löwen(2014)] A. Kaiser and H. Löwen, Unusual swelling of a polymer in a bacterial bath, J. Chem. Phys. 141, 044903 (2014).
[Harder et al.(2014)] J. Harder, C. Valeriani, and A. Cacciuto, Activity-induced collapse and reexpansion of rigid polymers, Phys. Rev. E 90, 062312 (2014).
[Kaiser et al.(2015)] A. Kaiser, S. Babel, B. ten Hagen, C. von Ferber, and H. Löwen, How does a flexible chain of active particles swell?, J. Chem. Phys. 142, 124905 (2015).
[Isele-Holder et al.(2015)] R. E. Isele-Holder, J. Elgeti, and G. Gompper, Self-propelled worm-like filaments: spontaneous spiral formation, structure, and dynamics, Soft Matter 11, 7181–7190 (2015).
[Isele-Holder et al.(2016)] R. E. Isele-Holder, J. Jäger, G. Saggiorato, J. Elgeti, and G. Gompper, Dynamics of self-propelled filaments pushing a load, Soft Matter 12, 8495–8505 (2016).
[Bianco et al.(2018)] V. Bianco, E. Locatelli, and P. Malgaretti, Globulelike conformation and enhanced diffusion of active polymers, Phys. Rev. Lett. 121, 217802 (2018).
[Chaki and Chakrabarti(2019)] S. Chaki and R. Chakrabarti, Enhanced diffusion, swelling, and slow reconfiguration of a single chain in non-Gaussian active bath, J. Chem. Phys. 150, 094902 (2019).
[Liu et al.(2019)] X. Liu, H. Jiang, and Z. Hou, Configuration dynamics of a flexible polymer chain in a bath of chiral active particles, J. Chem. Phys. 151, 174904 (2019).
[Das et al.(2021)] S. Das, N. Kennedy, and A. Cacciuto, The coil–globule transition in self-avoiding active polymers, Soft Matter 17, 160–164 (2021).
[Paul et al.(2021)] S. Paul, S. Majumder, and W. Janke, Motion of a polymer globule with Vicsek-like activity: From super-diffusive to ballistic behavior, Soft Mater. 19, 306–315 (2021).
[Anderson et al.(2022)] C. J. Anderson, G. Briand, O. Dauchot, and A. Fernández-Nieves, Polymer-chain configurations in active and passive baths, Phys. Rev. E 106, 064606 (2022).
[Paul et al.(2022a)] S. Paul, S. Majumder, S. K. Das, and W. Janke, Effects of alignment activity on the collapse kinetics of a flexible polymer, Soft Matter 18, 1978–1990 (2022a).
[Paul et al.(2022b)] S. Paul, S. Majumder, and W. Janke, Activity mediated globule to coil transition of a flexible polymer in a poor solvent, Soft Matter 18, 6392–6403 (2022b).
[Majumder et al.(2024)] S. Majumder, S. Paul, and W. Janke, Enhanced diffusion and universal Rouse-like scaling of an active polymer in poor solvent, Phys. Rev. Mater. 8, 075601 (2024).
[Amitai and Holcman(2017)] A. Amitai and D. Holcman, Polymer physics of nuclear organization and function, Phys. Rep. 678, 1–83 (2017).
[Agrawal et al.(2020)] A. Agrawal, N. Ganai, S. Sengupta, and G. I. Menon, Nonequilibrium biophysical processes influence the large-scale architecture of the cell nucleus, Biophys. J. 118, 2229–2244 (2020).
[de Gennes(1980)] P.-G. de Gennes, Scaling Concepts in Polymer Physics (AIP, Melville, New York, 1980).
[Doi(1996)] M. Doi, Introduction to Polymer Physics (Oxford University Press, New York, 1996).
[Rubinstein and Colby(2003)] M. Rubinstein and R. H. Colby, Polymer Physics (Oxford University Press, New York, 2003).
[Clisby(2010)] N. Clisby, Accurate estimate of the critical exponent ν for self-avoiding walks via a fast implementation of the pivot algorithm, Phys. Rev. Lett. 104, 055702 (2010).
[Clisby and Dünweg(2016)] N. Clisby and B. Dünweg, High-precision estimate of the hydrodynamic radius for self-avoiding walks, Phys. Rev. E 94, 052102 (2016).
[Rouse Jr(1953)] P. E. Rouse Jr, A theory of the linear viscoelastic properties of dilute solutions of coiling polymers, J. Chem. Phys. 21, 1272–1280 (1953).
[Zimm(1956)] B. H. Zimm, Dynamics of polymer molecules in dilute solution: Viscoelasticity, flow birefringence and dielectric loss, J. Chem. Phys. 24, 269–278 (1956).
[Vatin et al.(2024)] M. Vatin, S. Kundu, and E. Locatelli, Conformation and dynamics of partially active linear polymers, Soft Matter 20, 1892–1904 (2024).
[Frenkel and Smit(2002)] D. Frenkel and B. Smit, Understanding Molecular Simulations: From Algorithms to Applications (Academic Press, San Diego, 2002).
[Note1] See Supplemental Material (SM) at [… ]. The SM contains movies of steady-state trajectories for a fully active polymer and different active copolymers of length N=128 at Pe=50.
[Note2] If there are two blocks of the same type of monomer then the presented R_g is an average of the two blocks.
[Bouchaud and Georges(1990)] J.-P. Bouchaud and A. Georges, Anomalous diffusion in disordered media: statistical mechanisms, models and physical applications, Phys. Rep. 195, 127–293 (1990).
[Metzler and Klafter(2000)] R. Metzler and J. Klafter, The random walk's guide to anomalous diffusion: a fractional dynamics approach, Phys. Rep. 339, 1–77 (2000).
[Metzler et al.(2014)] R. Metzler, J.-H. Jeon, A. G. Cherstvy, and E. Barkai, Anomalous diffusion models and their properties: non-stationarity, non-ergodicity, and ageing at the centenary of single particle tracking, Phys. Chem. Chem. Phys. 16, 24128–24164 (2014).
[Caspi et al.(2000)] A. Caspi, R. Granek, and M. Elbaum, Enhanced diffusion in active intracellular transport, Phys. Rev. Lett. 85, 5655 (2000).
[Tolić-Nørrelykke et al.(2004)] I. M. Tolić-Nørrelykke, E.-L. Munteanu, G. Thon, L. Oddershede, and K. Berg-Sørensen, Anomalous diffusion in living yeast cells, Phys. Rev. Lett. 93, 078102 (2004).
[Bronstein et al.(2009)] I. Bronstein, Y. Israel, E. Kepten, S. Mai, Y. Shav-Tal, E. Barkai, and Y. Garini, Transient anomalous diffusion of telomeres in the nucleus of mammalian cells, Phys. Rev. Lett. 103, 018102 (2009).
[Reverey et al.(2015)] J. F. Reverey, J.-H. Jeon, H. Bao, M. Leippe, R. Metzler, and C. Selhuber-Unkel, Superdiffusion dominates intracellular particle motion in the supercrowded cytoplasm of pathogenic Acanthamoeba castellanii, Sci. Rep. 5, 11690 (2015).
[Chen et al.(2015)] K. Chen, B. Wang, and S. Granick, Memoryless self-reinforcing directionality in endosomal active transport within living cells, Nat. Mater. 14, 589–593 (2015).
[Lampo et al.(2017)] T. J. Lampo, S. Stylianidou, M. P. Backlund, P. A. Wiggins, and A. J. Spakowitz, Cytoplasmic RNA-protein particles exhibit non-Gaussian subdiffusive behavior, Biophys. J. 112, 532–542 (2017).
[Song et al.(2018)] M. S. Song, H. C. Moon, J.-H. Jeon, and H. Y. Park, Neuronal messenger ribonucleoprotein transport follows an aging Lévy walk, Nat. Commun. 9, 1–8 (2018).
[Molina-Garcia et al.(2018)] D. Molina-Garcia, T. Sandev, H. Safdari, G. Pagnini, A. Chechkin, and R. Metzler, Crossover from anomalous to normal diffusion: truncated power-law noise correlations and applications to dynamics in lipid bilayers, New J. Phys. 20, 103027 (2018).
[Banks and Fradin(2005)] D. S. Banks and C. Fradin, Anomalous diffusion of proteins due to molecular crowding, Biophys. J. 89, 2960–2971 (2005).
[Jeon et al.(2011)] J.-H. Jeon, V. Tejedor, S. Burov, E. Barkai, C. Selhuber-Unkel, K. Berg-Sørensen, L. Oddershede, and R. Metzler, In vivo anomalous diffusion and weak ergodicity breaking of lipid granules, Phys. Rev. Lett. 106, 048103 (2011).
[Biswas et al.(2017)] B. Biswas, R. K. Manna, A. Laskar, P. B. S. Kumar, R. Adhikari, and G. Kumaraswamy, Linking catalyst-coated isotropic colloids into “active” flexible chains enhances their diffusivity, ACS Nano 11, 10025–10031 (2017).
[Löwen(2020)] H. Löwen, Inertial effects of self-propelled particles: From active Brownian to active Langevin motion, J. Chem. Phys. 152 (2020).
[Paul et al.(2024)] S. Paul, S. Majumder, and W. Janke, Spontaneous micro flocking of active inertial particles without alignment interaction, arXiv preprint arXiv:2402.04397 (2024).
| During cell division the genomic DNA combines with proteins, viz., histone, to form a complex called chromatin. This helps in compaction of the long DNA (≈ 1 m) to a relatively smaller nucleosome (typically 6-10 μm in diameter) that fits within the cell nucleus, and thereby allows the cell to divide <cit.>. The nucleosome folds to form chromatin fibres which further condense to create chromosomes, which eventually get replicated and separated during cell division. Naturally, chromatin plays a crucial role in the dynamic positioning of the chromosome within the cell nucleus. Hence, understanding the structure and dynamics of chromatin and chromosome has eluded physicists over the years <cit.>.
Apart from the usual thermal fluctuations, it is known that athermal stochastic forces, arising from local ATP-dependent energy consumption, are inhomogeneously distributed within the chromosome <cit.>. The presence of such athermal kicks renders it out of equilibrium, and thus the chromosome can be investigated using the framework of active matter <cit.>. A more precise physical description can be given by polymers composed of monomers that are themselves active or that have activity induced by some external force. Considering the topology of many living entities, a number of recent studies have been dedicated to the physics of active polymers <cit.>. All these studies considered a fully active polymer, i.e., all the monomers are active. However, the chromosome has a heterogeneous distribution of inactive and active domains <cit.>. Chromatin remodeling with this consideration translates the transcription-coupled enzymatic activity into differing levels of stochastic forces on each monomer of a polymer model of the chromosome.
The conformation and dynamics of a passive polymer are characterized by specific scaling laws <cit.>. For example the size of the polymer measured using the radius of gyration R_g and the polymer length N measured as the number of monomers, are related via the scaling R_g∼ N^ν, where the critical exponent ν≈ 0.588 in a good solvent <cit.>. Similarly, the dynamics in absence of hydrodynamics, is highlighted by the Rouse scaling of the diffusion constant D ∼ N^-1<cit.>. The Zimm scaling law D∼ N^-ν describes the corresponding behavior in presence of hydrodynamics <cit.>. For active polymers too, the focus has always been on understanding these scaling laws. Theoretical and computational models of active polymers rely on introducing the activity by applying a local
force tangential to the polymer backbone <cit.> or
by choosing the monomers to be active <cit.>. Depending on the kind of activity, the polymer may exhibit a coil-globule transition in a good solvent <cit.> or a globule to coil transition in a poor solvent <cit.>, which is in contrast with the behavior of a passive polymer in the respective solvents. In almost all these studies, irrespective of the model the dynamics of an active polymer is highlighted by the remarkable enhancement of the effective long-time diffusion constant D_ eff<cit.>. For active Brownian polymer, one also realizes a universal Rouse-like scaling, that gets hardly affected by hydrodynamic interactions <cit.>.
In this work, motivated by the partially active nature of chromosomes, we use computer simulations to investigate the static and dynamic properties of active copolymers in which blocks of passive monomers are connected with blocks of active monomers along the contour of the polymer (see Fig. <ref>). Our results for three different sequences of relative arrangements of the passive and active blocks (as described in Fig. <ref>) reveal unusual conformational behavior when compared to a fully active polymer (FAP). Our strategy of monitoring the dynamics of the individual passive and active blocks separately reveals anomalous behavior, characterized by the simultaneous existence of super-diffusion of the passive block and sub-diffusion of the active block. In spite of these diverse dynamics, the long-time diffusivity of the full polymer still obeys a universal Rouse-like scaling <cit.>.
The rest of the paper is organized in the following way. Next in Sec. <ref>, we provide a detail description of the model used and the method of simulations. Following that in Sec. <ref> we present the results. It also includes description of the calculations of relevant observables. Finally in Sec. <ref> we summarize the results and also provide an outlook to the future work. | null | null | We start with the conformations of active block copolymers. Fig. <ref> shows typical steady-state conformations for the three types of copolymers of length N=128 at Pe=50. As expected, with increasing Pe all the copolymers get extended. On comparing with a fully active polymer (FAP) at the same Pe, surprisingly copolymers APA and PA look more extended, while PAP appears to be shorter. To confirm this observation we measure the size of the polymer in terms of the radius of gyration
R_g=⟨√(1/2N^2∑_i,j(r⃗_i-r⃗_j)^2)⟩,
where ⟨…⟩ denotes averaging over the steady state and over independent simulation runs. The distributions P(R_g) at Pe=50 for N=128 are presented in Fig. <ref>(a). While for PAP the peak is at a smaller value (R_g ≈ 7) than for FAP (R_g ≈ 10), for both PA and APA the corresponding peaks are at larger R_g, confirming that they swell anomalously more than a FAP. For APA, the peak is at a value (R_g ≈ 18) almost twice that for FAP. The variation of R_g with Pe presented in Fig. <ref>(d) reveals a more concrete picture. For Pe< 10, R_g shows only marginal differences between the copolymers. Once the activity is of considerable strength, i.e., for Pe>10, differences between them show up. FAP and PAP show a linear increase in R_g with Pe over the full range, as illustrated by the corresponding fitted dashed lines. The data for PA show an initially steeper but still linear increase until Pe=50, following which they almost catch up with the data for FAP. Interestingly, the data for APA show an even steeper linear increase until Pe=50, beyond which they almost saturate.
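For completeness, a minimal Python/NumPy sketch of how R_g (and the block-wise R_g discussed below) can be computed from stored monomer positions is given here; it uses the center-of-mass form, which is equivalent to the double sum in the definition above, and the steady-state/ensemble average still has to be taken over frames and runs. The function names and the example block slices are assumptions for illustration only.

import numpy as np

def radius_of_gyration(positions):
    """R_g of a single conformation; positions has shape (N, 3)."""
    com = positions.mean(axis=0)
    return np.sqrt(((positions - com) ** 2).sum(axis=1).mean())

def block_radius_of_gyration(positions, block_slices):
    """Average R_g over blocks of the same monomer type.

    block_slices : list of slices selecting the monomers of each block, e.g.
        [slice(0, 32), slice(96, 128)] for the two end blocks of a chain of
        length 128 (an illustrative choice, not taken from this work).
    """
    return np.mean([radius_of_gyration(positions[s]) for s in block_slices])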
The unexpectedly larger R_g of the APA copolymer compared to a FAP can be understood phenomenologically from a careful observation of the steady-state trajectory. In an APA copolymer the two active blocks pull the passive block from both sides, resulting in forces that propagate via the bonds connecting a passive monomer with an active monomer on either end of the passive block. The thermal fluctuations of the passive monomers oppose the pulling forces; however, they are comparatively much weaker at T=0.1. For the FAP case, the active forces of the monomers away from the ends can easily cancel out the pulling force generated by the end monomers, and thus it behaves like a self-avoiding polymer. For the PA copolymer the pulling occurs from one end only; hence the passive block is not extended as much, resulting in a size comparable to the FAP. To consolidate this speculation we calculate R_g of the individual active and passive blocks. Note that if there are two blocks of the same type of monomer then the presented R_g is an average of the two blocks. The corresponding distributions are presented in Figs. <ref>(b) and (c). There the x-axes are scaled respectively by the number of monomers present in each of the blocks. Indeed, for all the copolymers the swelling of the passive blocks is greater than that of the active blocks. Among them, the APA copolymer shows the maximum swelling for both the active and passive blocks, whereas the data for the PA and PAP copolymers are comparable. The corresponding variations with Pe presented in Figs. <ref>(e) and (f) also reveal the same fact of larger swelling for the passive blocks than for the active ones. Interestingly, the variations are quite non-monotonic with increasing Pe. At smaller Pe, PA and APA have comparable sizes before they start deviating from each other. For larger Pe, however, PA and PAP have comparable sizes. In the case of APA and PA, the similarity in the behavior of R_g for the full polymer and the passive block with increasing Pe suggests that the overall size of the polymer is indeed mostly guided by the behavior of the passive block. Likewise, for PAP the full-polymer behavior is similar to the behavior of the active block. The same set of similarities can also be spotted in the N dependence of R_g, as depicted in Figs. <ref>(g)-(i). Except for APA, the data for R_g of the full polymer [Fig. <ref>(g)] more or less obey the R_g∼ N^0.588 scaling. The same is obeyed by the R_g of the active blocks [Fig. <ref>(h)] for all the copolymers. However, for the passive blocks the data in all cases show a significant deviation from the self-avoiding scaling behavior [see Fig. <ref>(i)]. Again, for APA the behavior of the passive block is similar to what is observed for the full polymer.
The unusual conformational behavior naturally indulges us to probe the dynamics of the center of mass (cm) of the full polymer and the individual blocks. From their respective trajectories we calculate the corresponding mean square displacements
MSD(t) = ⟨[r⃗_ cm(t)-r⃗_ cm(0)]^2⟩,
where r⃗_ cm is the position of the cm. Typically, MSD at short time captures a ballistic motion followed by normal diffusion with
MSD(t)=6D_ efft,
where D_ eff is the effective diffusion constant. Unlike typical diffusion, anomalous diffusion is described by
MSD(t) = D_ gt^α,
where D_ g is the generalized diffusion constant <cit.>. The exponent α determines whether the motion is super-diffusive (α >1) or sub-diffusive (α <1). A typical diffusive behavior is obeyed by the data for FAP with N=64 in Fig. <ref>, where for the convenience of identifying different regimes we plot the time dependence of MSD/t. The data show an initial linear increase followed by a plateau highlighting the diffusive regime. Similar behavior is also observed for the motion of the cm of the full polymer (red line) for all the copolymers, as presented in Figs. <ref>(a)-(c). In all cases, the length of the copolymers is chosen to be exactly twice that of the FAP such that they contain the same number of active monomers. The plateaus for all the copolymers correspond to the same value of D_ eff, which is significantly smaller than the corresponding D_ eff for the FAP, indicating a strong influence of the passive blocks on the dynamics.
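A minimal sketch of how MSD(t) of a cm trajectory can be evaluated is given below. For brevity it also averages over time origins within a single trajectory, which goes slightly beyond the pure ensemble average written above; the array names and the choice of lag times are illustrative.

import numpy as np

def mean_square_displacement(cm_trajectory, lags):
    """MSD of a centre-of-mass trajectory for a set of integer lag times (>= 1).

    cm_trajectory : array of shape (T, 3), cm position at every stored frame.
    """
    out = []
    for lag in lags:
        disp = cm_trajectory[lag:] - cm_trajectory[:-lag]
        out.append((disp ** 2).sum(axis=1).mean())
    return np.array(out)

An additional average over independent trajectories, as in the definition above, can then be taken by averaging the returned curves over runs.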
To disentangle the dynamics of the cm of the passive and active blocks we present their respective MSDs in Fig. <ref>. Although in the long-time limit (t > 10^4) the MSD data of both blocks merge with the data of the full polymer in all cases, the preceding transient period shows anomalous diffusion. There, the data for the passive block show a super-diffusive behavior with α∈ [1.35,1.37], while the active block shows sub-diffusion with α≈ 0.77. Such anomalous diffusive behavior is a signature of intra-cellular transport phenomena <cit.>. In most cases this has been attributed to the crowded environment of the cell experienced by the probed bio-entity <cit.>. In the present case, however, it arises simply from the tug-of-war between the passive and the active blocks.
In Fig. <ref> we present the chain-length dependence of the anomalous behavior of MSD(t) for the passive and active blocks. Clearly, the data for all chain lengths show behavior similar to that presented in Fig. <ref>. The data for the passive blocks of all the copolymers show pronounced super-diffusive behavior. The corresponding sub-diffusive behavior of the active blocks is comparatively less pronounced for PA, but still deviates significantly from normal diffusion. One also notices that for all the copolymers the data of the active blocks for the shortest chain length (N=32) merge with the data for the corresponding passive block at late times, when the full chain starts diffusing. This merging occurs even later for longer chains. Importantly, both the passive and active blocks show a monotonic decrease of the amplitude of MSD(t) with increasing chain length. This urges us to check for any scaling of the generalized diffusion constant D_ g as a function of N, which will be presented subsequently.
To dig deep into the anomalous diffusion we analyze the power-law behavior of MSD(t) in the transient regime by calculating the exponent as
α=⟨d ln MSD(t)/d ln t⟩,
where ⟨…⟩ indicates an average over different times within the transient regime as well as over different trajectories. In Fig. <ref>(a) we present the variation of α with Pe, at fixed N, for both blocks of the different copolymers. It shows how, starting from α=1, super- and sub-diffusion emerge for the passive and active blocks, respectively, as Pe increases. In the large-Pe limit, α for the super-diffusion settles at a slightly smaller value for PAP compared to the other copolymers. The sub-diffusive α of the active blocks does not show the same trend, and roughly settles around 0.8 for all of them. On the other hand, one can notice from Fig. <ref>(b) that for long chains the super-diffusive α is almost the same for all the copolymers, as is also the case for the sub-diffusive α of the active blocks.
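The logarithmic-derivative definition of α above can be evaluated numerically as in the following sketch; how the transient window is chosen is a judgement call left to the caller and is not prescribed here.

import numpy as np

def anomalous_exponent(t, msd, window):
    """alpha = d ln MSD / d ln t, averaged over a transient-regime window.

    t, msd : 1D arrays of the same length; window : boolean mask or index
        array selecting the transient regime.
    """
    log_t, log_msd = np.log(t[window]), np.log(msd[window])
    local_slopes = np.gradient(log_msd, log_t)  # local logarithmic derivative
    return local_slopes.mean()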
Having established the evidence of anomalous diffusion in the transient regime, we calculate the corresponding generalized diffusion constant as
D_ g=⟨exp[ln t ln MSD(t^') -ln t^'ln MSD(t)/ln t-ln t^'] ⟩,
where the times t and t^' are within the transient regime, and ⟨…⟩ denotes averaging over different (t,t^') pairs and independent trajectories. The corresponding plots for D_ g as a function of Pe is shown in Fig. <ref>(c) for all the copolymers. For smaller Pe, D_ g remains almost invariant for all cases. At around Pe≈ 10, the active blocks show a transition to a Pe-dependent behavior. The passive blocks show a similar transition at larger Pe≈ 25. Even though the active blocks show the usual D_ g∼ Pe^2 dependence <cit.>, the behavior of the passive block is even more enhanced with D_ g∼ Pe^3. The corresponding scaling behaviors with respect to the chain length N are shown in Fig. <ref>(d). The data for the passive blocks show a lot of diversity with only the behavior for PA being roughly consistent with a Rouse-like scaling D_ g∼ N^-1<cit.>. Both PAP and APA show anomalous behavior with even slower dynamics. In contrast, the data for the active blocks of all the copolymers are more or less consistent with the Rouse-like scaling <cit.>. Of course, a proper theoretical treatment is required to confirm the validity of the observed scaling. Nevertheless, it can be inferred that the dynamics in the transient regime is rich, and is highlighted by the anomalous diffusion and related scaling.
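A direct numerical transcription of the two-time estimator for D_ g defined above might look like the sketch below; which (t, t^') pairs inside the transient regime are averaged over is an illustrative choice left to the caller.

import numpy as np

def generalized_diffusion_constant(t, msd, pairs):
    """D_g from pairs of distinct times (t[i], t[j]) in the transient regime."""
    values = []
    for i, j in pairs:
        lt, ltp = np.log(t[i]), np.log(t[j])
        lm, lmp = np.log(msd[i]), np.log(msd[j])
        values.append(np.exp((lt * lmp - ltp * lm) / (lt - ltp)))
    return np.mean(values)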
Finally, to check if the anomalous transient dynamics of the individual blocks have any effect on the long-time diffusion of the full polymer, we calculate the effective diffusion constant of the cm of polymer as
D_ eff=⟨1/6lim_t →∞d/dt MSD(t)⟩.
The variation of D_ eff with Pe is shown in Fig. <ref>(a), which also includes the data for a FAP. All the copolymers show behavior similar to that of a FAP. For Pe < 10, D_ eff varies marginally, and beyond that it starts showing a quadratic dependence <cit.>. The crossover occurs almost at the same value of Pe where D_ g of the active blocks shows a transition to a quadratic behavior, as shown in Fig. <ref>(c). The corresponding scaling with respect to the chain length N is presented in Fig. <ref>(b), again showing a behavior proportional to the data for a FAP. In other words, the enhanced diffusion constant obeys a Rouse-like scaling D_ eff∼ N^-1 for all the copolymers <cit.>. | null | To summarize, using numerical simulations we have presented results for the conformation and dynamics of linear active block copolymers, where blocks of passive and active monomers reside along the contour. Our results show that, depending on the respective positions of the passive and active blocks, one can tune the amount of swelling the polymer achieves due to the exerted activities. For example, the copolymer APA, where two active blocks sit at the two ends with a passive block in between, exhibits more swelling than a fully active polymer of the same length. This is attributed to the pulling of the passive block by the two connected active blocks, leading to an expansion of the passive block. The behavior of the other copolymers can also be rationalized on the basis of the tug-of-war between the passive and active blocks. This may explain similar peculiarities in the conformations and dynamics, and thereby the functionality, of many bio-polymers. Even though the long-time dynamics of the cm of the copolymers exhibits the usual diffusive Rouse-like scaling, the long-lived transient regimes of the passive and active blocks show anomalous super- and sub-diffusion, respectively. The corresponding generalized diffusion constant also exhibits non-trivial scaling behaviors with the chain length. Considering the recent progress in developing synthetic active polymers <cit.>, the results presented here should stimulate the design of new polymeric materials with tailored static and dynamic properties that depend on the relative position of the passive and active blocks.
In the future, it would be interesting to explore other patterns of relative arrangement of the passive and active blocks. Sequences mimicking real chromosomal patterns would reveal the relevance of the anomalous features observed here to the motion of cell organelles within the cell nucleus. From a technical point of view, it would also be intriguing to include the role of inertia <cit.>, which for particle systems triggers orientational ordering <cit.>.
This work was funded by the Science and Engineering Research Board (SERB), Govt. of India, through a Ramanujan Fellowship (file no. RJF/2021/000044). S.P. acknowledges the University of Delhi for providing financial assistance through the Faculty Research Programme Grant-IOE (ref. no. /IOE/2024-25/12/FRP).
http://arxiv.org/abs/2409.17470v1 | 20240926021450 | Tactile Probabilistic Contact Dynamics Estimation of Unknown Objects | [
"Jinhoo Kim",
"Yifan Zhu",
"Aaron Dollar"
] | cs.RO | [
"cs.RO"
] |
Tactile Probabilistic Contact Dynamics Estimation of Unknown Objects
Jinhoo Kim^*1, Yifan Zhu^*2, and Aaron Dollar^2
^*: Denotes equal contributions.
^1 J. Kim is with the Department of Mechanical Engineering, ETH Zurich, Zurich, Switzerland. Work done as a visiting scholar at Yale University. [email protected]
^2 Y. Zhu and A. Dollar are with the Department of Mechanical Engineering and Materials Science, Yale University, New Haven, United States. {yifan.zhu, aaron.dollar}@yale.edu
September 28, 2024
=================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
We study the problem of rapidly identifying contact dynamics of unknown objects in partially known environments. The key innovation of our method is a novel formulation of the contact dynamics estimation problem as the joint estimation of contact geometries and physical parameters. We leverage DeepSDF, a compact and expressive neural-network-based geometry representation over a distribution of geometries, and adopt a particle filter to estimate both the geometries in contact and the physical parameters. In addition, we couple the estimator with an active exploration strategy that plans information-gathering moves to further expedite online estimation. Through simulation and physical experiments, we show that our method estimates accurate contact dynamics with fewer than 30 exploration moves for unknown objects touching partially known environments.
§ INTRODUCTION
While robot manipulation technologies have advanced rapidly, deploying robots that can robustly perform manipulation with external contact in novel settings such as assembly and cooking remains a significant challenge. Model-free methods require environments that closely resemble or are identical to the training environment and degrade in unfamiliar settings. Meanwhile, model-based approaches require dynamics models that are challenging to acquire quickly in unstructured environments where visual occlusion and sensor noise are prevalent.
In this work, we aim to rapidly identify an accurate contact dynamics model for a grasped unknown rigid object in a partially known rigid environment based exclusively on tactile measurements, and iteratively refine the model during interaction as shown in Fig. <ref>. Previous works have focused on explicitly estimating the contact locations and types <cit.> or obtaining a linear complementarity model <cit.>. However, explicitly determining contact positions and types is very challenging in real-world scenarios with complex object shapes, where there are abundant contact modes that also change frequently. In addition, a linear complementarity model is a local approximation that inevitably introduces approximation errors, especially when the local contact geometries are highly nonlinear.
Instead, we contribute a novel formulation of the contact dynamics estimation problem as the joint estimation of contact geometries and physical parameters, and our proposed method quickly captures contact dynamics in the wild with few assumptions. Essential to our method is a compact and expressive geometric representation leveraging DeepSDF <cit.>. This yields a representation of the object geometry using a learned continuous signed distance function (SDF) representation based on neural networks. The compact geometric representation allows us then to adopt a particle filter, which has shown robustness for tasks involving nonlinear and discontinuous contact dynamics <cit.>. Our representation ensures that the parameter space remains low dimensional, avoiding the curse of dimensionality of sampling-based methods, and enables the object parameters and geometry to be jointly estimated in a particle filter. In addition to the estimator, the quality of online samples also plays a key role in efficient estimation. We augment the particle filter with an active exploration strategy based on information theory that plans exploration actions with maximum expected information gain.
We evaluate our method on unknown objects in an environment of unknown friction with a flat surface and a vertical wall of unknown height and position. In both simulation and physical experiments, our estimator quickly estimates the contact dynamics with high accuracy. In simulation, the estimator shows less than 4 N of contact force prediction errors on new testing trajectories with ground truth force magnitudes going up to 25 N, while in physical experiments, it predicts with less than 0.5 N of error with 10 N of ground truth force magnitudes. This is achieved after fewer than 30 exploration actions. In addition, in simulation, we show that the active exploration strategy reduced wrench prediction error by more than 10% compared to random action selection.
§ RELATED WORK
§.§ Contact Dynamics Estimation for Tactile Measurements
Many existing works aim to explicitly estimate the contact type (e.g., point vs. line contact) and contact information (e.g., location of a point contact) from tactile measurements. These works adopt different estimation strategies, including direct equation fitting for narrow classes of contacts <cit.>, maximum a posteriori (MAP) estimation via factor graphs <cit.>, and particle filters <cit.>. Some of these works assume unknown object geometry and a partially known environment, and estimate contact dynamics by explicitly considering the contact type, parameters, and the coefficient of friction <cit.>. While these methods are effective when estimating limited contact types or modes, they fail to scale to cases with multiple contacts of different contact types or when the contact changes rapidly in realistic manipulation tasks. Other works <cit.> assume that object and environment geometries are known. The most similar prior work to ours was presented by Pankert and Hutter <cit.>, where a particle filter is used to estimate the in-hand transform of an insertion object and the world transform of a target box of known geometries. The key difference is that our method estimates these transforms along with the geometries and physical parameters with minimal prior knowledge, as is essential for deploying robots in novel unknown environments.
Instead of directly estimating the contact parameters, another line of work aims to directly learn local dynamics, modeled as a linear complementarity system <cit.>. In these works, the matrices of a linear complementarity system are fitted from observation data. Compared to these, our work estimates the geometries in contact, and avoids approximation errors from linearization.
§.§ Particle Filtering for Contact Dynamics
Particle filtering is a non-parametric filtering approach that can support multimodal probability distributions using particles. This property is highly valuable for rigid body contact estimation as the problem is often multi-modal. Many existing works have adopted particle filters for estimation problems involving contact or nonlinear dynamics <cit.>. Our main contribution lies in the novel formulation of the contact estimation problem as nonlinear multimodal filtering of both geometry and physical parameters.
§ PROBLEM DEFINITION
We consider quasi-static manipulation of planar rigid objects rigidly attached to the robot with an unknown rigid transform, in a fully rigid environment. In this work, we assume the object geometry is unknown and the environment geometry is partially known, e.g., a flat surface with an unknown height and a wall located at an unknown position. Our goal is to estimate the probabilistic dynamics function p(x_l+1, w_l+1|x_l, u_l) at time l, given past observations of the object poses x_1, ⋯, x_l ∈ SE(2), contact wrenches w_1, ⋯, w_l ∈ℝ^3, and position commands of the impedance controller u_1, ⋯, u_l ∈ SE(2). For convenience of notation, we also denote the observation o_l = [x_l, w_l].
§ METHOD
We propose a method to efficiently capture the contact dynamics via the estimation of contact geometries and physical parameters with minimal assumptions. We take advantage of the well-established quasi-static rigid body contact dynamics and solve the problem of estimating the contact dynamics via the geometry and physical parameters θ of a quasi-static rigid body simulator. The dynamics function is then represented as p(x_l+1, w_l+1|x_l, u_l) = ∫ p(x_l+1, w_l+1|x_l, u_l, θ)bel(θ)dθ, where p(x_l+1, w_l+1|x_l, u_l, θ) is the simulator and bel(θ) is the current belief of θ, which is recursively updated over time using particle filter. In the case of a deterministic simulator, we simply denote the simulator as (x_l+1, w_l+1) = f_θ(x_l,u_l). Summarized in Fig. <ref>, our method first adopts a compact and expressive geometric representation in the form of a learned continuous Signed Distance Function (SDF) representation of the distribution of geometries. Then, treating contact dynamics estimation as a nonlinear multimodal filtering problem on the unknown parameters of a quasi-static rigid body simulator, we adopt a particle filter for efficient estimation. Finally, we couple our estimator with an active exploration strategy based on information theory to collect samples with high information. In this section, we will first discuss the geometric representation, then detail the quasi-static rigid body simulator, present our estimation algorithm based on particle filtering, and finally the active exploration algorithm.
§.§ Geometry Representation
A key ingredient of our method is the choice of the geometric representation. Representations such as meshes or point clouds, while extremely flexible, contain too many parameters for any practical objects, making the estimation intractable. Instead, we take advantage of an object geometry prior by utilizing DeepSDF <cit.>, which is a signed distance function based on a neural network that can represent a broad class of shapes in a single latent vector. We denote it as g(p,z):ℝ^2 ×ℝ^d_z→ℝ, which takes in the query 2D position of a point in the object frame p ∈ℝ^2, and the latent geometry vector z ∈ [-1,1]^d_z, and outputs the signed distance d ∈ℝ at the query position from the object surface (positive for being outside of the object, and negative for being inside). We use the trained latent vector z and the scaling parameter s to represent the object geometry.
For experiments, we assume that the unknown objects are drawn from the same distribution that the DeepSDF is trained on and that we have partial knowledge of the terrain geometry, e.g., a flat surface with unknown height. However, partial knowledge is not necessarily a limitation of our method, since a compact geometric representation of the terrain, similar to that of the object, can be adopted, and we intend to explore this in future work. As we will demonstrate in the Results section, we use d_z = 2 and achieve accurate estimation across all objects tested.
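To make the representation concrete, the following minimal sketch shows how such a learned SDF g(p, z) might be queried. The network architecture, layer sizes, and the way the scale s is applied are illustrative assumptions of ours and are not specified by the paper.

```python
import torch
import torch.nn as nn

class PlanarDeepSDF(nn.Module):
    """Illustrative MLP g(p, z): (2D query point, latent shape code) -> signed distance."""
    def __init__(self, latent_dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, p, z):
        # p: (N, 2) points in the object frame, z: (latent_dim,) latent geometry vector
        z_tiled = z.expand(p.shape[0], -1)
        return self.net(torch.cat([p, z_tiled], dim=-1)).squeeze(-1)

# Query the signed distance of a point for a candidate shape (z, s); handling the
# size scale s by rescaling the query point is our assumption.
sdf = PlanarDeepSDF(latent_dim=2)
z = torch.tensor([0.1, -0.3])        # latent geometry vector, assumed in [-1, 1]^2
s = 1.05                             # size scale parameter
p = torch.tensor([[0.02, 0.01]])     # query point in the object frame (meters)
d = s * sdf(p / s, z)                # positive outside the object, negative inside
```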
§.§ Quasi-Static Contact Dynamics
In quasi-static simulation, we assume that the robot moves slowly such that the Coriolis forces and accelerations can be ignored. We also assume the rigid bodies are always in force equilibrium, which is a mild assumption for many robot manipulation tasks. In addition, the robot uses a Cartesian-space impedance controller for commanding the end-effector poses with known impedance K. Note that due to the quasi-static assumption, there is no longer a concept of velocity, and the system is driven by position commands of the impedance controller of the robot. We follow the simulator proposed by Pang et al. <cit.>, and briefly present the key aspects of the simulator below.
Denoting the state of the actuated and un-actuated rigid objects in the environment with subscripts a and un, at each time step the quasi-static rigid body simulator solves a linear complementarity problem (LCP) at time l:
Find v_l+1 such that
J_c,un^T f_c,un + h τ_un = 0 ,
J_c,a^T f_c,a + h τ_a + h K (x^* - x_l - h v_l+1) = 0 ,
0 ≤ ϕ_l + h f_c^T v_l+1 ⊥ f_c ≥ 0 ,
subject to the terrain constraints.
Where v_l+1 is the velocity at the next step and h is the time step such that x_l+1 = x_l + h v_l+1. Note that quasi-static systems do not actually have velocity; this term is only used for stepping the objects forward in time. J_c is the contact Jacobian, f_c is the frictional contact force that depends on the coefficient of friction μ, and τ are external forces including gravity and constraint forces, such as those that constrain a fixed-terrain rigid object to remain static. ϕ is the signed distance function. The expression hK(x^* - x_l - hv_l+1) is the impulse applied by the impedance controller of stiffness K over h. Note that we use the minus sign - for both positional and angular differences for rotation matrices. In our problem, we consider robots with a single rigidly attached actuated object interacting with static terrain. This LCP can be expressed as the KKT condition of a quadratic program <cit.>, and we solve it efficiently with OSQP <cit.>.
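As an illustration of that last step, the snippet below shows how such a quadratic program could be posed and solved with OSQP. The matrices here are placeholders standing in for the impedance and contact terms of the actual formulation from Pang et al., so this is a sketch of the solver interface rather than of the simulator itself.

```python
import numpy as np
import scipy.sparse as sp
import osqp

# Placeholder QP: minimize 0.5 v^T P v + q^T v  subject to  lb <= A v <= ub.
# In the simulator, P and q_lin would encode the impedance/contact costs and the
# rows of A the contact constraints; the numbers below are illustrative only.
n_v = 3                                        # planar generalized velocity (vx, vz, omega)
h, phi = 0.01, np.array([0.002, 0.0005])       # time step, signed distances of 2 contacts
J_c = np.array([[0.0, 1.0, 0.0],               # placeholder contact Jacobian rows
                [0.0, 1.0, 0.1]])

P = sp.csc_matrix(np.eye(n_v))                 # stand-in for h^2 K + contact regularization
q_lin = np.array([-0.01, 0.02, 0.0])           # stand-in for -h K (x^* - x_l) terms
A = sp.csc_matrix(J_c)
lb = -phi / h                                  # one way to encode phi_l + h J_c v >= 0
ub = np.inf * np.ones(len(phi))

prob = osqp.OSQP()
prob.setup(P=P, q=q_lin, A=A, l=lb, u=ub, verbose=False)
res = prob.solve()
v_next = res.x                                 # quasi-static "velocity"; x_{l+1} = x_l + h v_next
```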
§.§ Contact Dynamics Estimation via Particle Filtering
We treat dynamics estimation as a nonlinear multimodal filtering problem on the geometry and physical parameters θ. Therefore, θ is the concatenation of the latent SDF shape vector, the size scale, the pose of the object with respect to the end-effector, the environment shape parameter (such as the position of an unknown wall), and the coefficient of friction μ. Given the nonlinear and discontinuous nature of contact dynamics, we adopt a particle filter, a non-parametric filtering approach that has shown good performance for problems involving nonlinear dynamics <cit.>. In particle filters, we denote the particle set with M particles at time l as Θ_l = {θ_l^[1], ⋯, θ_l^[M]} and the associated weights at time l as Ω_l = {ω_l^[1], ⋯, ω_l^[M]}. The belief of the state at time l, or in our case Bel(Θ_l)≈∑^M_iω^i_l δ (θ_l - θ_l^i), is represented as a set of particles θ_l^i and the associated weights ω_l^i. The essence of the particle filter algorithm is the same as the Bayes filter, but with the beliefs represented as particles. This allows the particles to represent multi-modal distributions, which is very common in contact dynamics estimation.
Our particle filtering algorithm is summarized in Alg. 1. Given the current belief of particles, it first predicts the observation at time l based on the previous observation and the previous command input. Then, the importance weight of each particle is calculated by the probability density function of a Gaussian centered at the observation with a fixed diagonal covariance matrix R ∈ℝ^d_θ <cit.>, after which the weights are normalized. One challenge in using tactile feedback for contact dynamics estimation is that a single measurement is not very informative. To overcome this challenge, instead of only using the most recent observation and action, we use a history of N recent observations and actions (denoted as O_l and U_l) to update Bel(Θ_l), which was first introduced in <cit.>. Here, we note that particles are not updated through a process model, as the particles represent the fixed geometry and physical parameters. Instead, we use the roughening method <cit.> by adding artificial noise after resampling to prevent particle depletion, which happens when a small number of particles dominate the distribution. Artificial noise is sampled from a zero-mean Gaussian with variance scaled by a roughness r relative to the particle variance. To also mitigate particle depletion due to frequent resampling <cit.>, we only resample when the effective sample size n_eff = 1/∑^M_i ω^2_i is ≤ M/2.
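A compact sketch of this update step is given below. The function and variable names, the log-weight bookkeeping, and the way the observation history is folded in are our own assumptions about one reasonable realization of Alg. 1, not the authors' code.

```python
import numpy as np

def pf_update(particles, weights, history, simulate, R_diag, roughness=0.3):
    """Hedged sketch of the measurement update in Alg. 1.

    particles: (M, d_theta) array of geometry/physics parameters (static, no process model)
    history:   list of (o_prev, u_prev, o_obs) tuples over the last N steps
    simulate:  callable (theta, o_prev, u_prev) -> predicted observation vector
    R_diag:    fixed diagonal observation covariance entries
    """
    M = len(particles)
    log_w = np.log(weights + 1e-300)
    for o_prev, u_prev, o_obs in history:              # use the N recent observation/action pairs
        for i, theta in enumerate(particles):
            err = o_obs - simulate(theta, o_prev, u_prev)
            log_w[i] += -0.5 * np.sum(err**2 / R_diag)  # Gaussian likelihood (up to a constant)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()

    n_eff = 1.0 / np.sum(w**2)
    if n_eff <= M / 2:                                  # resample only when depleted
        idx = np.random.choice(M, size=M, p=w)
        particles = particles[idx]
        # Roughening: artificial noise scaled by the particle spread to avoid depletion.
        particles = particles + np.random.randn(M, particles.shape[1]) \
            * roughness * particles.std(axis=0)
        w = np.full(M, 1.0 / M)
    return particles, w
```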
§.§ Active Exploration
Instead of executing random exploration moves to collect information, we adopt an active exploration strategy based on information theory. We follow the active learning approach for particle filters proposed by Hauser <cit.>. We choose an action that maximizes the expected information gain (EIG), where the information gain is defined as the Kullback–Leibler (KL) divergence between the belief of θ before and after an update from the observation o. Specifically, we choose an action u^* from a set of action candidates U according to:
u^* = argmax_u ∈ U EIG(u) = argmax_u ∈ U ∫_o D_KL( Bel(θ|o) || Bel(θ) ) P(o|θ, u) do ,
D_KL( Bel(θ|o) || Bel(θ) ) = ∫_θ Bel(θ|o) log[ Bel(θ|o) / Bel(θ) ] dθ .
To calculate EIG for a single action, we take advantage of the particles representation of Bel(θ). Shown in Alg. 2, for each particle, we simulate the observation with the action u, perform a particle filter step (without resampling) based on the observation, and calculate the KL divergence for the updated belief. Then EIG is the weighted sum of these information gains. Note that the KL divergence can be easily calculated by directly using the weights before and after the importance weight updates. This is an O(NM^2) operation that requires many calls to the simulator where N is the number of actions <cit.>. Therefore, we randomly downsample M/5 particles and weights for EIG calculation.
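The following sketch mirrors this procedure. As before, the names and the Gaussian likelihood used for the hypothetical weight update are illustrative assumptions, and the particle subsampling reflects the downsampling mentioned above.

```python
import numpy as np

def expected_information_gain(u, particles, weights, simulate, R_diag, n_sub=None):
    """Hedged sketch of Alg. 2: EIG(u) as a belief-weighted sum of KL divergences.

    simulate: callable (theta, u) -> simulated observation vector under action u.
    """
    M = len(particles)
    idx = np.arange(M) if n_sub is None else np.random.choice(M, n_sub, replace=False)
    theta_s, w_s = particles[idx], weights[idx]
    w_s = w_s / w_s.sum()

    eig = 0.0
    for j, theta_j in enumerate(theta_s):               # hypothesize each particle as the truth
        o_sim = simulate(theta_j, u)                     # simulated observation under action u
        # Importance-weight update (no resampling) against the simulated observation.
        log_w = np.log(w_s + 1e-300)
        for i, theta_i in enumerate(theta_s):
            err = o_sim - simulate(theta_i, u)
            log_w[i] += -0.5 * np.sum(err**2 / R_diag)
        w_post = np.exp(log_w - log_w.max())
        w_post /= w_post.sum()
        # KL divergence computed directly from weights before/after the update.
        kl = np.sum(w_post * np.log((w_post + 1e-300) / (w_s + 1e-300)))
        eig += w_s[j] * kl
    return eig

# Action selection: pick u_star maximizing expected_information_gain over the candidate set.
```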
§ EXPERIMENTS AND RESULTS
We evaluate our estimation pipeline in both simulation and physical experiments. In this section, we first present the results of simulated experiments and then discuss physical experiments. We train the DeepSDF function on the 2D cross-sections of 21 selected objects from the YCB dataset <cit.> to represent the object geometry, where the latent geometry vector dimension is d_z = 2. For both simulation and physical experiments, we set the impedance of the controller to K = [100 N/m, 100 N/m, 50 Nm]. Note that the controller is very stiff in rotation and compliant along the x- and z-axes. In addition, each action changes the target pose by approximately 1 cm of translation or 5^∘ of rotation.
§.§ Simulated Experiments
Shown in Fig. <ref>, our environment in simulation is a flat ground with unknown height g_h and a vertical wall located at an unknown position p_w. The parameter θ is a 9-dimensional vector, including the DeepSDF latent vector z, the object pose relative to the end-effector frame T_o,ee, size scale s, and the surface coefficient of friction μ. We assume that we know the possible range of the unknown parameters, where the unknown parameters are selected from a uniform distribution of g_h∼𝒰[-0.02m, 0.02m], p_w∼𝒰[0.09 m, 0.18 m], z∼𝒰[[-1.0,-1.0], [1.0,1.0]], T_o,ee∼𝒰[[-0.02 m, -0.02 m, -0.2 rad], [0.02 m, 0.02 m, 0.2 rad]], s∼𝒰[0.8, 1.2], and μ∼𝒰[0.1, 0.9].
As shown in Fig. <ref>, the simulation experiments use 11 planar objects which are cross sections of objects from the YCB dataset. For the particle filter, we use M = 5,000 particles, observation noise parameters [R_F_x, R_F_y, R_τ, R_x, R_z, R_ϕ] = [30 N, 30 N, 0.3 Nm, 0.0001 m, 0.0001 m, 0.002 rad], roughness r = 0.3, and memory length N = 5. For each object in each environment, we compare three different exploration strategies, examples of which are shown in Fig. <ref>. The first is a baseline exploration strategy that adds random movements to a basic exploration trajectory (Random). This basic exploration trajectory simply commands the robot to make contact with the floor and move to the right. A zero-mean Gaussian noise with variance [0.03 m, 0.03 m, 0.25 rad] is added to the position command. The second is our active exploration strategy, which adds exploration actions to the same basic exploration trajectory. The action set consists of 27 position commands that form a uniform grid over the range [±0.03 m, ±0.03 m, ±0.025 rad]. Note that due to limitations in computation, our active exploration strategy only plans one step ahead. Therefore, this is a local strategy and it requires the basic exploration trajectory as guidance to avoid getting stuck. We hope to alleviate the computation requirement of our active exploration strategy in the future. The last is an “expert" exploration trajectory which is tuned by the authors based on the ground truth wall position. We use the same expert strategy across all the objects. We run each exploration strategy for 15 time steps.
Once the estimation is completed, we fix the particles and test the estimated dynamics model on three different testing trajectories in the same partially known environment for 30 time steps. One example is shown in Fig. <ref>. Presented in Table <ref>, we report the quantitative results, averaged over 11 objects on 3 testing trajectories. The metric we use is the mean absolute error (MAE) between the ground truth and the weighted mean predictions of the top 100 particles with the largest weights. We report MAE for wrench predictions and next-pose predictions except for rotation. This is because we use a very stiff controller in rotation and the error is minuscule. Overall, active exploration outperforms Random, but is worse than the Expert trajectory. We think that this is mainly due to the short horizon and the small action set we are using because of computational limits. Meanwhile, we observe that for the particular example shown in Fig. <ref>, active exploration outperforms Expert. This shows that a fixed exploration strategy, as is done for Expert, is not necessarily good for all objects, demonstrating the value of active exploration. As shown in Fig. <ref>, the active exploration approach uses both the bottom and the top right part of the object to make contact with the wall to simultaneously estimate the unknown object and environment geometry. As a result, from time step 11, the wall position prediction quickly converges to the ground truth position. In terms of computation, it takes less than 1 s to do the estimation step and about 5 s to perform an active exploration step on a computer with an Intel i9-13900KF CPU, 64 GB of RAM, and an NVIDIA GeForce RTX 4090 GPU.
§.§ Physical Experiments
As shown in Fig. <ref>, we use a UR5e robot arm with 3D printed objects mounted on the end effector to perform the task with three objects in the Wall environment. We implemented an impedance controller on the UR5e robot. As the UR5e does not offer torque control, we approximate impedance control by using the position controller and the end-effector F/T sensor. However, this requires the environment to be deformable. We used gym tiles that deform about 1-3 mm with a 10 N contact force in our experiments. Despite this violation of the rigid body assumption, we show that our estimator still performs accurate estimation.
We use the same hyperparameters for the particle filter, except for the observation noise parameter R = [20 N, 20 N, 0.5 Nm, 0.1 m, 0.1 m, 0.1 rad]. Here, the initial distribution over the unknown parameters is the following: g_h∼𝒰[-0.02 m, 0.02 m], p_w∼𝒰[-0.90 m, -0.80 m], z∼𝒰[[-1.0,-1.0], [1.0,1.0]], T_o,ee∼𝒰[[-0.02 m, -0.02 m, -0.2 rad], [0.02 m, 0.02 m, 0.2 rad]], s∼𝒰[0.8, 1.2], and μ∼𝒰[0.5, 1.6]. To showcase the best performance of our estimator, instead of using the active learning strategy, we adopt an Expert exploration trajectory for these experiments. We report the quantitative results in Table <ref> and show the estimation and testing trajectories for Lemon in Fig. <ref>. The estimation of Mug is also shown in Fig. <ref>. Despite the fact that the only information given in this case is that the environment consists of a flat surface with a wall, our estimator is able to quickly estimate the contact dynamics, achieving less than 0.5 N of force prediction error where the ground truth magnitude goes up to 10 N. We believe our method shows great promise towards contact dynamics estimation in the open world.
§ DISCUSSION AND CONCLUSION
In this work, we present a method to quickly estimate accurate contact dynamics for unknown objects in a partially known environment. Through both simulated and physical experiments, we demonstrate the accuracy of our estimator. We also show the effectiveness of our active exploration approach in the simulated experiments. One requirement of our method is the presence of a good geometry prior. We believe that with the abundance of 3D geometry data and the development of 3D large vision foundation models, such a requirement is not an obstacle.
There are a number of future directions we would like to pursue. First, we would like to extend this to 3D and lift the restrictions for a partially known environment. We hope to also investigate techniques to improve the sample efficiency of particle filters. Next, we would like to improve the computation efficiency of our active exploration strategy. Additionally, we want to explore training a reinforcement learning agent to learn an exploration strategy that best suits our estimation pipeline. Finally, we would like to adopt our estimator for downstream manipulation tasks.
http://arxiv.org/abs/2409.17463v1 | 20240926015058 | Polarizability and plasmons in pseudospin-1 gapped materials with a flat band | [
"Liubov Zhemchuzhna",
"Andrii Iurov",
"Godfrey Gumbs",
"Danhong Huang"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall"
] |
^1Department of Physics and Computer Science, Medgar Evers College of City University of New York, Brooklyn, NY 11225, USA
^2Department of Physics & Engineering Physics, Fordham University, Bronx, NY 10458, USA
^3Department of Physics and Astronomy, Hunter College of the City University of New York, 695 Park Avenue, New York, New York 10065, USA
^4Donostia International Physics Center (DIPC), P de Manuel Lardizabal, 4, 20018 San Sebastian, Basque Country, Spain
^5Space Vehicles Directorate, US Air Force Research Laboratory, Kirtland Air Force Base, New Mexico 87117, USA
§ ABSTRACT
The collective electronic properties of various types of pseudospin-1 Dirac-cone materials with a flat band and finite bandgaps in their energy spectra are the subject of our reported investigation. Specifically, we have calculated the dynamical polarization, plasmon dispersions as well as their decay rates due to Landau damping. Additionally, we present closed-form analytical expressions for the wave function overlaps for both the gapped dice lattice and the Lieb lattice. The gapped dice lattice is a special case of the more general α-T_3 model since its band structure is symmetric and the flat band remains dispersionless. On the other hand, the Lieb lattice has a flat band which appears at the lowest point of its conduction band. Our results for these two cases exhibit unique features in the plasmon spectra and their damping regions, which have never been reported in previous studies. For example, the particle-hole modes of a Lieb lattice appear as finite-size regions, while the plasmon modes exist only in a region with small wave numbers but an extended range of frequencies.
Polarizability and plasmons in pseudospin-1 gapped materials with a flat band
Liubov Zhemchuzhna^1,2,3[E-mail contact: [email protected], [email protected]],
Andrii Iurov^1[E-mail contact: [email protected], [email protected]],
Godfrey Gumbs^3,4,
and
Danhong Huang^5
September 28, 2024
============================================================================================================================================================================================================================
§ INTRODUCTION
Recent research on almost all aspects of the electronic properties of novel two-dimensional materials has become a crucial subject as well as one of the most actively pursued directions in condensed-matter physics, chemistry, photonics and quantum electronics. This is a consequence of the unique electron dynamics in graphene, whose properties have been considered to be among the most significant discoveries of this century. <cit.> Shortly after the discovery of plain graphene, researchers made another effort to fabricate and investigate other relevant candidates, such as graphene with a finite bandgap between its valence and conduction bands, novel materials with anisotropic and/or tilted Dirac cones as well as indirect band gaps, <cit.> new materials with Rashba spin-orbit coupling, <cit.> massive anisotropic electronic states, <cit.> twisted bilayers, <cit.> semi-Dirac materials, <cit.> etc. The band structures in several types of these materials also demonstrate valley- and spin-polarized electronic properties <cit.> and very unusual low-energy electronic spectra as well. Remarkably, anisotropic dispersion, accompanied by an energy gap in the band structure, of Dirac-cone materials could also be induced and controlled by applying an external off-resonance irradiation. <cit.>
Among such novel Dirac materials, those, involving a flat or dispersionless band in energy spectrum, are particularly interesting and stand out because of their highly unusual electronic, <cit.> collective, <cit.> optical, magnetic and transport <cit.> properties. Importantly, such materials are all connected to the so-called α-T_3 model described by a hexagon with an additional hub atom in its center, Lieb lattice, <cit.> Kagome lattice <cit.> and others, respectively.
Plasmons, or quantum quasi-particles representing collective charge-density fluctuations within a conducting solid, are among the important properties of low-dimensional materials. The motivation for this work is to gain further insight into the dynamical polarization, plasmon dispersions as well as their decay rates. With the use of angle-resolved photoemission spectroscopy (ARPES) or high-resolution electron energy loss spectroscopy (HREELS), quantitative studies of plasmon modes, including dispersion and dephasing (lifetime or damping), have been performed for graphene with zero and finite bandgap, <cit.> silicene and germanene, <cit.> hetero-structures, <cit.> hybrid systems, <cit.> multi-layers, fullerenes, <cit.> carbon nanotubes <cit.> and nanoribbons, as well as other materials, <cit.> at either zero or finite temperatures <cit.>. Particularly, studies of plasmon modes have been conducted for flat-band Dirac materials <cit.>.
In all these studies, special attention has been drawn to plasmon damping and stability <cit.> since only a weakly-damped plasmon mode can represent a quasi-particle. <cit.> Meanwhile, magneto-plasmons and the energy dispersion of electrons under a magnetic field have also been fully explored. <cit.> Moreover, studies of the dynamics of plasmon modes have been performed for tilted and anisotropic lattices, such as 1T'-MoS_2 and 8-pmmn borophene <cit.>, graphene, silicene and α-T_3 based nanoribbons <cit.>, in addition to related collective and transport properties, such as optical and Boltzmann conductivities. <cit.>
The remainder of this paper is organized as follows. In Secs. <ref> and <ref>, we calculate the low-energy Hamiltonian, electron energy spectrum and wave functions of both gapped dice and Lieb lattices, building on the electronic properties of well-known gapped pseudospin-1 Dirac materials with a flat band. Meanwhile, we discuss the similarities and differences in the electron dynamics of these two related materials. The subsequent Sec. <ref> is devoted to defining and calculating the dynamical polarization functions, plasmon dispersions and dampings in these materials. Specifically, we discuss the zero-frequency limit of the polarization function and static screening, which is known to be important in calculations of the Boltzmann conductivity. Furthermore, we show how the density of states could be employed for reliably estimating the plasmon frequency in the long-wavelength limit for gapped dice and Lieb lattices, as well as other known Dirac materials. In conjunction with the theory, numerical results are presented and discussed in detail to display the photo-excited electron dynamics in these two different materials. Finally, the concluding remarks and summary statements are given in Sec. <ref>.
§ FINITE-GAP DICE LATTICE: ELECTRONIC STATES AND BAND STRUCTURE
Mathematically speaking, a dice lattice corresponds to the ϕ→π/4 limit of an α-T_3 model, which was addressed and investigated extensively in Refs. [oriekhov2022optical,weekes2021generalized]. Specifically, the Hamiltonian for a gapped dice lattice is supplemented by an extra bandgap-related term, i.e.,
Σ̂^(1)_z = [
[ 1 0 0; 0 0 0; 0 0 -1 ]] ,
while the Hamiltonian itself is given by
Ĥ_d ( | τ) = ħ v_F/√(2) [
[ √(2) Δ_0 k^τ_- 0; k^τ_+ 0 k^τ_-; 0 k^τ_+ -√(2) Δ_0 ]] .
The energy dispersions for the gapped dice and Lieb lattices are presented in Fig. <ref>.
Here, in contrast to the general α-T_3 model, the eigenvalue equation corresponding to Eq. (<ref>) gives rise to a symmetric band structure with two equal bandgaps and an unaffected flat band right in the middle between the valence and conduction bands. Therefore, by including the extra term in Eq. (<ref>), the flat band is deformed and acquires a finite k-dispersion for all α-T_3 materials with a finite gap, except for a dice lattice having ϕ = π/4.
The energy dispersions, corresponding to Hamiltonian (<ref>), are given by ε^τ_γ=0 ( | Δ_0) ≡ 0, and
ε^τ_γ=± 1 ( | Δ_0) = γ√(k^2 + Δ_0^2) ,
which is valley (τ) independent. The corresponding wave functions for γ=± 1 of this gaped dice lattice are calculated as
Ψ_γ = ± 1^τ ( | Δ_0) = 1/2 {[ [ 1 + γΔ_0/E_Δ_0(k)] e^- i Θ_ k; √(2)γ √(1 - ( Δ_0/E_Δ_0(k))^2); [ 1 - γΔ_0/E_Δ_0(k)] e^i Θ_ k ]} ,
which is also τ independent, where E_Δ_0(k) = √(k^2 + Δ_0^2) = ε^τ_γ=1 ( | Δ_0). For the flat band (γ=0), however, its τ-independent wave function is obtained as
Ψ_γ = 0^τ (k | Δ_0) = 1/√(2) √(1 - ( Δ_0/E_Δ_0(k))^2) {[ - e^- i Θ_ k; √(2)Δ_0/√(E_Δ_0(k)^2 - Δ_0^2 ); e^i Θ_ k ]} .
In addition, the overlap integral 𝔒_γ, γ'^τ (, | Δ_0), required for calculating the polarization function, is defined by
𝔒_γ, γ'^τ(k, q | Δ_0) = | ⟨Ψ_γ^τ (k | Δ_0) | Ψ_γ'^τ (k+q | Δ_0) ⟩ |^2 .
Explicitly, by using Eq. (<ref>), the overlaps for Dirac-cone states with γ = ± 1 are calculated as
𝔒_γ, γ'^τ (k, q | Δ_0) = 1/4 { 1 + γγ' cos( β^δ_k,q) √([1 - ( Δ_0/E_Δ_0(k))^2 ] [1 - ( Δ_0/E_Δ_0(k+q))^2 ]) }^2 + Δ_0^2/[2 E_Δ_0(k) E_Δ_0(k+q)] {γγ' + cos^2 ( β^δ_k,q) √([1 - ( Δ_0/E_Δ_0(k))^2 ] [1 - ( Δ_0/E_Δ_0(k+q))^2 ]) }
= 1/4 { 1 + γγ' cos( β^δ_k,q) [k/E_Δ_0(k)] [|k+q|/E_Δ_0(k+q)] }^2 + Δ_0^2/[2 E_Δ_0(k) E_Δ_0(k+q)] {γγ' + cos^2 ( β^δ_k,q) [k/E_Δ_0(k)] [|k+q|/E_Δ_0(k+q)] } ,
where β^δ_k,q is the angle between the two vectors k and k+q, so that cos( β^δ_k,q) = (k^2 + k·q)/( k |k+q|). Specifically, for γ'=0 and γ=±1, one finds from Eq. (<ref>) that
𝔒_γ = ± 1, γ' = 0^τ (k, q | Δ_0) = 1/2 [|k+q|/E_Δ_0(k+q)] {sin^2 ( β^δ_k,q) - [ Δ_0/E_Δ_0(k)]^2 (k·q)/(k |k+q|)}
= 1/2 [|k+q|/E_Δ_0(k+q)] { k^2/|k+q|^2 - ( (k·q)/( q |k+q|))^2 - [ Δ_0/E_Δ_0(k)]^2 (k·q)/(k |k+q|)} ,
which involves the electron transition from/towards the flat band.
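As a cross-check of these closed-form overlaps, and as a building block for the polarization integral below, one can construct the gapped-dice spinors written above numerically and evaluate the overlaps directly; the short sketch below does this with ħ v_F = 1, and the completeness sum over final bands provides a simple sanity test. The function names and numerical values are our own.

```python
import numpy as np

def dice_spinor(kvec, gamma, delta0):
    """Eigenvector of the gapped dice lattice at momentum kvec (hbar v_F = 1)."""
    k = np.linalg.norm(kvec)
    theta = np.arctan2(kvec[1], kvec[0])
    E = np.sqrt(k**2 + delta0**2)
    d = delta0 / E
    if gamma == 0:   # flat band
        psi = np.array([-np.exp(-1j*theta),
                        np.sqrt(2.0)*d/np.sqrt(1.0 - d**2),
                        np.exp(1j*theta)])
        return np.sqrt((1.0 - d**2)/2.0) * psi
    return 0.5*np.array([(1.0 + gamma*d)*np.exp(-1j*theta),
                         np.sqrt(2.0)*gamma*np.sqrt(1.0 - d**2),
                         (1.0 - gamma*d)*np.exp(1j*theta)])

def overlap(kvec, qvec, g, gp, delta0):
    """|<Psi_g(k)|Psi_g'(k+q)>|^2 entering the polarization function."""
    return np.abs(np.vdot(dice_spinor(kvec, g, delta0),
                          dice_spinor(kvec + qvec, gp, delta0)))**2

k, q = np.array([0.7, 0.2]), np.array([0.3, 0.0])
# Completeness: summing over final bands must give 1 for any initial band.
print(sum(overlap(k, q, 1, gp, 0.5) for gp in (-1, 0, 1)))   # ~1.0
```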
§ ENERGY SPECTRUM AND ELECTRONIC STATES IN A LIEB LATTICE
Physically, besides a finite-gap dice lattice discussed in Sec. <ref>, flat bands can also be realized in some other types of lattices, mostly, optical Lieb and Kagome lattices. A Lieb lattice was observed in a number of existing systems and experimental setups, e.g., organic materials, optical lattices and waveguides.
Mathematically, however, a Lieb lattice can be viewed as the combination of three displaced square sublattices. The Hamiltonian of a Lieb lattice is generally written as <cit.>
H^( L)( | k_Δ) = ħ v_F [
[ k_Δ k_x 0; k_x - k_Δ k_y; 0 k_y k_Δ ]] ,
where the following substitution
k_x,y→π/a_0 + k_x,y
is required, and a_0 stands for the lattice parameter.
Before taking the substitution in Eq. (<ref>), three energy dispersions can be easily found from the Hamiltonian in Eq. (<ref>) as ε^( L)_γ = ± 1 ( | k_Δ)/ħ v_F = γ√( k_Δ^2 + k_x^2 + k_y^2) and ε^( L)_γ = 0 ( | k_Δ)/ħ v_F = k_Δ, which could be combined into a single expression, yielding
ε^(L)_γ( | k_Δ) = ħ v_F [δ_γ, 0 k_Δ + γ (1 - δ_γ, 0) √(k_Δ^2 + k^2) ] ,
where γ=0, ± 1, and δ_γ, 0 is the Kronecker symbol. Consequently, we obtain three energy subbands, and one of them (γ=0) is a flat band.
From Eq. (<ref>), it is obvious that the flat band in a Lieb lattice is located at a finite energy ħ v_F k_Δ, which is right next to the lowest point of the conduction band, while for the case of a dice lattice with a finite gap, this flat band is located symmetrically with respect to the valence and conduction bands.
The corresponding wave functions with respect to energy bands in Eq. (<ref>) are given by
Ψ^(L)_γ = ± 1 (k | k_Δ) = 1/√( 2 E_k (E_k - γħ v_F k_Δ)) {[ k_x; - k_Δ + γ E_k; k_y ]} ,
and
Ψ ^(L)_γ = 0 ( | k_Δ) = 1/k {[ - k_y; 0; k_x ]} = {[ - sinΘ_ k; 0; cosΘ_ k ]} ,
where E_k≡ħ v_F √(k_Δ^2+k^2) and Δ_0=2ħ v_Fk_Δ is the bandgap. It is straightforward to verify that eigenstates (<ref>) and (<ref>) are orthogonal and normalized.
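This can be checked quickly by diagonalizing the 3×3 Hamiltonian numerically; the sketch below (with ħ v_F = 1 and illustrative numbers of our choosing) confirms the three dispersions and that the flat-band state above is an eigenvector with eigenvalue ħ v_F k_Δ.

```python
import numpy as np

def lieb_hamiltonian(kx, ky, k_delta):
    """3x3 low-energy Lieb-lattice Hamiltonian (hbar v_F = 1), before the pi/a_0 shift."""
    return np.array([[ k_delta, kx,       0.0     ],
                     [ kx,     -k_delta,  ky      ],
                     [ 0.0,     ky,       k_delta ]])

kx, ky, k_delta = 0.4, -0.25, 0.3
evals, _ = np.linalg.eigh(lieb_hamiltonian(kx, ky, k_delta))

k = np.hypot(kx, ky)
analytic = np.sort([k_delta, np.sqrt(k_delta**2 + k**2), -np.sqrt(k_delta**2 + k**2)])
print(np.allclose(evals, analytic))        # True: flat band pinned at +k_delta

# Flat-band eigenvector ( -sin(Theta_k), 0, cos(Theta_k) ):
psi0 = np.array([-ky, 0.0, kx]) / k
print(np.allclose(lieb_hamiltonian(kx, ky, k_delta) @ psi0, k_delta * psi0))  # True
```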
Using the obtained wave functions in Eqs. (<ref>) and (<ref>), we calculate their overlaps as defined in Eq. (<ref>). For isotropic dispersions we can always consider the wave vector q aligned along a specific direction, such as the x-axis. By using this fact, we find
𝔒_γ = ± 1, γ' = ± 1 (k, q | k_Δ) = { k·q + ( E_k - γ k_Δ) [ E_k + E_k+q + (γ - γ') k_Δ ] }^2 / [ 2 E_k E_k+q ( E_k - γ k_Δ) ( E_k+q - γ' k_Δ) ] ,
and
𝔒_γ = ± 1, γ' = 0 (k, q | k_Δ) = [ (k×q)·ê_z ]^2 / [ 2 E_k ( E_k - γ k_Δ) |k+q|^2 ] = [ k^2 q^2 - (k·q)^2 ] / [ 2 E_k ( E_k - γ k_Δ) |k+q|^2 ] ,
where (k×q)·ê_z represents the z-component (out-of-plane component) of the cross product of the vectors k and q.
§ POLARIZATION FUNCTION AND PLASMON DISPERSIONS
We now turn our attention to the dynamics of plasmon modes in gapped dice and Lieb lattices, which are among the most crucial collective electronic
properties of these novel Dirac materials. For this, we first look into calculating the plasmon dispersion, which is determined by the following characteristic equation
ϵ(q, ω | Δ_0) = 1 - V_C(q) Π^(0)(q, ω | Δ_0) = 0 ,
where q and ω are the wave number and frequency of photo-excited electrons, respectively, while ϵ(q, ω | Δ_0) stands for a dielectric function defined within the q-ω plane. In Eq. (<ref>), V_C(q)=2 πβ_ϵ/q = e^2 /2ϵ_0ϵ_r q is an electron-electron Coulomb potential within a two-dimensional plane, and
β_ϵ = e^2/4πϵ_0 ϵ_r is the Coulomb coupling constant, which is inversely proportional to the dielectric constant ϵ_r of the substrate. The solutions of the secular equation ϵ(q,ω|Δ_0)=0 for the dielectric function defined in Eq. (<ref>) can in general be written as ω=Ω_ pl(q). This gives rise to a dispersion relation for the plasmon-mode frequency Ω_ pl(q) as a function of wave number q. The dielectric function in Eq. (<ref>) is given in terms of the polarization function for interacting electrons, which in the random-phase approximation (RPA) is Π^RPA(q,ω| Δ_0) = Π^(0)(q,ω| Δ_0)/ϵ(q, ω | Δ_0). The dielectric function is usually complex, like the dynamical polarization function Π^(0)(q, ω | Δ_0) defined in Eq. (<ref>) below. Here, the imaginary part of ϵ(q, ω | Δ_0) or Π^(0)(q, ω | Δ_0) determines the damping region of a plasmon mode within the q-ω plane, i.e., how likely a collective quasi-particle will decay into single-electron excitations. A stable plasmon mode acquires a very long lifetime, which is expected if both the real and imaginary parts of the dielectric function in Eq. (<ref>) are equal to zero.
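In practice, once Π^(0)(q, ω | Δ_0) is available numerically, the plasmon branch can be traced by scanning ω for zeros of Re ϵ(q, ω | Δ_0) and keeping the least-damped root. The sketch below illustrates this; the polarization is passed in as a user-supplied callable, and the units (energies in E_F, wave numbers in k_F) as well as the grid parameters are our own assumptions.

```python
import numpy as np
from scipy.optimize import brentq

def plasmon_branch(Pi0, q_list, beta_eps=1.0, w_max=3.0, n_w=600):
    """Trace Omega_pl(q) from Re eps(q, w) = 0, keeping the root with smallest |Im Pi0|.

    Pi0(q, w) is assumed to return the complex noninteracting polarization.
    """
    V_C = lambda q: 2.0 * np.pi * beta_eps / q           # 2D Coulomb potential
    branch = []
    for q in q_list:
        re_eps = lambda w: 1.0 - V_C(q) * Pi0(q, w).real
        w_grid = np.linspace(1e-3, w_max, n_w)
        vals = np.array([re_eps(w) for w in w_grid])
        roots = [brentq(re_eps, w_grid[i], w_grid[i + 1])
                 for i in range(n_w - 1) if vals[i] * vals[i + 1] < 0]
        if roots:
            # Least Landau-damped root approximates the long-lived plasmon mode.
            branch.append((q, min(roots, key=lambda w: abs(Pi0(q, w).imag))))
    return branch
```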
In the following, we will calculate the noninteracting polarization function which was introduced in Eq. (<ref>) for a pseudospin-1 Hamiltonian. This is obtained from the equation of motion for the density matrix in the absence of dissipation and working to lowest order in the external perturbation. This provides the induced density fluctuation in terms of the density-density response function
Π^(0)( r, r^', ω | ϕ, μ(T)) determined by four wave functions which depend on the coordinate variables r and r^'. Making use of the fact that the polarization depends on the difference r - r^', we Fourier transform leading to
Π^(0)(q, ω | ϕ, μ(T)) = g/(4 π^2) ∫ d^2k ∑_γ,γ' = 0, ± 1 𝔒_γ, γ' (k, q | k_Δ) { n_F[ϵ_γ(k | Δ_0), μ] - n_F[ϵ_γ'(|k+q| | Δ_0), μ] } / { (ħω + i 0^+ ) + ϵ_γ (k | Δ_0) - ϵ_γ' (|k+q| | Δ_0) } ,
where n_F[ϵ_γ (k | Δ_0), μ(T)] = {1 + exp[(ϵ_γ (k | Δ_0) - μ)/(k_B T)] }^-1 is the Fermi-Dirac distribution function in a thermal-equilibrium state of electrons. The wave-function overlaps 𝔒_γ, γ' (, | k_Δ) have been calculated in Eqs. (<ref>) and (<ref>) for a dice lattice as well as in Eqs. (<ref>) and (<ref>) for a Lieb lattice, separately.
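For concreteness, the snippet below evaluates this expression at T = 0 for the gapped dice lattice by brute-force quadrature, reusing overlap() from the earlier sketch; the momentum cutoff, grid sizes, broadening η, the degeneracy g, and the convention ħ = v_F = 1 are our own illustrative choices rather than the paper's.

```python
import numpy as np

def polarization_dice(q, omega, mu, delta0, g=4, eta=1e-2,
                      k_max=4.0, n_k=200, n_phi=90):
    """T = 0 sketch of the noninteracting polarization for the gapped dice lattice."""
    def energy(kn, gam):
        return 0.0 if gam == 0 else gam * np.sqrt(kn**2 + delta0**2)
    occ = lambda E: 1.0 if E <= mu else 0.0          # flat band (E = 0) filled when mu > 0

    ks = np.linspace(1e-4, k_max, n_k)
    phis = np.linspace(0.0, 2*np.pi, n_phi, endpoint=False)
    dA = (ks[1] - ks[0]) * (phis[1] - phis[0])
    qvec = np.array([q, 0.0])
    Pi = 0.0 + 0.0j
    for kn in ks:
        for phi in phis:
            kvec = kn * np.array([np.cos(phi), np.sin(phi)])
            for gam in (-1, 0, 1):
                for gamp in (-1, 0, 1):
                    Ei = energy(kn, gam)
                    Ef = energy(np.linalg.norm(kvec + qvec), gamp)
                    dn = occ(Ei) - occ(Ef)
                    if dn == 0.0:
                        continue
                    O = overlap(kvec, qvec, gam, gamp, delta0)
                    Pi += kn * dA * O * dn / (omega + 1j*eta + Ei - Ef)
    return g / (4 * np.pi**2) * Pi
```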
Here, before we take on rigorous numerical computations for the dynamical polarization function in Eq. (<ref>), dielectric function in Eq. (<ref>) and plasmon-mode dispersions for both gapped dice and Lieb lattices, we would like to introduce first an extremely useful and instructive technique in estimating plasmon dispersion in the long-wavelength limit based on the calculated electron density n_0 and the density of states ρ_d(E) as well.
In general, the density n_0(T) of an electron gas at a finite temperature T can be calculated by
n_0(T) = ∫_0^∞ρ_d(E) f(E) d E ,
where ρ_d(E) is the density of states while f(E) stands for the occupation function of electrons.
Specifically, at T=0K, we find that Eq. (<ref>) simply reduces to
n_0 = ∫_0^E_Fρ_d(E) d E ,
where E_F represents the Fermi energy of electrons in the system.
For two-dimensional gapped dice and Lieb lattice systems, their density of states ρ_d(E) of electrons can be written as
ρ_d(E) = g_s g_v/(2 π)^2 ∫_0^∞ k d k ∫_0^2 π d Θ_ k δ[ E - ε_γ ( | Δ_0) ] ,
which could be used for various Dirac materials, where g_s and g_v represent spin and valley degeneracies, respectively, while ε_γ ( | Δ_0) is the energy dispersion of electrons in the system. For the simplest case with a graphene, we find that ρ_d(E) = 2 | E |/[π (ħ v_F)^2]. For the gapped graphene with its energy dispersions ε_λ (k | Δ_0) = λ√((ħ v_F k)^2 + Δ_0^2), on the other hand, we obtain
ρ_d(E,Δ_0) = 1/π ∑_λ = ± 1 2 E/[λ (ħ v_F)^2] Θ( E/λ - Δ_0 ) ,
which leads to a charge density at T=0 K, given by
n_0(Δ_0) = (E_F^2 - Δ_0^2)/[π (ħ v_F)^2] Θ( E_F - Δ_0 ) .
Finally, Eq. (<ref>) leads us to the well-known expression for the polarization function
Π^(0)(q,ω | Δ_0) |_q ≪ k_F = E_F/π √(1 - ( Δ_0/E_F)^2) ( q/ħω)^2 Θ( E_F - Δ_0 ) .
In the presence of both a bandgap Δ_0 and a flat band at E = 0 (i.e. gapped dice lattice), Eq. (<ref>) gives rise to the following expression
ρ^( D)_d(E | Δ_0) = 1/π ∑_λ = ± 1 2 E/[λ (ħ v_F)^2] Θ( E/λ - Δ_0 ) + 1/(4 π) ∫_0^k_max⟶∞ d (k^2 ) δ (E) ,
while for a Lieb lattice we acquire
ρ^( L)_d(E | Δ_0) = 1/π ∑_λ = ± 1 2 E/[λ (ħ v_F)^2] Θ( E/λ - Δ_0 ) + 1/(4 π) ∫_0^k_ max⟶∞ d (k^2 ) δ (E - Δ_0) .
For both cases in Eqs. (<ref>) and (<ref>), their corresponding electron densities are the same, which is calculated as
n(Δ_0) = (E_F^2 - Δ_0^2)/[π (ħ v_F)^2] Θ( E_F - Δ_0 ) + 1/(4 π) ∫_0^k_ max⟶∞ d (k^2 ) .
From the density of states in both Eqs. (<ref>) and (<ref>), we find that the last term ∽∫_0^k_ max⟶∞ d (k^2 ) for a flat band is divergent and cannot be properly taken into account “as it is”. We know that the flat band only affects the long-wavelength limit in the next order of q as
[ħΩ^(α)_p(q) ]^2 = 8 E_F^2 β_c (ħ v_F q) / { 4 E_F + 4 [1 + 12 α^2 (1+α^2)^-2 ] ħ v_F q }
= 2 E_F β_c (ħ v_F q) - 2 β_c [1 + 12 α^2/(1+α^2)^2] (ħ v_F q)^2 + … ,
where β_c=sin^2(2ϕ), α=tan(ϕ) is the relative hopping parameter for the α-T_3 model, which equals 0 for graphene but 1 for a dice lattice. Therefore, we get
[ α^(D)]^2 { 1 + [α^(D)]^2 }^-2 = 1/4 for a dice lattice in Eq. (<ref>).
Based on the calculated density of states, the plasmon frequency is estimated as
Ω_p (q) |_q ≪ k_F = {β_c q ħ v_F √(π n_0)}^1/2 ,
which yields the same dependence on the material parameters (e.g., Fermi energy, Fermi velocity and dielectric constant) as in the exact results for the plasmon dispersion for all known Dirac materials. Using this approach, the long-wavelength plasmon frequencies are calculated as
[ħΩ_p^( G) (q) ]^2 = 2 β_c E_F ħ v_F q
in graphene,
[ħΩ_p^(G.G) (q) ]^2 = 2 β_c √(E_F^2 - Δ_0^2) Θ( E_F - Δ_0 ) ħ v_F q
in gapped graphene, and
[ħΩ_p^( S) (q) ]^2 = 2 β_c √(E_F^2 - (Δ_<^2+Δ_>^2 )^2) Θ( E_F - Δ_> ) ħ v_F q
+ 2 β_c √(E_F^2 - Δ_<^2) Θ( E_F - Δ_< ) Θ(Δ_> - E_F ) ħ v_F q
in silicene with two generally inequivalent bandgaps Δ_< ≤Δ_>.
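A small numerical helper makes it easy to compare these long-wavelength estimates across materials; the function below simply evaluates the gapped-Dirac expression quoted above (with β_c and ħ v_F supplied by the user in consistent units), so the gapless graphene result and the gapped dice/Lieb estimate follow as special cases of the same formula. The function name and example numbers are illustrative.

```python
import numpy as np

def omega_p_long_wavelength(q, E_F, delta0=0.0, beta_c=1.0, hbar_vF=1.0):
    """hbar*Omega_p = sqrt( 2 beta_c sqrt(E_F^2 - Delta_0^2) * hbar v_F q ),
    valid for E_F > Delta_0; returns 0 when the conduction band is empty."""
    if E_F <= delta0:
        return 0.0
    return np.sqrt(2.0 * beta_c * np.sqrt(E_F**2 - delta0**2) * hbar_vF * q)

# Example in units where E_F = 1: gapless vs. gapped case at small q.
print(omega_p_long_wavelength(q=0.05, E_F=1.0))               # graphene-like limit
print(omega_p_long_wavelength(q=0.05, E_F=1.0, delta0=0.7))   # gapped dice/Lieb estimate
```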
As a particularly interesting and straightforward result, we can easily predict the plasmon dispersions for a Kek-Y patterned graphene with two inequivalent gapless Dirac cones or two different Fermi velocities v_F,1 < v_F,2. <cit.> For this case, its density of states ρ_d(E,Δ_0) is calculated as
ρ_d(E,Δ_0) = 1/π ∑_λ = ± 1|E|/[ ħ v_(F,λ) q ]^2 .
This results in its electron density n(Δ_0), given by
n(Δ_0) = 1/π ∑_λ = ± 1|E_F|/[ ħ v_(F,λ) q ]^2 .
At the same time, its plasmon frequency takes the form
[ħΩ_p^( G) (q) ]^2 = β_c E_F ∑_λ = ± 1[ ħ v_(F,λ) q ] ,
which appears as an average between the “fast” and “slow” Dirac cones. Therefore, we conclude that for the flat-band materials with an energy bandgap (i.e., Dice and Lieb lattice), the flat band does not play a crucial role for the plasmon spectrum in the long-wavelength limit, and it could be obtained in the “density-of-states” approximation as equivalent to that of gapped graphene in Eq. (<ref>). This can be verified from the rigorous result in Eq. (<ref>) (see Ref. [oriekhov2022optical]) for dice lattice and α-T_3 model in the absence of a bandgap Δ_0.
§ NUMERICAL RESULTS AND DISCUSSION
In general, our calculated results for the energy dispersion of the plasmon modes are presented in the left/right columns of the middle/lower panels in Figs. <ref>-<ref> for a gapped dice lattice and in Figs. <ref> and <ref> for a Lieb lattice. The top panels in Figs. <ref> through <ref> refer to the real and imaginary parts of the numerically computed polarizability Π^(0)(q, ω | Δ_0). Meanwhile, we also display numerical solutions, ω=Ω_ pl(q), of the characteristic equation in Eq. (<ref>) for each plasmon mode so as to verify that the calculated dielectric function ϵ(q,ω=Ω_ pl(q) | Δ_0) becomes zero within the precision of our numerical computations. For all cases in Figs. <ref>-<ref>, we observe that Ω_ pl(q) for a small wave vector q (long-wavelength limit) increases with β_ϵ, implying that the plasmon energy decreases with an enhanced dielectric constant ϵ_r. For a large bandgap Δ_0, on the other hand, we observe an additional region for an elevated imaginary part of Π^(0)(q, ω | Δ_0) above the main diagonal determined by ω=v_Fq, which is further accompanied by a sign change in the real part of Π^(0)(q, ω | Δ_0) and corresponds to interband transitions between the valence and conduction bands. In addition, a separate peak shows up next to the main diagonal for all accessible values of wave vector q and frequency ω, which is related to different interband transitions near the Fermi wave vector k_F. For a reduced or zero bandgap, however, both these particle-hole modes merge together and reduce to a single narrow stripe along the main diagonal ω = v_F q.
Specifically, our numerical results for the dynamical polarization function Π^(0)(q, ω | Δ_0) and plasmon modes ω=Ω_ pl(q) in a gapped dice lattice are presented in Figs. <ref> and <ref>, which correspond to a large and a small bandgap Δ_0, respectively. From Fig. <ref>, we find the presence of particle-hole modes (i.e. regions for single-particle excitations), associated with a finite value of Im [Π^(0)(q, ω | Δ_0)], as well as the plasmon mode slightly above the diagonal boundary with a long lifetime in regions having zero or small Im [Π^(0)(q, ω | Δ_0)] values. Similar to gapped graphene, for gapped dice lattices a weakly-damped plasmon mode shows up over a larger range of wave vectors q below the Fermi energy E_F^(0). Meanwhile, a particle-hole mode, which results from interband transitions from the flat band, shows up for any frequency ħω≥ E_F^(0). Consequently, a low-damped plasmon mode could be seen only below the Fermi energy E_F^(0). As the bandgap ratio Δ_0/E_F^(0) is reduced from 0.7 to 0.2 in Fig. <ref>, the above-diagonal bandgap-split upper particle-hole mode in Fig. <ref> is greatly suppressed.
We present calculated polarizability and plasmon dispersion relations in Figs. <ref>-<ref> for a gapped Lieb lattice. Interestingly, we find that the spectra of particle-hole modes in a gapped Lieb lattice become quite different from those in a gapped dice lattice or a gapped α-T_3 model. In the current case, for a small gap parameter k_0 = Δ_0/(2 ħ v_F) = 0.2 k_F^(0) chosen for Fig. <ref>, both the electron low-energy band structure and the corresponding schematics for electron transitions in a gapped Lieb lattice remain similar to those in gapless α-T_3 materials and dice lattices. Therefore, one expects to find some similarities in the calculated spectra of both particle-hole and plasmon modes in these two different types of materials.
Once the bandgap Δ_0 or the parameter k_0 becomes large in Fig. <ref>(b), one observes quite different electronic and plasmon features in a gapped Lieb lattice. In this case, the imaginary part of the polarization function exhibits a significant and extended peak in contrast to a line or narrow regions found in previously studied Dirac-cone materials. Meanwhile, we also observe an extended peak region for Re [Π^(0)(q, ω | Δ_0)] in Fig. <ref>(a).
For dispersions of plasmon modes presented in panels (c)-(f) of Figs. <ref> and <ref>, we find low-damping regions with small wave vectors q for all relevant frequency ranges below the Fermi energy E_F^(0). Therefore, we conclude that only long-wavelength plasmons could prevail with a small damping and a long lifetime. Additionally, an undamped plasmon branch could exist over a larger range of frequencies with an enhanced β_ϵ value since it allows higher plasmon frequencies for a fixed wave vector q. On the other hand, for small values of β_ϵ corresponding to larger dielectric constants, the plasmon modes are greatly suppressed, as seen in Fig. <ref>(f).
§ SUMMARY AND REMARKS
In conclusion, we have investigated in this paper both the electronic and collective properties of pseudospin-1 Dirac materials with a flat band within their finite bandgaps. Particularly, we have calculated and compared the dynamical polarization function, particle-hole modes, as well as plasmon spectra and dampings for gapped dice and Lieb lattices, which are the best-known and most typical Dirac-cone materials featuring a flat band in their band structure as well as finite gaps among the valence, conduction and flat bands.
In our considered models, each of these pseudospin-1 Dirac materials exhibits a very unique structure of electron transitions between empty and occupied states, which becomes adjustable by varying the Fermi energy or doping density in each material. For a dice lattice, we find a symmetric band structure in contrast to any other α-T_3 material having 0<α<1. This enables an analytical expression for the wave-function overlap, which is a key component in calculating the polarization function. Also, we have obtained the corresponding wave-function overlaps analytically for the Lieb lattice.
For a dice lattice, we have found that a weakly-damped plasmon mode could exist for an extended range of wave vectors in the presence of a finite bandgap. However, this plasmon mode becomes strongly damped once its frequency exceeds the Fermi energy due to the electron-hole modes associated with transitions from the flat band. This is a genuine feature for all types of α-T_3 materials with either zero or finite bandgap. By comparison, the Lieb lattice represents a truly unique scheme of electron transitions, and its particle-hole modes reveal a large and extended peak in contrast to the narrow region found for all known Dirac-cone materials. Its plasmon modes with a long lifetime survive only for relatively small wave vectors but for a wide range of frequencies. Importantly, its plasmon modes become highly damped for all large wave vectors. As a result, it is almost impossible to observe an undamped plasmon mode in Lieb lattices with a large k_0 or Δ_0 value.
We are confident that the numerical results in this paper have predicted some novel electronic and collective properties of recently discovered Dirac materials, as well as new types of tunable plasmon modes and their spectra. This work will undoubtedly find numerous applications in novel nano-electronic and plasmonic devices.
A.I. was supported by the funding received from TRADA-54-46, PSC-CUNY Award # 66045-00 54. G.G. was supported by Grant No. FA9453-21-1-0046 from the Air Force Research Laboratory (AFRL). D.H. would like to acknowledge the Air Force Office of Scientific Research (AFOSR) and the views expressed are those of the authors and do not reflect the official guidance or position of the United States Government, the Department of Defense or of the United States Air Force.
| Recent research on almost all aspects of electronic properties in novel two-dimensional materials has become a crucial subject as well as one of the most actively pursued directions in condensed-matter physics, chemistry, photonics and quantum electronics. This is a consequence of the unique electron dynamics in graphene, whose properties have been considered to be among the most significant discoveries in this century. <cit.> Shortly after the discovery of plain graphene, researchers made another effort to fabricate and investigate other relevant candidates, such as graphene with a finite bandgap between its valence and conduction bands, novel materials with anisotropic and/or tilted Dirac cones as well as indirect band gaps, <cit.> new materials with Rashba spin-orbit coupling, <cit.> massive anisotropic electronic states, <cit.> twisted bilayers, <cit.> semi-Dirac materials, <cit.> etc. The band structures in several types of these materials also demonstrate valley and spin-polarized electronic properties <cit.> and very unusual low-energy electronic spectra as well. Remarkably, anisotropic dispersion, accompanied by an energy gap in the band structure, of Dirac-cone materials could also be induced and controlled by applying an external off-resonance irradiation. <cit.>
Among such novel Dirac materials, those involving a flat or dispersionless band in the energy spectrum are particularly interesting and stand out because of their highly unusual electronic, <cit.> collective, <cit.> optical, magnetic and transport <cit.> properties. Importantly, such materials are all connected to the so-called α-T_3 model described by a hexagon with an additional hub atom in its center, the Lieb lattice, <cit.> the Kagome lattice <cit.> and others.
Plasmons, or quantum quasi-particles representing collective charge-density fluctuations within a conducting solid, are among the important properties of low-dimensional materials. The motivation for this work is to gain further insight into the dynamical polarization, plasmon dispersions as well as their decay rates. With the use of angle-resolved photoemission spectroscopy (ARPES) or high resolution electron energy loss spectroscopy (HREELS), a quantitative study of plasmon modes, including dispersion and dephasing (lifetime or damping), has been performed for graphene with zero and finite bandgap, <cit.> silicene and germanene, <cit.> hetero-structures, <cit.> hybrid systems, <cit.> multi-layers, fullerenes, <cit.> carbon nanotubes <cit.> and nanoribbons, as well as other materials, <cit.> at either zero or finite temperatures <cit.>. Particularly, studies of plasmon modes have been conducted for flat-band Dirac materials <cit.>.
In all these studies, a special attention has been drawn towards plasmon damping and stability <cit.> since only a weakly-damped plasmon mode can represent a quasi-particle. <cit.> Meanwhile, magneto-plasmons and energy-dispersion of electrons under a magnetic field have also been explored fully. <cit.> Moreover, studies on dynamics of plasmon modes have been performed for tilted and anisotropic lattices, such as 1T'-MoS_2 and 8-pmmn borophene <cit.>, graphene, silicene and α-T_3 based nanoribbons <cit.> in addition to related collective and transport properties, such as optical and Boltzmann conductivities. <cit.>
The remainder of this paper is organized as follows. In Secs. <ref> and <ref>, we calculate the low-energy Hamiltonian, electron energy spectrum and wave functions in both gapped dice and Lieb lattices by means of electronic properties in well-known gapped pseudospin-1 Dirac materials with a flat band. Meanwhile, we discuss similarity and difference in electron dynamics of these two likewise materials. The consequent Sec. <ref> is devoted to defining and calculating dynamical polarization functions, plasmon dispersions and dampings in these materials. Specifically, we discuss the zero-frequency limit of the polarization function and static screening, which is known to be important in calculations of the Boltzmann conductivity. Furthermore, we show how the density of states could be employed for reliably estimating the plasmon frequency in the long wavelength limit for gapped dice and Lieb lattices, as well as other known Dirac materials. In conjunction with the theory, numerical results are presented and discussed in detail for displaying photo-excited electron dynamics in these two different materials. Finally, the concluding remarks and summary statements are given in Sec. <ref>. | null | null | null | null | null |
http://arxiv.org/abs/2409.17829v1 | 20240926132736 | Phase glides and self-organization of atomically abrupt interfaces out of stochastic disorder in $α$-Ga$_{2}$O$_{3}$ | [
"Alexander Azarov",
"Javier García Fernández",
"Junlei Zhao",
"Ru He",
"Ji-Hyeon Park",
"Dae-Woo Jeon",
"Øystein Prytz",
"Flyura Djurabekova",
"Andrej Kuznetsov"
] | cond-mat.mtrl-sci | [
"cond-mat.mtrl-sci"
] |
[email protected]
Department of Physics and Centre for Materials Science and Nanotechnology, University of Oslo, PO Box 1048 Blindern, N-0316 Oslo, Norway
Department of Physics and Centre for Materials Science and Nanotechnology, University of Oslo, PO Box 1048 Blindern, N-0316 Oslo, Norway
[email protected]
Department of Electronic and Electrical Engineering, Southern University of Science and Technology, Shenzhen 518055, China
Department of Physics and Helsinki Institute of Physics, University of Helsinki, P.O. Box 43, FI-00014, Finland
Korea Institute of Ceramic Engineering & Technology, Jinju 52851, South Korea
Korea Institute of Ceramic Engineering & Technology, Jinju 52851, South Korea
Department of Physics and Centre for Materials Science and Nanotechnology, University of Oslo, PO Box 1048 Blindern, N-0316 Oslo, Norway
Department of Physics and Helsinki Institute of Physics, University of Helsinki, P.O. Box 43, FI-00014, Finland
[email protected]
Department of Physics and Centre for Materials Science and Nanotechnology, University of Oslo, PO Box 1048 Blindern, N-0316 Oslo, Norway
§ ABSTRACT
Disorder-induced ordering and unprecedentedly high radiation tolerance in γ-phase of gallium oxide is a recent spectacular discovery at the intersection of the fundamental physics and electronic applications.
Importantly, so far these data have been collected with initial samples in the form of the thermodynamically stable β-phase of this material.
Here, we investigate these phenomena starting instead from the already metastable α-phase and explain a radically new trend occurring in the system.
We argue that in contrast to that in β-to-γ disorder-induced transitions, the O sublattice in α-phase exhibits hexagonal close-packed structure, so that to activate α-to-γ transformation significant structural rearrangements are required in both Ga and O sublattices.
Moreover, consistently with theoretical predictions, α-to-γ phase transformation requires accumulation of the substantial tensile strain to initiate otherwise impossible lattice glides.
Thus, we explain the experimentally observed trends in terms of the combination of disorder and strain governing the process.
Finally, and perhaps most amazingly, we demonstrate atomically abrupt α/γ interfaces paradoxically self-organized out of the stochastic disorder.
Phase glides and self-organization of atomically abrupt interfaces out of stochastic disorder in
Andrej Kuznetsov
===================================================================================================
§ INTRODUCTION
Recently gallium oxide (Ga2O3) has attracted attention of a broad audience spreading from those dealing with fundamentals of the phase transitions [1-5] to device application experts [6-10].
Among the rest of the highlights there was a discovery of disorder-induced ordering in Ga2O3 [11-14] and unprecedentedly high radiation tolerance of the formed structures [15].
Specifically, it was shown that even though its thermodynamically stable monoclinic polymorph (β-Ga2O3) can be swiftly disordered, it does not amorphize under irradiation, but converts to a cubic defective spinel polymorph (γ-Ga2O3), remaining crystalline independently of subsequent irradiation [15].
Moreover, electronic radiation tolerance tests performed by comparing Schottky diodes fabricated out of β- and γ-polymorphs showed that the γ-Ga2O3-based diodes remain functional, while β-Ga2O3-based diodes lost their rectification under identical irradiation conditions [16].
As explained recently, the rationale behind this remarkable β-to-γ Ga2O3 polymorph transformation is because the oxygen sublattice in these polymorphs, exhibiting face-centered cubic (fcc) structure, demonstrates strong recrystallization trends, while the Ga sublattice is susceptible to disorder [17,18].
Very recently, this idea was exploited to demonstrate multiple γ/β polymorph repetitions by adjusting spatial distributions of the disorder levels as a function of the irradiation temperature and ion flux [19], thus demonstrating “polymorph heterostructures” that cannot otherwise be realized by conventional growth methods.
Meanwhile, understanding of the radiation phenomena in other Ga2O3 polymorphs is much less mature.
For instance, for the metastable rhombohedral polymorph (α-Ga2O3) there are only a few studies devoted to radiation defect formation [20-22]; these indicate, however, that the α-phase is more radiation resistant in the range of the nuclear stopping power maximum as compared to β-Ga2O3 [20].
Concurrently, the disorder buildup in α-Ga2O3 involves surface amorphization, somewhat resembling the features observed in GaN [23,24]. Nevertheless, if disorder-induced polymorphism is realized in α-Ga2O3 its impact on the device applicability may be even more interesting than that in β-Ga2O3; since α-Ga2O3 exhibits the widest bandgap among the rest of the Ga2O3 polymorphs family [25,26]; making it more likely to anticipate higher band offsets in, e.g., γ/α interfaces [27].
Thus, in the present work we undertook a systematic investigation of the radiation phenomena in α-Ga2O3 and determined conditions sufficient for igniting α-to-γ polymorph transition.
We argued that, in contrast to that in the β-phase, the O sublattice in the α-phase possesses a hexagonal close-packed (hcp) structure, so that to activate the α-to-γ phase transformation significant structural rearrangements are required in both the Ga and O sublattices.
Moreover, consistently with predictions from the energy diagram, α-to-γ phase transformation requires accumulation of the substantial tensile strain to initiate otherwise impossible lattice glides.
Thus, we explain these fascinating phase transformation trends in terms of the combination of disorder and strain governing the process.
As a result, we demonstrate atomically abrupt α/γ interfaces paradoxically self-organized out of the stochastic disorder.
§ RESULTS AND DISCUSSION
Fig. <ref> provides a survey of the experimental data including systematic measurements of the samples' crystallinity as a function of displacement per atom (dpa) obtained by (a) RBS/C and (b) XRD in combination with TEM cross-sections of the selected samples in panels (c)-(f), associated with characteristic process stages as illustrated in cartoon insets in the middle of the figure.
Indeed, already for low dpa conditions, i.e., ≤ 4 dpa, the RBS/C spectra reveal – consistently with the literature [20,21] – a surface disorder peak (see Fig. 1(a)), specifically at dpa=4 corresponding to a 6 nm thick amorphous layer, as confirmed by TEM data in Fig. <ref>(c).
In addition, there is a broader “bulk” disorder peak localized far beyond the maximum of the primary defect generation (R_pd≃ 105 nm) according to the SRIM code [28] simulations.
This stage is accompanied by a tensile strain accumulation, as clearly seen from the appearance of a shoulder on the left-hand side of the α-Ga2O3 (006) reflection in the XRD 2Θ scans (Fig. <ref>(b)) for dpa = 4.
Further, in the 20 ≤dpa≤ 120 range, the surface disorder peak broadens and eventually reaches the random level, while the magnitude of the bulk peak saturates at a much lower disorder level, see Fig. <ref>(a).
For example, at dpa = 120, a ∼ 130 nm thick amorphous layer is revealed by TEM as illustrated in Fig. <ref>(d).
Notably, this disorder accumulation stage does not reveal significant changes in the XRD spectra, except for the strain release, as seen from the evolution of the left-hand side parts of the (006) diffraction peak in Fig. <ref>(b).
Spectacularly, an additional, relatively tiny dpa increase – just by a few tens of percent – dramatically changes the structure.
Indeed, at dpa = 140 the RBS/C intensity increases right behind the surface amorphous layer, see the 100-170 nm range below the surface in Fig. <ref>(a).
This prominent transformation is accompanied by the appearance of a new diffraction peak centered at ∼ 37.7°, see Fig. <ref>(b), which is identified as the γ-Ga2O3 (222) reflection [29], in agreement with TEM data in Fig. <ref>(e).
Further dpa increase improves the crystallinity in this region as clearly seen from the decreased RBS/C yield at dpa = 160 as compared to that at dpa = 140, see Fig. <ref>(a).
Simultaneously, the width of the phase-modified layer expands with increasing dpa.
Notably, the γ-layer expands both into the crystal bulk in the form of α-to-γ transition, and towards the surface, so that the amorphous layer converts into γ-phase too, as schematically shown in the corresponding cartoon inset.
Thus, the surface amorphous layer broadens as a function of dpa until the α-to-γ phase transformation starts, while further dpa increase leads to the shrinkage of the amorphous layer due to the γ-film expansion.
Notably, increasing the dose beyond 140 dpa has practically no impact on the crystallinity of the newly formed γ-phase, confirming its remarkable radiation tolerance consistently with the literature [15].
Moreover, compared to the β-phase, α-Ga2O3 itself can indeed be classified as a more radiation tolerant material, since the α-to-γ phase transition starts at much higher dpa levels (N.B. dpa = 1 was shown to be sufficient to start the β-to-γ transition [15]).
Meanwhile, another spectacular observation, indicated already by the data in Figs. <ref>(e) and <ref>(f), is the ultimate abruptness of the γ/α interfaces resulting from a colossal disordering process of stochastic nature.
To investigate this phenomenon in details we performed HAADF-STEM analysis of the interfaces obtained in this study, see Fig. <ref>.
Specifically, Figs. <ref>(a) and <ref>(b) show the high resolution images of the amorphous/α-phase and amorphous/γ-phase interfaces formed in the samples upon disordering with dpa = 4 and 400, respectively.
As expected from the stochastic nature of disorder, these interfaces are not abrupt, and in the case of the amorphous/γ-phase interface even rather rough.
However, counterintuitively, the polymorph interfaces formed out of the same stochastic disorder, specifically the γ/α interface, are atomically sharp, as clearly demonstrated by Figs. <ref>(c) and <ref>(d) showing high resolution HAADF-STEM images of the interface region taken along different zone axes in the sample subjected to dpa = 400.
The corresponding fast Fourier transformation (FFT) for each image and the schematic unit cell and lattice stackings oriented as in the interfaces are shown in the insets and at the right-hand sides of the corresponding images, respectively.
Specifically, the orientation relationships at the γ/α interface were determined from STEM as γ[110]/α[1-100] and γ[112]/α[10-10] and were used further in simulations to shed more light on the mechanism of the γ/α interface formation.
Thus, for that matter, we performed machine-learning-driven molecular dynamics (ML-MD) simulations, and Fig. <ref> summarizes the atomic configurations and dynamic evolution of such an interface.
The initial local atomic configuration of the ML-MD interface (Fig. <ref>(c)) closely resembles the interface observed in high-resolution STEM images (Figs. <ref>(a) and <ref>(b)). The γ-O and α-O sublattices follow fcc (A’B’C’-A’B’C’…) and hcp (AB-…) stacking orders, respectively, as indicated in Fig. <ref>(d).
Consequently, the initial γ/α interfacial transition region (cyan region in Figs. <ref>(b-d)) exhibits a stepped edge with two horizontally mismatched (A’ | B) and (C’ | A) O stacking layers and a vertically mismatched B’-B stacking order which is energetically unfavorable.
The dynamic evolution of the representative (A’ | B) O layers (shadowed layer in Fig. <ref>(d)) is further detailed in Fig. <ref>(e), and Supplementary Video I illustrates the complete simulation.
Within the first 5 ps, the α-O B stacking layer reconstructs into a γ-O C’-like stacking, accompanied by an overall vertical lattice distortion, leading to a transient local hcp-like stacking.
Further plane “slip” displacements follow typical hexagonal directions along [110] or [1-10], as indicated by the blue arrows in Fig. <ref>(e).
These plane displacements or “glides” complete at t = 355 ps, along with simultaneous rapid local Ga rearrangement (see Supplementary Video I, from frame 3450 to frame 3560, t = 345∼356 ps). The final γ/α interface presents a perfectly lattice-aligned O (B’=B) single layer with ultimate atomic sharpness.
Although the β-to-γ phase transformation also occurs under ion irradiation, the mechanism of the disorder-induced transformation in α-Ga2O3 is dramatically different from that in the β-phase [15].
Indeed, previously it was demonstrated that the β-to-γ phase transformation occurs due to the accumulation of Ga disorder, while the O sublattice, having an fcc structure in both phases, exhibits a strong recrystallization trend within the collision cascade [15,17].
In contrast, the O sublattice in the α-phase possesses an hcp structure, so that structural rearrangements in both sublattices are required for the α-to-γ phase transformation.
Furthermore, from energy considerations, the α-to-γ phase transformation requires an accumulation of tensile strain in the system, see Supplementary Note I and Ref. [30].
Literally, in order to realize the α-to-γ Ga2O3 phase transformation, the hcp α-O sublattice must transform into the fcc γ-O sublattice.
Both hcp and fcc stackings share efficient close-packing arrangements, suggesting that the transformation likely occurs via a slip of close-packed layers.
However, the atomic volume difference between the two phases is the largest among all polymorphs.
The smallest atomic volume of the α-phase is around 10.1 Å^3 per atom, while that of the γ-phase is around 11 Å^3 per atom, see Supplementary Note 1.
This indicates that such a phase transformation needs to be assisted by an expansion of the system.
To understand the expansion mechanism of α-to-γ phase transformation, we systematically compare O sublattice parameters of α and γ-Ga2O3 phases, as analyzed in Fig. <ref>.
Indeed, Fig. <ref>(a) illustrates the stacking of the close-packed planes, viewed perpendicular to the layers, for the α-O sublattice (AB-AB-AB) and the γ-O sublattice (A’B’C’-A’B’C’).
These data indicate that the interlayer distances in the α-O sublattice are smaller than those in the γ-O sublattice (see the corresponding levels marked by the black dashed lines in Fig. <ref>(a)).
Further, from the PRDF curves of a single layer of the O sublattice perpendicular to the close-packed layers (Fig. <ref>(a)), the A-A distance in the α-O sublattice corresponds to the peak D_A-A at 4.5 Å, while the A’-A’ distance in the γ-O sublattice corresponds to D_A’-A’ at 7.1 Å.
Accounting for these values, the average interlayer distances between the close-packed layers of the α-O and γ-O sublattices are 2.25 Å and 2.37 Å, respectively.
This indicates an expansion between the close-packed layers of the α-phase (along the α[001] orientation) during the phase transformation of ∼5%.
Within the close-packed layers, as shown in Supplementary Note II, the PRDF curves indicate negligible expansion, particularly at longer distances.
This implies that even though there are some deviations between the arrangements of efficient packing, no prominent “isotropic” expansion is observed within these close-packed planes.
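The interlayer-spacing comparison above amounts to a simple arithmetic estimate, sketched below for illustration (the 4.5 Å and 7.1 Å values are those quoted from the PRDF analysis; everything else follows from the stacking geometry):

# In hcp (AB-AB) stacking the A-A repeat distance spans two close-packed layers,
# while in fcc (A'B'C') stacking the A'-A' repeat distance spans three layers.
d_AA_alpha = 4.5                     # A-A PRDF peak of the alpha-O sublattice, angstrom
d_AA_gamma = 7.1                     # A'-A' PRDF peak of the gamma-O sublattice, angstrom

spacing_alpha = d_AA_alpha / 2.0     # ~2.25 angstrom between adjacent layers
spacing_gamma = d_AA_gamma / 3.0     # ~2.37 angstrom between adjacent layers
expansion = (spacing_gamma - spacing_alpha) / spacing_alpha
print(f"{spacing_alpha:.2f} -> {spacing_gamma:.2f} angstrom, expansion ~ {100*expansion:.1f}%")
# prints an expansion of roughly 5%, consistent with the value quoted above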
Meanwhile, anisotropic expansion of α-Ga2O3 may play a prominent role in explaining the mechanism of α to γ-Ga2O3 phase transformation under irradiation.
To investigate the expansion evolution, we employ additional isothermal-isobaric ensemble (NPT) relaxations after every 100 primary knock-on atom (PKA) overlapping collision cascade simulations to provide sufficient degrees of freedom for the system volume adjustment.
Fig. <ref>(c) displays the strain in the three coordinate directions, revealing significant expansion along the z axis, especially during the early stages of the radiation defect accumulation.
Conversely, the system sizes in the x and y directions show minimal changes before 300 PKAs, followed by a slight expansion.
This anisotropic expansion aligns with the lattice differences between α and γ-Ga2O3 phases.
The lattice experiences the strongest expansion along the z axis, which is perpendicular to the close-packed planes of the oxygen sublattice, while the x and y axes lie within the close-packed plane and the expansion in these directions is very small.
The structure in the left-hand side of Fig. <ref>(d) shows the lattice of the pristine α-Ga2O3.
One can see that one of each three octahedral sites within the hcp O sublattice is vacant.
After collision cascades, the defective α-Ga2O3 (the right-hand side of Fig. <ref>(d)) shows that the displaced Ga atoms readily occupy the available octahedral interstitial positions, forming Ga-Ga pairs with incredibly short distances of less than 2.5 Å (visualized in Fig. <ref>(d) by purple sticks).
The appearance of a multitude of short-distance Ga-Ga pairs aligned with the z axis in the defective α-Ga2O3 (see Fig. 4(e)) demonstrates that Ga defects indeed primarily occupy available octahedral sites in the hcp oxygen sublattice.
These defects accumulate stress, which relaxes by straining the lattice in the z-axis direction, as seen in experiment.
Indeed, after the NPT relaxation, the system expands along the z axis, and the number of Ga-Ga short-distance pairs decreases significantly (see Fig. <ref>(e)).
The preferential movement of atoms parallel to the z axis can be seen via the displacement vectors for all the atoms in the lattice after relaxation, which are shown by the red arrows in the right-hand side of Fig. <ref>(e).
In other words, our results suggest that the accumulation of Ga defects generates stress between the oxygen close-packed planes, leading to significant expansion.
With the stress fully relaxed in our simulations within the NPT ensemble, the system expands by approximately 10% along the z axis.
This expansion is greater than the ∼5% needed for the transformation of the hcp α-Ga2O3 oxygen sublattice into the fcc γ-Ga2O3 oxygen sublattice according to the mechanism proposed in Fig. <ref>.
A full relaxation of the accumulated stress can be expected only near the surface.
The large expansion of the lattice allows for easier accumulation of the defects, which leads to faster deterioration of the crystal lattice.
This is why the lattice of α-Ga2O3 near the surface does not transform into the stable γ-Ga2O3 phase under ion irradiation, but first becomes amorphous, see Fig. <ref>.
However, in deeper regions, the stress generated by octahedral Ga interstitials is not easily released, keeping the interplane distances closer to those of the γ-O sublattice.
As stress increases, crystal plane slip becomes highly likely, completing the phase transformation from α to γ deep beneath the surface.
§ CONCLUSIONS
Disorder-induced ordering and unprecedentedly high radiation tolerance in the γ-phase of gallium oxide is a recent spectacular discovery at the intersection of fundamental physics and electronic applications.
Importantly, before the present work, all these amazing literature data were collected with initial samples in the form of the thermodynamically stable β-phase of this material.
Here, we investigated these phenomena starting instead from the already metastable α-phase and explained a radically new trend occurring in the system.
We argued that in contrast to that in β-to-γ disorder-induced transitions, the O sublattice in α-phase exhibits hexagonal close-packed structure, so that to activate α-to-γ transformation significant structural rearrangements are required in both Ga and O sublattices.
Moreover, consistently with theoretical predictions, α-to-γ phase transformation requires accumulation of the substantial tensile strain to initiate otherwise impossible lattice glides.
Thus, we explain the experimentally observed trends in terms of the combination of disorder and strain governing the process.
Finally, and perhaps most amazingly, we demonstrate atomically abrupt α/γ interfaces paradoxically self-organized out of the stochastic disorder.
§ METHODS
§.§ Experimental methods
In the present work we used rhombohedral ∼1 µm thick α-Ga2O3 films grown on sapphire substrates by halide vapor phase epitaxy (see details of the synthesis elsewhere [31]).
The samples were implanted at room temperature with 400 keV ^58Ni^+ ions in a wide dose range (1 × 10^15–1 × 10^17 Ni/cm^2) keeping the ion flux constant at 6×10^12 atom·cm^-2·s^-1.
All the implants were performed at a 7° off-angle orientation from the normal direction to minimize channeling.
For each ion dose the corresponding displacement per atom (dpa) values were calculated using conventional methodology [32] based on SRIM code [28] simulations.
Specifically, the quoted dpa values were taken at the maximum of the SRIM vacancy generation profiles simulated for a given ion dose and normalized to the atomic density of α-Ga2O3 (n_at = 10.35 × 10^22 at./cm^3).
The SRIM simulations were performed in a full damage cascade mode with 28 eV and 14 eV as the displacement energies for Ga and O atoms, respectively [33].
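As an illustration of this dose-to-dpa conversion, a hedged sketch is given below (the vacancy-generation profile is a synthetic placeholder, not actual SRIM output; only the atomic density, the dose and the unit conversion are taken as given):

import numpy as np

n_at = 10.35e22                       # atoms/cm^3, atomic density of alpha-Ga2O3
dose = 1.0e16                         # ions/cm^2, one of the implantation doses

depth_A = np.linspace(0.0, 3000.0, 301)                                  # depth grid, angstrom
vac_per_ion_per_A = 0.5 * np.exp(-((depth_A - 1050.0) / 400.0) ** 2)    # placeholder profile

# vacancies/(ion*angstrom) -> vacancies/cm^3 after scaling by the dose, then normalized to dpa
dpa_profile = vac_per_ion_per_A * 1.0e8 * dose / n_at
print(f"peak dpa ~ {dpa_profile.max():.1f} at ~{depth_A[dpa_profile.argmax()]:.0f} angstrom depth")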
Structural characterization of the implanted samples was performed by a combination of Rutherford backscattering spectrometry in channeling mode (RBS/C), x-ray diffraction (XRD), and scanning transmission electron microscopy (STEM). The RBS/C measurements were performed by 1.6 MeV He^+ ions incident along [001] direction in α-Ga2O3 part of the structure and backscattered into a detector placed at 165 relative to the incident beam direction.
XRD 2Θ measurements were performed using the RIGAKU SmartLab diffractometer with high-resolution Cu K_α 1 radiation and Ge(440) four-bounced monochromator.
For cross-sectional STEM studies, selected samples were thinned by mechanical polishing and by Ar ion milling in a Gatan PIPS II (Model 695), followed by plasma cleaning (Fishione Model 1020) immediately before loading the samples into a C_s-corrected Thermo Fisher Scientific Titan G2 60–300 kV microscope, operated at 300 kV.
The STEM images were recorded using a probe convergence semi-angle of 23 mrad and a nominal camera length of 60 mm, using two different detectors: high-angle annular dark field (HAADF) (collection angles 100–200 mrad), and bright field (BF) (collection angles 0–22 mrad). The structural models of the different phases were displayed using the VESTA software [34].
§.§ Computational methods
The machine-learned molecular dynamics (ML-MD) simulations were conducted using LAMMPS package [35].
The self-developed ML interatomic potential for the Ga2O3 system was employed [30], which was designed with high accuracy for all five experimentally known Ga2O3 polymorphs and generality for disordered structures.
The evolution of the γ/α interface is simulated using an orthogonal cell comprising 15,360 atoms with side lengths of ∼82.2×17.8×113.4 Å^3. The x, y, and z axes correspond to γ[112]/α[100], γ[110]/α[1-10], and γ[111]/α[001] orientations, respectively.
The cell is initially optimized to a local minimum and is further run at 900 K and 0 bar in the isothermal-isobaric ensemble (NPT) using the Nosé-Hoover algorithm [36] for 500 ps with 1 fs per MD step.
The polyhedral template matching (PTM) method [37] is employed to identify the local stacking structure of O sublattice.
The coordination numbers of Ga atoms are counted with a cutoff radius of 2.6 Å.
The structural analyses and visualization are done with the OVITO software [38].
A total of 900 overlapping cascades were conducted on the α-Ga2O3 cell, which contains 14,400 atoms in a ∼51×53×55 Å^3 box.
This scale of the simulation cell was chosen to prevent the cascade from overlapping the temperature-controlled borders and to minimize computational time.
In each iteration, a Ga or O atom was randomly selected as the PKA.
The PKA was assigned a kinetic energy of 500 eV with a uniformly random momentum direction.
To maintain consistency, the entire cell was translated and wrapped at periodic boundaries, positioning the PKA at the center of the cell.
Each simulation iteration consisted of two periods.
During the first cascade period, the cell was thermalized using NVE-MD for 5,000 MD steps with an adaptive time step.
Electron-stopping frictional forces were applied to atoms with kinetic energies above 10 eV [39,40].
In the subsequent period, the simulations continued in a quasi-canonical ensemble with a Langevin thermostat [41] applied to border atoms (within 7.5 Å of the simulation box boundaries, redefined in each iteration) for 10 ps at 300 K.
Additionally, relaxation simulations were periodically conducted after every 100 cascades.
During these relaxation simulations, the system was subjected to isothermal-isobaric ensemble (NPT) conditions at 300 K and 0 bar for 100 ps, with temperature controlled by the Nosé-Hoover thermostat and barostat [36].
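For illustration, one PKA initialization step of the kind described above can be sketched as follows (a hedged example: only the 500 eV recoil energy, the atomic masses and the unit conversions are taken as given; everything else, including the random seed, is a placeholder):

import numpy as np

rng = np.random.default_rng(0)
masses_amu = {"Ga": 69.723, "O": 15.999}

species = rng.choice(["Ga", "O"])                  # randomly selected PKA species
m_kg = masses_amu[species] * 1.66053906660e-27     # amu -> kg
E_J = 500.0 * 1.602176634e-19                      # 500 eV -> joule

speed_m_s = np.sqrt(2.0 * E_J / m_kg)              # classical kinetic energy
speed_A_ps = speed_m_s * 1.0e-2                    # 1 m/s = 0.01 angstrom/ps (metal units)

u = rng.normal(size=3)                             # uniformly random direction on the sphere
velocity = speed_A_ps * u / np.linalg.norm(u)
print(species, velocity)                           # e.g. a 500 eV Ga PKA moves at ~370 angstrom/ps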
§ ACKNOWLEDGMENTS
M-ERA.NET Program is acknowledged for financial support via GOFIB project (administrated by the Research Council of Norway project number 337627 in Norway and the Academy of Finland project number 352518 in Finland).
Additional support was received from the Research Council of Norway in the frame of the FRIPRO Program project number 739211.
The experimental infrastructures were provided at the Norwegian Micro- and Nano-Fabrication Facility, NorFab, supported by the Research Council of Norway project number 295864, at the Norwegian Centre for Transmission Electron Microscopy, NORTEM, supported by the Research Council of Norway project number 197405.
J.Z. acknowledges the National Natural Science Foundation of China under Grant 62304097; Guangdong Basic and Applied Basic Research Foundation under Grant 2023A1515012048; Shenzhen Fundamental Research Program under Grant JCYJ20230807093609019.
Computing resources were provided by the Finnish IT Center for Science (CSC) and by the Center for Computational Science and Engineering at the Southern University of Science and Technology.
The paper was also supported by the “Strategic R&D program” funded by the Korea Institute of Ceramic Engineering and Technology (KICET), Republic of Korea, in 2024 (KPP23004-0-02).
The international collaboration was also fertilized via INTPART Program at the Research Council of Norway project number 322382 as well as UTFORSK Program at the Norwegian Directorate for Higher Education and Skills project number UTF-2021/10210.
§ AUTHOR CONTRIBUTION
A.K. and A.A. conceived the research strategy and designed the methodological complementarities.
J.H.P. and D.W.J. contributed to the crystal growth and provided the samples.
A.A. and J.G.F. carried out experiments and provided initial drafts for the description of the experimental data.
R.H. and J.Z performed molecular dynamics simulations.
F.D., R.H. and J.Z. developed the theoretical models and composed the theoretical part of the manuscript.
A.K. and A.A. finalized the manuscript with the input from all the co-authors.
All co-authors discussed the results as well as reviewed and approved the manuscript.
A.K., Ø.P., J.Z., and F.D, administrated their parts of the project and contributed to the funding acquisition.
A.K. coordinated the work of the partners.
§ COMPETING INTERESTS
The authors declare no competing interests.
§ REFERENCES
[1] I. Cora, Zs. Fogarassy, R. Fornari, M. Bosi, A. Rečnik, and B. Pécz,
“In situ TEM study of κ→β and κ→γ phase transformations in Ga_2O_3”,
Acta Mater., 183, 216–227 (2020).
[2] C. Wouters, M. Nofal, P. Mazzolini, J. Zhang, T. Remmele, A. Kwasniewski, O. Bierwagen, and M. Albrecht,
“Unraveling the atomic mechanism of the disorder–order phase transition from γ-Ga_2O_3 to β-Ga_2O_3”,
APL Mater., 12, 011110 (2024).
[3] J. Wang, X. Guan, H. Zheng, L. Zhao, R. Jiang, P. Zhao, Y. Zhang, J. Hu, P. Li, S. Jia, and J. Wang,
“Size-dependent phase transition in ultrathin Ga_2O_3 nanowires”,
Nano Lett., 23, 7364–7370 (2023).
[4] S.-C. Zhu, S.-H. Guan, and Z.-P. Liu,
“Mechanism and microstructures in Ga_2O_3 pseudomartensitic solid phase transition”,
Phys. Chem. Chem. Phys., 18, 18563 (2016).
[5] K. R. Gann, C. S. Chang, M.-C. Chang, D. R. Sutherland, A. B. Connolly, D. A. Muller, R. B. van Dover, and M. O. Thompson,
“Initial nucleation of metastable γ-Ga_2O_3 during sub-millisecond thermal anneals of amorphous Ga_2O_3”,
Appl. Phys. Lett., 121, 062102 (2022).
[6] X. Chen, F. Ren, S. Gu, and J. Ye,
“Review of gallium-oxide-based solar-blind ultraviolet photodetectors”,
Photonics Res., 7, 381–415 (2019).
[7] S. J. Pearton, J. Yang, P. H. Cary IV, F. Ren, J. Kim, M. J. Tadjer, and M. A. Mastro,
“A review of Ga_2O_3 materials, processing, and devices”,
Appl. Phys. Rev., 5, 011301 (2018).
[8] A. J. Green, J. Speck, G. Xing, P. Moens, F. Allerstam, K. Gumaelius, T. Neyer, A. Arias-Purdue, V. Mehrotra, A. Kuramata, K. Sasaki, S. Watanabe, K. Koshi, J. Blevins, O. Bierwagen, S. Krishnamoorthy, K. Leedy, A. R. Arehart, A. T. Neal, S. Mou, S. A. Ringel, A. Kumar, A. Sharma, K. Ghosh, U. Singisetti, W. Li, K. Chabak, K. Liddy, A. Islam, S. Rajan, S. Graham, S. Choi, Z. Cheng, and M. Higashiwaki,
“β-Gallium oxide power electronics”,
APL Mater., 10, 029201 (2022).
[9] R. Zhu, H. Liang, S. Liu, Y. Yuan, X. Wang, F. C.-C. Ling, A. Kuznetsov, G. Zhang, and Z. Mei,
“Non-volatile optoelectronic memory based on a photosensitive dielectric”,
Nat. Commun., 14, 5396 (2023).
[10] M. J. Tadjer,
“Toward gallium oxide power electronics”,
Science, 378, 724 (2022).
[11] A. Azarov, C. Bazioti, V. Venkatachalapathy, P. Vajeeston, E. Monakhov, and A. Kuznetsov,
“Disorder-induced ordering in gallium oxide polymorphs”,
Phys. Rev. Lett., 128, 015704 (2022).
[12] H.-L. Huang, C. Chae, J. M. Johnson, A. Senckowski, S. Sharma, U. Singisetti, M. H. Wong, and J. Hwang,
“Atomic scale defect formation and phase transformation in Si implanted β-Ga_2O_3”,
APL Mater., 11, 061113 (2023).
[13] T. Yoo, X. Xia, F. Ren, A. Jacobs, M. J. Tadjer, S. Pearton, and H. Kim,
“Atomic-scale characterization of structural damage and recovery in Sn ion-implanted β-Ga_2O_3”,
Appl. Phys. Lett., 121, 072111 (2022).
[14] E. A. Anber, D. Foley, A. C. Lang, J. Nathaniel, J. L. Hart, M. J. Tadjer, K. D. Hobart, S. Pearton, and M. L. Taheri,
“Structural transition and recovery of Ge implanted β-Ga_2O_3”,
Appl. Phys. Lett., 117, 152101 (2020).
[15] A. Azarov, J. García Fernández, J. Zhao, F. Djurabekova, H. He, R. He, Ø. Prytz, L. Vines, U. Bektas, P. Chekhonin, N. Klingner, G. Hlawacek, and A. Kuznetsov,
“Universal radiation tolerant semiconductor,”
Nat. Commun., 14, 4855 (2023).
[16] A. Y. Polyakov, A. A. Vasilev, A. I. Kochkova, I. V. Shchemerov, E. B. Yakimov, A. V. Miakonkikh, A. V. Chernykh, P. B. Lagov, Y. S. Pavlov, A. S. Doroshkevich, R. Sh. Isaev, A. A. Romanov, L. A. Alexanyan, N. Matros, A. Azarov, A. Kuznetsov, and S. Pearton,
“Proton damage effects in double polymorph γ/β-Ga_2O_3 diodes”,
J. Mater. Chem. C, 12, 1020 (2024).
[17] J. Zhao, J. García-Fernández, A. Azarov, R. He, Ø. Prytz, K. Nordlund, M. Hua, F. Djurabekova, and A. Kuznetsov,
“Crystallization instead amorphization in collision cascades in gallium oxide”,
arXiv, 2401.07675 (2023).
[18] R. He, J. Zhao, J. Byggmästar, H. He, and F. Djurabekova,
“Ultrahigh Stability of O-Sublattice in β-Ga_2O_3”,
arXiv, 2404.10451 (2024).
[19] A. Azarov, C. Radu, A. Galeckas, I. F. Mercioniu, A. Cernescu, V. Venkatachalapathy, E. Monakhov, F. Djurabekova, C. Ghica, J. Zhao, and A. Kuznetsov,
“Self-assembling of multilayered polymorphs with ion beams”,
arXiv, 2404.19572 (2024).
[20] A. I. Titov, K. V. Karabeshkin, A. I. Struchkov, V. I. Nikolaev, A. Azarov, D. S. Gogova, and P. A. Karaseov,
“Comparative study of radiation tolerance of GaN and Ga_2O_3 polymorphs”,
Vacuum, 200, 111005 (2022).
[21] A. Azarov, J.-H. Park, D.-W. Jeon, and A. Kuznetsov,
“High mobility of intrinsic defects in a-Ga_2O_3”,
Appl. Phys. Lett., 122, 182104 (2023).
[22] D. Tetelbaum, A. Nikolskaya, D. Korolev, T. Mullagaliev, A. Belov, V. Trushin, Y. Dudin, A. Nezhdanov, A. Mashin, A. Mikhaylov, A. Pechnikov, M. Scheglov, V. Nikolaev, and D. Gogova,
“Ion-beam modification of metastable gallium oxide polymorphs”,
Mater. Lett., 302, 130346 (2021).
[23] S. O. Kucheyev, J. S. Williams, C. Jagadish, J. Zou, and G. Li,
“Damage buildup in GaN under ion bombardment”,
Phys. Rev. B, 62, 7510 (2000).
[24] K. Lorenz, E. Wendler, A. Redondo-Cubero, N. Catarino, M.-P. Chauvat, S. Schwaiger, F. Scholz, E. Alves, and P. Ruterana,
“Implantation damage formation in a-, c- and m-plane GaN”,
Acta Mater., 123, 177–187 (2017).
[25] S. Fujita, M. Oda, K. Kaneko, and T. Hitora,
“Evolution of corundum-structured III-oxide semiconductors: Growth, properties, and devices”,
Jpn. J. Appl. Phys., 55, 1202A3 (2016).
[26] A. Galeckas et al.,
“Optical library of Ga_2O_3 polymorphs”, under preparation.
[27] C. Wu, C. He, D. Guo, F. Zhang, P. Li, S. Wang, A. Liu, F. Wu, and W. Tang,
“Vertical α/β-Ga_2O_3 phase junction nanorods array with graphene-silver nanowire hybrid conductive electrode for high-performance self-powered solar-blind photodetectors”,
Mat. Today Phys., 12, 100193 (2020).
[28] J. F. Ziegler, M. D. Ziegler, and J. P. Biersack,
“SRIM—the stopping and range of ions in matter (2010)”,
Nucl. Instrum. Methods Phys. Res., Sect. B, 268, 1818 (2010).
[29] O. Nikulina, D. Yatsenko, O. Bulavchenko, G. Zenkovets, and S. Tsybulya,
“Debye function analysis of nanocrystalline gallium oxide γ-Ga_2O_3”,
Z. Kristallogr., 231, 261–266 (2016).
[30] J. Zhao, J. Byggmästar, H. He, K. Nordlund, F. Djurabekova, and M. Hua,
“Complex Ga_2O_3 polymorphs explored by accurate and general-purpose machine-learning interatomic potentials”,
npj Comput. Mater., 9, 159 (2023).
[31] H. Son and D.-W. Jeon,
“Optimization of the growth temperature of α-Ga_2O_3 epilayers grown by halide vapor phase epitaxy”,
J. Alloys Compd., 773, 631 (2019).
[32] A. I. Titov, A. Yu. Azarov, L. M. Nikulina, and S. O. Kucheyev,
“Damage buildup and the molecular effect in Si bombarded with PFn cluster ions”,
Nucl. Instrum. Methods Phys. Res., Sect. B, 256, 207 (2007).
[33] B. R. Tuttle, N. J. Karom, A. O’Hara, R. D. Schrimpf, and S. T. Pantelides,
“Atomic-displacement threshold energies and defect generation in irradiated β-Ga_2O_3: A first-principles investigation”,
J. Appl. Phys., 133, 015703 (2023).
[34] K. Momma and F. Izumi,
“VESTA 3 for three-dimensional visualization of crystal, volumetric and morphology data”,
J. Appl. Crystallogr., 44, 1272–1276 (2011).
[35] A. P. Thompson, H. M. Aktulga, R. Berger, D. S. Bolintineanu, W. M. Brown, P. S. Crozier, P. J. in’t Veld, A. Kohlmeyer, S. G. Moore, T. D. Nguyen, R. Shan, M. J. Stevens, J. Tranchida, C. Trott, and S. J. Plimpton,
“LAMMPS - a flexible simulation tool for particle-based materials modeling at the atomic, meso, and continuum scales”,
Comput. Phys. Commun., 271, 108171 (2022).
[36] W. G. Hoover,
“Canonical dynamics: Equilibrium phase-space distributions”,
Phys. Rev. A, 31, 1695 (1985).
[37] P. M. Larsen, S. Schmidt, and J. Schiøtz,
“Robust structural identification via polyhedral template matching”,
Modell. Simul. Mater. Sci. Eng., 24, 055007 (2016).
[38] A. Stukowski,
“Visualization and analysis of atomistic simulation data with OVITO – the open visualization tool”,
Modell. Simul. Mater. Sci. Eng., 18, 015012 (2010).
[39] K. Nordlund,
“Molecular dynamics simulation of ion ranges in the 1–100 keV energy range”,
Comput. Mater. Sci., 3, 448 (1995).
[40] K. Nordlund, M. Ghaly, R. S. Averback, M. Caturla, T. Diaz de la Rubia, and J. Tarus,
“Defect production in collision cascades in elemental semiconductors and fcc metals”,
Phys. Rev. B, 57, 7556 (1998).
[41] B. Dünweg and W. Paul,
“Brownian dynamics simulations without gaussian random numbers”,
Int. J. Mod. Phys. C, 02, 817 (1991).
| Recently gallium oxide (Ga2O3) has attracted attention of a broad audience spreading from those dealing with fundamentals of the phase transitions [1-5] to device application experts [6-10].
Among the rest of the highlights there was a discovery of disorder-induced ordering in Ga2O3 [11-14] and unprecedentedly high radiation tolerance of the formed structures [15].
Specifically, it was shown that even though its thermodynamically stable monoclinic polymorph (β-Ga2O3) can be swiftly disordered, it does not amorphize under irradiation, but converts to a cubic defective spinel polymorph (γ-Ga2O3), remaining crystalline independently of subsequent irradiation [15].
Moreover, electronic radiation tolerance tests performed by comparing Schottky diodes fabricated out of β- and γ-polymorphs showed that the γ-Ga2O3-based diodes remain functional, while β-Ga2O3-based diodes lost their rectification under identical irradiation conditions [16].
As explained recently, the rationale behind this remarkable β-to-γ Ga2O3 polymorph transformation is because the oxygen sublattice in these polymorphs, exhibiting face-centered cubic (fcc) structure, demonstrates strong recrystallization trends, while the Ga sublattice is susceptible to disorder [17,18].
Very recently, this idea was exploited to demonstrate multiple γ/β polymorph repetitions by adjusting spatial distributions of the disorder levels as a function of the irradiation temperature and ion flux [19], thus demonstrating “polymorph heterostructures” that cannot otherwise be realized by conventional growth methods.
Meanwhile, understanding of the radiation phenomena in other Ga2O3 polymorphs is much less mature.
For instance, for the metastable rhombohedral polymorph (α-Ga2O3) there are only a few studies devoted to radiation defect formation [20-22]; these indicate, however, that the α-phase is more radiation resistant in the range of the nuclear stopping power maximum as compared to β-Ga2O3 [20].
Concurrently, the disorder buildup in α-Ga2O3 involves surface amorphization, somewhat resembling the features observed in GaN [23,24]. Nevertheless, if disorder-induced polymorphism is realized in α-Ga2O3 its impact on the device applicability may be even more interesting than that in β-Ga2O3; since α-Ga2O3 exhibits the widest bandgap among the rest of the Ga2O3 polymorphs family [25,26]; making it more likely to anticipate higher band offsets in, e.g., γ/α interfaces [27].
Thus, in the present work we undertook a systematic investigation of the radiation phenomena in α-Ga2O3 and determined conditions sufficient for igniting α-to-γ polymorph transition.
We argued that, in contrast to that in the β-phase, the O sublattice in the α-phase possesses a hexagonal close-packed (hcp) structure, so that to activate the α-to-γ phase transformation significant structural rearrangements are required in both the Ga and O sublattices.
Moreover, consistently with predictions from the energy diagram, α-to-γ phase transformation requires accumulation of the substantial tensile strain to initiate otherwise impossible lattice glides.
Thus, we explain these fascinating phase transformation trends in terms of the combination of disorder and strain governing the process.
As a result, we demonstrate atomically abrupt α/γ interfaces paradoxically self-organized out of the stochastic disorder. | null | null | null | null | null |
http://arxiv.org/abs/2409.18044v1 | 20240926164646 | Unveiling the Role of Pretraining in Direct Speech Translation | [
"Belen Alastruey",
"Gerard I. Gállego",
"Marta R. Costa-jussà"
] | cs.CL | [
"cs.CL"
] |
§ ABSTRACT
Direct speech-to-text translation systems encounter an important drawback in data scarcity. A common solution consists of pretraining the encoder on automatic speech recognition, hence losing efficiency in the training process. In this study, we compare the training dynamics of a system using a pretrained encoder, the conventional approach, and one trained from scratch. We observe that, throughout the training, the randomly initialized model struggles to incorporate information from the speech inputs for its predictions. Hence, we hypothesize that this issue stems from the difficulty of effectively training an encoder for direct speech translation. While a model trained from scratch needs to learn acoustic and semantic modeling simultaneously, a pretrained one can just focus on the latter. Based on these findings, we propose a subtle change in the decoder cross-attention to integrate source information from earlier steps in training. We show that with this change, the model trained from scratch can achieve comparable performance to the pretrained one, while reducing the training time.
§ INTRODUCTION
In recent years, extensive research has been done in the field of speech-to-text translation (ST). These models have transitioned from cascaded systems to direct ones <cit.>. While this shift helps mitigate error propagation, it introduces other challenges such as scarcity of training data and the need for the model to tackle translation and speech recognition simultaneously. To bypass these issues, a common approach to train direct ST systems involves pretraining the encoder on the Automatic Speech Recognition (ASR) task <cit.>. This enables the encoder to learn acoustic modeling in the source language by leveraging ASR data, and the model can focus on semantic modeling during the ST training.
Various studies have been conducted on pretraining for ST. <cit.> introduced a method to enhance ST performance for low-resource source languages by utilizing ASR pretraining from a high-resource language. <cit.> enhanced the performance of an ST system by pretraining both the encoder and the decoder on ASR and MT, respectively. <cit.> and <cit.> proposed variations of ASR pretraining that yielded superior results.
However, pretraining has some drawbacks too. It has additional data requirements, which can be a problem particularly in languages that don't have a written form and hence no ASR data. Furthermore, it complicates the training pipeline and worsens the efficiency of the overall training process.
Recent studies have already questioned the pretraining approach, <cit.>, demonstrating that similar results can be achieved under certain conditions without the need for pretraining. However, the authors show that many strategies need to be simultaneously used to achieve this, such as exhaustive hyperparameter tuning, CTC-based regularization and their proposed parameterized distance penalty.
Complementing previous interpretability works in ST <cit.>, in this study, we conduct the first-ever analysis of the training dynamics of an ST system, and based on its results, we propose a subtle modification in the Transformer <cit.> architecture to bypass the pretraining stage.
First, we compare the training dynamics of a conventional system that uses a pretrained encoder with one trained from scratch[The pretraining is done on the same amount of training data as the ST training.]. Through this analysis, we observe significant disparities in their behaviors. Particularly, we note that when making predictions, the model trained from scratch delays the utilization of information extracted by the encoder until a later stage of training.
We hypothesize that this delay occurs due to the complexity of the acoustic modeling task, which in this setting needs to be learned together with semantic modeling. Hence, it takes a significant number of updates to sufficiently train the encoder so that it can extract meaningful information. Consequently, the model ignores the encoder outputs and focuses on training the decoder for language modeling. Once the encoder can extract valuable representations, the model has already converged towards language modeling and struggles to rely on the information obtained by the encoder.
Secondly, we believe that by forcing the model to utilize encoder outputs earlier in the training process, the model would not converge towards language modeling and the encoder would be trained more rapidly, leading to higher-quality representations in its outputs. Through a modification in the residual connection in the decoder cross-attention mechanism, we force the model trained from scratch to integrate source information from earlier training steps, and we observe performance comparable to the pretrained one.
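The exact formulation of the proposed residual-connection change is not spelled out in this introduction; purely as an illustration of the general idea, a minimal sketch of one plausible variant is given below, where the residual path around the decoder cross-attention is down-weighted so that the block must draw on encoder (source) information from the first updates. The class name, the weighting scheme and the 0.5 value are illustrative assumptions, not the method as specified in this paper.

import torch
import torch.nn as nn

class WeightedResidualCrossAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int, residual_weight: float = 0.5):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)
        self.residual_weight = residual_weight   # 1.0 recovers the standard Transformer block

    def forward(self, x, encoder_out, encoder_padding_mask=None):
        # x: (batch, tgt_len, d_model); encoder_out: (batch, src_len, d_model)
        attn_out, _ = self.attn(x, encoder_out, encoder_out,
                                key_padding_mask=encoder_padding_mask)
        return self.norm(self.residual_weight * x + attn_out)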
Overall, the main contributions of our work are: (1) the first study of training dynamics in ST, which unveils the role of pretraining, and (2) a modification in the Transformer architecture to bypass the pretraining stage.
§ RELATED WORK
§.§ Interpretability of Transformer Models
Our research aims to quantify the source of information used for making predictions in these models, specifically whether it originates from the source (encoder input) or the target prefix (previously predicted words serving as decoder inputs). To achieve this, we employ the ALTI+ interpretability method <cit.>.
ALTI+ employs a strategy to rewrite attention blocks, as introduced by <cit.>, along with the contribution definition provided by <cit.> and a variation of rollout inspired by <cit.>. By utilizing ALTI+, we determine the extent to which each input token in the source and target prefix contributes to the prediction of a token. Furthermore, by summing the individual contributions of the tokens in the encoder source, the authors obtain a single score referred to as the source contribution, which we use to study training dynamics in ST.
§.§ Training Dynamics on Machine Translation
Previous work has studied how Transformers learn in the task of text Machine Translation. <cit.> analyse how the source contribution varies during training, using the Layerwise Relevance Propagation method to track source and target contributions, and describe three different training phases.
Target-side language modeling: The beginning of training is devoted to target-side language modeling. The total contribution of the source substantially decreases. This means that in the trade-off between information coming from the source and the target prefix, the model gives more and more priority to the prefix.
Learning how to use source: In the second stage, the source influence increases quickly. This means that, opposite to the first stage, in the trade-off between information coming from the source and the target prefix, the model progressively gives more and more importance to source information.
Refining translations: In the last stage, the source contribution remains constant. By analysing other metrics the authors see that the model is learning to refine some translations. The model learns to align better, and is able to generate more natural translations instead of word-to-word ones.
§ TRAINING DYNAMICS IN SPEECH TRANSLATION
In this section, we analyse how much two training strategies on a Transformer-based system[We train the small S2T-Transformer architecture from Fairseq (<https://github.com/facebookresearch/fairseq>).] rely on the speech source when making predictions during ST training. In particular, we study: (1) a standard ST system, consisting of ASR pretraining, followed by a re-initialization of the decoder and training on ST data, and (2) a system that only performs ST training on a randomly initialized model.
To measure the amount of input information used by the model to generate a prediction we use the source contribution defined by ALTI+, covered in Section <ref>.
To generalize this score to a sentence-wise score, we average the source contribution used for the prediction of each token in a sentence. Finally, to obtain a score over a test set, we average again the score obtained by every sentence in the set.
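In code, this aggregation can be summarised by the following sketch (the array names and shapes are illustrative; the per-token contribution matrices themselves are produced by ALTI+):
import numpy as np

def sentence_source_contribution(alti_matrix, src_len):
    # alti_matrix: (num_predicted_tokens, src_len + prefix_len) ALTI+ contributions,
    # with each row summing to 1. The source contribution of one prediction is the
    # total mass assigned to source (encoder input) positions.
    per_token = alti_matrix[:, :src_len].sum(axis=1)
    return per_token.mean()

def test_set_source_contribution(examples):
    # examples: list of (alti_matrix, src_len) pairs, one per test sentence
    return float(np.mean([sentence_source_contribution(m, n) for m, n in examples]))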
Given that we use a different source contribution measure than <cit.> in their work described in Section <ref>, we decide to train an additional MT model, to confirm that the three stages described in their work still happen on our setting.
For all our analysis, we store a checkpoint every 1k updates during the full training, and every 100 updates during the first 10k[Setup details of the experiments are in Appendix <ref>.]. For each of the checkpoints, we evaluate the model computing BLEU and source contribution scores on English-German MuST-C dataset <cit.> [We use transcripts and text translations for the MT model.].
§.§ Results Analysis
In Figure <ref>, we see the obtained results. When focusing on the first 10k updates we first observe that the three stages described in Section <ref> still happen in the text translation model. However, when analysing both ST variants, we observe different behaviours.
In the standard setting with a pretrained encoder, we observe a two-stage process. This model skips the first stage described in Section <ref>, and rapidly integrates source data from the beginning of training. This behaviour is expected: the encoder has been pretrained, so it produces high-quality representations that are immediately beneficial for the decoder when making predictions. As in the case of text translation, the last stage starts after around the first 6k updates.
Instead, the ST model trained from scratch undergoes the same three-stage process as text translation. However, each stage appears to require significantly more time compared to text translation. Specifically, the model does not achieve a stable level of source contribution until after approximately 30k updates, whereas the other two models achieve this stability after only 6k updates.
We hypothesize this happens due to the difficulty of training the encoder for the task of ST from scratch. Unlike an encoder in text translation, which solely requires semantic modeling, a ST encoder learns both acoustic and semantic modeling. This dual requirement makes the training process for an ST encoder more time-consuming than that of a text translation model. Consequently, the model tends to overlook the encoder during the early stages of training, focusing instead on language modeling.
Overall, we believe that the initial stage outlined in Section <ref> is not a result of the need to learn language modeling. Rather, it's a strategy to bypass the encoder information until the encoder is adequately trained. This process is quick in text translation, non-existent when using a pre-trained encoder in ST, and lengthy when training an ST system from scratch.
Moreover, we think that by the time the ST encoder trained from scratch becomes capable of extracting relevant information, the model has already converged towards relying on language modeling. As a result, it never reaches the level of contribution achieved by the pretrained model (as shown in Figure <ref>), leading to inferior performance.
§ TRAINING ST FROM SCRATCH
Building on our previous analysis, we hypothesize that forcing a Speech Translation model trained from scratch to utilize source information from the start could enhance the training process. If the model is required to use the encoder's representations, regardless of their quality, poor representations will negatively impact the model's overall performance. This, in turn, will cause a faster training of the encoder to extract better representations.
In particular, considering the results in Figure <ref>, we observe that both the text translation model and the pretrained speech translation model achieve a stable source contribution of approximately 65%. Hence, we consider this proportion to be optimal and aim to enforce it in the speech translation model trained from scratch.
To test our hypothesis we propose a subtle architecture modification, that forces the Transformer to use source information along the full training. Our modification focuses on the cross-attention layer of the decoder, which is the step where source and target information is aggregated, with source information coming from the attention block and target-prefix details coming from the residual flow.
§.§ WeRC: Weighted Residual Connection
We modify the residual connection after each cross-attention block in the decoder. This sum aggregates the output of the cross-attention block (x_attn) and the residual stream (x_res). Our goal is to increase the information flow coming from the source, so we scale these two components giving a higher weight to the output of the cross-attention (Eq. <ref>). Specifically, we aim to approximately match the proportion of source contribution found in Section <ref>, hence we set λ = 0.65.
x_out = λ· x_attn + (1 - λ) · x_res
However, a potential issue of this approach is that the cross-attention block could converge towards producing small-norm vectors, so that they would still have a small contribution regardless of the weighting. To solve this potential issue, we normalize each term of the summation (Eq. <ref>), by adding layer normalization layers <cit.>. This ensures both tensors have the same norm before the weighting. Therefore they will contribute to the sum with the target proportion.[Note that we remove the learnable parameters from layer normalization to avoid any scaling that could affect the predefined weights.]
x_out = λ· LN(x_attn) + (1 - λ) · LN(x_res)
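As an illustration, the following PyTorch sketch shows WeRC in an isolated cross-attention sub-layer (the module and argument names are ours; in our experiments the change is made inside the Fairseq decoder layer, keeping every other component unchanged):
import torch.nn as nn

class WeRCCrossAttention(nn.Module):
    # Cross-attention sub-layer whose residual connection follows Eq. (2).
    def __init__(self, embed_dim, num_heads, lam=0.65):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        # Layer norms without learnable parameters, so that neither term can be
        # rescaled away from the prescribed 65/35 proportion.
        self.ln_attn = nn.LayerNorm(embed_dim, elementwise_affine=False)
        self.ln_res = nn.LayerNorm(embed_dim, elementwise_affine=False)
        self.lam = lam

    def forward(self, x, encoder_out):
        # x: decoder states (target prefix); encoder_out: encoder states (source)
        attn_out, _ = self.cross_attn(x, encoder_out, encoder_out)
        return self.lam * self.ln_attn(attn_out) + (1.0 - self.lam) * self.ln_res(x)
The only difference with respect to a standard decoder layer is the final line, which replaces the usual residual sum of x and attn_out with the weighted, normalised combination.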
Results in Table <ref> show that our model, which incorporates WeRC and is trained from scratch, outperforms the baseline by +1.3 BLEU points. Additionally, it nearly achieves the same performance as the model with pretraining, while reducing the training time by skipping the pretraining stage. We also extend this experiment to the En-Es and En-Fr MuST-C sets and obtain analogous results.
§.§ Ablation Study
We perform an ablation study on the usefulness of the weighted sum and the layer normalization individually. In Table <ref> we observe that both strategies achieve a better performance than the baseline trained from scratch, but they are still considerably behind WeRC and the pretrained baseline. In the case of the variant without normalization, we believe this is a result of the trainable parameters in the attention block (as described earlier). In the case of the model without weights, we believe this happens because the model is forced to use a source contribution of 50%, which is below the optimum (as observed in Figure <ref>). Additional ablation studies regarding the use of WeRC on an MT model and a pretrained ST model can be found in Appendix <ref>.
§ CONCLUSIONS
In this work, we present the first study on the training dynamics of direct ST systems, comparing a standard ST model with a pretrained encoder to one trained from scratch. The analysis shows that, without pretraining, the model struggles to incorporate information from the encoder's outputs when making predictions. As an explanation, we suggest the encoder needs more updates than in a text task until it can extract valuable representations of the input tokens. Once this is achieved, the model has already converged towards language modeling, hence failing to utilize the information extracted by the encoder effectively even in later steps. To address this issue, we propose a subtle modification to the transformer architecture that forces the model to incorporate source information throughout the whole training. By doing so, we achieve comparable performance in a model trained from scratch to one with pretraining, while reducing training time and data requirements.
§ LIMITATIONS
While our study provides valuable insights into the training dynamics of direct ST systems and proposes a novel approach to improve the efficiency of the training process, our findings are based on a specific model, dataset and languages. We believe different results could be obtained in other settings, such as low resource speech translation.
Furthermore, our paper focuses on a classic and widely adopted pretraining strategy. The ASR and ST training sets correspond to the same dataset and have the same size, differing only in the language of the targets. We also do not use additional techniques such as an auxiliary CTC loss. However, our goal in this work is not to obtain a new state-of-the-art ST training strategy but to analyse and understand a common training strategy using interpretability tools, and to perform additional experiments to validate the hypotheses extracted from the analysis.
Finally, in our work we use the learning rate defined by <cit.> for ST finetuning also on the experiments trained from scratch. We acknowledge that the performance of experiments trained from scratch could be pushed further by tuning this hyperparameter. However, we wanted to keep experiments comparable for the training dynamics analysis, and hence we decided to use the same learning rate. Furthermore, this should not have an impact in the conclusions of the paper, given that our proposed modification (WeRC) is also trained from scratch and uses the same learning rate.
§ EXPERIMENTAL SETUP
ST Model Details: In our experiments we use the widely used Fairseq[https://github.com/facebookresearch/fairseq] Transformer and S2T-Transformer architectures. In the case of speech, the model consists of 12 encoder layers and 6 decoder layers. Both the encoder and the decoder use 4 attention heads, the embedding dimension is 256, and the hidden dimension of the MLP blocks is 2048. The decoder output dimension is 256, the same as the decoder embedding dimension. The model applies layer normalization before its main blocks instead of after, and a dropout of 0.1 is used on both the attention weights and the MLP activations. ReLU is used as the activation function in the MLP. For the text models we use 6 encoder and 6 decoder layers, no dropout, an embedding dimension of 512 and 8 attention heads (other settings remain the same as for speech).
Training Setup: In the case of speech translation and speech recognition trainings, we follow the setup defined by <cit.>. We fix a maximum of 20000 tokens per batch. We use the Adam optimizer <cit.> with an inverse square root learning rate scheduler. We apply a warm-up during the first 10000 updates and we clip the gradient norm to 10 to avoid exploding gradients. We use a label-smoothed cross-entropy loss, with a smoothing factor of 0.1. The update frequency is set to 16, simulating the use of 16 GPUs. We train each model for a maximum of 100000 updates. For ST trainings we use a learning rate of 2·10^-3, while for speech recognition it is 1·10^-3, as done by <cit.>.
In the text translation system, we again follow the setup defined by <cit.> for Machine Translation. It is similar to the speech translation one, but the maximum number of tokens per batch is limited to 4096 and the number of warm-up updates is 4000. Gradient clipping is removed and the learning rate is set to 5·10^-4.
§ RESULTS ON MACHINE TRANSLATION AND SPEECH TRANSLATION WITH PRETRAINING
In this section, we aim to study the impact of using WeRC on the analysed MT system and on the ST training with a pretrained encoder. These settings achieve an optimal level of source contribution from the first updates of the training, so we hypothesize that WeRC might have a less noticeable impact than in the main study of this paper (ST from scratch).
In Table <ref>[Note that these results are obtained by evaluating the best checkpoint without checkpoint averaging.] we see the obtained results. We observe that both settings maintain the same performance, which is consistent with our hypothesis.
Actions of Taft Algebras on Noetherian Down-Up Algebras
Simon Crawford, Jason Gaddis, Robert Won
math.RA (MSC: 16E65, 16T05, 16W22, 16W50); arXiv:2409.17890v1, 26 September 2024
§ ABSTRACT
We consider actions of Taft algebras on noetherian graded down-up algebras. We classify all such actions and determine properties of the corresponding invariant rings A^T. We identify precisely when A^T is commutative, when it is Artin–Schelter regular, and give sufficient conditions for it to be Artin–Schelter Gorenstein. Our results show that many results and conjectures in the literature concerning actions of semisimple Hopf algebras on Artin–Schelter regular algebras can fail when the semisimple hypothesis is omitted.
§ INTRODUCTION
Throughout, we let 𝕜 denote an algebraically closed field of characteristic 0. All algebras will be associative 𝕜-algebras.
If G is a finite subgroup of GL(n,𝕜) and A = 𝕜[x_1, …, x_n] is a polynomial ring, then the study of invariants of the action of G on A is a deep and beautiful subject with connections to combinatorics, algebraic geometry, and representation theory. In particular, there are many classical results which describe properties of the invariant ring A^G in terms of properties of G. For example, the Shephard–Todd–Chevalley Theorem states that A^G is a polynomial ring if and only if G is generated by quasi-reflections, while Watanabe's Theorem states that, if G contains no quasi-reflections, then A^G is Gorenstein if and only if G ⩽ SL(n,𝕜).
In the past three decades, there has been a strong research effort to try to generalise many of these classical results to a noncommutative setting. The approach that many authors have taken, and which we shall also follow, is to replace the polynomial ring by an Artin–Schelter regular algebra, which may be viewed as a “noncommutative polynomial ring”. Additionally, noncommutative algebras often have “quantum symmetries” which commutative algebras lack <cit.>, so one should allow for actions by finite-dimensional Hopf algebras H, rather than just group algebras. For such a Hopf action on an algebra, there exists a suitable notion of an invariant ring, which we denote A^H.
This perspective has proved to be fruitful, particularly in the case where one additionally assumes that the Hopf algebra H is semisimple. For example, Kirkman, Kuzmanovich, and Zhang have established an analogue of the Shephard–Todd–Chevalley Theorem for skew polynomial rings <cit.>, and an analogue of Watanabe's Theorem for actions of Hopf algebras on Artin–Schelter Gorenstein algebras <cit.>. Many other results have satisfactory generalisations to this noncommutative setting; we refer the interested reader to the survey of Kirkman <cit.> for a thorough overview.
However, much less is known when we omit the semisimple hypothesis from H, which many general results in the area require. For example, Molien's Theorem determines the Hilbert series of the invariant ring A^H using representation theory, but this result is false in general if H is not assumed to be semisimple.
In <cit.>, the authors studied examples of non-semisimple Hopf algebras on Artin–Schelter regular algebras and provided a list of differences between the semisimple and non-semisimple setting <cit.>. However, these examples exist only in positive characteristic, and so it is unclear if the novel behaviour which they exhibit is also influenced by the characteristic of the field, as is the case in the modular representation theory of groups.
Accordingly, in this paper, we study a certain family of actions of non-semisimple Hopf algebras on Artin–Schelter regular algebras where 𝕜 has characteristic 0. The non-semisimple Hopf algebras which we consider are the Taft algebras, which were originally defined by Taft <cit.> when studying the order of the antipode of a Hopf algebra. These algebras depend on a single parameter n ⩾ 2, and have presentation
T_n = ⟨ x,g | g^n-1, x^n, gx-ω xg⟩,
where ω∈𝕜 is a primitive nth root of unity. Actions of Taft algebras and their generalisations have been studied on families of AS regular algebras previously. In <cit.>, the authors showed that the invariant ring of a Taft algebra acting on a quantum plane is always a commutative polynomial ring <cit.>. In <cit.>, actions of generalized Taft algebras on quantum planes were considered: here, the invariant rings are always commutative, and are either a polynomial ring or the coordinate ring of a well-understood singularity <cit.>. Other work in this direction includes actions on finite-dimensional algebras <cit.>, quantum generalised Weyl algebras <cit.>, path algebras of quivers <cit.>, and preprojective algebras <cit.>, although these algebras are not AS regular.
Our focus will be actions of Taft algebras on down-up algebras. These algebras were originally defined by Benkart and Roby <cit.>, who were interested in studying their representation theory. We will restrict attention to the subclass of down-up algebras which are noetherian and graded, denoted A(α,β) with α,β∈𝕜, β≠ 0. These are 𝕜-algebras which are generated by two elements subject to two cubic relations, and are AS regular algebras of global dimension 3 <cit.>. Kirkman, Kuzmanovich, and Zhang showed that A(α,β) is rigid in the sense that A^G is not isomorphic to A for any nontrivial subgroup G of automorphisms <cit.>. A similar result holds for group coactions; see <cit.>.
Our first result classifies all possible inner faithful, homogeneous actions of Taft algebras on noetherian down-up algebras:
Let n ⩾ 2 and suppose that T_n acts inner faithfully and homogeneously on a down-up algebra A(α,β). Then we have the following possibilities:
* For some 0 ⩽ k ⩽ n-1 and q ∈𝕜^×,
g =
[ ω^k+1 0; 0 ω^k ],
x = [ 0 q; 0 0 ],
α = ω^-(k+1) (1 + √(ω)), β = - ω^-2(k+1)√(ω),
where √(ω) denotes a choice of either one of the square roots of ω; or
* For some 0 ⩽ k ⩽ n-1 and r ∈𝕜^×,
g =
[ ω^k 0; 0 ω^k+1 ],
x = [ 0 0; r 0 ],
α = ω^k+1 (1 + √(ω^-1)), β = - ω^2(k+1)√(ω^-1),
where √(ω) denotes a choice of either one of the square roots of ω.
Conversely, each of the above parameter choices indeed give rise to an inner faithful action of T_n on an appropriate down-up algebra.
For the remainder of this introduction, let A = A(α,β) and T = T_n be a pair satisfying the conditions of the above theorem. We then seek to determine properties of the invariant rings A^T. By Remark <ref>, it will suffice to consider only case (1) from Theorem <ref>. As a first step, we give a presentation for the subring A^x of A consisting of x-invariant elements. It turns out that the structure of A^x depends crucially on whether √(ω) has order n or order 2n.
Suppose that A and T are as in Theorem <ref> (1).
* (Proposition <ref>) If √(ω) has order n, then A^x is a skew polynomial ring, generated by u, z ≔ vu-ω^-(k+1)uv, and v^n.
* (Proposition <ref>) If √(ω) has order 2n, then A^x is generated by u, z, v^2n and x^n-1· v^2n-1, and is the factor of a four-dimensional AS regular algebra by a homogeneous regular normal element.
We are then able to calculate the invariant ring A^T as the subring of A^x consisting of g-invariant elements. This allows us to exploit the well-understood properties of actions of finite groups on Artin–Schelter Gorenstein rings. Again, the order of the root of unity √(ω) has a substantial effect on the properties of A^T.
Suppose that A and T are as in Theorem <ref> (1), and that √(ω) has order n.
* (Lemma <ref>) A^T is commutative.
* (Proposition <ref>) A^T is a polynomial ring if and only if (k+1)(2k+1) ≡ 0 n. Otherwise, it is the coordinate ring of the product of affine 1-space with a (possibly non-Gorenstein) type 𝔸 singularity.
* (Corollary <ref>) A^T is AS Gorenstein if and only if
(k+1) (2k+1,n) + (2k+1) (k+1,n) ≡ 0 n.
Suppose that A and T are as in Theorem <ref> (1), and that √(ω) has order 2n.
* (Lemma <ref>) A^T is not commutative;
* (Lemma <ref>) A^T is not AS regular; and
* (Theorem <ref>) A^T is AS Gorenstein if n and k satisfy the following relationships:
(a) n ⩾ 2 and k = n-1;
(b) 4k+3 ≡ 0 n (which happens only if n is odd);
(c) n ⩾ 3 and 4k+2 ≡ 0 n.
The previous two theorems show that A^T can exhibit a wide variety of behaviours, often quite different from the semisimple setting. In particular, down-up algebras are rigid with respect to actions of Taft algebras, but are not “strongly rigid”, in the sense that it is possible for A^T to be Artin–Schelter regular, which is new behaviour. Additionally, there is an example where the homological determinant of the action is trivial but A^T is not Artin–Schelter Gorenstein (Example <ref>), which shows that the semisimple hypothesis is essential in <cit.>. Moreover, Example <ref> exhibits an action with trivial homological determinant for which the so-called Auslander map is not an isomorphism. This shows that one also requires the semisimple hypothesis in <cit.>.
Organisation of this paper. In Section 2, we recall some important definitions and results. We classify all possible inner faithful, homogeneous actions of Taft algebras on down-up algebras in Section 3. In Section 4, we establish some useful identities for subsequent calculations which, in Section 5, allows us to determine the subring of x-invariant elements. Finally, in Section 6 we use our understanding of the x-invariants to understand the full subring of invariants, and some of its properties.
Acknowledgements. S. Crawford thanks the Heilbronn Institute for Mathematical Research for their financial support during this work. R. Won was partially supported by Simons Foundation grant #961085. We also thank Benny Liber whose undergraduate research project with J. Gaddis motivated some of the ideas herein.
§ BACKGROUND AND PRELIMINARIES
In this section we recall some definitions and preliminary results which are necessary for our analysis.
Let H be a Hopf algebra with counit ϵ, comultiplication Δ, and antipode S.
We say H acts on an algebra A (from the left)
if A is a left H-module, h · 1_A = ϵ(h) 1_A, and for all h ∈ H and a,b ∈ A,
h · (ab) = ∑ (h_1 · a)(h_2 · b). Alternatively, in this setting we say that A is a left H-module algebra. If A is a graded algebra, we say that the action of H is homogeneous if each graded component A_i is an H-module. If A is an H-module algebra, we define the invariant ring A^H to be the following subring of A:
A^H ≔{ a ∈ A | h · a = ε(h) a for all h ∈ H }.
The action of H is said to be inner faithful if there
does not exist a nonzero Hopf ideal I such that I · A = 0.
When such an ideal exists, the Hopf algebra H/I acts naturally on A. In particular, if H = 𝕜G is a group algebra, then an action by H is inner faithful if and only if it is faithful.
Thus, the impetus for studying inner faithful actions is that they do not factor through the actions of a “smaller" Hopf algebra.
We will be interested in a specific family of Hopf algebras defined below.
Fix an integer n ⩾ 2 and let ω be a primitive nth root of unity.
The Taft algebra of dimension n^2 is the algebra
T_n = ⟨ x,g | g^n-1, x^n, gx-ω xg⟩.
There is a Hopf structure on T_n defined by declaring g to be grouplike and
x to be (g,1)-skew primitive.
Explicitly, the counit ϵ, comultiplication Δ, and antipode S
are defined on generators as follows:
ε(g) = 1, Δ(g) = g ⊗ g, S(g) = g^n-1
ε(x) = 0, Δ(x) = g ⊗ x + x ⊗ 1, S(x) = - g^n-1 x.
The group of grouplikes of T_n is C_n = ⟨ g ⟩.
The Taft algebras are non-semisimple, noncommutative, and noncocommutative.
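For orientation, the smallest case n = 2 (so ω = -1) recovers Sweedler's four-dimensional Hopf algebra
T_2 = ⟨ x,g | g^2-1, x^2, gx+xg⟩, Δ(g) = g ⊗ g, Δ(x) = g ⊗ x + x ⊗ 1,
which is the standard first example of a Hopf algebra that is neither commutative nor cocommutative.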
It is known that ⟨ x ⟩ is a Hopf ideal of T_n and T_n/ ⟨ x ⟩≅𝕜C_n, where C_n is the cyclic group of order n generated by g.
Therefore, to ensure that the action of T_n is inner faithful, we want to exclude the case where x acts trivially. More formally, we have the result below.
The action of T_n on a 𝕜-algebra is inner faithful if and only if the action of g is given by an automorphism of order n
and the action of x is nontrivial.
We do not recall the general definition of a down-up algebra,
as we will only be interested in the following subclass.
Let α,β∈𝕜.
The (graded) down-up algebra with parameters (α, β) is the algebra
with presentation
A(α,β) = ⟨ u,v ⟩/⟨[ v^2u - α vuv - β uv^2; vu^2 - α uvu - β u^2 v ]⟩.
By <cit.>, A = A(α,β) is noetherian if and only if β≠ 0.
We will assume this to be the case throughout. It is also well-known that A is a domain of global dimension and Gelfand–Kirillov (GK) dimension 3. Moreover, for any λ∈𝕜, A has PBW basis
{ u^i (vu-λ uv)^j v^k | i,j,k ⩾ 0 }
and, consequently, A has Hilbert series
Hilb_A (t) = 1/((1-t)^2(1-t^2)).
The characteristic polynomial of A(α,β) is the polynomial t^2-α t - β. Let γ_1, γ_2 be the roots of the characteristic polynomial. Then A is PI (i.e. satisfies a polynomial identity) if and only if γ_1 and γ_2 are roots of unity (see, e.g., <cit.>).
Moreover, by <cit.>, the elements Ω_i=vu-γ_i uv, i=1,2, satisfy
u Ω_1 = γ_2Ω_1 u,
v Ω_1 = γ_2 Ω_1 v,
u Ω_2 = γ_1Ω_2 u,
v Ω_2 = γ_1 Ω_2 v.
When performing computations for Taft actions on down-up algebras, many formulae that arise involve Gaussian binomial coefficients, defined as follows:
For non-negative integers m and r, and q ∈𝕜, define
mrq≔ (1-q^m)(1-q^m-1) ⋯ (1-q^m-r+1)/(1-q)(1-q^2) ⋯ (1-q^r),
with the convention that m0 q = 1.
It is straightforward to check that, if m < r, then mr q = 0. Moreover, if q is an nth root of unity, then m+nr q = mr q. In this case, if m is divisible by n then mr q = 0.
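For example, 31q = (1-q^3)/(1-q) = 1 + q + q^2, which vanishes precisely when q is a primitive third root of unity; it is this kind of vanishing at roots of unity that drives the invariance calculations later in the paper.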
Down-up algebras are examples of Artin–Schelter regular algebras, which should be thought of as noncommutative analogues of polynomial rings. We define these below, as well as the weaker notion of an Artin–Schelter Gorenstein algebra.
Let A be a connected graded 𝕜-algebra and write 𝕜 = A/A_⩾ 1 for the trivial module. We say that A is Artin–Schelter Gorenstein (or AS Gorenstein) of dimension d if:
* injdim _A A = injdim A_A = d < ∞, and
* There is an isomorphism of graded right A-modules
Ext^i(_A𝕜, _A A) ≅ 0 if i ≠ d, and Ext^d(_A𝕜, _A A) ≅𝕜[ℓ]_A,
for some integer ℓ, and a symmetric condition holds for Ext^i(𝕜_A, A_A), with the same integer ℓ. We call ℓ the Gorenstein parameter of A.
If, moreover,
(3) gldim A = d, and
(4) A has finite GK dimension,
then we say that A is Artin–Schelter regular (or AS regular) of dimension d.
Since we are interested in classifying the inner faithful actions of Taft algebras on down-up algebras, it will be helpful to have a description of the graded automorphism group of A(α,β).
As usual, we will restrict attention to inner faithful, homogeneous (left) actions. Let D(n,𝕜) denote the subgroup of GL(n,𝕜) consisting of diagonal matrices, and let S_2 denote the subgroup of order 2 of GL(2,𝕜) generated by the unique nontrivial permutation matrix.
The graded automorphism group of A(α,β) is as follows:
Aut_gr A(α,β) = GL(2,𝕜) if (α,β) ∈{ (0,1), (2,-1) }; D(2,𝕜) ⋊ S_2 if β = -1, α≠ 2; and D(2,𝕜) otherwise.
When A is a connected graded algebra and H is a semisimple Hopf algebra, properties of the invariant ring A^H are well-understood. However, if we remove the semisimple hypothesis from H then this is no longer the case. One of the motivations for the work in this paper is to try to better understand what types of behaviour are possible.
One particular result that no longer holds in this setting is Molien's Theorem, which allows one to calculate the Hilbert series of an invariant ring A^H when A is connected graded and H is a semisimple Hopf algebra. We recall this result in the specific case where H is a group algebra, which we will require later on. We first require a definition:
Suppose that A is a connected graded algebra and g ∈ Aut_gr(A). The trace of g on A is the formal power series
Tr_A (g,t) ≔∑_i ⩾ 0 tr(g|_A_i) t^i ∈𝕜[[t]],
where tr(-) is the usual trace of a linear map.
If, moreover, A is AS Gorenstein of dimension d and Gorenstein parameter ℓ, and Tr_A(g,t) can be written as a rational function, then we can expand it as a power series in t^-1:
Tr_A(g,t) = (-1)^n c^-1 t^-ℓ + lower order terms.
The homological determinant of g acting on A is defined to be hdet_A(g) = c. This gives rise to a group homomorphism
hdet_A : G →𝕜^×.
If, in addition, A is AS regular with Hilbert series
Hilb_A(t) = 1/((1-t)^n p(t))
where p(1) ≠ 0, then we say that g is a quasi-reflection if
Tr_A(g,t) = 1/((1-t)^n-1 q(t))
where q(1) ≠ 0.
The homological determinant is typically defined using local cohomology, and can be defined for semisimple Hopf algebras as well, but the definition above will be sufficient for our purposes. We note that Tr_A(g,t) is a rational function when, in particular, A is (a quotient of an) AS regular algebra or is PI <cit.>.
Suppose that G is a finite group acting on a connected graded algebra A. Then the Hilbert series of A^G can be calculated as follows:
Hilb_A^G(t) = 1/|G|∑_g ∈ G Tr_A(g,t).
The homological determinant plays an important role in detecting when an invariant ring is AS Gorenstein:
Suppose that A is noetherian and AS Gorenstein, and that G is a finite subgroup of Aut_gr(A). If hdet_A(g) = 1 for each g ∈ G, then A^G is AS Gorenstein.
When hdet_A(g) = 1 for all g ∈ G, we say that G acts on A with trivial homological determinant. While we do not define the homological determinant in the wider generality of Hopf actions, we mention that a general semisimple Hopf algebra H is said to act with trivial homological determinant if hdet_A = ε as maps of 𝕜-algebras.
Another way to determine when a connected graded ring is AS Gorenstein is via Stanley's Theorem, which was originally proved for commutative rings in <cit.>. We state a noncommutative version of this result which is suitable for our purposes.
Let A be an AS Cohen–Macaulay algebra which is a domain and is PI. Then A is AS Gorenstein if and only if the equation of rational functions
Hilb_A(t^-1) = ± t^m Hilb_A(t)
holds for some integer m.
We will not define the notion of an AS Cohen–Macaulay algebra here. However, we note that, if A is the invariant ring of a finite group acting on an AS Gorenstein ring, then it is AS Cohen–Macaulay by <cit.>.
§ CLASSIFICATION OF ACTIONS
We begin by classifying all possible actions of a Taft algebra on a down-up algebra. Suppose that T_n acts inner faithfully and homogeneously on A(α,β) for some n ⩾ 2 and α,β∈ which are yet to be determined. Abusing notation, write
g = [ a b; c d ], x = [ p q; r s ]
for the matrices representing the actions of g and x, for some a,b,c,d,p,q,r,s ∈. To determine the values that these parameters can take, we need to ensure that g and x satisfy the defining relations of T_n, that the ideal of relations defining A(α,β) is invariant under the actions of g and x, and that Lemma <ref> is satisfied. In particular, since x is a 2 × 2 matrix, the relation x^n = 0 implies x^2=0, so that
[ p^2 + qr q(p+s); r(p+s) s^2 + qr ]
= 0,
From the perspective of invariant theory, it suffices to determine our representation of H up to conjugation by an element of Aut_gr(A). By Lemma <ref>, if (α,β) = (0,1) or (α,β) = (2,-1), then by conjugating by a suitable element of Aut_gr(A) = GL(2,𝕜), we may assume that g is diagonal. In light of Lemma <ref>, the only other possibility for g is for it to be antidiagonal, which only occurs when β = -1. In this case, if α≠ 2, then it is not possible to diagonalise the action of g.
We first treat the case where g is diagonalisable:
Suppose that T_n acts inner faithfully and homogeneously on some A(α,β), where the representation of T_n is given by
g =
[ a 0; 0 d ],
x = [ p q; r s ]
for some a,d ∈^× and p,q,r,s ∈. Then either:
* a=ω^k+1, d = ω^k for some 0 ⩽ k ⩽ n-1, q ≠ 0, p=r=s=0, and
α = ω^-(k+1) (1 + √(ω)), β = - ω^-2(k+1)√(ω)
where √(ω) denotes a choice of either one of the square roots of ω; or
* a=ω^k, d = ω^k+1 for some 0 ⩽ k ⩽ n-1, r ≠ 0, p=q=s=0, and
α = ω^k+1 (1 + √(ω^-1)), β = - ω^2(k+1)√(ω^-1).
where √(ω) denotes a choice of either one of the square roots of ω.
Conversely, each of the above parameter choices indeed give rise to an inner faithful action of T_n on an appropriate down-up algebra.
We first note that the relation gx = ω xg tells us that
[ (1-ω)ap q(a-ω d); r(d-ω a) (1-ω) ds ]
= 0.
As ω≠ 1 and a ≠ 0 ≠ d, we deduce that p=0=s. By (<ref>), we then have qr=0 so, to ensure that the action is inner faithful, precisely one of q and r must be nonzero.
Suppose first that q ≠ 0 and r = 0. The top-right entry of (<ref>) implies q(a-ω d) = 0, so that a = ω d. Viewing the variables u and v as elements of the free algebra ⟨ u, v⟩, using x · u = 0 we obtain
x · (v u^2) = (x · v) u^2 = qu^3,
x · (uvu) = (g · u)(x · v) u = qω d u^3,
x · (u^2 v) = (g · u) (g · u) (x · v) = qω^2 d^2 u^3.
Therefore,
x · (v u^2 - α uvu - β u^2 v) = q(1 - αω d -βω^2 d^2)u^3
and hence we must have
1 - αω d -βω^2 d^2 = 0.
Similarly, we have
x · (v^2 u) = (g · v) (x · v) u + (x · v) v u = q(dvu^2 + uvu),
x · (vuv) = (g · v) (g · u) (x · v) + (x · v) uv = q(ω d^2 vu^2 + u^2v),
x · (uv^2) = (g · u) (g · v) (x · v) + (g · u) (x · v) v = q( ω d^2 uvu + ω d u^2 v),
and so
x · (v^2 u - α vuv - β u v^2)
= q(d - ωα d^2) vu^2 - q(ωβ d^2 - 1) uvu - q(α+ωβ d) u^2v.
Comparing coefficients of vu^2 and uvu with those in the first down-up relation shows that
α q (d-ωα d^2) = q (ωβ d^2 - 1).
Substituting α = (ω d)^-1(1- βω^2 d^2) from (<ref>) into (<ref>) gives ω^3 β^2 d^4 - 1 = 0, and hence
β = - (ω d)^-2√(ω),
where we write √(ω) for one of the two possible choices for a square root of ω. Substituting this into (<ref>) then gives
α = (ω d)^-1(1 + √(ω)).
Now, we must have d = ω^k for some 0 ⩽ k ⩽ n-1 to ensure that g^n=1. Since a = ω d = ω^k+1, and k and k+1 are coprime, it follows that g has order exactly n. Finally, it remains to check that these choices give a well-defined action of T_n on A(α, β). This is true by construction, with the only exception being that we must verify that x maps the first down-up relation into the ideal of relations defining A(α,β). By (<ref>), this happens if, and only if, q β d (1-ωα d) = q (α + ωβ d), and it is straightforward to check that this equation is satisfied for our parameter choices. This corresponds to option (1) in the statement of the lemma.
The analysis when r ≠ 0 and q = 0 is similar and hence omitted; it gives rise to option (2) in the statement.
It remains to consider the case where g is antidiagonal:
There are no inner faithful, homogeneous Taft actions on A(α,β) when β = -1 and α≠ 2, where g has the form
g = [ 0 b; c 0 ].
The relation gx = ω xg now tells us that
[ br-ω cq b(s-ω p); c(p-ω s) cq-ω br ]
= 0.
Direct calculation gives
x · (v^2 u) = (g · v) (g · v) (x · u) + (g · v) (x · v) u + (x · v) vu
= b(bp+q) u^3 + 0 · v^3 + s v^2 u + 0 · uv^2 + other terms
x · (vuv) = (g · v) (g · u) (x · v) + (g · v) (x · u) v + (x · v) uv
= 0 · u^3 + 0 · v^3 + 0 · v^2 u + b(cs+r) uv^2 + other terms
x · (uv^2) = (g · u) (g · v) (x · v) + (g · u) (x · v) v + (x · u) v^2
= 0 · u^3 + (cs+r) v^3 + 0 · v^2 u + p uv^2 + other terms
It follows that
x · (v^2u - α vuv - β u v^2) = b(bp + q)u^3 + (cs + r)v^3 + s v^2 u + (p - α b r - α b c s) u v^2 + other terms.
To ensure that the coefficients of u^3 and v^3 vanish, we require r = -cs and q = -bp. Since the coefficients of v^2 u and uv^2 must be equal, this then implies that p = s. The top-right entry of (<ref>) now tells us that -2cp^2 = 0. However, we cannot have c=0, otherwise g is singular, nor can we have p=0, else x=0. This contradiction tells us that no such action is possible.
This exhausts all possibilities, so we summarise our findings below:
Let n ⩾ 2 and suppose that T_n acts inner faithfully and homogeneously on a down-up algebra A(α,β). Then we have the following possibilities:
* For some 0 ⩽ k ⩽ n-1 and q ∈^×,
g =
[ ω^k+1 0; 0 ω^k ],
x = [ 0 q; 0 0 ],
α = ω^-(k+1) (1 + √(ω)), β = - ω^-2(k+1)√(ω),
where √(ω) denotes a choice of either one of the square roots of ω; or
* For some 0 ⩽ k ⩽ n-1 and r ∈^×,
g =
[ ω^k 0; 0 ω^k+1 ],
x = [ 0 0; r 0 ],
α = ω^k+1 (1 + √(ω^-1)), β = - ω^2(k+1)√(ω^-1),
where √(ω) denotes a choice of either one of the square roots of ω.
Conversely, each of the above parameter choices indeed give rise to an inner faithful action of T_n on an appropriate down-up algebra.
The algebras A(α,β) in Theorem <ref> are PI.
Setting t = (ω d)^-1 in (<ref>) gives the polynomial
t^2 - α t - β,
which has a root t_1 = (ω d)^-1 = ω^-(k+1). The other root t_2 satisfies t_1 t_2 = -β = ω^-2(k+1)√(ω), and hence t_2 = ω^-(k+1)√(ω). Therefore both roots of (<ref>) are roots of unity, so A(α,β) is PI. This covers case (1) from Theorem <ref>, and the analysis for case (2) is similar.
We are not aware of an inner faithful, homogeneous action of a Taft algebra on an algebra A where A is not PI.
There are no inner faithful, homogeneous actions of a Taft algebra on A(0,1) or A(2,-1).
In Theorem <ref>, it is not possible for the parameter α to attain the values 0 or 2.
In light of Corollary <ref>, Theorem <ref> actually gives all possible homogeneous, inner faithful actions of a Taft algebra on a down-up algebra, not just up to conjugation.
If we allow ourselves to classify the actions up to conjugation, then the actions in (1) and (2) of Theorem <ref> above are equivalent, in the following sense. If we conjugate the representation of T_n in (2) by the matrix [ 0 1; 1 0 ] then we have
g ↦[ ω^k+1 0; 0 ω^k ],
x ↦[ 0 r; 0 0 ],
and this has the additional effect of swapping u and v in A. Accordingly, the relations on A(α,β) become
u^2 v - α uvu - β v u^2 = -β( vu^2 - -α/β uvu - 1/β u^2 v ),
u v^2 - α vuv - β v^2 u = -β( v^2u - -α/β vuv - 1/β u v^2 ),
so that we now have an action on A( -α/β, 1/β). In particular, if we begin with one of the actions in part (2), then
-α/β = ω^k+1 (1 + √(ω^-1))/ω^2(k+1)√(ω^-1) = ω^-(k+1)(1+√(ω)), 1/β = 1/ω^2(k+1)√(ω^-1) = -ω^-2(k+1)√(ω)
which is of the form given in (1). Accordingly, it suffices to consider only case (1) when analysing properties of the invariant ring A(α,β)^T_n.
Additionally, conjugating by the matrix [ p 0; 0 1 ], which is an automorphism of any down-up algebra, puts the matrix of x from (1) into Jordan normal form, while leaving the matrix of g and the relations in A(α,β) unchanged, so we may assume p=1 when analysing A(α,β)^T_n.
We can restate Theorem <ref> by fixing the down-up algebra A(α, β) and determining which Taft algebras act inner faithfully and homogeneously on it. Recall that, for such a down-up algebra, many of its properties are determined the roots of the polynomial f(t) = t^2 - α t - β. Call these roots γ_1 and γ_2 (which need not be distinct). Note that α = γ_1 + γ_2, β = -γ_1γ_2, and A(α, β) is PI if and only if both γ_1 and γ_2 are roots of unity.
Let A(α, β) be a down-up algebra where (α, β) ≠ (2, -1), (0,1). There is an inner faithful, homogeneous action of some Taft algebra on A(α, β) if and only if γ_1 and γ_2 are roots of unity, and either γ_1 or γ_2 is some power of γ_1^-2γ_2^2. In this case, the only Taft algebra which acts is T_n, where n is the multiplicative order of γ_1^-2γ_2^2. For each q ∈^×, there are two actions of T_n on A(α, β): either
g =
[ γ_1 0; 0 γ_1 γ_2^-2 ],
x = [ 0 q; 0 0 ],
or
g =
[ γ_1γ_2^2 0; 0 γ_1 ],
x = [ 0 0; q 0 ],
where (if necessary) we have relabeled so that γ_1 is the power of γ_1^-2γ_2^2.
In particular, if A(α, β) is not PI, then no Taft algebra acts on it inner faithfully and homogeneously.
Note that α and β determine the roots of the polynomial f(t) = t^2 - α t - β and vice versa. By Theorem <ref>, if A(α, β) admits an inner faithful homogeneous action of some Taft algebra, then we must have
α = ζ^-(k+1)(1 + √(ζ)) and β = - ζ^-2(k+1)√(ζ)
for some root of unity ζ and some integer 0 ⩽ k ⩽ n - 1, where √(ζ) denotes a choice of either one of the square roots of ζ. Then the roots of the polynomial f(t) are given by ζ^-(k+1) and ζ^-(k+1)√(ζ). In particular, both roots of f(t) are roots of unity and so A(α, β) must be PI.
It is clear that in all of the actions given in Theorem <ref>, one of the roots of f(t) is some power of the square of the quotient of the roots.
Conversely, suppose that γ_1 and γ_2 are roots of unity. If γ_1 is some power of γ_1^-2γ_2^2, then let ω = γ_1^-2γ_2^2 and choose 0 ⩽ k ⩽ n-1 such that γ_1 = ω^-(k+1). Then we have γ_2^2 = γ_1^2 ω = ω^-2(k+1)ω and so γ_2 = ω^-(k+1)√(ω) for some choice of √(ω). Otherwise, if γ_2 is some power of γ_1^-2γ_2^2, then set ω = γ_1^2γ_2^-2 and choose 0 ⩽ k ⩽ n - 1 such that γ_2 = ω^-(k+1) so γ_1 = ω^-(k+1)√(ω). Then by Theorem <ref>, there exists an action of T_n on A(α, β), where n is the order of ω = γ_1^-2γ_2^2.
For each fixed down-up algebra of this form, without loss of generality, relabel the roots of f(t) so that γ_1 is some power of γ_1^-2γ_2^2. Then by the paragraph above, we see that ω^k+1 = γ_1 and ω^k = ω^k+1/ω = γ_1 γ_2^-2 and so Theorem <ref> (1) gives the action
g =
[ γ_1 0; 0 γ_1 γ_2^-2 ],
x = [ 0 q; 0 0 ]
for each q ∈^×. The same down-up algebra A(α, β) appears in part (2) of Theorem <ref>, acted on by the Taft algebra T_n with parameter ω. Hence, Theorem <ref> (2) gives the action
g =
[ γ_1γ_2^2 0; 0 γ_1 ],
x = [ 0 0; q 0 ]
for each q ∈^×. Each down-up algebra appears exactly once in (1) and once in (2) for each choice of q ∈^×, which completes the proof.
We end this section by considering the homological determinant of the actions in Theorem <ref>. By <cit.>, if H is a semisimple Hopf algebra acting on a derivation-quotient algebra with associated twisted superpotential 𝐰, then the homological determinant satisfies
h ·𝐰 = hdet_A(h) 𝐰
for all h ∈ H. The proof of this result uses <cit.>, the proof of which does not require H to be semisimple (c.f. <cit.>). It follows that (<ref>) can be used to calculate the homological determinant of the action of any finite-dimensional Hopf algebra on a derivation-quotient algebra.
Down-up algebras are examples of derivation-quotient algebras, with associated twisted superpotential
𝐰 = uv^2u - α uvuv - β u^2v^2 - β^-1 v^2u^2 + αβ^-1 vuvu + v u^2 v,
viewed as an element in the free algebra ⟨ u,v ⟩. A lengthy computation then shows that, for the actions under consideration in this paper, we have x ·𝐰 = 0. Since g ∈ Aut_gr(A), the usual definition of the homological determinant applies, and shows that hdet_A(g) = ω^4k+2 = det(g)^2 (alternatively, one can apply (<ref>) to determine this value).
Suppose that T=T_n acts on A=A(α,β) as in Theorem <ref>. Then
hdet_A (g) = ω^4k+2 and hdet_A (x) = 0.
In particular, the homological determinant is trivial if and only if 4k+2 ≡ 0 n.
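For example, when n = 6 the condition 4k+2 ≡ 0 (mod 6) forces k ≡ 1 (mod 3), so only k = 1 and k = 4 give actions with trivial homological determinant; on the other hand, when n is divisible by 4 the condition holds for no value of k, since 4k+2 ≡ 2 (mod 4).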
§ USEFUL IDENTITIES
In this section, we record various identities that will be useful for computations later in the paper. Many of these identities will depend on certain Gaussian binomial coefficients.
For the remainder of this paper, we write T = T_n for a Taft algebra, depending on some integer n ⩾ 2, which acts on a down-up algebra A = A(α,β), where both A and the action of T on A depend on some parameter 0 ⩽ k ⩽ n-1, as in Theorem <ref>. In light of Remark <ref>, it suffices to restrict attention to case (1) of Theorem <ref>, so we will assume that g and x are represented by the following matrices:
g = [ ω^k+1 0; 0 ω^k ],
x = [ 0 1; 0 0 ].
In addition, we will henceforth write z ∈ A for the element
z ≔ vu - ω^-(k+1) uv,
which clearly depends on our choice of action. In particular, we will use the following PBW basis for A for the remainder of this paper:
{ u^i z^j v^ℓ| i,j,ℓ⩾ 0 }.
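Note that g acts diagonally on this basis: since g · u = ω^k+1 u, g · v = ω^k v and hence g · z = ω^2k+1 z, we have
g · (u^i z^j v^ℓ) = ω^(k+1)i + (2k+1)j + kℓ u^i z^j v^ℓ,
an observation we use without further comment in several computations below.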
The next lemma follows directly from (<ref>).
The element z is normal and satisfies
zu = ω^-(k+1)√(ω) uz, and vz = ω^-(k+1)√(ω) zv.
We have explicitly defined the action of T on the degree 1 piece of A, with its action on higher degree pieces following from the fact that A is a T-module algebra. It will be useful to record the action of x on the basis (<ref>).
The elements u and z are x-invariant for all n and k. In particular, every element in the subring of A generated by u and z is x-invariant.
It is clear that u is invariant under x, while direct calculation gives
x · z = (g · v)(x · u) + (x · v)u - ω^-(k+1) (g · u)(x · v) - ω^-(k+1) (x· u)v
= u^2 - ω^-(k+1)ω^k+1 u^2 = 0.
On a basis element u^i z^j v^ℓ, using Lemma <ref> we obtain
x · (u^i z^j v^ℓ) = (g · (u^i z^j)) (x · v^ℓ) + (x · (u^i z^j)) v^ℓ = (g · (u^i z^j)) (x · v^ℓ) = ω^i(k+1) + j(2k+1) u^i z^j (x · v^ℓ).
It remains to give a formula for the action of x on powers of v. We define the following notation, which we will use frequently in this paper:
λ_m ≔m1ω^-1, μ_m ≔ω^k m2√(ω)^-1.
We have
x · v^m = λ_m u v^m-1 + μ_m z v^m-2.
We prove this result by induction on m. The result holds when m=1 since, in this case, both sides of the equation in the lemma are equal to u.
Now suppose that m ⩾ 2 and that the result holds for smaller m. We then have
x · v^m = x · (v v^m-1)
= (g · v)(x · v^m-1) + (x · v) v^m-1
= ω^k v ( λ_m-1 u v^m-2 + μ_m-1 z v^m-3) + u v^m-1
= ω^k λ_m-1 (z + ω^-(k+1) uv) v^m-2 + √(ω)^-1μ_m-1 zv^m-2 + uv^m-1
= ( 1 + ω^-1λ_m-1) uv^m-1 + ( ω^k λ_m-1 + √(ω)^-1μ_m-1) zv^m-2
= λ_m u v^m-1 + μ_m z v^m-2,
where the final equality uses the identities 1 + ω^-1m-11ω^-1 = m1ω^-1 and m-11ω^-1 + √(ω)^-1m-12√(ω)^-1 = m2√(ω)^-1, both of which are easily checked directly,
as desired.
As a corollary, we have the following result which describes the effect of acting by x on various subspaces of A:
Fix integers p,q,r ⩾ 0. For 0 ⩽ m ⩽r/2, define
U_m { u^p+i z^q+m-i v^r-2m+i| 0 ⩽ i ⩽ m }.
Then x · U_m ⊆ U_m+1.
The result follows from a direct calculation, using Lemmas <ref> and <ref>:
x ·( u^p+m-i z^q+i v^r-m-i)
= ω^(p+m-i)(k+1) + (q+i)(2k+1) u^p+m-i z^q+i (x · v^r-m-i)
= ω^(p+m-i)(k+1) + (q+i)(2k+1) u^p+m-i z^q+i( λ_r-m-i u v^r-m-i-1 + μ_r-m-i z v^r-m-i-2)
= ω^(p+m-i)(k+1) + k(q+i)√(ω)^q+iλ_r-m-i u^p+(m+1)-i z^q+i v^r-(m+1)-i
+ ω^(p+m-i)(k+1) + (q+i)(2k+1)μ_r-m-i u^p+(m+1)-(i+1) z^q+(i+1) v^r-(m+1)-(i+1)
∈ U_m+1,
where we have recorded the coefficients in the above calculation for later use.
Finally, the following lemma shows how to write the element v^m u in terms of the basis (<ref>), and is a translated version of <cit.>. Rather than making the translation explicit, we provide a direct proof in the notation of this paper.
We have
v^m u = ω^-m(k+1) u v^m + ω^-(m-1)(k+1)m1√(ω) z v^m-1.
We also prove this result by induction on m. The case m=1 says that
vu = ω^-(k+1) uv + z
which follows from the definition of z. When m=2, the right-hand side of the expression in the lemma is
ω^-2(k+1) uv^2 + ω^-(k+1)(1 + √(ω)) zv
= ω^-2(k+1) uv^2 + ω^-(k+1)(1 + √(ω)) (vu - ω^-(k+1) uv) v
= ω^-(k+1)(1 + √(ω)) vuv - ω^-2(k+1)√(ω) uv^2
= α vuv + β uv^2,
which is equal to v^2 u by definition, as required.
For the inductive step, assume m ⩾ 3. We then have
v^m u = v^2 (v^m-2 u)
= v^2 ( ω^-(m-2)(k+1) u v^m-2 + ω^-(m-3)(k+1)m-21√(ω) z v^m-3)
= ω^-(m-2)(k+1)( ω^-2(k+1) uv^2 + ω^-(k+1)(1 + √(ω)) zv ) v^m-2
+ ω^-(m-3)(k+1)ω^-(2k+1)m-21√(ω) zv^m-1
= ω^-m(k+1) uv^m
+ ω^-(m-1)(k+1)( (1 + √(ω)) + ωm-21√(ω)) zv^m-1
= ω^-m(k+1) uv^m
+ ω^-(m-1)(k+1)m1√(ω) zv^m-1,
as desired.
§ THE SUBRING OF X-INVARIANTS
The remainder of this paper is dedicated to determining the invariant ring A^T and some of its properties. Our strategy for computing A^T will be to first determine the elements which are invariant under the action of x ∈ T, which we denote A^x, and then to determine which of these are additionally invariant under the action of g ∈ T. Assuming A^x is suitably nice, this will allow us to use tools from the invariant theory of finite groups.
In general, the elements of a ring R which are invariant under the action of a single element of a Hopf algebra need not form a subring of R. However, it is well-known that in a Hopf algebra H, if g_1, g_2 ∈ H are grouplike and h ∈ H is (g_1, g_2)-primitive, then A^h is a subring of A. In particular, we have the following:
Let A^x denote the subset of A consisting of x-invariant elements:
A^x = { a ∈ A | x · a = 0 }.
Then A^x is a subring of A.
It will turn out that there is a dichotomy in the behaviour of the invariants, depending on whether √(ω) has order n or order 2n. In this section, we consider these two cases separately.
§.§ The case when √(ω) has order n
In this subsection, we make the standing assumption that the root of unity √(ω) has order n, and will not repeat this in the statements of results. This case can only occur when n is odd, and amounts to choosing ω^(n+1)/2 as the square root of ω.
We begin by identifying some elements in A^x.
We have u,z,v^n ∈ A^x.
That u and z are x-invariant is Lemma <ref>. To see that v^n is x-invariant, we appeal to Lemma <ref>. Indeed, we have
x · v^n = \binom{n}{1}_{ω^{-1}} u v^n-1 + ω^k \binom{n}{2}_{√(ω)^{-1}} z v^n-2.
Since ω^-1 and √(ω)^-1 are primitive nth roots of unity, both of the Gaussian binomial coefficients in the above expression vanish, as desired.
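For instance, when n = 3 we have √(ω) = ω^2, so that √(ω)^-1 = ω, and
\binom{3}{1}_{ω^{-1}} = 1 + ω^-1 + ω^-2 = 0, \binom{3}{2}_{√(ω)^{-1}} = 1 + ω + ω^2 = 0,
so x · v^3 = 0, as the lemma asserts.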
In fact, these elements generate A^x, and this subring has a nice form:
The subring A^x is generated by u,z, and v^n, and is a skew polynomial ring where
v^n u = u v^n, zu = ω^-(k+1)√(ω) uz, v^n z = zv^n.
In particular, A^x is AS regular.
We can write any p ∈ A as p = ∑_m=0^d p_m v^m for some
p_m in the subalgebra generated by u and z, which is a skew polynomial ring.
We may assume p_d ≠ 0.
Note that each of the p_m is x-invariant, by Lemma <ref>. Suppose p as above lies in A^x. We then have
0 = x · p = ∑_m=0^d (g · p_m)( x · v^m) = ∑_m=0^d (g · p_m) (λ_m uv^m-1 + μ_m z v^m-2)
by Lemma <ref>.
In the above, the coefficient of v^d-1 is λ_d (g · p_d) u. Since p_d ≠ 0, this forces λ_d = 0, and so d must be divisible by n, say d = d' n for some integer d'. By Lemma <ref>, v^d = (v^n)^d' is x-invariant. Then p - p_d (v^n)^d'∈ A^x has strictly smaller v-degree than p, and an induction argument now shows that p can be generated by u,z, and v^n.
It is straightforward to check that the given elements (skew-)commute as claimed in the statement of the proposition. Moreover, there are no algebraic relations between u, z, and v^n because { u^i z^j v^ℓ| i,j,ℓ⩾ 0 } is a PBW basis for A. Therefore A^x is a skew polynomial ring.
We have the following corollary, which will be helpful when trying to understand the full invariant ring A^T.
The Hilbert series of A^x is
_A^x(t) = 1/(1-t)(1-t^2)(1-t^n).
§.§ The case when √(ω) has order 2n
In this subsection, we make the standing assumption that the root of unity √(ω) has order 2n, and will not repeat this in the statements of results. This is always the case when n is even, and can also occur when we choose the appropriate square root of ω when n is odd, namely √(ω) = - ω^(n+1)/2.
As in the previous section, we begin by determining some elements of A which are x-invariant. The differences in the behaviour of the invariant ring in this subsection, compared with the previous subsection, stem from the fact that v^n is no longer x-invariant. Instead, the smallest power of v which is x-invariant is v^2n:
We have u, z, v^2n∈ A^x, while v^n ∉ A^x.
We only need to establish the claims about v^2n and v^n. Lemma <ref> tells us that
x · v^m = \binom{m}{1}_{ω^{-1}} u v^m-1 + ω^k \binom{m}{2}_{√(ω)^{-1}} z v^m-2,
and both of the Gaussian binomial coefficients vanish when m=2n, but only the former vanishes when m=n.
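For instance, when n = 2 we have ω = -1 and √(ω) = ± i. Then \binom{2}{1}_{ω^{-1}} = 1 + ω^-1 = 0 while \binom{2}{2}_{√(ω)^{-1}} = 1, so x · v^2 = ω^k z ≠ 0 and v^n = v^2 is not x-invariant; on the other hand, \binom{4}{1}_{ω^{-1}} = 1 - 1 + 1 - 1 = 0 and \binom{4}{2}_{√(ω)^{-1}} = (1 + √(ω)^-2)(1 + √(ω)^-1 + √(ω)^-2) = 0 since √(ω)^-2 = ω^-1 = -1, so v^2n = v^4 is x-invariant.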
Since v^n is not x-invariant, there is a possibility that there are elements in degree at most 2n which cannot be generated by just u and z. Much of this section will be devoted to identifying one such element, and determining its properties.
For the remainder of this paper, we write
a ≔ x^n-1· v^2n-1,
which is clearly x-invariant. For future reference, we record how g acts on a:
g · a = g · (x^n-1· v^2n-1) = ω^n-1 (x^n-1· (g · v^2n-1)) = ω^-1 (x^n-1·ω^(2n-1)k v^2n-1) = ω^-(k+1) a.
We claim that A^x is equal to the subalgebra generated by the elements a, u, z, and v^2n, namely
B ≔⟨ a, u, z, v^2n⟩.
We first require a lemma:
If we write a = x^n-1· v^2n-1 in terms of the PBW basis (<ref>), then u^n-1 v^n appears with nonzero coefficient, and every other term appears with smaller v-degree.
To see this, first set p=0, q=0, and r = 2n-1 in Lemma <ref>; then v^2n-1∈ U_0 and the spaces
U_m { u^i z^m-i v^2n-1-2m+i| 0 ⩽ i ⩽ m }, 0 ⩽ m ⩽ n-1,
satisfy x · U_m ⊆ U_m+1. In particular, this shows that every term in a has v-degree at most n. Now, if p ∈ U_m, the only term that contributes to the coefficient of u^m+1 v^2n-m-2 in x · p ∈ U_m+1 is the term arising from x · u^m v^2n-m-1. Thus
x^n-1· v^2n-1 = x^n-2· (λ_2n-1 uv^2n-2 + other terms)
= x^n-3· (λ_2n-1λ_2n-2ω^k+1 u^2v^2n-3 + other terms)
⋮
= ( ∏_i=m^n-1λ_2n-mω^(m-1)(k+1)) u^n-1 v^n + other terms,
where the coefficient in (<ref>) is nonzero.
We are now in a position to show that B = A^x.
Let B be the subalgebra of A generated by
u, z, v^2n, a x^n-1· v^2n-1,
as in (<ref>). Then B=A^x.
By Lemma <ref>, we have B ⊆ A^x, so it remains to establish the reverse inclusion. To this end, let p ∈ A^x. Since p ∈ A, we can write
p = ∑_m=0^d p_m(u,z) v^m
for some p_m = p_m(u,z) = ∑_i,jα_i,j,m u^i z^j, where p_d ≠ 0. Note that if d ≡ 0 2n, then x ·(p_d v^d ) = 0, and p - p_d v^d ∈ A^x, so we may as well assume d ≢0 2n.
Since p ∈ A^x, using the notation of (<ref>), we have
0 = x · p
= ∑_m=0^d (g · p_m) (x · v^m)
= ∑_m=0^d (g · p_m) ( λ_m uv^m-1 + μ_m zv^m-2)
= ∑_m=0^d λ_m (g · p_m) uv^m-1 + μ_m (g · p_m) z v^m-2.
The coefficient of v^d-1 in x · p is λ_d (g · p_d) u = d1ω (g · p_d) u, which is necessarily equal to 0. Since p_d ≠ 0, we must have d1ω = 0. This forces d ≡ 0 n, and since d ≢0 2n, we must have d ≡ n 2n.
Now, the coefficient of v^d-2 in x · p is necessarily 0, so we obtain
λ_d-1 (g · p_d-1) u = -μ_d (g· p_d) z.
Since d ≡ n 2n, both d-11ω and d2_√(ω) are nonzero and so, since the right hand side of the above equality has a factor of u, it follows that u divides g · p_d, and hence also divides p_d.
The same argument holds for each m with d-n ⩽ m ⩽ d-2: the coefficient of v^m is
λ_m+1 (g · p_m+1) u + μ_m+2 (g· p_m+2) z
where both λ_m+1 and μ_m+2 are nonzero.
Since p_m+1 u has a factor of u, the same must be true of p_m+2.
It follows that u^n-1 is a factor of p_d, and hence the term in p containing v^d has the form
p_d(u,z) v^d = p̃_d(u,z) u^n-1 v^n (v^2n)^ℓ,
for some polynomial p̃_d(u,z) in u and z, and some non-negative integer ℓ. The element p̃_d a (v^2n)^ℓ lies in B and, by Lemma <ref>, has the same coefficient of v^d as p, up to a nonzero scalar. It follows that, for a suitably chosen γ∈𝕜, the element p - γp̃_d a (v^2n)^ℓ∈ A^x has strictly smaller v-degree than p. An induction argument now shows that p ∈ B.
Having written down a generating set for A^x, our goal is to now give a presentation for this ring. In particular, we will need to determine skew-commutativity relations between our generating set, as well as any other relations among them.
We first give a closed-form for an element which is closely related to a.
We have
x^n-1· v^2n-2 = ω^{(1/2)n(n-1)(2k+1) + k(n-1)}∏_i=1^n-1\binom{2i}{2}_{√(ω)^{-1}} z^n-1.
In Lemma <ref>, set p=0, q=0, and r=2n-2, which tells us that the subspaces
U_m { u^i z^m-i v^2n-2-2m+i| 0 ⩽ i ⩽ m }, 0 ⩽ m ⩽ n-1,
satisfy x · U_m ⊆ U_m+1. Since v^2n-2∈ U_0, we obtain
x^n-1· v^2n-2∈ U_n-1 = { u^i z^n-1-i v^i| 0 ⩽ i ⩽ n-1 }.
We also know that x^n-1· v^2n-2 is x-invariant, so consider an arbitrary element of U_n-1∩ A^x, say
p = ∑_i=0^n-1α_i u^i z^n-1-i v^i.
Then
0 = x · p = ∑_i=0^n-1α_i g · (u^i z^n-1-i) x · v^i
=∑_i=1^n-1α_i g · (u^i z^n-1-i) ( λ_i uv^i-1 + μ_i z v^i-2).
The coefficient of v^n-2 is α_n-1λ_n-1 g · (u^i z^n-i-i) u and, since λ_n-1≠ 0, we deduce that α_n-1 = 0. Repeating this argument shows that α_i = 0 for 1 ⩽ i ⩽ n-1, so that p = α_0 z^n-1. It follows that x^n-1· v^2n-2 is a scalar multiple of z^n-1. To determine the value of this scalar, we note that if p ∈ U_m, the only term that contributes to the coefficient of z^m+1 v^2n-2-2(m+1) in x · p ∈ U_m+1 is the term arising from x · (z^m v^2n-2-2m), namely ω^m(2k+1)μ_2n-2-2m z^m+1 v^2n-2-2(m+1). Therefore,
x^n-1· v^2n-2 = x^n-2· (μ_2n-2 z v^2n-4 + other terms)
= x^n-3· (μ_2n-2μ_2n-4ω^2k+1 z^2 v^2n-6 + other terms)
⋮
= (∏_i=1^n-1ω^(i-1)(2k+1)μ_2i) z^n-1
= ω^1/2n(n-1)(2k+1) + k(n-1)∏_i=1^n-12i2 z^n-1.
With the above result in hand, we are able to determine the skew-commutativity relations between the elements of B.
The element v^2n is central in B, while the other generators satisfy the relations
zu = ω^-(k+1)√(ω) uz, az = ω^-(k+1)√(ω) za,
au-ua = -ω^{(1/2)n(n-1)(2k+1) + k(n-1)}∏_i=1^n-1\binom{2i}{2}_{√(ω)^{-1}} z^n.
By Lemma <ref>, we have
v^2n u = ω^-2n(k+1) u v^2n + ω^-(2n-1)(k+1)\binom{2n}{1}_{√(ω)} z v^2n-1 = u v^2n,
where the second term vanishes since √(ω) is a primitive (2n)th root of unity. Therefore v^2n is central in A, and hence also in B. The fact that zu = ω^-(k+1)√(ω) uz follows from Lemma <ref>.
For the skew-commutativity relation between z and a, we calculate
az = (x^n-1· v^2n-1) z
= x^n-1· (v^2n-1 z)
= ( ω^-(k+1))^2n-1 x^n-1· (z v^2n-1)
= ( ω^-(k+1))^-1 (g^n-1· z) (x^n-1· v^2n-1)
= ( ω^-(k+1))^-1ω^-(2k+1) za
= ω^-(k+1) za.
For the final relation, we have
au-ua = (x^n-1· v^2n-1) u - u (x^n-1· v^2n-1)
= x^n-1· (v^2n-1 u - ω^-(n-1)(k+1) u v^2n-1)
= x^n-1·(-ω^-(2n-2)(k+1)2n-11 zv^2n-2)
= -ω^2(k+1) x^n-1· ( z v^2(n-1))
= -ω^2(k+1)ω^(n-1)(2k+1) z (x^n-1· v^2(n-1))
= - z (x^n-1· v^2(n-1)).
The result now follows from Proposition <ref>.
Since B has 4 generators which satisfy relations similar to those in a quantum polynomial ring and A^x should have GK dimension 3, one expects the generators of B to satisfy another homogeneous relation and, indeed, this is the case. We first require a technical lemma, whose proof is similar to that of Lemma <ref>, so we omit it.
For 0 ⩽ m ⩽ n-1, set
V_m = { u^2m+i z^n-m-1-i v^2n+i| 0 ⩽ i ⩽ n-m-1 }.
Then x · V_m ⊆ V_m+1.
With V_m as above, we have a v^2n-1∈ V_0, and the coefficient of u^n-1 v^3n-1 in a v^2n-1 is
ω^1/2(n-1)(n-2)(k+1)∏_i=m^n-1m1ω.
Consider the subspaces U_m from Lemma <ref> with p=0, q=0, and r=2n-1, so that
U_m { u^i z^m-i v^i-1+2(n-m)| 0 ⩽ i ⩽ m }, 0 ⩽ m ⩽ n-1,
satisfy x · U_m ⊆ U_m+1. In particular, U_0 = v^2n-1 and U_n-1 v^2n-1 = V_0, so that
a v^2n-1 = (x^n-1· v^2n-1) v^2n-1∈ U_n-1 v^2n-1 = V_0.
To determine the coefficient of u^n-1 v^3n-1 in a v^2n-1, it suffices to determine the coefficient of u^n-1 v^n in a.
This coefficient was calculated in (<ref>), and it is straightforward to check that it simplifies to the coefficient given in the statement of the lemma.
We are now able to write down an algebraic relation between the generators of B:
We have
a^2 = ω^2(k+1)( ∏_m=1^n-1m1ω^-1)^2 u^2n-2 v^2n.
We have
a^2 = a x^n-1· v^2n-1
= x^n-1·( (g^-(n-1)· a) v^2n-1)
= ω^-(k+1) x^n-1· (a v^2n-1).
Now, a v^2n-1∈ V_0 by Lemma <ref>, and hence x^n-1 (a v^2n-1) ∈ V_n-1 = { u^2n-2 v^2n}. To determine the coefficient of u^2n-2 v^2n, we only need to keep track of a single term at each step, as in the proof of Lemma <ref>. Writing η for the scalar (<ref>), we have
x^n-1· (a v^2n-1)
= x^n-1· (η u^n-1 v^3n-1 + other terms)
= η(∏_m=n-1^2n-3ω^m(k+1)) ( ∏_m=2n+1^3n-1λ_m ) u^2n-2 v^2n
= ηω^1/2(n-1)(n-4)(k+1)( ∏_m=1^n-1m1ω^-1) u^2n-2 v^2n
= ω^3(k+1)( ∏_m=1^n-1m1ω)^2 u^2n-2 v^2n.
The claim now follows.
With these results in hand, we can give a presentation for A^x. The following result shows that the algebra whose skew-commutativity relations are the same as those satisfied by the generators of B has good properties.
Consider the algebra
⟨ a,b,c,d ⟩/⟨ ad-da, bd-db, cd-dc, ca-ω^k √(ω) ac, bc-ω^k √(ω) cb, ba-ab-c^n ⟩
with grading given by declaring that the respective degrees of a,b,c,d are 2n-1,1,2,2n. This algebra is a noetherian domain which is AS regular of dimension 4.
It is straightforward to show that this algebra is the iterated Ore extension
𝕜[a][c;σ][b;τ,δ][d]
where
σ : 𝕜[a] →𝕜[a], a ↦ω^k √(ω) a
τ: 𝕜[a][c;σ] →𝕜[a][c;σ], a ↦ a, c ↦ω^k √(ω) c
are ring automorphisms, and δ : 𝕜[a][c;σ] →𝕜[a][c;σ] is the τ-derivation satisfying
δ(a) = c^n, δ(c) = 0.
All of the claims now follow.
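Concretely, writing out the Ore relations: in 𝕜[a][c;σ] we have ca = σ(a)c = ω^k √(ω) ac, and in the next extension br = τ(r)b + δ(r) for r ∈𝕜[a][c;σ], so that ba = ab + c^n and bc = ω^k √(ω) cb, while d is adjoined as a central variable. This recovers exactly the defining relations of the algebra in the statement of the lemma.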
Consider the algebra from the preceding lemma. Then the element f ≔ a^2 - b^2(n-1)d is a normal nonzerodivisor which satisfies
af = fa, bf=fb, cf=ω^2k+1fc.
Upon noting that
ba^2 = (ab+c^n)a = a(ab+c^n) + (ω^k √(ω))^n a c^n = a^2b + ac^n - ac^n = a^2 b,
the first two identities follow immediately. For the last identity, we have
cf = c(a^2 - b^2(n-1) d) = (ω^k √(ω))^2 a^2 c - (ω^k √(ω))^-(2(n-1)) b^2(n-1) d c = ω^2k+1 (a^2 - b^2(n-1) d)c
= ω^2k+1 fc.
Finally, the fact that f is a nonzerodivisor follows from the algebra being a domain.
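Note also that f is homogeneous with respect to the grading above: deg(a^2) = 2(2n-1) = 4n-2 and deg(b^2(n-1) d) = 2(n-1) + 2n = 4n-2.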
Finally, we can give a presentation for A^x:
We have the following presentation for A^x:
A^x ≅⟨ a,b,c,d ⟩/⟨ ad-da, bd-db, cd-dc, ca-ω^k √(ω) ac, bc-ω^k√(ω) cb, ba-ab-c^n, a^2-b^2(n-1) d ⟩.
Call the right-hand side C. We define a map θ : ⟨ a,b,c,d ⟩→ A^x as follows. First declare θ(b) = u and θ(d) = v^2n. We let θ(a) = γ x^n-1· v^2n-1, where γ∈^× is chosen so as to ensure that θ(a^2 - b^2(n-1)d) = 0, which is possible by Proposition <ref>. We then define θ(c) = η z where, in light of Corollary <ref>, our choice of η∈^× gives θ(ba-ab-c^n) = 0. Lemma <ref> and Proposition <ref> ensure that we obtain a well-defined ring homomorphism θ : C → A^x, which is surjective by Proposition <ref>.
The result will follow if we can show that θ is injective. First note that A^x ⊇⟨ u, z, v^2n⟩ which, using arguments similar to those in Theorem <ref>, is a skew polynomial ring in three variables. It follows that GKdim A^x ⩾ 3. On the other hand, since C is obtained by factoring an algebra of GK dimension 4 by a regular normal element, we must have GKdim C ⩽ 3 by <cit.>. By the same reference again, the kernel of θ must have GK dimension 0, and hence be finite-dimensional. If the kernel were nonzero, then this would imply the existence of at least one additional relation in A^x of degree at most 4n-3.
To rule out this possibility, assign bidegrees (n-1,n), (1,0), (1,1), and (0,2n) to the elements a,b,c, and d, which are simply the (u,v)-bidegrees of their images in A^x. Using the relations in C, we can write any monomial in C in the form a^i b^j c^ℓ d^m, and the only pairs of monomials of degree at most 4n-3 which have the same bidegree are of the form a b^i+1 c^j and b^i c^n+j. Any tentative relation must say that these are equal up to scaling, and deleting copies of b and c (since we are in a domain), says that ab is a scalar multiple of c^n, which is absurd. It follows that there are no additional relations, so the kernel of θ is trivial, and we obtain the desired isomorphism.
In particular, using a similar argument as in <cit.> we have the following:
A^x is AS Gorenstein.
Since A^x is a factor of an iterated Ore extension by a regular normal element, it is Auslander–Gorenstein by <cit.> and <cit.>. Then, since A^x has finite GK dimension and is connected, it is also AS Gorenstein by <cit.>.
§ PROPERTIES OF THE INVARIANT RING A^T
We end this paper by establishing a few properties of the full invariant rings A^T, including when they are commutative, when they are AS regular, and when they are AS Gorenstein. As in the previous section, we need to distinguish between the cases when √(ω) has order n and when √(ω) has order 2n.
§.§ The case when √(ω) has order n
All results in this subsection will assume that √(ω) has order n. As explained previously, we seek to calculate the invariant ring A^T as A^T = (A^x)^g. To this end, we first write down the action of g on the (skew) polynomial ring A^x = 𝕜[u,z,v^n]. Direct calculation shows that the matrix of g on this ring (with the variables in the given order) is
[ ω^k+1 0 0; 0 ω^2k+1 0; 0 0 1 ].
For the remainder of this subsection, let G be the group generated by the matrix (<ref>).
We have the following basic result which, since A^x is only “mildly” noncommutative, is not too surprising.
A^T is commutative.
Since the action of g on A^x is diagonal, it suffices to consider which monomials in u,z, and v^n are invariant under g. By (<ref>), a monomial u^i z^j v^nℓ∈ A^x is g-invariant if and only if ω^((k+1) i + (2k+1)j) = 1, which happens if and only if (k+1) i + (2k + 1)j ≡ 0 (mod n). Consider any two such monomials u^i z^j v^nℓ and u^i' z^j' v^nℓ'∈ A^T, so that
(k+1)i + (2k+1)j ≡ 0 (mod n)
(k+1)i' + (2k+1)j' ≡ 0 (mod n).
If we multiply the first congruence by j' and the second by j and then subtract, we obtain (k+1)(ij' - ji') ≡ 0 n. Similarly, multiplying the first by i' and the second by i and subtracting yields (2k + 1)(i j' - j i') ≡ 0 n. Since (k+1, 2k+1) = 1, we conclude that ij' - ji' ≡ 0 n. Writing ε_k = ω^-(k+1), which is a primitive nth root of unity, we obtain
(u^i z^j v^nℓ)(u^i' z^j' v^n ℓ') = u^i z^j (u^i' z^j' v^n ℓ') v^n ℓ
= ε_k^-ji' u^i (u^i' z^j' v^n ℓ') z^j v^n ℓ
= ε_k^ij' - ji' (u^i' z^j' v^n ℓ') (u^i z^j v^n ℓ)
= (u^i' z^j' v^n ℓ') (u^i z^j v^n ℓ),
and so A^T is commutative.
Since G is a group acting on the skew polynomial ring A^x by <cit.>, we see that A^T is AS regular if and only if G is generated by quasi-reflections. Indeed, since A^T is commutative, the AS regularity of A^T is equivalent to A^T being a (commutative) polynomial ring in three variables. The quasi-reflections on A^x are also well-understood by <cit.>. In particular, since g acts diagonally, the only elements which act as quasi-reflections on A are the (classical) reflections on A_1 <cit.>.
It remains to understand the classical reflections in G.
Let d = (k + 1, n) and e = (2k + 1, n).
* The group G is quasi-reflection-free if and only if k+1 and 2k+1 are both coprime to n if and only if de = 1.
* The group G is generated by quasi-reflections if and only if (k+1)(2k+1) ≡ 0 n if and only if de = n.
* The group G contains, but is not generated by, quasi-reflections if and only if 1 < de < n.
As discussed above, the matrix giving the action of g on the generators of A^x is
[ ω^k+1 0 0; 0 ω^2k+1 0; 0 0 1 ].
By an abuse of notation, we shall also refer to this matrix as g. Clearly, for any m ∈, the diagonal entries of g^m are ω^m(k+1), ω^m(2k+1), and 1. Hence, the reflections in G are exactly the elements g^m such that exactly one of ω^m(k+1) and ω^m(2k+1) is equal to 1, where 1 ⩽ m < n.
(1) If both k+1 and 2k + 1 are coprime to n, then ω^m(k+1) = 1 if and only if n | m if and only if ω^m(2k+1) = 1. Hence, G is quasi-reflection-free. Conversely, suppose that at least one of k + 1 and 2k + 1 is not coprime to n. Without loss of generality, suppose that d ≠ 1. Note that since k+1 and 2k + 1 are coprime, therefore d is not a divisor of 2k + 1. Then letting m = n/d, we see that ω^m(k+1) = 1 while ω^m(2k+1)≠ 1, so g^n/d is a quasi-reflection and so G is not quasi-reflection-free.
(2) Observe that
g^n/d = [ 1 0 0; 0 ω^n(2k+1)/d 0; 0 0 1 ] and g^n/e = [ ω^n(k+1)/e 0 0; 0 1 0; 0 0 1 ].
Further observe that every quasi-reflection in G must be a power of g^n/d or g^n/e. Hence, if G is generated by quasi-reflections, then we must actually have G = ⟨ g^n/d, g^n/e⟩.
Therefore, G is generated by quasi-reflections if and only if g = g^ni/dg^nj/e for some i,j ∈. This is true if and only if there exist i,j ∈ such that ω^k+1 = ω^nj(k+1)/e and ω^2k+1 = ω^ni(2k+1)/d. But this happens if and only if (k + 1,n) = (n(k+1)/e, n) and (2k + 1,n) = (n(2k+1)/d, n). Since (k+1, n) = d and (2 k + 1, n) = e, this happens if and only if (d, n) = (nd/e, n) and (e, n) = (ne/d, n).
For any integer m and any prime p, let ν_p(m) denote the largest integer k such that p^k | m. We have that (d, n) = (nd/e, n) if and only if ν_p(d) < ν_p(n) implies that ν_p(e) = ν_p(n) for all primes p. Therefore, G is generated by quasi-reflections if and only if ν_p(d) < ν_p(n) implies that ν_p(e) = ν_p(n) and ν_p(e) < ν_p(n) implies that ν_p(d) = ν_p(n). That is, any prime factor of n which does not occur in d must occur with full multiplicity in e. But since k+1 and 2k + 1 are coprime, so are d and e. Therefore, this condition is equivalent to de = n, which is equivalent to (k + 1)(2k + 1) ≡ 0 n.
(3) Since we have seen that case (1) occurs if and only if de = 1 and case (2) occurs if and only if de = n, the remaining case is when de is neither 1 nor n. Since k+1 and 2k + 1 are coprime, this is equivalent to 1 < de < n.
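To illustrate the trichotomy, take n = 15: for k = 0 we have d = e = 1, so de = 1 and G is quasi-reflection-free; for k = 2 we have d = (3,15) = 3 and e = (5,15) = 5, so de = 15 = n and G is generated by quasi-reflections; and for k = 1 we have d = (2,15) = 1 and e = (3,15) = 3, so de = 3 and G contains, but is not generated by, quasi-reflections.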
Let d = (k + 1, n) and e = (2k + 1, n).
* If de=1, so that k+1 and 2k+1 are both coprime to n, then A^T is the coordinate ring of the product of affine 1-space with a (possibly non-Gorenstein) type 𝔸 singularity. Explicitly,
A^T ≅𝕜[x,y]^1/n(1,r) [t],
where r = (2k+1) (k+1)^-1 mod n and
1/n(1,r) = ⟨[ ω 0; 0 ω^r ]⟩.
In particular, A^T is AS Gorenstein if and only if 3k+2 ≡ 0 (mod n).
* If de =n (equivalently, (k+1)(2k+1) ≡ 0 (mod n)), then
A^T = 𝕜[u^n/d, z^n/e,v^n]
is a polynomial ring.
* If 1 < de < n , then A^T is the coordinate ring of the product of affine 1-space with a (possibly non-Gorenstein) type 𝔸 singularity. Moreover, it is AS Gorenstein if and only if
e (k+1) + d (2k+1) ≡ 0 (mod n).
(1) Let i be the inverse of k+1, modulo n. Then the ith power of the matrix (<ref>) is
[ ω 0 0; 0 ω^r 0; 0 0 1 ]
and, since i and n are coprime, this matrix generates G. It follows that, using the notation of <cit.>,
A^T = (A^x)^G = 𝕜_ε_k[u,z]^1/n(1,r)[v^n],
where ε_k = ω^-(k+1)√(ω) and zu = ε_k uz. This ring is necessarily commutative by Lemma <ref>, so it follows that
𝕜_ε_k[u,z]^1/n(1,r)≅𝕜[x,y]^1/n(1,r),
where 𝕜[x,y] is a commutative polynomial ring. The exact form of the invariant ring 𝕜[x,y]^1/n(1,r) can be understood by <cit.>, for example; it is the coordinate ring
of the product of affine 1-space with a (possibly non-Gorenstein) type 𝔸 singularity. By <cit.>, this ring is known to be AS Gorenstein if and only if G is a subgroup of SL(3,𝕜), and this happens if and only if 3k+2 ≡ 0 (mod n).
(2) Since (k+1)(2k+1) ≡ 0 n, G is generated by quasi-reflections by Lemma <ref>, and hence A^T has finite global dimension by <cit.>. By Lemma <ref>, A^T is also commutative, and hence it is a polynomial ring. Clearly v^n is an invariant and, since ω^k+1 and ω^2k+1 have respective orders n/d and n/e, it follows that u^n/d and z^n/e are also invariants, and one can show that these three elements generate A^T.
(3) Finally, suppose 1 < de < n. We calculate the invariant ring A^T = (A^x)^G via a series of intermediate steps. First note that the matrix representing the action of g^n/e on A^x is
h_1 [ ω^(k+1) n/e 0 0; 0 1 0; 0 0 1 ],
where ω^(k+1) n/e has order e. Therefore (A^x)^h_1 = [u^e,z,v^n], where u^e and z skew-commute, at worst. Now, the matrix representing the action of g on (A^x)^h_1 is
[ ω^e(k+1) 0 0; 0 ω^2k+1 0; 0 0 1 ],
and hence the matrix of g^n/de on (A^x)^h_1 is
h_2 [ 1 0 0; 0 ω^2k+1/e n /d 0; 0 0 1 ],
where ω^2k+1/e n /d has order d. Hence ((A^x)^h_1)^h_2 = [u^e,z^d,v^n]. Finally, the matrix of g restricted to this algebra is
[ ω^e(k+1) 0 0; 0 ω^d(2k+1) 0; 0 0 1 ],
which has order n/de since both ω^e(k+1) and ω^d(2k+1) have order n/de. It follows that the cyclic group generated by this matrix acts as a quasi-reflection-free group on ((A^x)^h_1)^h_2 = 𝕜[u^e,z^d,v^n] and so, in principle, one can give an explicit generating set for the (necessarily commutative) invariant ring A^T = 𝕜[u^e,z^d,v^n]^g. In particular, it is the coordinate ring of the product of affine 1-space with some (possibly non-Gorenstein) type 𝔸 singularity. Finally, by <cit.>, it is AS Gorenstein if and only if the matrix (<ref>) has determinant 1; equivalently,
e (k+1) + d(2k+1) ≡ 0 (mod n),
as claimed.
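By way of illustration, for n = 5 and k = 1 we are in case (1), with r = (2k+1)(k+1)^-1 ≡ 3 · 3 ≡ 4 (mod 5) and 3k+2 = 5 ≡ 0 (mod 5), so A^T ≅𝕜[x,y]^1/5(1,4)[t] is AS Gorenstein (note that the generator of 1/5(1,4) has determinant 1). On the other hand, for n = 15 and k = 1 we are in case (3) with d = 1 and e = 3, and e(k+1) + d(2k+1) = 6 + 3 = 9 ≢ 0 (mod 15), so A^T is not AS Gorenstein.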
Proposition <ref> (2) shows that it is possible for A^T to be AS regular, which is not the case when we replace T by a group algebra or its dual <cit.>.
In particular, if n is odd and k=(n-1)/2, we obtain
A^T = 𝕜[u^n,z,v^n].
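Indeed, for k = (n-1)/2 we have k+1 = (n+1)/2 and 2k+1 = n, so (k+1)(2k+1) ≡ 0 (mod n) and Proposition <ref> (2) applies with d = ((n+1)/2, n) = 1 and e = (n,n) = n, giving A^T = 𝕜[u^n, z, v^n].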
By the discussion after Corollary <ref>, in this case the action of T has trivial homological determinant. Moreover, from the PBW basis (<ref>), it is clear that A is free as an A^T-module: explicitly, it has rank n^2 and basis
{ u^i v^ℓ| 0 ⩽ i,ℓ < n }.
In particular, there exist elements of _A^T(A) of negative degree, and so <cit.> does not hold if we omit the semisimple hypothesis.
Another interesting feature of this example is that, by <cit.>, the invariant ring A^T is equal to the centre of A. It would be interesting to investigate for which down-up algebras the centre can be obtained as the subring of invariants under the action of some Hopf algebra.
The condition in part (3) of Proposition <ref> actually holds in all cases, and gives a convenient criterion to determine when A^T is AS Gorenstein:
A^T is AS Gorenstein if and only if
(k+1) (2k+1,n) + (2k+1) (k+1,n) ≡ 0 (mod n).
We consider the three cases of Lemma <ref>. We have already seen that the result holds for case (3), and the result holds for case (1) since, in this setting,
(k+1) (2k+1,n) + (2k+1) (k+1,n) = (k+1) + (2k+1) = 3k+2.
In case (2), A^T is always AS Gorenstein, so it remains to show that if (k+1)(2k+1) ≡ 0 n then (<ref>) holds. If k+1 = n or 2k+1 = n then this is clear, so suppose not, in which case (k+1,n) and (2k+1,n) lie strictly between 1 and n. Since
k+1 = (k+1,n) (k+1,n)/n,
from (k+1)(2k+1) ≡ 0 n we obtain
(2k+1) (k+1,n) (k+1,n)/n≡ 0 n.
But (k+1,n)/n and n are coprime, so we deduce that
(2k+1) (k+1,n) ≡ 0 n.
Similarly, one can show that
(k+1) (2k+1,n) ≡ 0 n,
from which the claim follows.
We note that the homological determinant of the action of T on A is trivial precisely when k=(n-1)/2 (Lemma <ref>), and then A^T is AS Gorenstein by Proposition <ref> (2). In particular, <cit.> holds for these examples.
§.§ The case when √(ω) has order 2n
We finally establish some properties of the full invariant rings A^T when √(ω) has order 2n, a hypothesis which we will not repeat. We will typically compute this ring as the subring of g-invariant elements of the ring given in Proposition <ref>. Accordingly, we will need to determine the action of g on the generators of this ring. Since a,b,c, and d are, respectively, scalar multiples of x^n-1· v^2n-1, u, z, and v^2n, it is straightforward to check that the matrix of g on this algebra is given by
[ ω^-(k+1) 0 0 0; 0 ω^k+1 0 0; 0 0 ω^2k+1 0; 0 0 0 1 ].
Many properties of A^T can now be determined using standard techniques, since we can view it as the subring of the AS Gorenstein ring A^x consisting of elements which are invariant under the cyclic group generated by g.
In Lemma <ref> we saw that, when has order n, the invariant ring A^T is always commutative. On the other hand, in our current setting we have the complete opposite behaviour:
A^T is not commutative.
Throughout, let ε_k = ω^-(k+1)√(ω), which is a primitive (2n)th root of unity, and view A^x via its presentation in Proposition <ref>. From this, it is clear that c^n is g-invariant. Moreover, the element b^2k+1 c^n-(k+1) is g-invariant, since
(2k+1)(k+1) + (n-(k+1))(2k+1) ≡ 0 n,
where we note that the exponent of c is non-negative, and that the exponent of b is odd. These elements do not commute, since
b^2k+1 c^n-(k+1)· c^n = ε_k^-n(2k+1) c^n · b^2k+1 c^n-(k+1) = - c^n · b^2k+1 c^n-(k+1),
and hence A^T is not commutative.
We now seek to determine situations when A^T is AS Gorenstein. We have two main tools for this: we can apply Theorem <ref> after determining when the homological determinant of g on A^x is trivial, or we can apply Stanley's Theorem. We note that the hypotheses of Stanley's Theorem are always met for the examples of interest to us, since A^T = (A^x)^g is the ring of invariants for a finite group acting on an AS Gorenstein ring, and therefore it is AS Cohen–Macaulay <cit.>. Moreover, it is a subring of the PI domain A, and hence also has these properties.
To apply either Theorem <ref> or Stanley's Theorem, we need to know the trace of g on A^x.
The trace of g^m on A^x is
Tr_A^x(g^m,t) = (1-ω^-2m(k+1) t^4n-2)/((1-ω^m(k+1)t)(1-ω^m(2k+1)t^2)(1-ω^-m(k+1)t^2n-1)(1-t^2n)).
This is a direct application of <cit.>.
The following example demonstrates an application of these techniques:
Let n ⩾ 2 and k=n-1. Then the matrix of g on A^x is
[ 1 0 0 0; 0 1 0 0; 0 0 ω^n-1 0; 0 0 0 1 ],
and so a simple computation using Molien's Theorem and Lemma <ref> gives
_A^T(t) = 1-t^4n-2/(1-t)(1-t^2n-1)(1-t^2n)^2.
Direct computation then gives
_A^T(t^-1) = -t^2(n+1)_A^T(t),
so Stanley's Theorem tells us that A^T is AS Gorenstein. It is easy to see that the invariant ring is generated by a,b and d (the invariant c^n satisfies c^n = ba-ab, so would be redundant). With a bit more work, one can show that A^T is isomorphic to a polynomial ring over the down-up algebra A(0,1) (which is AS regular), modulo the hyperplane relation a^2 = b^2(n-1) d.
Before determining when A^T is AS Gorenstein, we first investigate whether it is possible for it to have the stronger property of being AS regular. In Proposition <ref> (1) we saw that, when has order n, it was possible for the invariant ring A^T to be AS regular. However, when has order 2n this never happens:
A^T is not AS regular.
Since A^T = (A^x)^g is the ring of invariants of a ring of GK dimension 3 under the action of a finite group, it has GK dimension 3. Therefore, if it were AS regular, then it would have 2 or 3 generators, by <cit.>. Hence, to prove the result, it suffices to show that a purported generating set of size 3 cannot generate the entire invariant ring.
If k=n-1, then Example <ref> shows that A^T is not AS regular. (Note that our proposed method of proof does not work here since, in this case, a minimal generating set has three elements.)
Now suppose that 0 ⩽ k ⩽ n-2 and, for contradiction, assume that A^T is AS regular. Since g is diagonal, we may assume that a generating set for A^T consists of monomials. Therefore, using (<ref>), a generating set must contain ab and d, as well as b^i, where i = n/(n,k+1). Now, as in Lemma <ref>, the element b^2k+1 c^n-k-1 is also T-invariant; we claim that it cannot be generated by ab,d, and b^i. To see this, note that (using the notation of Proposition <ref>), our claimed generators have respective bidegrees
(n,n), (0,2n), (i,0),
while b^2k+1 c^n-k-1 has bidegree (2k+1, 2(n-k-1)). Moreover, we have
0 ⩽ k ⩽n-2/2 ⇒ 1 ⩽ 2k+1 ⩽ n-1,
n-1/2⩽ k ⩽ n-2 ⇒ 2 ⩽ 2(n-k-1) ⩽ n-1.
Therefore both entries in the bidegree of b^2k+1 c^n-k-1 are positive, and one of them is at most n-1. However, such a bidegree cannot be written as a positive integer linear combination of the bidegrees (<ref>). Therefore ab, d, and b^i do not form a generating set for A^T, so a minimal generating set for A^T contains at least four elements.
To give a first sufficient condition for A^T to be AS Gorenstein, we calculate the homological determinant of g on A^x:
We have
Tr_A^x(g^m,t^-1) = -ω^-m(4k+3) t^4 (1-ω^2m(k+1) t^4n-2)/((1-ω^-m(k+1) t) (1-ω^-m(2k+1) t^2) (1-ω^m(k+1) t^2n-1) (1-t^2n)).
In particular, hdet_A^x(g) = ω^-(4k+3).
This formula for _A^x(g^m,t^-1) follows from simply evaluating the formula in Lemma <ref> at t^-1. The power series expansion of _A^x(g,t) in t^-1 is therefore given by
_A^x(g,t) = -ω^-m(4k+3) t^-4 + lower order terms,
and so _A^x(g) = ω^-(4k+3).
If 4k+3 ≡ 0 (mod n) (which happens only if n is odd), then A^T is AS Gorenstein.
This follows from Theorem <ref>, Corollary <ref>, and Lemma <ref>.
The above result is only sufficient, but not necessary, for A^T to be AS Gorenstein. Indeed, Example <ref> gives an infinite family of examples where A^T is AS Gorenstein but where the action of g on A^x has nontrivial homological determinant.
We attempt to identify further examples of AS Gorenstein invariant rings A^T as follows. If T were semisimple, then <cit.> would apply, and tell us that A^T is AS Gorenstein whenever the action had trivial homological determinant. By Lemma <ref>, this happens if and only if 4k+2 ≡ 0 (mod n). If n ≡ 0 (mod 4) then no value of k satisfies this equation; otherwise we have
k = (n-1)/2 if n is odd, and k = (n-2)/4 or k = (3n-2)/4 if n ≡ 2 (mod 4).
We analyse whether A^T is AS Gorenstein in these cases, despite T not being semisimple, in a series of examples.
Before we begin these examples, we set some notation. Given a finite list of positive integers (possibly with repetitions) S = [s_1, …, s_n], let 𝒫_S(d) denote the size of the set
{ (i_1, …, i_n) | i_k ∈ℤ_+, ∑_k=1^n i_k s_k = d }.
We call these restricted partition functions. By <cit.>, we have
∑_d ⩾ 0𝒫_S(d) t^d = ∏_s ∈ S1/(1-t^s).
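For instance, taking S = [3,6] we have 𝒫_3,6(12) = 3, corresponding to the solutions (4,0), (2,1), (0,2) of 3i_1 + 6i_2 = 12, which is indeed the coefficient of t^12 in 1/((1-t^3)(1-t^6)).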
We begin with the case in (<ref>) where n is odd.
Suppose that n is odd and let k=(n-1)/2. Then the matrix of g^2 (which also generates ⟨ g ⟩ since n is odd) on A^x is
[ ω^-1 0 0 0; 0 ω 0 0; 0 0 1 0; 0 0 0 1 ].
By Molien's Theorem, we have
_A^T(t) = 1/n∑_m=0^n-1_(g^2)^mA^x(t)
= 1/n∑_m=0^n-11-ω^-2mt^4n-2/(1-ω^-m t^2n-1)(1-ω^m t) (1-t^2)(1-t^2n)
= 1/(1-t^2)(1-t^2n)( 1/n∑_m=0^n-11/(1-ω^m t)(1-ω^-m t^2n-1) - 1/n∑_m=0^n-1ω^-2m t^4n-2/(1-ω^m t)(1-ω^-m t^2n-1)).
The left-hand term in the parentheses is the Hilbert series of a (weighted) 𝔸_n-1 singularity, and is therefore known to equal
1-t^2n^2/(1-t^n)(1-t^2n)(1-t^n(2n-1)).
Before evaluating the right-hand term in the parentheses, we first recall the well-known fact that
∑_m=0^n-1ω^jm = n if j ≡ 0 (mod n), and 0 if j ≢ 0 (mod n).
We then have
1/n∑_m=0^n-1ω^-2m t^4n-2/(1-ω^m t)(1-ω^-m t^2n-1)
= 1/n∑_m=0^n-1ω^-2m t^4n-2(∑_i ⩾ 0ω^im t^i ) (∑_j ⩾ 0ω^-jm t^j(2n-1))
= 1/n∑_m=0^n-1ω^-2m t^4n-2∑_ℓ⩾ 0(∑_i+j(2n-1) = ℓω^(i-j-2)m) t^ℓ
= ∑_ℓ⩾ 0∑_i+j(2n-1) = ℓ(1/n∑_m=0^n-1ω^(i-j-2)m) t^ℓ+4n-2
= ∑_ℓ⩾ 0#{ (i,j) | i,j ⩾ 0, i+(2n-1)j=ℓ, i-j ≡ 2 n } t^ℓ+4n-2.
We evaluate the size of the set in this summation by splitting it into two disjoint subsets, and performing various substitutions:
{ (i,j) | i,j ⩾ 0, i+(2n-1)j=ℓ, i-j ≡ 2 n }
= { (i,j) | i > j ⩾ 0, i+(2n-1)j=ℓ, i-j ≡ 2 n }
⊔{ (i,j) | j > i ⩾ 0, i+(2n-1)j=ℓ, i-j ≡ 2 n }
= { (j,p) | j,p ⩾ 0, p+(2n-1)j=ℓ-1, p ≡ 1 n }i=j+1+p
⊔{ (i,q) | i,q ⩾ 0, 2ni+(2n-1)q=ℓ-2n+1, q ≡ -3 n }j=i+1+q
= { (j,r) | j,r ⩾ 0, nr+2nj=ℓ-2 }p = nr+1
⊔{ (i,s) | i,s ⩾ 0, 2ni+n(2n-1)s=ℓ-n(2n-5)-2 }q=ns+n-3.
The size of this set is given by a sum of restricted partition functions:
𝒫_n,2n(ℓ-2) + 𝒫_2n,n(2n-1)(ℓ-n(2n-5)-2).
By (<ref>), we have
1/n∑_m=0^n-1ω^-2m t^4n-2/(1-ω^m t)(1-ω^-m t^2n-1)
= ∑_ℓ⩾ 0𝒫_n,2n(ℓ-2)t^ℓ+4n-2 + ∑_ℓ⩾ 0𝒫_2n,n(2n-1)(ℓ-n(2n-5)-2) t^ℓ+4n-2
= t^4n/(1-t^n)(1-t^2n) + t^n(2n-1)/(1-t^2n)(1-t^n(2n-1))
= t^4n - t^n(2n+3) + t^n(2n-1) - t^2n^2/(1-t^n)(1-t^2n)(1-t^n(2n-1)).
Combining this with (<ref>), we obtain
_A^T(t) = 1/(1-t^2)(1-t^2n)( 1-t^2n^2/(1-t^n)(1-t^2n)(1-t^n(2n-1)) - t^4n - t^n(2n+3) + t^n(2n-1) - t^2n^2/(1-t^n)(1-t^2n)(1-t^n(2n-1)))
= (1-t^4n)(1-t^n(2n-1))/(1-t^2)(1-t^n)(1-t^2n)^2(1-t^n(2n-1))
= 1-t^4n/(1-t^2)(1-t^n)(1-t^2n)^2.
This Hilbert series satisfies
_A^T(t^-1) = -t^n+2_A^T(t),
so Stanley's Theorem tells us that A^T is AS Gorenstein.
The form of the Hilbert series suggests that A^T is generated by four elements, of degrees 2,n,2n,2n, and that there is a single relation between them in degree 4n. It is clear that the elements
a^n, x ab, y b^n, c, d,
are all g-invariant. Moreover, we have
a^n = a · (a^2)^(n-1)/2 = a · (b^2(n-1) d)^(n-1)/2 = x y^n-2 d^(n-1)/2,
from which it follows that x,y,c,d generate A^T. It is straightforward to check that these elements satisfy the relations
yx = xy + c^n y, xc = cx, yc = cy,
and that there is also a relation in degree 4n, namely
x^2 = y^2 d - x c^n.
It follows that A^T is isomorphic to the quotient of a suitable AS regular algebra by a hyperplane relation.
We now turn our attention to the case where n ≡ 2 (mod 4) and k=(n-2)/4 or k=(3n-2)/4. We also make the additional restriction that n ⩾ 3. For n=2, the analysis differs: the case k=(3n-2)/4=1 is already covered by Example <ref>, while the case k=(n-2)/4 = 0 will be treated in the next example.
We note that, in each case, by (<ref>) the matrix of g on A^x has the form
[ ω^-(k+1) 0 0 0; 0 ω^k+1 0 0; 0 0 -1 0; 0 0 0 1 ].
In particular, the order of ω^k+1 will affect properties of the invariant ring.
First suppose that n ≡ 6 (mod 8) and k = (n-2)/4, or n ≡ 2 (mod 8) and k = (3n-2)/4. In this case, it is easy to check that the greatest common divisor of n and k+1 is 2, and hence ω^k+1 has order n/2. By a calculation similar to the one in Example <ref>, one can use Molien's Theorem to show that
_A^T(t) = 1-t^4n/(1-t^4)(1-t^n/2)(1-t^2n)^2.
From this, one checks that
_A^T(t^-1) = -t^n/2+4_A^T(t),
and hence Stanley's Theorem tells us that A^T is AS Gorenstein.
Now instead suppose that n ≡ 2 (mod 8) and k = (n-2)/4, or n ≡ 6 (mod 8) and k = (3n-2)/4. In this case, n and k+1 are coprime, so ω^k+1 has order n. As before, one can calculate
_A^T(t) = 1-t^n+4-t^4n+t^5n+4/(1-t^4)(1-t^n/2+2)(1-t^n)(1-t^2n)^2,
and check that
_A^T(t^-1) = -t^n/2+2_A^T(t).
Again, Stanley's Theorem tells us that A^T is AS Gorenstein.
Finally, we consider the case where n=2 and k=0, which fits into the framework of the previous example with k=n-2/4, but exhibits different behaviour. Here, the matrix of g on A^x has the form
[ -1 0 0 0; 0 -1 0 0; 0 0 -1 0; 0 0 0 1 ],
and there are many more generators than in the previous example (essentially due to the fact that -1 ≡ 1 2.) Another calculation with Molien's Theorem tells us that
_A^T(t) = 1-t^6-t^7-2t^8-t^9+2t^11+2t^12+2t^13+t^14-t^15-t^16-t^17/(1-t^2)(1-t^3)(1-t^4)^3(1-t^5),
which does not satisfy the condition in Stanley's Theorem, and so A^T is not AS Gorenstein.
The above example is particularly interesting because it gives an example of a non-semisimple Hopf algebra acting with trivial homological determinant on an AS regular algebra for which the invariant ring is not AS Gorenstein. This shows that the semisimple hypothesis is essential in <cit.>.
We summarise the results from the preceding examples below.
Suppose that T acts on A, and that √(ω) has order 2n. Then A^T is AS Gorenstein if n and k satisfy the following relationships:
* n ⩾ 2 and k = n-1;
* 4k+3 ≡ 0 (mod n) (which happens only if n is odd);
* n ⩾ 3 and 4k+2 ≡ 0 (mod n).
Additional computations tell us that there are many cases which are not covered by the above result for which A^T is AS Gorenstein. The following table indicates which values of n and k give rise to AS Gorenstein invariant rings A^T, which were determined using Stanley's Theorem. Check marks correspond to parameter values for which A^T is AS Gorenstein; circled check marks correspond to those which are covered by Theorem <ref>.
We draw attention to a few features of this table. In Theorem <ref>, the only sufficient condition for A^T to be AS Gorenstein when n ≡ 0 (mod 4) is k=n-1. Indeed, when n is 4 or 8, these are the only situations in which A^T is AS Gorenstein. However, when n=12, there are many values of k for which A^T is AS Gorenstein, and so this behaviour is not linked to being divisible by 4. The final row of the table suggests that, when n is a power of 2, the only AS Gorenstein invariant ring occurs when k=n-1. Indeed, the same behaviour holds for n=32. It might be possible to prove this purely combinatorially, using Stanley's Theorem.
Moreover, when n is an odd prime, the table suggests that only three values of k give rise to AS Gorenstein invariant rings, and these are the unique values of k arising from each of (1), (2), and (3) of Theorem <ref>. This behaviour continues up to n=31. Again, one might be able to prove that this is always the case using combinatorics.
On the other hand, we do not have an understanding of why the cases corresponding to the uncircled check marks are AS Gorenstein, as Theorem <ref> does not apply to these. It would be interesting to investigate these cases more thoroughly.
| Throughout, we let 𝕜 denote an algebraically closed field of characteristic 0. All algebras will be associative 𝕜-algebras.
If G is a finite subgroup of GL(n,𝕜) and A = 𝕜[x_1, …, x_n] is a polynomial ring, then the study of invariants of the action of G on A is a deep and beautiful subject with connections to combinatorics, algebraic geometry, and representation theory. In particular, there are many classical results which describe properties of the invariant ring A^G in terms of properties of G. For example, the Shephard–Todd–Chevalley Theorem states that A^G is a polynomial ring if and only if G is generated by quasi-reflections, while Watanabe's Theorem states that, if G contains no quasi-reflections, then A^G is Gorenstein if and only if G ⩽ SL(n,𝕜).
In the past three decades, there has been a strong research effort to try to generalise many of these classical results to a noncommutative setting. The approach that many authors have taken, and which we shall also follow, is to replace the polynomial ring by an Artin–Schelter regular algebra, which may be viewed as a “noncommutative polynomial ring”. Additionally, noncommutative algebras often have “quantum symmetries” which commutative algebras lack <cit.>, so one should allow for actions by finite-dimensional Hopf algebras H, rather than just group algebras. For such a Hopf action on an algebra, there exists a suitable notion of an invariant ring, which we denote A^H.
This perspective has proved to be fruitful, particularly in the case where one additionally assumes that the Hopf algebra H is semisimple. For example, Kirkman, Kuzmanovich, and Zhang have established an analogue of the Shephard–Todd–Chevalley Theorem for skew polynomial rings <cit.>, and an analogue of Watanabe's Theorem for actions of Hopf algebras on Artin–Schelter Gorenstein algebras <cit.>. Many other results have satisfactory generalisations to this noncommutative setting; we refer the interested reader to the survey of Kirkman <cit.> for a thorough overview.
However, much less is known when we omit the semisimple hypothesis from H, which many general results in the area require. For example, Molien's Theorem determines the Hilbert series of the invariant ring A^H using representation theory, but this result is false in general if H is not assumed to be semisimple.
In <cit.>, the authors studied examples of non-semisimple Hopf algebras on Artin–Schelter regular algebras and provided a list of differences between the semisimple and non-semisimple setting <cit.>. However, these examples exist only in positive characteristic, and so it is unclear if the novel behaviour which they exhibit is also influenced by the characteristic of the field, as is the case in the modular representation theory of groups.
Accordingly, in this paper, we study a certain family of actions of non-semisimple Hopf algebras on Artin–Schelter regular algebras where 𝕜 has characteristic 0. In particular, we are interested in how properties of the corresponding invariant rings differ from the semisimple setting. The non-semisimple Hopf algebras which we consider are the Taft algebras, which were originally defined by Taft <cit.> when studying the order of the antipode of a Hopf algebra. These algebras depend on a single parameter n ⩾ 2, and have presentation
T_n = ⟨ x,g | g^n-1, x^n, gx-ω xg⟩,
where ω∈ is a primitive nth root of unity. Actions of Taft algebras and their generalisations have been studied on families of AS regular algebras previously. In <cit.>, the authors showed that the invariant ring of a Taft algebra acting on a quantum plane is always a commutative polynomial ring <cit.>. In <cit.>, actions of generalized Taft algebras on quantum planes were considered: here, the invariant rings are always commutative, and are either a polynomial ring or the coordinate ring of a well-understood singularity <cit.>. Other work in this direction include actions on finite-dimensional algebras <cit.>, quantum generalised Weyl algebras <cit.>, path algebras of quivers <cit.>, and preprojective algebras <cit.>, although these algebras are not AS regular.
Our focus will be actions of Taft algebras on down-up algebras. These algebras were originally defined by Benkart and Roby <cit.>, who were interested in studying their representation theory. We will restrict attention to the subclass of down-up algebras which are noetherian and graded, denoted A(α,β) with α,β∈, β≠ 0. These are -algebras which are generated by two elements subject to two cubic relations, and are AS regular algebras of global dimension 3 <cit.>. Kirkman, Kuzmanovich, and Zhang showed that A(α,β) is rigid in the sense that A^G is not isomorphic to A for any nontrivial subgroup G of automorphisms <cit.>. A similar result holds for group coactions; see <cit.>.
Our first result classifies all possible inner faithful homogeneous actions of Taft algebras on noetherian-down up algebras:
Let n ⩾ 2 and suppose that T_n acts inner faithfully and homogeneously on a down-up algebra A(α,β). Then we have the following possibilities:
* For some 0 ⩽ k ⩽ n-1 and q ∈^×,
g =
[ ω^k+1 0; 0 ω^k ],
x = [ 0 q; 0 0 ],
α = ω^-(k+1) (1 + √(ω)), β = - ω^-2(k+1)√(ω),
where √(ω) denotes a choice of either one of the square roots of ω; or
* For some 0 ⩽ k ⩽ n-1 and r ∈^×,
g =
[ ω^k 0; 0 ω^k+1 ],
x = [ 0 0; r 0 ],
α = ω^k+1 (1 + √(ω^-1)), β = - ω^2(k+1)√(ω^-1),
where √(ω) denotes a choice of either one of the square roots of ω.
Conversely, each of the above parameter choices indeed give rise to an inner faithful action of T_n on an appropriate down-up algebra.
For the remainder of this introduction, let A = A(α,β) and T = T_n be a pair satisfying the conditions of the above theorem. We then seek to determine properties of the invariant rings A^T. By Remark <ref>, it will suffice to consider only case (1) from Theorem <ref>. As a first step, we give a presentation for the subring A^x of A consisting of x-invariant elements. It turns out that the structure of A^x depends crucially on whether √(ω) has order n or order 2n.
Suppose that A and T are as in Theorem <ref> (1).
* (Proposition <ref>) If √(ω) has order n, then A^x is a skew polynomial ring, generated by u, z ≔ vu-ω^-(k+1)uv, and v^n.
* (Proposition <ref>) If √(ω) has order 2n, then A^x is generated by u, z, v^2n and x^n-1· v^2n-1, and is the factor of a four-dimensional AS regular algebra by a homogeneous regular normal element.
We are then able to calculate the invariant ring A^T as the subring of A^x consisting of g-invariant elements. This allows us to exploit the well-understood properties of actions of finite groups on Artin–Schelter Gorenstein rings. Again, the order of the root of unity has a substantial effect on the properties of A^T.
Suppose that A and T are as in Theorem <ref> (1), and that √(ω) has order n.
* (Lemma <ref>) A^T is commutative.
* (Proposition <ref>) A^T is a polynomial ring if and only if (k+1)(2k+1) ≡ 0 (mod n). Otherwise, it is the coordinate ring of the product of affine 1-space with a (possibly non-Gorenstein) type 𝔸 singularity.
* (Corollary <ref>) A^T is AS Gorenstein if and only if
(k+1) (2k+1,n) + (2k+1) (k+1,n) ≡ 0 (mod n).
Suppose that A and T are as in Theorem <ref> (1), and that √(ω) has order 2n.
* (Lemma <ref>) A^T is not commutative;
* (Lemma <ref>) A^T is not AS regular; and
* (Theorem <ref>) A^T is AS Gorenstein if n and k satisfy the following relationships:
(a) n ⩾ 2 and k = n-1;
(b) 4k+3 ≡ 0 (mod n) (which happens only if n is odd);
(c) n ⩾ 3 and 4k+2 ≡ 0 (mod n).
The previous two theorems show that A^T can exhibit a wide variety of behaviours, often quite different from the semisimple setting. In particular, down-up algebras are rigid with respect to actions of Taft algebras, but are not “strongly rigid”, in the sense that it is possible for A^T to be Artin–Schelter regular, which is new behaviour. Additionally, there is an example where the homological determinant of the action is trivial but A^T is not Artin–Schelter Gorenstein (Example <ref>), which shows that the semisimple hypothesis is essential in <cit.>. Moreover, Example <ref> exhibits an action with trivial homological determinant for which the so-called Auslander map is not an isomorphism. This shows that one also requires the semisimple hypothesis in <cit.>.
Organisation of this paper. In Section 2, we recall some important definitions and results. We classify all possible inner faithful, homogeneous actions of Taft algebras on down-up algebras in Section 3. In Section 4, we establish some useful identities for subsequent calculations which, in Section 5, allows us to determine the subring of x-invariant elements. Finally, in Section 6 we use our understanding of the x-invariants to understand the full subring of invariants, and some of its properties.
Acknowledgements. S. Crawford thanks the Heilbronn Institute for Mathematical Research for their financial support during this work. R. Won was partially supported by Simons Foundation grant #961085. We also thank Benny Liber whose undergraduate research project with J. Gaddis motivated some of the ideas herein. | null | null | null | null | null |
http://arxiv.org/abs/2409.17571v1 | 20240926063536 | Incomplete quantum oblivious transfer with perfect one-sided security | [
"David Reichmuth",
"Ittoop Vergheese Puthoor",
"Petros Wallden",
"Erika Andersson"
] | quant-ph | [
"quant-ph"
] |
IPaQS, Heriot-Watt University, Edinburgh, UK
School of Computing, Newcastle University, Newcastle upon Tyne, UK
School of Informatics, University of Edinburgh, Edinburgh, UK
IPaQS, Heriot-Watt University, Edinburgh, UK
§ ABSTRACT
Oblivious transfer is a fundamental cryptographic primitive which is useful for secure multiparty computation. There are several variants of oblivious transfer. We consider 1-out-of-2 oblivious transfer, where a sender sends two bits of information to a receiver. The receiver only receives one of the two bits, while the sender does not know which bit the receiver has received. Perfect quantum oblivious transfer with information-theoretic security is known to be impossible. We aim to find the lowest possible cheating probabilities. Bounds on cheating probabilities have been investigated for “complete" protocols, where if both parties follow the protocol, the bit value obtained by the receiver matches the sender's bit value. We instead investigate incomplete protocols, where the receiver obtains an incorrect bit value with probability p_f. We present optimal non-interactive protocols where Alice's bit values are encoded in four symmetric pure quantum states, and where she cannot cheat better than with a random guess. We find the protocols such that for a given p_f, Bob's cheating probability p_r is as low as possible, and vice versa. Furthermore, we show that non-interactive quantum protocols can outperform non-interactive classical protocols, and give a lower bound on Bob's cheating probability in interactive quantum protocols. Importantly for optical implementations, our protocols do not require entanglement nor quantum memory.
Incomplete quantum oblivious transfer with perfect one-sided security
Erika Andersson
September 28, 2024
=====================================================================
§ INTRODUCTION
1-out-of-2 oblivious transfer (OT) is a cryptographic primitive where a sender Alice holds two bits, and a receiver Bob receives one of them. The receiver should not know the other bit, and the sender should not know which bit the receiver obtained. A dishonest sender Alice will attempt to find out which bit Bob obtained, and a dishonest receiver Bob will attempt to learn both bit values. Apart from 1-out-of-2 oblivious transfer <cit.>, there are other variants. In Rabin oblivious transfer <cit.>, for example, a single bit is either received or not. Oblivious transfer is important not the least because it is universal for multiparty computation <cit.>.
Unlike for quantum key distribution, information-theoretically secure perfect quantum oblivious transfer is impossible <cit.>. It does become possible with restrictions on quantum memory for cheating parties <cit.>.
Variants of oblivious transfer where the parties are constrained by special relativity are also possible <cit.>.
While perfect quantum oblivious transfer is impossible without restrictions on cheating parties,
quantum protocols with bounded cheating probabilities for the (otherwise unrestricted) parties are still possible.
Currently known protocols, however, do not achieve cheating probabilities that are tight with existing lower bounds. The known bounds also hold specifically for “complete" protocols <cit.>, where completeness means that the protocol always works correctly if the parties are honest. For such protocols, a lower bound on the greater of Alice's and Bob's cheating probabilities is 2/3 in general <cit.> and ≈ 0.749 if symmetric pure states are used <cit.>. Another variant of oblivious transfer is XOR oblivious transfer (XOT), where the sender has two bits and a receiver obtains either the first bit, the second bit, or their XOR. A quantum XOT protocol using pure symmetric states has been shown to be optimal, with lower cheating probabilities than classical XOT protocols <cit.>.
We will here instead examine non-interactive incomplete protocols for 1-out-of-2 quantum oblivious transfer. Non-interactive means that there is no back-and-forward communication, either classical or quantum; the protocols have a single step, where one party sends classical and/or quantum information to the other party, who measures what is received. Some of our results also provide bounds valid for interactive protocols.
“Incomplete" means that a protocol might fail even when both parties follow the protocol. Here protocol failure will mean that Bob obtains an incorrect bit value. There are several reasons to investigate protocols that can fail. It turns out that a non-zero failure probability means that cheating probabilities can be lowered. There is also a connection to quantum random access codes <cit.>, as will be discussed at the end; incomplete oblivious transfer can be seen as a generalisation that “interpolates" between random access codes and complete oblivious transfer. Noise and imperfections will also often lead to a non-zero failure probability in implementations.
Specifically, we investigate incomplete protocols for quantum oblivious transfer, where the sender Alice can never cheat any better than with a random guess. That is, the security against a cheating Alice is “perfect". For a given protocol failure probability p_f, the goal is then to minimize Bob's cheating probability p_r, and vice versa. Bob's cheating probability is the probability that he correctly guesses both of Alice's bit values.
We discuss the definition of protocol failure in Section <ref>, non-interactive classical protocols in Section <ref>, and some general properties of incomplete quantum oblivious transfer in Section <ref>. In Section <ref>, we find the lowest possible cheating probability for receiver Bob for a given protocol failure probability for protocols using symmetric pure states, and in Section <ref>, we give examples of optimal quantum protocols of this type. The example protocols are feasible to realize with current technology, for example using photons. We finish with a discussion.
§ DEFINITION OF PROTOCOL FAILURE
“Failure" could in principle mean either that Bob obtains an incorrect bit value, or that he fails to obtain any bit value at all. An incorrect bit value could be due to noise or randomness that is in principle avoidable (but might be intentional), or, in a quantum protocol, to the fact that non-orthogonal quantum states cannot be perfectly distinguished from each other. Bob could fail to obtain any bit value at all again because of randomness or noise, e.g. if no detector on Bob's side registers anything. Or, in a quantum protocol, if Bob is for example using an unambiguous quantum measurement to determine one of Alice's bit values, then such a measurement might give an inconclusive result, meaning that Bob fails to obtain a bit value even if the implementation is perfect.
We will concentrate on incomplete protocols where Bob always obtains a bit value, which is incorrect with probability p_f, and where Bob generally does not know if his bit value is correct or not. The other type of incomplete protocol would be if Bob sometimes fails to obtain any bit value at all (in addition, when he does obtain a bit value, it might be guaranteed to be correct, or might be correct with a probability less than one). If Bob fails to obtain a bit value at all, then this will be obvious to Bob. If both parties are honest, they could agree to simply repeat the protocol until a bit value is obtained by Bob (with Alice using new bit values each time). Taken together, if the bit value that Bob in the end obtains is guaranteed to be correct, this would then simply be another instance of a complete protocol for quantum oblivious transfer. If the bit value in the end obtained by Bob's is sometimes incorrect, then the protocol is of the type we consider.
Apart from repeating the protocol until success, the parties could also agree that if Bob fails to obtain a bit value, he makes a random guess to produce a bit value. In this case Bob again obtains an incorrect bit value with some probability. This is therefore again a protocol of the type we will be considering. Generally, Bob's lowest failure probability p_f will be achieved when he is never certain that the bit value he received is correct (because Bob's optimal measurement will be a minimum-error measurement).
To summarize, we will consider the case where Bob always obtains a bit value, which is incorrect with probability p_f, and where Bob does not know when the protocol has failed (or at least does not always know). In the quantum protocols we will examine, Alice does not know whether the protocol has succeeded or failed either.
Cheating probabilities could be calculated either as overall cheating probabilities, whether the protocol has failed or not, or as conditional cheating probabilities, conditioned on the protocol failing or succeeding. In the quantum protocols we consider, the honest parties do not know whether a particular run of the protocol has failed or not. It therefore seems most natural to consider overall cheating probabilities. In the classical non-interactive protocols below, on the other hand, Alice may know whether or not a bit value is correct (but not whether Bob has received it). This can be seen as an advantage for Alice. We will nevertheless compare overall cheating probabilities for classical and quantum protocols.
§ CLASSICAL NON-INTERACTIVE PROTOCOLS
As always in quantum cryptography, we will need to compare our quantum protocols to classical protocols, and hence we will need to know how low cheating probabilities can be in classical protocols.
First, recall that in any complete protocol for 1-out-of-2 oblivious transfer (quantum or classical), either party can always successfully cheat with a probability of 1/2 using a random guess.
Furthermore, in protocols with information-theoretic (as opposed to computational) security, if one party can successfully cheat only with probability 1/2, then the other party can necessarily cheat with probability 1 <cit.>. This holds both for “classical" and for quantum protocols which are complete. A classical protocol achieving this is where Alice sends both bits to Bob, who (if he is honest) reads out only one of them. Alice's cheating probability is then 1/2 (her probability to correctly guess which bit Bob obtains) and Bob's cheating probability is 1 (his probability to obtain both of Alice's bits). Conversely, if Alice sends one of the bits to Bob and “forgets" which one she sent if she is honest, then Alice can cheat with probability 1 and Bob can cheat with probability 1/2.
Let us now consider incomplete classical protocols which fail with some probability p_f, where the sender can only cheat with probability 1/2, to see how much the receiver's cheating probability can be lowered. Let us start with an example. Suppose that Alice sends two bits to Bob, but that she (or a “noisy environment" that Bob cannot control or access) flips one of her bit values with some probability, and leaves the other one unchanged. Each bit is equally likely to be the flipped one. Suppose that the first bit is flipped and the second bit left correct with probability p≤ 1/3, and the first bit left correct and the second one flipped also with probability p. Both bit values are left correct with probability 1-2p≥ 1/3 (and it never happens that both bit values are flipped). The failure probability is then p_f = p, which is the probability that an honest Bob is unlucky and reads out a flipped bit value. Bob's cheating probability if he reads out both bits is 1-2p = 1-2p_f.
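This classical example is easy to check numerically. The following short Python sketch (ours, purely illustrative) simulates the protocol for a chosen flip probability p and estimates an honest Bob's failure probability and a dishonest Bob's probability of obtaining both bits; the estimates should approach p and 1-2p respectively.

import random

def run_trial(p, rng):
    x0, x1 = rng.randint(0, 1), rng.randint(0, 1)   # Alice's two bits
    r = rng.random()
    if r < p:                     # first bit flipped, second correct
        y0, y1 = 1 - x0, x1
    elif r < 2 * p:               # second bit flipped, first correct
        y0, y1 = x0, 1 - x1
    else:                         # both bits transmitted correctly
        y0, y1 = x0, x1
    honest_ok = (y0 == x0) if rng.random() < 0.5 else (y1 == x1)
    cheat_ok = (y0 == x0) and (y1 == x1)
    return honest_ok, cheat_ok

def estimate(p, n=200000, seed=0):
    rng = random.Random(seed)
    honest = cheat = 0
    for _ in range(n):
        h, c = run_trial(p, rng)
        honest += h
        cheat += c
    return 1 - honest / n, cheat / n    # (p_f, p_r)

p = 0.2
pf, pr = estimate(p)
print(pf, "should be close to", p)
print(pr, "should be close to", 1 - 2 * p)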
For a non-interactive classical protocol, this turns out to be optimal, in the sense that for a given p_f≤ 1/3, Bob's cheating probability p_r is as small as it can be, keeping Alice's cheating probability equal to p_s=1/2.
Let us consider a general classical non-interactive protocol where Alice sends some information to Bob, who then reads out one bit if he is honest, and both bits if he is dishonest. “Reading out" may involve some information processing; we are placing no computational restrictions on either Alice or Bob.
Classical information can be copied, and therefore, for example, the probability that if Bob chooses to read out the first bit value, it is correct, is independent of whether or not he also chooses to read out the second bit value. (This would not be the case in a quantum protocol, where Bob generally, if he learns about a particular property of a quantum state, will be able to obtain less information about something else.)
Let us denote the probability that both bit values can be correctly read out by Bob by c, the probability that the first bit value is correct and the second is wrong by p, the probability that the first bit value is wrong and the second is correct by q, and the probability that both bit values are wrong by w. It holds that c+p+q+w=1.
We can without loss of generality assume that c is larger than p, q and w (if this is not the case to start with, then Bob can flip bit values to make it so). Bob's cheating probability will then be p_r=c. An honest Bob's success probability to obtain one bit value correctly, if he randomly reads out either the first or the second bit, is 1-p_f=c+(p+q)/2. The probability that his bit value is wrong is p_f=w+(p+q)/2. It therefore holds that
p_r=c= 1-w-q-p = 1-2p_f+w.
In order to minimize Bob's probability to cheat, for a given p_f for an honest Bob, Alice should clearly choose w as small as possible. This means that in addition to c= Max(c,p,q,w), Alice will ensure that w= Min(c,p,q,w). Other choices are possible, but will lead to a higher cheating probability for Bob, for a given protocol success probability, and we (and Alice) are interested in the lowest possible cheating probability for Bob. If c ≥ 1/3, then Alice can choose p, q, w so that w=0, and it then holds that p_r= 1-2p_f. Generally, when 0≤ p_f≤ 1/3, for classical non-interactive protocols it therefore holds that
p_r≥ 1-2p_f.
If c≤ 1/3, then the smallest w Alice can choose is w=1-3c, by picking p=q=c. In this case, the protocol failure probability p_f=1-2c=1-2p_r. When p_f ≥ 1/3, it therefore holds that Bob's cheating probability is bounded as
p_r≥1/2(1-p_f).
A receiver who performs the protocol as if they are honest, but then randomly guesses the bit value they do not obtain, will correctly guess both bit values with probability p_r=1/2(1-p_f). Equation (<ref>) therefore means that for p_f ≥ 1/3, this is Bob's best cheating strategy.
This combined bound in equations (<ref>) and (<ref>), for non-interactive classical protocols where the sender can only cheat with probability p_s=1/2, will serve as a benchmark for our quantum protocols. It turns out that for corresponding non-interactive quantum protocols, the cheating probability for the receiver Bob can be less than in classical protocols.
§ INCOMPLETE QUANTUM PROTOCOLS
We will now consider quantum oblivious transfer with a failure probability p_f, where the sender can only cheat with probability p_s=1/2, that is, no better than using a random guess. As mentioned above, we define the sender's and receiver's cheating probabilities as their overall probability to cheat successfully, independently of whether the protocol fails or not. This makes sense since the parties will not know whether or not the protocol has failed. (In complete quantum protocols, dishonest parties will also not always know whether the information they have obtained is correct, if they maximize their cheating probabilities.)
We will consider non-interactive protocols where a quantum state is sent only once from sender to receiver, who makes a final measurement.
In general, protocols for quantum oblivious transfer can have several steps where quantum states are sent back and forth, such as in the protocol by Chailloux et al. <cit.>, and as in the general framework in <cit.>. Some of our results generalize to such protocols. In particular, a dishonest party can always cheat by honestly performing the protocol until the last step, and then cheating by altering their final measurement. The cheating probability for this particular strategy then gives a lower bound for their general cheating probability. That is, lower bounds for cheating in non-interactive protocols also give lower bounds for cheating in
interactive protocols.
In 1-out-of-2 oblivious transfer Alice has two classical input bits x_0, x_1, and Bob has one input bit c. Usually one considers the case where Alice and Bob select their bit values randomly.
Suppose that an honest sender Alice encodes her two classical bit values in one of the quantum states σ_00, σ_01, σ_10, σ_11, where the subscripts indicate Alice's bit values.
She sends this quantum state to Bob, who (if he is honest) selects between making a measurement to learn either the value of the first bit (c=0), or a measurement to learn the value of the second bit (c=1), with equal probability.
Bob choosing between two measurements, and the fact that there is no further communication with Alice, will guarantee that Alice's overall cheating probability is 1/2. By no-signalling, she cannot tell which measurement Bob is choosing, or indeed whether Bob has made a measurement at all <cit.>. She therefore cannot cheat better than with a random guess. Bob's cheating probability on the other hand can be higher than 1/2(1-p_f), which he would achieve with a random guess. Conversely, if Alice should be able to cheat no better than with a random guess, then she should not be able to distinguish between Bob obtaining the first or the second bit value. It follows that whatever Bob does can, from Alice's point of view, be described as him choosing between two different generalized measurements, depending on which bit value he wishes to obtain.
We will now examine how well the protocol works if both parties are honest.
If Bob is honest and wishes to learn the value of the first bit, then he performs a measurement to distinguish between the sets of states S_00={σ_00, σ_01} and S_01={σ_10, σ_11}, where all states are equiprobable. This is the same as making a measurement to distinguish between the equiprobable states 1/2σ_00+1/2σ_01 and 1/2σ_10+1/2σ_11. The minimum-error measurement that distinguishes between two states ρ_0 and ρ_1, occurring with probabilities p_0 and p_1, has the error probability
p_ err = 1/2(1- Tr|p_0ρ_0-p_1ρ_1|).
The measurement is a projection in the eigenbasis of p_0ρ_0-p_1ρ_1.
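As a concrete illustration (ours, not part of the original analysis), this minimum-error probability is straightforward to evaluate numerically: the trace norm is the sum of the absolute values of the eigenvalues of the Hermitian operator p_0ρ_0-p_1ρ_1. A minimal Python sketch:

import numpy as np

def min_error_probability(rho0, rho1, p0=0.5, p1=0.5):
    # p_err = (1/2) * (1 - Tr|p0*rho0 - p1*rho1|)
    delta = p0 * rho0 - p1 * rho1
    return 0.5 * (1.0 - np.sum(np.abs(np.linalg.eigvalsh(delta))))

# example: two equiprobable pure qubit states with overlap cos(pi/8)
theta = np.pi / 8
psi0 = np.array([1.0, 0.0])
psi1 = np.array([np.cos(theta), np.sin(theta)])
rho0 = np.outer(psi0, psi0)
rho1 = np.outer(psi1, psi1)
print(min_error_probability(rho0, rho1))   # equals 0.5*(1 - sin(pi/8)) for pure states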
The probability that Bob obtains an incorrect value for the first bit, if he uses a minimum-error measurement, is therefore
p_f,0 = 1/2(1-1/4 Tr|σ_00+σ_01-σ_10-σ_11|).
If Bob wishes to learn the second bit, then he performs a measurement to distinguish between the sets of states S_10={σ_00, σ_10} and S_11={σ_01, σ_11}, where again all states are equiprobable. This is equivalent to distinguishing between the equiprobable states 1/2σ_00+1/2σ_10 and 1/2σ_01+1/2σ_11. Bob's minimum-error measurement gives an incorrect value for the second bit with the probability
p_f,1 = 1/2(1-1/4 Tr|σ_00+σ_10-σ_01-σ_11|).
On average, Bob's probability to obtain an incorrect bit value is
p_f = 1/2(p_f,0+p_f,1).
If Bob's failure probability is independent of whether he tries to obtain the first or the second bit, then p_f,0=p_f,1=p_f.
We already remarked that Alice's best cheating strategy is a random guess. If Bob wishes to cheat, then he needs to distinguish between all four of Alice's states. His optimal measurement for this necessarily gives the wrong result at least as often as either of the above two measurements where he learns only one bit value. This means that Bob's cheating probability p_r, defined as the probability that he can correctly guess both of Alice's bit values, is generally strictly smaller than either of 1-p_f,0, 1-p_f,1 or 1-p_f above. That is, it holds that
p_r ≤ 1-p_f,0, p_r ≤ 1-p_f,1 and p_r ≤ 1-p_f,
and the inequalities are strict except in special cases (which would correspond to poorly designed protocols).
However, for a quantum OT protocol to have a lower cheating probability for Bob than the best non-interactive classical protocol, we would want p_r to be lower than the bounds in Eqns. (<ref>) and (<ref>).
We can also make further statements about p_f and p_r.
From equations (<ref>) and (<ref>), if Bob is selecting between learning either the first or the second bit value,
then his failure probability p_f will be nonzero unless the states σ_00, σ_01, σ_10, σ_11 all have distinct supports, and can therefore be perfectly distinguished from each other. That is, as was already known, p_f=0 necessarily implies that Bob's cheating probability p_r=1, if we also demand that Alice should not be able to cheat any better than with a random guess.
Moreover, that p_f=0 ∧ p_s=1/2 implies p_r=1 holds true whether or not c is actively chosen by Bob. In so-called semi-random protocols, Bob does not select between two courses of action during the protocol depending on his input c, but instead obtains his value of c randomly as an output from the protocol <cit.>. Without loss of generality, whether or not Bob actively chooses c, we can assume that Bob obtains both c and the bit value x_c using a measurement that he defers to the end of the protocol. Bob's final measurement then has four outcomes. We denote Bob's generalized measurement operators by π^i* for c=0, and π^*i for c=1, where i=0, 1 gives the value of x_c. For example, the measurement operator π^1* corresponds to c=0 and x_0=1. Alice can cheat by using entanglement, so that she and Bob in the last step of the protocol share an entangled state. The requirement that Alice is unable to learn anything about c means that the state she holds after Bob's measurement cannot depend on Bob's output c. From no-signalling <cit.>, this implies that π^0*+π^1*= q 1̂ and π^*0+π^*1= (1-q) 1̂, where 0≤ q≤ 1. Bob's measurement is therefore equivalent to him making a random choice between learning the first bit with probability q and the second bit with probability 1-q. Equations (<ref>) and (<ref>) then again mean that if p_f=0 ∧ p_s=1/2, then p_r=1 holds, also when Bob does not choose c as an input, but obtains it as an output from the protocol.
When p_f>0, we can reason as follows. Let us assume that the same measurements Bob would use to learn either the first or the second bit is used instead when it is in addition known that Alice is sending either σ_00 or σ_11, with equal probability. In this case, learning either of the bit values determines the state. If p_f,0 does not depend on whether σ_00 or σ_01 was prepared, or whether σ_10 or σ_11 was prepared, it follows that
p_f,0≥1/2(1-1/2 Tr|σ_00-σ_11|),
where the RHS is the minimum error probability for the measurement that optimally distinguishes between the equiprobable states σ_00 and σ_11.
Similarly, if p_f,1 does not depend on whether σ_00 or σ_10 was prepared, or whether σ_01 or σ_11 was prepared, we have
p_f,1≥1/2(1-1/2 Tr|σ_00-σ_11|).
If both of these equations hold, then we obtain
p_f≥1/2(1-1/2 Tr|σ_00-σ_11|).
If σ_00=|ψ_00⟩⟨ψ_00| and σ_11=|ψ_11⟩⟨ψ_11| are pure, then we obtain
p_f ≥1/2(1-√(1-|⟨ψ_00|ψ_11⟩|^2)).
A similar argument can be used to derive
p_f≥1/2(1-1/2 Tr|σ_01-σ_10|).
Bob's optimal cheating strategy will be a minimum-error measurement for distinguishing between all four states σ_00, σ_10, σ_01 and σ_11. In a general interactive protocol with more steps, the success probability of this measurement gives a lower bound for Bob's cheating probability.
We will use a bound by Audenaert and Mosonyi <cit.> for the minimum-error probability for distinguishing between a set of states {ρ_i} with prior probabilities p_i,
p_ err≤1/2∑_i≠ j√(p_ip_j)F(ρ_i,ρ_j),
where the fidelity of two quantum states (sometimes called square root fidelity) is defined as
F= Tr[(√(ρ_0)ρ_1√(ρ_0))]^1/2.
For two pure states this reduces to the absolute value of their overlap.
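For completeness, a small Python sketch (ours, illustrative only) of the square-root fidelity and of the pairwise-fidelity upper bound quoted above; the helper psd_sqrt is simply a convenience for the square root of a positive semidefinite matrix.

import numpy as np

def psd_sqrt(m):
    vals, vecs = np.linalg.eigh(m)
    vals = np.clip(vals, 0.0, None)        # guard against round-off
    return (vecs * np.sqrt(vals)) @ vecs.conj().T

def fidelity(rho0, rho1):
    s = psd_sqrt(rho0)
    return np.real(np.trace(psd_sqrt(s @ rho1 @ s)))

def pairwise_fidelity_bound(states, probs):
    # p_err <= (1/2) * sum over i != j of sqrt(p_i p_j) F(rho_i, rho_j)
    total = 0.0
    for i in range(len(states)):
        for j in range(len(states)):
            if i != j:
                total += np.sqrt(probs[i] * probs[j]) * fidelity(states[i], states[j])
    return 0.5 * total

# for pure states the fidelity reduces to the absolute value of the overlap
rho_a = np.outer([1, 0], [1, 0])
rho_b = np.outer([1, 1], [1, 1]) / 2
print(fidelity(rho_a, rho_b))   # approx 1/sqrt(2)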
We denote the largest of the “nearest-neighbour" fidelities between σ_00 and σ_01, between σ_00 and σ_10, between σ_11 and σ_01 and between σ_11 and σ_10 by F, and the larger of the
fidelities between σ_00 and σ_11, and between σ_01 and σ_10, by G. Bob's cheating probability then obeys
p_r ≥ 1 -F-1/2G.
If p_f=0, then G=0 is necessary, and the equation above reduces to the corresponding bound for complete protocols <cit.>.
Equation (<ref>) (or the corresponding equation for |ψ_01⟩ and |ψ_10⟩, depending on which fidelity is larger) can be rewritten as
√(p_f(1-p_f))≥1/2G,
which holds if the σ_ij are pure states.
It then holds that
p_r≥ 1-F-√(p_f(1-p_f)).
From the above equations, it seems that a non-zero failure probability p_f may allow for a lower cheating probability p_r for Bob (lower than if p_f=0). The minimum of the RHS in (<ref>) is obtained when p_f=1/2, giving p_r≥ 1/2-F, but p_f=1/2 corresponds to a random (and thus useless) result for an honest Bob. The expression is of course only a lower bound on Bob's cheating probability, but the lower bound is lower for non-zero p_f and G.
Alice's cheating probability remains equal to 1/2 independent of F and G, since an honest Bob is randomly selecting between two measurements to learn either the first or the second bit value. For complete protocols, it generally holds that p_s≥ (1+F)/2 <cit.>, which together with p_r≥ 1-F means that for complete protocols, Alice's and Bob's cheating probabilities obey the tradeoff relation 2p_s+p_r≥ 2. For the incomplete (imperfect) protocols we are examining, with p_s=1/2, it instead holds that
2p_s + p_r ≥ 2 - F- 1/2G,
which shows that in incomplete protocols for quantum oblivious transfer, 2p_s+p_r can be lower than in complete protocols.
For pure states, we can also obtain a tradeoff relation in terms of the failure probability p_f as
2p_s + p_r ≥ 2 - F-√(p_f(1-p_f)).
§ PROTOCOLS USING SYMMETRIC PURE STATES
We will now derive the failure and success probability for an honest Bob, and Bob's cheating probability, for protocols using symmetric pure states.
For N symmetric states |ψ_j⟩, j=0, 1,… , N-1, it holds that |ψ_j⟩ = U^j|ψ_0⟩, where U is a unitary transform which satisfies U^N= 1.
Assume that Alice encodes her two bits in the
states |ψ_00⟩, |ψ_01⟩, |ψ_11⟩, |ψ_10⟩, where |ψ_01⟩=U|ψ_00⟩, |ψ_11⟩ = U^2|ψ_00⟩ and |ψ_10⟩ = U^3|ψ_00⟩, where U^4 = 1.
Two of the pairwise overlaps between the states are ⟨ψ_01|ψ_00⟩ = f and ⟨ψ_11|ψ_00⟩ = g. The overlap f is in general complex, while due to the symmetry g has to be real, but can be negative. Since the states are symmetric, this also determines ⟨ψ_10|ψ_00⟩ = f^* and all other pairwise overlaps.
An honest Bob chooses between a minimum-error measurement to learn the value of the first bit, and a minimum-error measurement for learning the value of the second bit. The optimal cheating strategy for a dishonest Bob is to make the minimum-error measurement to distinguish between all four equiprobable states.
We will first calculate Bob's cheating probability p_r in this non-interactive quantum protocol.
Also, if Bob should distinguish between the four states |ψ_00⟩, |ψ_01⟩, |ψ_11⟩, |ψ_10⟩ in the last step of a protocol with many steps, then the p_r calculated below will be a lower bound on his cheating probability.
If the states are equiprobable and symmetric, then the optimal measurement a cheating Bob can make is the so-called square-root measurement. Its success probability
can be obtained in terms of the sum of the square roots of the eigenvalues of the Gram matrix 𝒢 for the states <cit.>. The Gram matrix has elements 𝒢_ij = ⟨ψ_i|ψ_j⟩.
For the four states we are considering, it is given by
𝒢=([ 1 f g f^*; f^* 1 f g; g f^* 1 f; f g f^* 1 ]).
Its eigenvalues are equal to
λ_0 = 1+f+g+f^*, λ_1 = 1+if-g-if^*,
λ_2 = 1-f+g-f^*, λ_3 = 1-if-g+if^*.
The number of nonzero eigenvalues of the Gram matrix is equal to the dimension D of the space spanned by the states |ψ_00⟩, |ψ_01⟩, |ψ_11⟩, |ψ_10⟩. Independent of D,
the success probability for the square-root measurement for symmetric states can be obtained as
p_r = 1/16|√(λ_0)+√(λ_1)+√(λ_2)+√(λ_3)|^2
= 1/16|√(1+g + 2 Re f)+√(1+g - 2 Re f)
+√(1-g + 2 Im f)+√(1-g - 2 Im f)|^2.
This is Bob's optimal cheating probability in a non-interactive protocol using symmetric pure states.
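The following Python sketch (ours, illustrative only) evaluates this cheating probability directly from the overlaps f and g defined above, via the Gram-matrix eigenvalues λ_0,…,λ_3:

import numpy as np

def cheating_probability(f, g):
    lam = np.array([1 + f + g + np.conj(f),
                    1 + 1j * f - g - 1j * np.conj(f),
                    1 - f + g - np.conj(f),
                    1 - 1j * f - g + 1j * np.conj(f)])
    lam = np.clip(np.real(lam), 0.0, None)   # eigenvalues are real and non-negative
    return np.sum(np.sqrt(lam))**2 / 16.0

# example: the "Wiesner"/BB84 states discussed below, with f = (1 - i)/2 and g = 0
print(cheating_probability(0.5 * (1 - 1j), 0.0))   # gives 0.5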
To determine the value of the first bit, an honest Bob needs to distinguish between the equiprobable states
1/2(σ_00+ σ_01) and 1/2(σ_10+ σ_11),
where σ_ij = |ψ_ij⟩⟨ψ_ij|.
To determine the second bit, he needs to distinguish between the equiprobable states
1/2(σ_00+σ_10) and 1/2(σ_01+σ_11).
Since all states |ψ_ij⟩ are equiprobable and symmetric, his failure probability p_f is the same in each case. For determining the first bit, the failure probability is given by
p_f,0 = 1/2(1-1/4 Tr|σ_00+σ_01-σ_10-σ_11|).
The analogous expression for the failure probability for determining the second bit, p_f,1, is obtained by swapping σ_01 and σ_10. The failure probability p_f=p_f,0=p_f,1 for symmetric pure states is calculated in Appendix <ref>, and is given by
p_f = 1/2[1 - 1/4√((λ_0+λ_2)(λ_1+λ_3)
+ √((λ_0+λ_2)^2(λ_1+λ_3)^2 - 16λ_0λ_1λ_2λ_3))
- 1/4√((λ_0+λ_2)(λ_1+λ_3)
- √((λ_0+λ_2)^2(λ_1+λ_3)^2 - 16λ_0λ_1λ_2λ_3))]
=1/2[1 - 1/2√(1-g^2+ 2√((1+g)^2(Im f)^2 + (1-g)^2(Re f)^2 - 4(Re f)^2(Im f)^2))
- 1/2√(1-g^2- 2√((1+g)^2(Im f)^2 + (1-g)^2(Re f)^2 - 4(Re f)^2(Im f)^2))]
=1/2[1 - 1/2√(1-g^2+ 2√(|f|^2(1+g^2) + 2g[(Im f)^2-(Re f)^2] - 4(Re f)^2(Im f)^2))
- 1/2√(1-g^2- 2√(|f|^2(1+g^2) + 2g[(Im f)^2-(Re f)^2] - 4(Re f)^2(Im f)^2))].
This is the probability that an honest Bob obtains an incorrect bit value.
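This expression is also easy to evaluate numerically. The sketch below (ours) computes the failure probability directly from the overlaps f and g, and reproduces, for example, p_f=(1-1/√2)/2 for the “Wiesner" states discussed below:

import numpy as np

def failure_probability(f, g):
    re2, im2 = np.real(f)**2, np.imag(f)**2
    inner = np.sqrt((1 + g)**2 * im2 + (1 - g)**2 * re2 - 4 * re2 * im2)
    plus = np.sqrt(max(1 - g**2 + 2 * inner, 0.0))
    minus = np.sqrt(max(1 - g**2 - 2 * inner, 0.0))
    return 0.5 * (1 - 0.5 * plus - 0.5 * minus)

print(failure_probability(0.5 * (1 - 1j), 0.0))   # (1 - 1/sqrt(2))/2, approx 0.146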
Using the expressions in (<ref>) and (<ref>), one can investigate how low p_r can be for a given p_f, whether quantum protocols using symmetric pure states can be better than classical protocols, and what the corresponding sets of pure symmetric states are.
In Appendix <ref>, we show that in the range (1-1/√(2))/2 ≤ p_f ≤ 1/2, the lowest possible cheating probability p_r for protocols using symmetric pure states is given by
p_r = 1/4(1+√(2))-1/√(2)p_f,
= 1/4(1-√(2))+1/√(2)(1-p_f),
which we also wrote in terms of the protocol success probability 1-p_f. Bob's cheating probability is then in the range 1/4 ≤ p_r ≤ 1/2. Moreover, we learn from the analysis in Appendix <ref> that the corresponding optimal states |ψ_ij⟩ span a two-dimensional space, since two of the eigenvalues of their Gram matrix are equal to zero. If all four states are equal, then p_r=1/4 and p_f=1/2.
The value p_f = (1-1/√(2))/2≈0.15 corresponds to p_r=1/2 (which Bob could also achieve with a random guess).
Let us check when Bob's cheating probability for this class of optimal quantum protocols is lower than in the corresponding classical protocols.
For a quantum protocol to be better than classical protocols, it should hold that p_r < 1-2p_f. Combined with (<ref>), this gives p_f < (3-√(2))/(8-2√(2))≈ 0.31, corresponding to 1-p_f > 0.69. In this range, Bob's cheating probability is therefore lower in the quantum protocols. When p_f ≳ 0.31, protocols using symmetric pure states are not optimal, which is also interesting. Mixedness and/or asymmetry must then be useful, and might of course be useful also for other values of p_f. In the classical protocols, Alice uses the equivalent of mixed states.
When 0≤ p_f ≤ (1-1/√(2))/2≈0.15, which corresponds to 1/2≤ p_r ≤ 1, the lowest possible p_f is given by
p_f = 1/2[1-√(p_r^2+(1-p_r)^2)].
The optimisation in Appendix <ref> also tells us that the states |ψ_ij⟩ now span a four-dimensional space.
Geometrically, if p_r and 1-p_r are the lengths of two sides of a right-angled triangle, then the length of the hypotenuse is given by 1-2p_f. Hence, p_r is always less than 1-2p_f, meaning that for these optimal quantum protocols using symmetric pure states, Bob always has a lower cheating probability than in classical protocols with the same failure probability p_f. To summarize, protocols using symmetric pure quantum states are better than corresponding classical protocols when 0<p_f ≲ 0.31. When 0.31≲ p_f < 1/2, classical protocols are better, meaning that mixedness and/or asymmetry must be useful in more general quantum protocols (which must be at least as good as classical protocols). The optimal p_r for symmetric pure states is plotted in figure <ref> as a function of the protocol success probability 1-p_f.
§ OPTIMAL PROTOCOLS USING SYMMETRIC PURE STATES
In this section we will describe sets of four symmetric pure states which are optimal in the sense that for a given failure probability p_f, Bob's cheating probability p_r is as low as possible. Alternatively, for a given p_r, Bob's p_f is as low as possible.
We also describe a set of qutrit states which is suboptimal, but is related to the states used in the semi-random oblivious transfer protocol in <cit.>.
§.§ BB84 or “Wiesner" states
Suppose that Alice encodes her two bits x_0, x_1 in the states
|ψ_00⟩ = |0⟩, |ψ_01⟩=√(i)|+⟩,
|ψ_11⟩ = i|1⟩, |ψ_10⟩=i√(i)|-⟩,
where |+⟩= (|0⟩+|1⟩)/√(2) and |-⟩= (|0⟩-|1⟩)/√(2). Apart from the overall phase factors which have been included to make the states symmetric according to the definition at the start of Section <ref>, this set of states is identical to the one used for “conjugate coding" by Wiesner <cit.>, in the Bennett-Brassard protocol for quantum cryptography (BB84) <cit.>, and for quantum random access codes <cit.>. Here we only reinterpret the situation as imperfect 1-out-of-2 oblivious transfer.
Depending on which bit value an honest Bob wishes to obtain, he measures in a basis rotated by π/8 from the basis {|0⟩,|1⟩} to obtain the first bit, or rotated by 3π/8 to obtain the second bit.
Either of these measurements succeed with probability cos^2(π/8)≈ 0.85. Consequently, the protocol's inherent failure probability p_f^W is given by
p_f^W = 1-cos^2(π/8)≈ 0.15.
As before, no-signalling prohibits Alice from cheating so that p_s=1/2. The success probability of Bob's minimum-error measurement for distinguishing between all four states is
p_r^W = 1/2,
which is his cheating probability.
It holds that p_r^W < 1-2p_f^W ≈ 0.7. This protocol therefore is better than classical protocols. In fact, as is confirmed by the results in the previous section and in Appendix <ref>, it is optimal in the sense that for this value of p_f, no choice of four symmetric pure states can give a lower p_r.
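A direct numerical check of this example (ours, using numpy) constructs the four states explicitly, evaluates an honest Bob's two minimum-error measurements, and evaluates the square-root measurement for a dishonest Bob via the Gram matrix:

import numpy as np

def helstrom(rho0, rho1):
    return 0.5 * (1 - np.sum(np.abs(np.linalg.eigvalsh(0.5 * rho0 - 0.5 * rho1))))

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)
root_i = np.exp(1j * np.pi / 4)                     # sqrt(i)
psi = {'00': ket0, '01': root_i * plus, '11': 1j * ket1, '10': 1j * root_i * minus}
sig = {k: np.outer(v, v.conj()) for k, v in psi.items()}

# honest Bob: first bit -> distinguish {00,01} from {10,11}; second bit analogous
pf0 = helstrom((sig['00'] + sig['01']) / 2, (sig['10'] + sig['11']) / 2)
pf1 = helstrom((sig['00'] + sig['10']) / 2, (sig['01'] + sig['11']) / 2)

# dishonest Bob: square-root measurement on all four states, via the Gram matrix
order = ['00', '01', '11', '10']
gram = np.array([[np.vdot(psi[a], psi[b]) for b in order] for a in order])
lam = np.clip(np.linalg.eigvalsh(gram), 0, None)
pr = np.sum(np.sqrt(lam))**2 / 16

print(pf0, pf1, 1 - np.cos(np.pi / 8)**2)   # all approx 0.146
print(pr)                                   # approx 0.5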
§.§ Qubit states
To obtain protocols with p_f in the range p_f^W ≤ p_f ≤ 1/2,
we will look at symmetric qubit states given by
|ψ_00⟩ = a|0⟩+b|1⟩,
|ψ_01⟩=a|0⟩+ib|1⟩,
|ψ_11⟩ = a|0⟩-b|1⟩,
|ψ_10⟩=a|0⟩-ib|1⟩,
where
0≤ |a|, |b| ≤ 1, with |a|^2+|b|^2=1. This set of states is symmetric with the symmetry operation U=|0⟩⟨0| + i|1⟩⟨1|. For a=b=1/√(2) we obtain the eigenstates of σ_x and σ_y, that is, four states isomorphic to the “Wiesner" or BB84 states, which are eigenstates of σ_x and σ_z.
By varying a and b, we will generally obtain lower p_r and higher p_f than for the “Wiesner" states. For |a|=1, b=0 or a=0, |b|=1, all four states are equal to the same state, giving p_f=1/2 and p_r=1/4.
It holds that f=|a|^2+i|b|^2 and g=|a|^2-|b|^2. From (<ref>) and (<ref>) we obtain
p_f = 1/2(1-√(2)|ab|),
p_r = 1/4(|a|+|b|)^2 = 1/4(1+2|ab|),
and it therefore also holds that
p_r = 1/√(2)(1-p_f)+1/4(1-√(2)),
that is, Bob's cheating probability p_r is a straight line as a function of the protocol success probability 1-p_f.
As shown in Appendix <ref> and given in equation (<ref>), this is the lowest possible p_r for a given p_f with p_f in the range p_f^W ≤ p_f ≤ 1/2. It follows that the qubit states in (<ref>) are indeed optimal.
Moreover, it holds that
1-2p_f = √(2)|ab| > p_r
when |ab|>1/(4√(2)-2) ≈ 0.273, which corresponds to p_f≲ 0.31. As mentioned above, this is the range where this class of protocols outperforms classical protocols.
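Numerically, the closed-form expressions for this qubit family can be checked against the Gram-matrix formulas of the previous section, for instance with the short sketch below (ours):

import numpy as np

def gram_eigenvalues(f, g):
    return np.real(np.array([1 + f + g + np.conj(f),
                             1 + 1j * f - g - 1j * np.conj(f),
                             1 - f + g - np.conj(f),
                             1 - 1j * f - g + 1j * np.conj(f)]))

for a in [0.5, 1 / np.sqrt(2), 0.9]:
    b = np.sqrt(1 - a**2)
    lam = np.clip(gram_eigenvalues(a**2 + 1j * b**2, a**2 - b**2), 0, None)
    p_r = np.sum(np.sqrt(lam))**2 / 16
    p_f = 0.5 * (1 - np.sqrt(2) * a * b)
    # p_r from the eigenvalues should equal (1 + 2ab)/4
    print(round(p_f, 4), round(p_r, 4), round(0.25 * (1 + 2 * a * b), 4))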
§.§ Ququart states
We will now look at a set of states which makes it possible to reach p_f=0 (when it must hold that p_r=1). For this limit it is necessary that all four states become orthogonal, and we also know from the optimisation in Appendix <ref> that the corresponding sets of states need to span a four-dimensional space. We will choose
|ψ_00⟩ = 1/√(2)(|0⟩+|1⟩),
|ψ_01⟩=a/√(2)(|0⟩+i|1⟩)+b|2⟩,
|ψ_11⟩ = 1/√(2)(|0⟩-|1⟩),
|ψ_10⟩=a/√(2)(|0⟩-i|1⟩)+b|3⟩,
where 0≤ |a|, |b| ≤ 1 and |a|^2+|b|^2=1. For a=1 we obtain the eigenstates of σ_x and σ_y, connecting with the qubit
states considered above. The pairwise overlaps are now
⟨ψ_00|ψ_01⟩= ⟨ψ_11|ψ_10⟩ = a/2(1+i),
⟨ψ_01|ψ_11⟩= ⟨ψ_10|ψ_00⟩ = a^*/2(1+i) ,
⟨ψ_00|ψ_11⟩= ⟨ψ_01|ψ_10⟩ = 0.
That is, the set of states is symmetric without further modifications if a is real. In this case we obtain f=(a/2)(1+i) and g=0, and
p_f = 1/4(2-√(1+a√(2-a^2))-√(1-a√(2-a^2))),
p_r = 1/4(√(1+a)+√(1-a))^2 = 1/2(1+√(1-a^2)).
The above relations imply that
(1-2p_f)^2=p_r^2+(1-p_r)^2=1-a^2/2.
This agrees with equation (<ref>), and the above choice of ququart states is therefore optimal for 0≤ p_f ≤ p_f^W. For this set of states, it holds that 1-2p_f ≥ p_r for the whole range of 0≤ a ≤ 1, with equality for a=0, when p_f=0 and p_r=1. The resulting quantum protocol therefore outperforms classical protocols for the whole range of 0 < p_f ≤ p_f^W.
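A quick numerical check of the ququart family (ours, illustrative only) confirms the relation (1-2p_f)^2 = p_r^2 + (1-p_r)^2 = 1 - a^2/2 over the whole range of a:

import numpy as np

for a in [0.0, 0.3, 0.7, 1.0]:
    s = a * np.sqrt(2 - a**2)
    p_f = 0.25 * (2 - np.sqrt(1 + s) - np.sqrt(1 - s))
    p_r = 0.5 * (1 + np.sqrt(1 - a**2))
    # same p_r obtained from the Gram eigenvalues 1+a, 1-a, 1-a, 1+a
    p_r_gram = 0.25 * (np.sqrt(1 + a) + np.sqrt(1 - a))**2
    print(a, round(p_f, 4), round(p_r, 4), round(p_r_gram, 4),
          round((1 - 2 * p_f)**2, 4), round(p_r**2 + (1 - p_r)**2, 4), round(1 - a**2 / 2, 4))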
§.§ Qutrit states
As an example of symmetric pure states which turn out to not be optimal for the type of protocols we are considering, we will look at the set of states
|ψ_00⟩ = a|0⟩+b|1⟩,
|ψ_01⟩=a|0⟩+b|2⟩,
|ψ_11⟩ = a|0⟩-b|1⟩,
|ψ_10⟩=a|0⟩-b|2⟩,
where 0≤ |a|, |b| ≤ 1 and |a|^2+|b|^2=1.
One reason to investigate this set of states is that
for a=b=1/√(2), it is
equivalent to two copies of the same “Wiesner" states, that is, the states |00⟩, |++⟩, |11⟩, |- -⟩, since all pairwise overlaps match. These states are used in an oblivious transfer protocol in <cit.>, which has the lowest known cheating probabilities for a complete
1-out-of-2 quantum oblivious transfer protocol where the parties are not restricted. It is therefore interesting to see how well the same set of states performs for an incomplete oblivious transfer protocol.
If Bob is given two copies of a “Wiesner" or BB84 state, we might expect that an honest Bob will obtain a correct bit value more often than if he is given a single copy, but also, that it becomes easier for a dishonest Bob to cheat. However, one can easily verify that this is not the case. While an additional copy helps a dishonest Bob cheat, as we will see, it does not lower an honest Bob's failure probability. An honest Bob can ignore the second copy. On the other hand, with two copies of a “Wiesner" or BB84 state, the semi-random oblivious transfer protocol with p_f=0 in <cit.> is possible. In a semi-random protocol, Bob does not choose whether he obtains the first or the second bit value, but obtains a bit value at random. Bob's overall measurement is then no longer from Alice's point of view equivalent to Bob choosing between two measurements. Instead, Bob makes one fixed measurement with four outcomes. Therefore while p_f=0 becomes possible, Bob's and also Alice's cheating probability increases, becoming equal to ≈ 0.729 and 3/4, respectively, if the states |00⟩, |++⟩, |11⟩, |- -⟩ are used.
Returning to non-random protocols where Bob chooses between learning the first or the second bit,
an honest Bob does not benefit from using the qutrit states in equation (<ref>), as compared with the qubit states in equation (<ref>). This may seem counter-intuitive, but is analogous to a scenario where Bob is sent a classical bit value that might have been flipped with some probability p. His probability to read out the bit value correctly does not increase if he is sent a second copy of the same bit value, which might also independently have been flipped with probability p. Both bits are then incorrect (and the two incorrect values agree) with probability p^2. If one bit value is correct and the other one not, then Bob will have to guess what the correct value is. His overall probability to arrive at the incorrect bit value is then p^2+2p(1-p)/2=p, the same as if he had been sent only one bit.
For the states in (<ref>), we obtain f=⟨ψ_00|ψ_01⟩=|a|^2 and g=⟨ψ_00|ψ_11⟩ = |a|^2-|b|^2. This gives
p_f = 1/2(1-√(2)|ab| )
p_r = 1/4(|a|+√(2)|b|)^2 = 1/4(1+|b|^2+2√(2)|ab|).
That is, as a function of |a| and |b|, we obtain the same p_f as for the qubit states in (<ref>), but a higher p_r. This set of qutrit states therefore never outperforms the qubit states. Bob's cheating probability for this set of qutrit states is plotted in figure <ref>, along with the lowest possible cheating probabilities for the optimal qubit and ququart states, and the lowest possible cheating probability for classical protocols.
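The comparison between the two families is easily tabulated; the short sketch below (ours) shows that for every 0<|b|<1 the qutrit states give the same p_f but a strictly larger p_r than the qubit states:

import numpy as np

for a in np.linspace(0.1, 0.95, 5):
    b = np.sqrt(1 - a**2)
    p_f = 0.5 * (1 - np.sqrt(2) * a * b)             # identical for both families
    p_r_qubit = 0.25 * (1 + 2 * a * b)
    p_r_qutrit = 0.25 * (1 + b**2 + 2 * np.sqrt(2) * a * b)
    print(round(p_f, 3), round(p_r_qubit, 3), round(p_r_qutrit, 3))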
§ CONCLUSION AND OUTLOOK
We have demonstrated that for oblivious transfer protocols which sometimes fail, quantum protocols may outperform classical ones, in the sense that for protocols where the sender can only cheat with probability 1/2, Bob's cheating probability in quantum protocols can be lower than in any classical non-interactive protocol with the same failure probability p_f.
We fully characterized non-interactive quantum protocols which use symmetric pure states, and gave examples of optimal protocols, which use qubits and ququarts respectively. When p_f ≲ 0.31, Bob's cheating probability in these protocols is lower than in classical protocols.
Incomplete 1-out-of-2 oblivious transfer of the type we are considering, where the sender can only cheat with probability p_s=1/2, can be seen as a generalisation of complete oblivious transfer, but also as a generalization of random access codes (RACs) <cit.>. For a RAC, in the simplest case (which will correspond most closely to 1-out-of-2 oblivious transfer), one restricts the sender to use one bit to encode the values of two bits. The sender does not know which bit the receiver wants, but aims for the receiver to correctly retrieve the bit of their choice with a probability that is as high as possible. For a corresponding quantum random access code (QRAC) <cit.>, the sender is restricted to using one qubit for encoding two classical bit values.
Probabilistic one-time programs <cit.> are a related functionality, which reduces to a QRAC in the case relevant here.
Unlike in oblivious transfer, in a RAC or QRAC one is not concerned with the probability that the sender guesses which bit value the receiver wants to access. But since the receiver is choosing between two measurements, depending on which bit value they choose, the sender is not able to tell better than with a random guess. In the type of oblivious transfer protocols we have been considering, however, the sender Alice can correctly guess which bit the receiver Bob wants only with probability 1/2, which is analogous to a RAC or QRAC.
Another difference between RACs and QRACs on one hand, and oblivious transfer on the other hand, is that for oblivious transfer,
there is no restriction on the dimensionality of the state space that can be used. Instead of fixing the dimensionality of the state space, one wants to restrict the probability for the receiver to retrieve all of the sent information. In this sense, the type of incomplete oblivious transfer we have considered, with p_s =1/2, is a generalisation of a QRAC (as well as being a generalization of complete or “perfect" oblivious transfer).
That is, for the type of oblivious transfer we have considered, we are concerned with (i) maximising the probability that the receiver correctly obtains their chosen bit value, as in a RAC, but also (ii) minimising the probability that they correctly obtain both bit values, which generalizes the restriction of sending a single bit or qubit.
A remaining difference is that in complete protocols for oblivious transfer, the receiver should always correctly retrieve their chosen bit value, whereas in a RAC or QRAC, or in an incomplete protocol for oblivious transfer, the probability for the bit value to be correct is generally lower than 1. Our protocols interpolate between or “connect" random access codes and oblivious transfer, generalizing both functionalities.
To summarize, we have obtained bounds on cheating probabilities for incomplete (imperfect) quantum oblivious transfer, and derived optimal protocols which use pure symmetric states. Cheating probabilities can be lower for incomplete protocols than for complete protocols. The optimal symmetric-state protocols use qubit or ququart (2-qubit) states, and could be implemented with existing experimental techniques; ququart states can for example be realized as single-photon states, using either four different spatial modes, or polarisation and two spatial modes. The honest receiver's measurements are easy-to-realize projective measurements, since the task is to distinguish between only two mixed states. As is well known and also discussed above, the optimal measurement is in this case simply a projective measurement in a particular basis. The type of incomplete oblivious transfer we have considered can be seen as generalizing both complete oblivious transfer and random access codes, and connecting these two functionalities.
IVP and EA acknowledge support by the UK Engineering and Physical Sciences Research Council (EPSRC) under Grants No. EP/T001011/1.
00
Wie83 S. Wiesner, Conjugate coding, ACM Sigact News. 15, 78 (1983).
evengoldlemp S. Even, O. Goldreich and A. Lempel, A randomized protocol for signing contracts, Communications of the ACM 28, 637-647 (1985).
Rabin M. O. Rabin, How To Exchange Secrets with Oblivious Transfer, Cryptology ePrint Archive: Report 2005/187 187, (2005).
OTuniversal J. Kilian, Founding cryptography on oblivious transfer, Proceedings of the twentieth annual ACM symposium on Theory of computing, 20-31 (1988).
Mayers D. Mayers, Unconditionally Secure Quantum Bit Commitment is Impossible, Phys. Rev. Lett. 78, 3414 (1997).
LoNogo H.-K. Lo, Insecurity of quantum secure computations, Phys. Rev. A 56, 1154 (1997).
DFSS05 I. B. Damgård, S. Fehr, L. Salvail and C. Schaffner, Cryptography in the Bounded-Quantum-Storage Model, SIAM Journal on Computing 37, 1865 (2008).
Garcia D. Pitalúa-García, Spacetime-constrained oblivious transfer, Phys. Rev. A 93, 062346 (2016).
Sikora11 J. Sikora, On the existence of loss-tolerant quantum oblivious transfer protocols, arXiv:1009.2735v2 (2011).
Chailloux13 A. Chailloux, I. Kerenidis and J. Sikora, Lower bounds for quantum oblivious transfer, arXiv:1007.1875v2 (2013).
Chailloux16 A. Chailloux, G. Gutoski and J. Sikora, Optimal bounds for semi-honest quantum oblivious transfer, arXiv:1310.3262v2 (2016).
Ryan R. Amiri, R. Stárek, D. Reichmuth, I. V. Puthoor, M. Mičuda, L. Mišta Jr., M. Dušek, P. Wallden and E. Andersson, Imperfect 1-Out-of-2 Quantum Oblivious Transfer: Bounds, a Protocol, and its Experimental Implementation, PRX Quantum 2, 010335 (2021).
Stroh L. Stroh, N. Horová, R. Stárek, I. V. Puthoor, M. Mičuda, M. Dušek and E. Andersson, Noninteractive XOR Quantum Oblivious Transfer: Optimal Protocols and Their Experimental Implementations, PRX Quantum 4, 020320 (2023).
QRAC A. Ambainis, A. Nayak, A. Ta-Shma and U. Vazirani, Dense Quantum Coding and Quantum Finite Automata, Journal of the ACM, 49, 496 (2002).
rimini G. C. Ghirardi, A. Rimini and T. Weber, A general argument against superluminal transmission through the quantum mechanical measurement process, Lett. Nuovo Cimento Soc. Ital. Fis. 27, 293 (1980).
AM14 K. M. Audenaert and M. Mosonyi, Upper bounds on the error probabilities and asymptotic error exponents in quantum multiple state discrimination, J. Math. Phys. 55, 102201 (2014).
Andersson0026 P. Wallden, V. Dunjko and E. Andersson, Minimum-cost measurements for quantum information, J. Phys. A: Math. Theor. 47, 125303 (2014).
BB84 C. H. Bennett and G. Brassard, Quantum cryptography: Public key distribution and coin tossing, Theoretical Computer Science, 560, 7-11 (2014).
OTP S. Goldwasser, Y. T. Kalai and G. N. Rothblum, One-time programs, Advances in Cryptology CRYPTO 2008: 28th Annual International Cryptology Conference, Santa Barbara, CA, USA, 39-56 (2008).
Fitz M. Roehsner, J. A. Kettlewell, T. B. Batalhao, J. F. Fitzsimons and P. Walther, Quantum advantage for probabilistic one-time programs, Nature Communications, 9 5225 (2018).
§ PROTOCOL FAILURE PROBABILITY FOR SYMMETRIC PURE STATES
To calculate the protocol failure probability, we construct the states
2√(λ_0)|A_0⟩ = |ψ_00⟩+|ψ_01⟩+|ψ_11⟩+|ψ_10⟩
2√(λ_1)|A_1⟩ = |ψ_00⟩+i|ψ_01⟩-|ψ_11⟩-i|ψ_10⟩
2√(λ_2)|A_2⟩ = |ψ_00⟩-|ψ_01⟩+|ψ_11⟩-|ψ_10⟩
2√(λ_3)|A_3⟩ = |ψ_00⟩-i|ψ_01⟩-|ψ_11⟩+i|ψ_10⟩,
where λ_i are the previously given eigenvalues of the Gram matrix for the states |ψ_ij⟩. If the space spanned by |ψ_00⟩, |ψ_01⟩, |ψ_11⟩, |ψ_10⟩ is four-dimensional, then the states |A_0⟩, |A_1⟩, |A_2⟩, |A_3⟩ are orthonormal. If the space is less than four-dimensional, then some of the eigenvalues λ_i are equal to zero, but the four states |A_i⟩ can still be chosen orthonormal, and used as a basis. Independent of dimension, we have
|ψ_00⟩ = 1/2(√(λ_0)|A_0⟩ +√(λ_1)|A_1⟩ +√(λ_2)|A_2⟩ +√(λ_3)|A_3⟩)
|ψ_01⟩ = 1/2(√(λ_0)|A_0⟩ -i√(λ_1)|A_1⟩ -√(λ_2)|A_2⟩ +i√(λ_3)|A_3⟩)
|ψ_11⟩ = 1/2(√(λ_0)|A_0⟩ -√(λ_1)|A_1⟩ +√(λ_2)|A_2⟩ -√(λ_3)|A_3⟩)
|ψ_10⟩ = 1/2(√(λ_0)|A_0⟩ +i√(λ_1)|A_1⟩ -√(λ_2)|A_2⟩ -i√(λ_3)|A_3⟩).
Using the basis {|A_0⟩, |A_1⟩, |A_2⟩, |A_3⟩} we therefore have
1/2(σ_00+σ_01)=
1/8([ 2λ_0 √(λ_01)(1+i) 0 √(λ_03)(1-i); √(λ_01)(1-i) 2λ_1 √(λ_12)(1+i) 0; 0 √(λ_12)(1-i) 2λ_2 √(λ_23)(1+i); √(λ_03)(1+i) 0 √(λ_23)(1-i) 2λ_3 ]),
1/2(σ_10+σ_11)=
1/8([ 2λ_0 √(λ_01)(-1-i) 0 √(λ_03)(-1+i); √(λ_01)(-1+i) 2λ_1 √(λ_12)(-1-i) 0; 0 √(λ_12)(-1+i) 2λ_2 √(λ_23)(-1-i); √(λ_03)(-1-i) 0 √(λ_23)(-1+i) 2λ_3 ]),
and 1/4(σ_00+σ_01-σ_10-σ_11)=
1/8([ 0 √(λ_01)(1+i) 0 √(λ_03)(1-i); √(λ_01)(1-i) 0 √(λ_12)(1+i) 0; 0 √(λ_12)(1-i) 0 √(λ_23)(1+i); √(λ_03)(1+i) 0 √(λ_23)(1-i) 0 ]),
where we have used the shorthand λ_ij = λ_iλ_j.
The eigenvalue equation for the last matrix above is given by |1/4(σ_00+σ_01-σ_10-σ_11) - Λ 1|=0, which, when evaluating the determinant, gives
(8Λ)^4-2(8Λ)^2(λ_0+λ_2)(λ_1+λ_3)+16λ_0123=0,
where λ_0123 = λ_0λ_1λ_2λ_3.
The four eigenvalues of the matrix in (<ref>) are consequently given by the solutions to
Λ^2 = 1/8^2[( λ_0+λ_2)(λ_1+λ_3)
±√((λ_0+λ_2)^2(λ_1+λ_3)^2 - 16λ_0λ_1λ_2λ_3)]
= 1/8^2[4(1-g^2)± 8√((1+g)^2(Im f)^2 + (1-g)^2(Re f)^2 - 4(Re f)^2(Im f)^2)].
The failure probability for the protocol is therefore given by
p_f = 1/2[1 - 1/4√((λ_0+λ_2)(λ_1+λ_3)
+ √((λ_0+λ_2)^2(λ_1+λ_3)^2 - 16λ_0λ_1λ_2λ_3))
- 1/4√((λ_0+λ_2)(λ_1+λ_3)
- √((λ_0+λ_2)^2(λ_1+λ_3)^2 - 16λ_0λ_1λ_2λ_3))]
=1/2[1 - 1/2√(1-g^2+ 2√(|f|^2(1+g^2) + 2g[(Im f)^2-(Re f)^2] - 4(Re f)^2(Im f)^2))
- 1/2√(1-g^2- 2√(|f|^2(1+g^2) + 2g[(Im f)^2-(Re f)^2] - 4(Re f)^2(Im f)^2))].
This is the probability that an honest Bob obtains an incorrect bit value.
§ OPTIMAL PURE-STATE PROTOCOLS
For an optimal protocol, it holds that for a given failure probability p_f, Bob's cheating probability p_r is as low as possible, and vice versa. The failure probability p_f and the cheating probability p_r are given in equations (<ref>) (which is the same as (<ref>)) and (<ref>) respectively. We will minimize p_f for a given (fixed) p_r=|∑_i√(λ_i)|^2/16. We first rewrite equation (<ref>) in the more convenient form
8(1-2p_f)^2 = (λ_0+λ_2)(λ_1+λ_3)+4√(λ_0123).
In the range 0≤ p_f≤ 1/2, minimising p_f is equivalent to maximising (1-2p_f)^2. It holds that
λ_0+λ_1+λ_2+λ_3=4,
where we note that λ_i are all real, and λ_i≥ 0.
We define
a=(√(λ_0)+√(λ_1)+√(λ_2)+√(λ_3))/4
b=(√(λ_0)-√(λ_1)+√(λ_2)-√(λ_3))/4
c=(√(λ_0)+√(λ_1)-√(λ_2)-√(λ_3))/4
d=(√(λ_0)-√(λ_1)-√(λ_2)+√(λ_3))/4,
which means that a^2=p_r is a constant. We also obtain
a^2+b^2+c^2+d^2=1/4(λ_0+λ_1+λ_2+λ_3)=1,
which together means that
b^2+c^2+d^2=1-p_r
is also a constant. Equation (<ref>) becomes
(1-2p_f)^2=(a^2-b^2)^2+(c^2-d^2)^2,
which should be maximized. Recall that a^2=p_r and b^2+c^2+d^2=1-p_r are constant. Clearly, (1-2p_f)^2 will be maximized if we can choose b=0, and either c=0 and d^2=1-p_r, or c^2=1-p_r and d=0. If this is possible, then we obtain
(1-2p_f)^2=p_r^2+(1-p_r)^2.
It is however not always possible to choose b=0, and either c=0 or d=0. For example, if b=d=0, then a=√(p_r) and c=√(1-p_r), and we obtain
√(λ_0)=a+b+c+d=√(p_r)+√(1-p_r)
√(λ_1)=a-b+c-d=√(p_r)+√(1-p_r)
√(λ_2)=a+b-c-d=√(p_r)-√(1-p_r)
√(λ_3)=a-b-c+d=√(p_r)-√(1-p_r).
Since λ_i≥ 0, it must hold that 1/2≤ p_r≤ 1 for this to be possible. This corresponds to 0≤ p_f≤ (1-1/√(2))/2≈ 0.15, where it then holds that
p_f = 1/2[1-√(p_r^2+(1-p_r)^2)].
When 1/2 < p_r, the space spanned by the corresponding optimal states |ψ_ij⟩ is four-dimensional, and becomes two-dimensional for p_r=1/2 (equivalent to the “Wiesner" states).
When 1/4≤ p_r < 1/2, we must instead choose a^2=c^2=p_r and b^2=d^2=1/2-p_r to maximize (1-2p_f)^2 (equivalently, to minimize p_f). Alternatively, we can choose a^2=d^2 and b^2=c^2. It will either hold that λ_1=λ_2=0 or that λ_2=λ_3=0, meaning that the corresponding optimal sets of states |ψ_ij⟩ span a two-dimensional space. It then holds that
p_r=1/4(1+√(2))-1/√(2)p_f,
which is the lowest possible cheating probability p_r in the range (1-1/√(2))/2 ≤ p_f≤ 1/2, corresponding to 1/4≤ p_r≤ 1/2, for protocols using symmetric pure states. The results in this Appendix have also been verified numerically.
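One possible form of such a numerical check (ours, a randomized sanity check rather than a proof) is sketched below: for a fixed p_r, admissible random choices of (b, c, d) never give a smaller p_f than the claimed optimum.

import numpy as np

rng = np.random.default_rng(1)

def p_f_from(a, b, c, d):
    return 0.5 * (1 - np.sqrt((a**2 - b**2)**2 + (c**2 - d**2)**2))

for p_r in [0.3, 0.45, 0.6, 0.8]:
    a = np.sqrt(p_r)
    if p_r >= 0.5:
        best = 0.5 * (1 - np.sqrt(p_r**2 + (1 - p_r)**2))
    else:
        best = np.sqrt(2) * (0.25 * (1 + np.sqrt(2)) - p_r)   # inverted straight-line relation
    found = 1.0
    for _ in range(20000):
        v = rng.random(3)
        norm = np.sqrt(np.sum(v**2))
        if norm == 0:
            continue
        v = v * np.sqrt(1 - p_r) / norm          # enforce b^2 + c^2 + d^2 = 1 - p_r
        b, c, d = v
        s = np.array([a + b + c + d, a - b + c - d, a + b - c - d, a - b - c + d])
        if np.all(s >= -1e-12):                  # sqrt(lambda_i) must be non-negative
            found = min(found, p_f_from(a, b, c, d))
    print(p_r, round(best, 4), round(found, 4))  # found never drops below best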
| 1-out-of-2 oblivious transfer (OT) is a cryptographic primitive where a sender Alice holds two bits, and a receiver Bob receives one of them. The receiver should not know the other bit, and the sender should not know which bit the receiver obtained. A dishonest sender Alice will attempt to find out which bit Bob obtained, and a dishonest receiver Bob will attempt to learn both bit values. Apart from 1-out-of-2 oblivious transfer <cit.>, there are other variants. In Rabin oblivious transfer <cit.>, for example, a single bit is either received or not. Oblivious transfer is important not the least because it is universal for multiparty computation <cit.>.
Unlike for quantum key distribution, information-theoretically secure perfect quantum oblivious transfer is impossible <cit.>. It does become possible with restrictions on quantum memory for cheating parties <cit.>.
Variants of oblivious transfer where the parties are constrained by special relativity are also possible <cit.>.
While perfect quantum oblivious transfer is impossible without restrictions on cheating parties,
quantum protocols with bounded cheating probabilities for the (otherwise unrestricted) parties are still possible.
Currently known protocols however do not achieve cheating probabilities that are tight with existing lower bounds. The known bounds also hold specifically for “complete" protocols <cit.>, where completeness means that the protocol always works correctly if the parties are honest. For such protocols, a lower bound on the greater of Alice's and Bob's cheating probabilities is 2/3 in general <cit.> and ≈ 0.749 if symmetric pure states are used <cit.>. Another variant of oblivious transfer is XOR oblivious transfer (XOT) where the sender has two bits and a receiver obtains either the first bit, the second bit, or their XOR. Using pure symmetric states, the XOT protocol has been demonstrated to be an optimal protocol having lower cheating probabilities than the classical XOT protocols <cit.>.
We will here instead examine non-interactive incomplete protocols for 1-out-of-2 quantum oblivious transfer. Non-interactive means that there is no back-and-forward communication, either classical or quantum; the protocols have a single step, where one party sends classical and/or quantum information to the other party, who measures what is received. Some of our results also provide bounds valid for interactive protocols.
“Incomplete" means that a protocol might fail even when both parties follow the protocol. Here protocol failure will mean that Bob obtains an incorrect bit value. There are several reasons to investigate protocols that can fail. It turns out that a non-zero failure probability means that cheating probabilities can be lowered. There is also a connection to quantum random access codes <cit.>, as will be discussed at the end; incomplete oblivious transfer can be seen as a generalisation that “interpolates" between random access codes and complete oblivious transfer. Noise and imperfections will also often lead to a non-zero failure probability in implementations.
Specifically, we investigate incomplete protocols for quantum oblivious transfer, where the sender Alice can never cheat any better than with a random guess. That is, the security against a cheating Alice is “perfect". For a given protocol failure probability p_f, the goal is then to minimize Bob's cheating probability p_r, and vice versa. Bob's cheating probability is the probability that he correctly guesses both of Alice's bit values.
We discuss the definition of protocol failure in Section <ref>, non-interactive classical protocols in Section <ref>, and some general properties of incomplete quantum oblivious transfer in Section <ref>. In Section <ref>, we find the lowest possible cheating probability for receiver Bob for a given protocol failure probability for protocols using symmetric pure states, and in <ref>, we give examples of optimal quantum protocols of this type. The example protocols are feasible to realize with current technology, for example using photons. We finish with a discussion.
http://arxiv.org/abs/2409.17429v1 | 20240925233259 | Real-World Data Inspired Interactive Connected Traffic Scenario Generation | [
"Junwei You",
"Pei Li",
"Yang Cheng",
"Keshu Wu",
"Rui Gan",
"Steven T. Parker",
"Bin Ran"
] | cs.RO | [
"cs.RO"
] |
Real-World Data Inspired Interactive Connected Traffic Scenario Generation
Junwei You
University of Wisconsin-Madison
Email: [email protected]
Pei Li, Ph.D., Corresponding Author
University of Wisconsin-Madison
Email: [email protected]
Yang Cheng, Ph.D.
University of Wisconsin-Madison
Email: [email protected]
Keshu Wu, Ph.D.
University of Wisconsin-Madison
Email: [email protected]
Rui Gan
University of Wisconsin-Madison
Email: [email protected]
Steven T. Parker, Ph.D.
University of Wisconsin-Madison
Email: [email protected]
Bin Ran, Ph.D.
University of Wisconsin-Madison
Email: [email protected]
§ ABSTRACT
Simulation is a crucial step in ensuring accurate, efficient, and realistic Connected and Autonomous Vehicles (CAVs) testing and validation. As the adoption of CAV accelerates, the integration of real-world data into simulation environments becomes increasingly critical. Among various technologies utilized by CAVs, Vehicle-to-Everything (V2X) communication plays a crucial role in ensuring a seamless transmission of information between CAVs, infrastructure, and other road users. However, most existing studies have focused on developing and testing communication protocols, resource allocation strategies, and data dissemination techniques in V2X. There is a gap where real-world V2X data is integrated into simulations to generate diverse and high-fidelity traffic scenarios. To fulfill this research gap, we leverage real-world Signal Phase and Timing (SPaT) data from Roadside Units (RSUs) to enhance the fidelity of CAV simulations. Moreover, we developed an algorithm that enables Autonomous Vehicles (AVs) to respond dynamically to real-time traffic signal data, simulating realistic V2X communication scenarios. Such high-fidelity simulation environments can generate multimodal data, including trajectory, semantic camera, depth camera, and bird's eye view data for various traffic scenarios. The generated scenarios and data provide invaluable insights into AVs' interactions with traffic infrastructure and other road users. This work aims to bridge the gap between theoretical research and practical deployment of CAVs, facilitating the development of smarter and safer transportation systems.
Keywords: Vehicle-to-Everything, Connected and Autonomous Vehicles, Signal Phase and Timing, High-Fidelity Traffic Simulation
§ INTRODUCTION
The development of Connected and Autonomous Vehicles (CAVs) is revolutionizing the transportation industry by integrating advanced perception, control, and communication technologies. In particular, Vehicle-to-Everything (V2X) communication enables CAVs to interact with other road users and with surrounding infrastructure, such as traffic signals and road signs <cit.>. V2X aims to enhance road safety, optimize traffic flow, and reduce environmental impact <cit.>. Among the different forms of V2X, Vehicle-to-Infrastructure (V2I) communication, facilitated by Roadside Units (RSUs), plays a crucial role particularly.
§.§ V2X Simulation
The development of V2X communication technologies and their integration with AVs has led to a variety of simulation frameworks aimed at testing and refining these systems. These frameworks are crucial for understanding how V2X and AV technologies can be effectively deployed in real-world environments. For example, one of the foundational studies in this area <cit.> presents a simulation platform that integrates network and driving scenarios to assess the impact of Cellular Vehicle-to-Everything (C-V2X) technology on vehicle communication and control. This platform integrates network and driving scenarios using SUMO and CARLA simulators, which enables the evaluation of road traffic and vehicle dynamics under a C-V2X framework. Another study <cit.> focuses on simulating vehicle platooning, demonstrating the potential of V2X to enhance traffic efficiency by coordinating vehicle movements. The importance of integrating V2X sensors into existing simulation tools is highlighted in <cit.>. This study expands the CARLA simulator's capabilities by enabling detailed simulations of V2X interactions, particularly in urban environments where complex traffic scenarios are common. Meanwhile, <cit.> underscores the necessity for simulation frameworks that can accommodate various V2X communication protocols, ensuring comprehensive testing of system interoperability.
Performance analysis is a critical component of these studies, as seen in <cit.> which evaluates the communication protocol's reliability and efficiency. Similarly, <cit.> provides a detailed examination of latency and throughput, essential metrics for assessing the feasibility of deploying V2X systems in densely populated urban areas. The findings highlight the critical role of media access control (MAC) layer parameters, such as resource reselection probability and channel bandwidth, in influencing packet reception ratios and overall communication reliability. To conduct performance analysis, specialized simulation tools play a vital role. For example, <cit.> offers a platform specifically designed to test V2X applications like traffic signal prioritization, providing valuable insights into how these systems can be optimized for real-world use. The emphasis on realistic data in simulations is also highlighted by <cit.> which addresses the challenges of simulating V2X communications under variable environmental conditions. In addition, comprehensive simulation environments offer a holistic approach that is crucial for evaluating V2X protocols and their effectiveness in improving traffic flow and safety <cit.>. They integrate multiple aspects of traffic and network modeling, facilitating the development of advanced traffic management systems. Lastly, <cit.> explores how V2X technologies can enhance the capabilities of AVs, particularly through applications like cooperative adaptive cruise control. This study highlights the synergy between V2X and AV technologies, demonstrating how they can be integrated to create more efficient and safer transportation systems.
These papers collectively provide a comprehensive overview of the current state of V2X simulation and its critical role in the development of CAV technologies. They underscore the importance of versatile and high-fidelity simulation tools in evaluating these systems' performance and scalability, paving the way for more intelligent and adaptive traffic management solutions.
§.§ RSU Data Utilization
The integration of Roadside Units (RSUs) in vehicular networks is pivotal for enhancing V2X communication and data services, which are critical for the development of CAVs. The literature reveals various strategies and frameworks focused on optimizing RSU data utilization, addressing key aspects such as data scheduling, dissemination, and resource allocation. Specifically, efficient data scheduling is a recurrent theme across several studies, which highlights the importance of managing the flow of information between vehicles and RSUs. <cit.> proposes a mechanism for prioritizing data based on predicted connection times, which helps in reducing network congestion and ensuring timely data delivery. Similarly, <cit.> emphasizes the role of advanced scheduling techniques in minimizing latency and improving service quality. These approaches are crucial in dynamic environments where vehicular connections with RSUs are intermittent and unpredictable. Moreover, the allocation of resources in RSU-empowered vehicular networks is another critical area of focus. In <cit.>, the authors explore strategies to optimize data transmission and reduce latency through intelligent resource management. This study, along with <cit.>, underscores the need for adaptive frameworks that can respond to changing network conditions and traffic patterns.
Ensuring reliable and efficient data transmission is essential for the effective operation of vehicular networks. <cit.> investigates the use of edge computing to offload data processing tasks, thereby enhancing throughput and reducing the computational burden on vehicles. Additionally, <cit.> introduces network coding techniques to improve data dissemination, reduce redundancy, and optimize bandwidth usage. These studies highlight innovative solutions to common challenges in data-intensive vehicular networks. Moreover, the dynamic nature of vehicular networks necessitates robust frameworks for real-time data access and management. The research <cit.> focuses on differentiating between safety-critical and non-safety-critical data, proposing a framework that adapts to traffic conditions to ensure timely access to vital information. This approach is critical for applications requiring immediate response, such as collision avoidance systems and real-time traffic updates. Lastly, integrating V2V (Vehicle-to-Vehicle) and V2I communication is explored in <cit.>. This study examines the potential of hybrid communication networks to enhance data transmission reliability and improve routing efficiency in complex urban settings. By leveraging both V2V and V2I communications, these networks can offer more resilient and flexible data transmission solutions.
These studies collectively emphasize the crucial role of RSUs in the broader context of vehicular communication networks. By addressing challenges related to data scheduling, resource allocation, data throughput, and real-time access, they provide a foundation for developing more efficient and reliable transportation systems. These advancements are essential for the successful implementation of CAVs, where seamless communication between vehicles and infrastructure is a cornerstone. However, the existing literature primarily focuses on developing and testing communication protocols, resource allocation strategies, and data dissemination techniques. Notably, there is a gap where real-world RSU data is integrated into simulations to generate diverse and high-fidelity traffic scenarios. Utilizing real-world RSU data could provide valuable insights into interactive vehicle behavior, traffic dynamics, and system performance under various conditions, thereby enhancing the fidelity and applicability of CAV simulations. The approach proposed in this study aims at bridging this gap between theoretical research and practical implementation, offering a more comprehensive understanding of the challenges and opportunities in deploying V2X technologies in real-world settings.
Specifically, the main contribution of this paper is threefold:
* This study processes and incorporates real-world Signal Phase and Timing (SPaT) data from RSUs into a simulation environment, which enhances its fidelity and applicability.
* This study develops an algorithm to simulate how Autonomous Vehicles (AVs) operate and respond upon receiving real-time SPaT data through V2X communication, demonstrating practical V2I connectivity in the simulation environment.
* The research generates a variety of traffic scenarios from vehicle interactions with real-world SPaT data, providing a comprehensive testing ground for CAVs.
The remainder of this paper is organized as follows. The second section details the methods used in this study. The third section presents the parameter and simulation setups and showcases the generated scenarios. The last section concludes the paper and provides future work.
§ METHODOLOGY
§.§ Framework
The framework of this study is shown in Figure <ref>, which illustrates the process of integrating real-world RSU SPaT data into a simulation environment to generate comprehensive traffic scenarios. The upper part of the diagram shows the procedure for processing raw RSU data into structured SPaT messages. These messages provide detailed traffic signal timing information that is critical for a high-fidelity simulation environment. In the lower part of the diagram, the simulation environment is configured using the structured SPaT messages to accurately represent intersection signal phases and timings. AVs in the simulation can respond proactively to these real-time signal inputs once within the intersection area, simulating V2I interactions. This setup further generates multimodal traffic scenarios with detailed sensor data, including trajectory, RGB camera, and depth camera data from both vehicles and roadside sensors. The following sections will elaborate on each module in detail.
§.§ Data Collection and Parsing
In general, encoded raw data from RSUs placed along the existing arterial and freeway in Madison, WI, is received in a distributed manner by three edge servers, as shown in Figure <ref>. So far, 21 RSUs have been installed, including 15 RSUs along Park Street and 6 RSUs along the Beltline freeway <cit.>.
The received SPaT, MAP, and Basic Safety Messages (BSM) are stored in the database through an API on another edge server. The database is connected to decoders for message decoding, and the decoded messages are archived back in the database. The database is also connected to the SPaT dashboard through an API for data display. The entire framework is shown in Figure <ref>.
Specifically, this study leverages SPaT data from the intersection of Park Street and Dayton Street, as shown in Figure <ref>, for interactive traffic scenario generation. The SPaT data is collected by the intersection's signal controller and then transmitted to the RSU. Figure <ref> shows the signal controller and RSU used in this study. Using a developed data pipeline, the encoded SPaT messages are then received from the RSU and stored in the local database <cit.>.
The raw SPaT messages are decoded with the Pycrate library <cit.>, an open-source toolkit for handling various telecommunications and networking protocols, such as LTE (4G) and 5G NR. The library is licensed under LGPL v2.1 and offers robust ASN.1 support for data description. Table 1 shows an example of a raw SPaT message and the corresponding message decoded with Pycrate.
The decoded message adheres to the SAE J2735 standard for Dedicated Short Range Communications (DSRC) in vehicular environments. It contains a variety of fields, starting with the "messageId," which identifies the type of message, likely a SPaT message used to convey traffic signal states. The "value" field encompasses the main content of the message, including the "timeStamp" that indicates when the message was generated, ensuring synchronization across systems. Within the "intersections" section, details about one or more intersections are provided, including a unique "id" for each intersection, ensuring accurate identification. The "revision" field denotes the version or update status of the intersection data, maintaining the currency of information. The "status" field may reflect the operational state of the intersection, indicating normal operation or highlighting issues. The "states" array lists the current states of traffic signals at the intersection, describing each with specific attributes. The "eventState" indicates the current signal phase, such as "Stop-And-Remain" or "protected-Movement-Allowed," dictating the actions permitted for vehicles. The "timing" information, including fields like "minEndTime," provides details on the timing of these states, such as the minimum remaining time for the current signal phase. The "signalGroup" identifies groups of signals controlled together, typically representing sets of lanes or directions at the intersection. This standardized format ensures consistent communication between vehicles and infrastructure effectively.
The decoded SPaT messages are further processed into a cleaner format for easier interpretation. The processed version of the above example is illustrated in Table 2, which presents a readable SPaT message for the intersection noted as "Dayton", where "Timestamp" indicates the time the message was generated. The table details the signal groups, numbered 1 through 8, each associated with specific lanes or directions at the intersection. For each signal group, it provides an event state, such as 'permissive-Movement-Allowed' or 'stop-And-Remain', describing the current traffic signal phase, together with the remaining time in seconds for the current signal state. The processed SPaT messages are then used to configure the signal phases of an intersection in the simulation environment.
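To make the post-processing step concrete, the sketch below flattens a decoded SPaT message into the per-signal-group view of Table 2. It is only an illustration: the nested field names ("intersections", "states", "signalGroup", "eventState", "timing"/"minEndTime") mirror the prose description of the decoded example rather than the exact ASN.1 structure, and the conversion from the raw "minEndTime" value to remaining seconds is left as a placeholder because it depends on the deployment's time reference.

```python
def flatten_spat(decoded_msg, to_remaining_seconds=lambda t: t):
    """Turn a decoded SPaT dict into rows of (signal group, event state, remaining time).

    `to_remaining_seconds` is a placeholder hook for converting the raw
    `minEndTime` value into seconds remaining.
    """
    rows = []
    for intersection in decoded_msg.get("intersections", []):
        for state in intersection.get("states", []):
            timing = state.get("timing", {})
            rows.append({
                "intersection_id": intersection.get("id"),
                "signal_group": state.get("signalGroup"),
                "event_state": state.get("eventState"),
                "remaining_s": to_remaining_seconds(timing.get("minEndTime")),
            })
    return rows
```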
§.§ V2X Collaborative Autonomous Driving Design
To ensure fully connected and automated traffic, where AVs are connected with the infrastructure through V2I communication towards collaborative autonomous driving, the logic for vehicle operation in the CARLA simulator <cit.> needs to be carefully designed. When approaching within a certain distance of an intersection, AVs need to obtain, in advance, the remaining signal time for the direction in which they are driving and respond accordingly.
By proactively obtaining signal timing information from the infrastructure, AVs can optimize their speed and trajectory well in advance, leading to smoother and more efficient navigation through intersections. This reduces the likelihood of abrupt braking or acceleration, which not only enhances passenger comfort but also improves overall traffic flow and reduces energy consumption. Furthermore, this strategy minimizes the risk of collisions by providing vehicles with accurate and timely information, enabling them to make informed decisions and avoid potential hazards. While high-level AVs with advanced sensing systems can detect and interpret traffic signals, they highly rely on vehicle sensors, which can be affected by various environmental factors such as poor lighting conditions, adverse weather, and obstructions. These factors can compromise the accuracy and reliability of the information obtained. In contrast, direct transmission of signal timing information via V2I communication ensures that AVs receive precise and real-time data, irrespective of external conditions. This method not only enhances the reliability of the information but also reduces the computational burden on the vehicles' onboard systems, allowing them to focus on other critical tasks. Hence, in this study, we aim to design and simulate the behavior of AVs in the CARLA simulator as they receive real-time traffic signal data upon entering a certain range. This simulation will enable us to generate high-fidelity and complex multimodal traffic scenarios, providing a rich dataset for further research.
Algorithm 1 demonstrates in detail how AVs respond and operate upon receiving real-time traffic signal data. AVs are enabled in CARLA by applying the Autopilot function in advance. In the CARLA simulator, the Autopilot feature refers to a built-in, AI-controlled driving system that allows vehicles to navigate and operate autonomously within the simulated environment. Specifically, the proposed algorithm initiates by continuously monitoring the AV's location and determining if it is within a predetermined intersection area. Upon entering this zone, the AV's current speed and location are assessed, and the relevant traffic light for its direction is identified.
Since AVs receive traffic signal information in advance, they can respond proactively to navigate the intersection. Based on the traffic light state—red, yellow, or green—the AV's target speed is adjusted preemptively. For instance, if the light is red and the AV is close to the intersection, it will come to a stop. If the light is yellow, the AV will slow down significantly. If the light is green, the AV will proceed at the speed limit. Since the Autopilot function is already enabled, AVs have been equipped to handle complex conditions. This approach simplifies the AV control strategy by focusing on connectivity, allowing vehicles to react in advance to the received signal information, thus managing intersection navigation effectively.
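A simplified, framework-agnostic sketch of this speed-adjustment rule is given below; the distance threshold and reduced speeds are illustrative values, not those of the actual implementation.

```python
def target_speed(light_state, distance_to_intersection, speed_limit=15.0):
    """Desired speed (m/s) for an AV that has received the signal state via V2I."""
    if light_state == "red":
        # Stop if already close to the stop line, otherwise coast in slowly.
        return 0.0 if distance_to_intersection < 20.0 else 5.0
    if light_state == "yellow":
        # Slow down significantly while approaching.
        return min(speed_limit, 5.0)
    # Green: proceed at the speed limit.
    return speed_limit
```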
§ EXPERIMENT
With the proactive intersection navigation algorithm in place, we can simulate AVs in CARLA using real-world signal data. Specifically, we utilize the processed SPaT data from the intersection of Park Street and Dayton Street, collected, decoded, and processed as described earlier. We further structure the data into a sequence of signal phases derived from consecutive processed SPaT messages. Table 3 below illustrates a representative sample of the traffic light phases used in the simulation, detailing the duration and state of the traffic lights for both directions during each phase. The structured data can then be used to configure the signal phase and timing of a four-way intersection in CARLA.
§.§ Simulation Setup
We use the default map Town 10 in CARLA, as shown in Figure <ref> (a), where the target four-way intersection is marked by the yellow circle. Figure <ref> (b) marks the traffic signals of this intersection. The signal phase and timing of this intersection are set using the real-world SPaT data, as described in the previous section. The number of vehicles spawned is 100, and the speed limit is set to 15 m/s. The intersection area in this study is defined as a square extending 35 meters in each direction from the center of the intersection. The center of the intersection C_intersection can be calculated from the exact locations of the traffic signals, as shown in the following equation:
C_intersection = (∑_i=1^n NS_i + ∑_j=1^m EW_j) / (n + m)
where NS_i represents the location of the i-th traffic light in the north-south direction, EW_j represents the location of the j-th traffic light in the east-west direction, n is the total number of traffic lights in the north-south direction, and m is the total number in the east-west direction. Normally, m = n = 2. This formula calculates the centroid of the traffic light locations. The simulation begins with the start of the first signal cycle and terminates at the end of the last cycle.
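In code, the centroid computation above reduces to averaging the known traffic-light positions; the sketch below assumes the positions are available as (x, y) tuples, and the example coordinates are placeholders.

```python
def intersection_center(ns_lights, ew_lights):
    """Centroid of the traffic-light locations in the north-south and east-west directions."""
    points = list(ns_lights) + list(ew_lights)
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    return cx, cy

# Example: two lights per direction, as is normally the case (m = n = 2).
center = intersection_center([(10.0, 40.0), (12.0, -38.0)], [(-35.0, 2.0), (41.0, 0.0)])
```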
§.§ Interactive Scenario Generation
The objective of the simulation is to generate interactive traffic scenarios, which involve real-time interactions between AVs and other road users and the infrastructure. In this study, the interaction is uniquely facilitated and enhanced by the communication of real-world traffic signal data, which allows AVs to adjust their behavior proactively and dynamically based on traffic conditions.
In these scenarios, two main types of data are collected: trajectory data and sensor data. The trajectory data includes detailed records of the AVs' movements, such as position, speed, and acceleration. Figure <ref> illustrates the movement patterns of various vehicles entering and passing through the intersection area, showing the vehicle positions in a two-dimensional plane. The color gradient indicates the temporal progression of the vehicle trajectories, as well as the sequence in which different vehicles arrive at and pass through the intersection. These trajectories are the direct result of AVs responding to the received real-time traffic signal data.
Sensor data encompasses both vehicle and infrastructure sensor data, each offering critical insights into the environment around the AV. Vehicle sensor data, gathered from the onboard cameras, provides a detailed account of the vehicle's immediate environment, including other vehicles, static obstacles, drivable areas, lane markings, and traffic signals. This data is essential for real-time navigation and decision-making. Additionally, roadside camera data complements this information by offering a broader perspective of the traffic scenario, enhancing the overall situational awareness of the AV system. Figure <ref> illustrates various types of sensor data captured during the simulation, showcasing the different imaging modalities used to gather comprehensive environmental information.
The cameras used in these simulations are carefully configured to optimize the quality and relevance of the data collected, as listed below:
* RGB Camera Images in Figure <ref> (a): These images provide a standard visual representation of the environment, showing the appearance of surroundings, which may include vehicles, lane marks, road signs, and other objects. The RGB camera is configured with a resolution of 640 x 480 pixels and a 110-degree field of view (FOV), capturing detailed visual data fundamental for tasks such as object detection and recognition.
* Semantic Camera Images in Figure <ref> (b): These images are processed to segment the scene into different categories, highlighting key features such as lanes, drivable areas, and various object types. The semantic camera uses the same resolution and FOV as the RGB camera. Different colors in these images represent various objects and areas, aiding the AV in understanding the structure of the environment and assisting in tasks like lane keeping and navigation.
* Depth Camera Images in Figure <ref> (c): These images provide information on the distance between the vehicle and surrounding objects, represented in shades of gray. Darker tones indicate closer objects, while lighter tones denote further distances. The depth camera, configured with the same 640 x 480 pixel resolution and 110-degree FOV, is crucial for spatial awareness and collision avoidance, which helps the vehicle gauge distances accurately.
* Bird's Eye View (BEV) Images in Figure <ref> (d): BEV images offer an overhead perspective, providing a comprehensive layout of the intersection and surrounding areas. These images are captured 25 meters above the vehicle with a pitch angle of -90 degrees, offering a top-down view essential for monitoring traffic flow, vehicle positioning, and strategic planning in dense traffic situations. This camera also uses the same resolution and FOV, ensuring consistency in data quality across different views.
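As a reference for how such cameras are typically attached through CARLA's Python API (0.9.x), the sketch below spawns the four cameras with the resolution and FOV listed above. The blueprint identifiers are the standard CARLA sensor names; the onboard mounting offset and output paths are assumptions for illustration only.

```python
import carla

def attach_cameras(world, vehicle):
    """Attach RGB, semantic, depth and BEV cameras with the settings listed above."""
    bp_lib = world.get_blueprint_library()
    onboard = carla.Transform(carla.Location(x=1.5, z=2.0))                  # assumed mounting offset
    bev = carla.Transform(carla.Location(z=25.0), carla.Rotation(pitch=-90.0))
    specs = {
        'rgb': ('sensor.camera.rgb', onboard),
        'semantic': ('sensor.camera.semantic_segmentation', onboard),
        'depth': ('sensor.camera.depth', onboard),
        'bev': ('sensor.camera.rgb', bev),
    }
    sensors = {}
    for name, (bp_id, transform) in specs.items():
        bp = bp_lib.find(bp_id)
        bp.set_attribute('image_size_x', '640')
        bp.set_attribute('image_size_y', '480')
        bp.set_attribute('fov', '110')
        cam = world.spawn_actor(bp, transform, attach_to=vehicle)
        cam.listen(lambda image, n=name: image.save_to_disk('_out/%s/%06d.png' % (n, image.frame)))
        sensors[name] = cam
    return sensors
```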
These rich vehicle sensor data, acting as an indirect viewpoint of the AVs' response to traffic signal data, offer insights into the vehicle's status from an environmental perspective. This data is crucial for understanding how the connected and automated vehicle's intrinsic autonomous driving capabilities harmonize with external information from connected infrastructure, highlighting the integration and synergy between onboard systems and external guidance.
Roadside camera data, as shown in Figure <ref>, offers a broader view of the intersection, capturing the overall traffic flow and providing an external validation of the AVs' actions. This data is invaluable for cross-verifying the accuracy and effectiveness of the vehicle's responses to the traffic signals and the actions of other road users. The roadside camera is configured to capture high-quality visual data with an RGB camera set to a resolution of 640 x 480 pixels and a 110-degree FOV. It is strategically positioned at a height of 20 meters above the ground, with a pitch angle of -50 degrees and a yaw of 25 degrees, which provides a comprehensive and angled view of the intersection area. Moreover, Table 4 summarizes the types of sensors used in the simulation, along with their specific parameters and unique features. These configurations are designed to be flexible and versatile, allowing adjustments based on personalized user needs and specific research objectives. This adaptability ensures that the data captured is highly relevant and useful for analyzing the AVs' behaviors and decision-making processes in response to real-time traffic conditions.
The integration of trajectory and sensor data provides a robust foundation for a comprehensive analysis of the AVs' performance, especially regarding safety, efficiency, and compliance with traffic regulations. Utilizing real-world traffic signal data allows researchers to observe and evaluate the AVs' behavior under real traffic conditions, highlighting the vehicles' responses to dynamic signal changes and the behavior of other road users. The scenarios generated in this experiment, underpinned by real-world signal data, serve as an essential testing ground for the advancement of CAV technologies. They provide invaluable insights into how AVs can be more effectively integrated with existing traffic management systems, and potentially foster the development of more intelligent and adaptive traffic solutions. The use of real-world data not only enhances the fidelity, realism and reliability of the simulations but also ensures that the findings apply to realistic urban settings.
§ CONCLUSION
This study underscores the significance of incorporating real-world traffic data into simulation frameworks to advance the evaluation and implementation of CAVs. By integrating real-world SPaT data from RSUs into the CARLA simulation environment, we have created a high-fidelity and applicable testing ground for evaluating V2X communication utilized by AVs. The key contributions of this work include developing an algorithm that simulates AVs' response to real-time traffic signals and generating comprehensive traffic scenarios. These efforts provide a robust framework for analyzing the effectiveness and reliability of V2X communication systems, highlighting the interactions between AVs and other road users and the infrastructure. The use of real-world data enhances the fidelity of the simulation. Moreover, the framework offers a variety of sensor outputs in dynamic traffic scenarios, providing comprehensive data for analyzing AV behavior, traffic flow, safety dynamics, etc. Such insights are vital for refining AV algorithms with enhanced safety and efficiency.
Future research should aim to incorporate additional real-world data, including data related to environmental conditions, pedestrian activities, and unexpected traffic events, to further enrich the simulation environment. For example, expanding the simulation environments from urban to rural conditions will provide a more comprehensive understanding of V2X system performance across diverse contexts. Furthermore, the development of advanced predictive models could enhance the responsiveness and efficiency of AVs in dynamic traffic scenarios, optimizing their interactions with both infrastructure and other road users.
§ ACKNOWLEDGEMENTS
The Park Street Smart Corridor is being developed through a collaboration of the TOPS Lab, the City of Madison, Traffic and Parking Control Products and Solutions (TAPCO), and the Wisconsin Department of Transportation. The ideas and views expressed in this paper are strictly those of the TOPS Lab at the University of Wisconsin.
§ AUTHOR CONTRIBUTIONS
The authors confirm their contribution to the paper as follows: study conception and design: Junwei You, Pei Li, Yang Cheng, Keshu Wu, Rui Gan, Steven T. Parker; algorithm development and program development: Junwei You, Pei Li, Yang Cheng, Keshu Wu, Rui Gan; data preparation and analysis: Junwei You, Pei Li, Keshu Wu; manuscript preparation: Junwei You, Pei Li, Yang Cheng, Keshu Wu, Rui Gan, Steven T. Parker, Bin Ran. All authors reviewed the results and approved the final version of the manuscript.
http://arxiv.org/abs/2409.17131v1 | 20240925174902 | Enhancing robot reliability for health-care facilities by means of Human-Aware Navigation Planning | [
"Olga E. Sorokoletova",
"Lucca Iocchi"
] | cs.RO | [
"cs.RO"
] |
Olga E. Sorokoletova ([email protected])
Luca Iocchi
Sapienza University of Rome, 5 Piazzale Aldo Moro, Rome RM, 00185, Italy
§ ABSTRACT
With the aim of enabling robots to cooperate with humans, carry out human-like tasks, or navigate among humans, we need to ensure that they are equipped with the ability to comprehend human behaviors and use the extracted knowledge for intelligent decision-making. This ability is particularly important in the safety-critical and human-centred environment of health-care institutions. In the field of robotic navigation, the most cutting-edge approaches to enhancing robot reliability in the application domain of healthcare facilities and in general pertain to augmenting navigation systems with human-aware properties. To implement this in our work, the Co-operative Human-Aware Navigation planner has been integrated into the ROS-based differential-drive robot MARRtina and exhaustively challenged within various simulated contexts and scenarios (mainly modelling the situations relevant in the medical domain) to draw attention to the integrated system's benefits and identify its drawbacks or instances of poor performance while exploring the scope of system capabilities and creating a full characterization of its applicability. The simulation results are then presented to medical experts, and the enhanced robot acceptability within the domain is validated with them as the robot is further planned for deployment.
social robotics, human-aware navigation, path planning
Enhancing robot reliability for health-care facilities by means of Human-Aware Navigation Planning
September 28, 2024
==================================================================================================
§ INTRODUCTION
Taking into account the specificity of the application domain is one of the crucial considerations while developing a robotic system, and the chosen domain of health-care facilities is a quick-paced and safety-critical environment, which is frequently overcrowded, chaotic, and understaffed. The robotics community has been researching the use of robots in hospital settings to reduce the workload of caregivers by utilizing robots for low-value duties such as medical supply delivery or patient assistance at the bedside when clinicians are not available.
Nowadays, the number of autonomous mobile robots serving in hospitals is rather low. This reveals a major gap in terms of existing approaches and is explained by the fact that the higher the desired level of autonomy, the more challenging it is to provide it while also satisfying safety criteria of primary importance for the considered application domain. Thus, the clinical environment is a unique safety-critical environment that poses specific navigation challenges for robots <cit.>, <cit.> and gives rise to particular scenarios in which the navigational conflicts between humans and robots must be resolved.
For example, a robot in an Intensive Care Unit (ICU) is likely to encounter the situation when a group of Health-Care Workers (HCWs), some of whom may have been involved in other tasks and whose predictions of behaviors may have already been derived by the robot's planner, suddenly rush to the bed of a certain high-acuity patient to perform a life-saving treatment. In this scenario, to address the challenge of performing the situation assessment and responding to the changes in dynamics of the environment due to changing human plans, robots must be more than purely reactive; they must be proactive, adaptive, anticipative, and capable of intelligent decision-making.
Besides that, robot adaptability is required, which is defined as the ability of a robot to adapt to a new environment with different characteristics. To illustrate, let us assume that in a particular hospital, the ICU is a room on the Emergency Department (ED) floor. The ICU resembles a crowded open-space kind of environment (beds are normally installed along the walls, and often there is a wall and/or nursing station in the center of the room); meanwhile, the corridors of the ED could be not so crowded but cluttered with trolleys and appliances. Therefore, while exiting the ICU to navigate in the ED hallways and vice versa, the robot must adapt to changes and exhibit different types of behavior.
In our work, we address the described challenges and also the challenge of increasing the robot's acceptability and mitigating inconsistencies between the robot's actual and expected behavior by means of Human-Aware Navigation Planning. Specifically, we integrate the Co-operative Human-Aware Navigation (CoHAN) planner designed by Singamaneni, Favier, and Alami <cit.> into a given robot software framework. The CoHAN planner enables a robot to navigate in diverse contexts while being aware of humans in the environment. The software belongs to a ROS-based differential drive robot with functionalities equivalent to those of a Turtlebot system. So far, experimentation has been carried out in simulation, but the social version of the robot in <ref>, MARRtina, is intended for deployment in a real hospital[Sant'Andrea Hospital, Roma RM, Italy] to serve as a bedside robot.
The integrated system becomes capable of performing Social Navigation as opposed to treating people simply as obstacles. The studies within the Social (or Human-Aware) Navigation field are centered on learning about various human motion patterns and relevant social aspects and developing navigation approaches that account for them. The associated research community has explored a wide range of interesting subject matters so far and developed numerous Human-Aware Navigation planners; however, these planners are mostly environment- and scenario-specific and hence do not meet the adaptability requirement indicated above. Therefore, the motivation behind choosing the CoHAN system for integration is that it is flexible, highly tunable, and easily adjustable to the various contexts, making it capable of handling complex indoor and crowded scenarios (ICU, ED).
The main contribution of this work is that, in addition to being integrated, a new planning system, enhanced with human awareness, has been thoroughly challenged, tested, and evaluated in multiple simulated human-robot scenarios and environments: ICU crowded free space, ICU with a wall in the middle, heart attack emergency in ICU, narrow hallway, free/cluttered wide ED corridor, narrow/wide door crossing scenario, patient bed approach scenario, etc. Qualitative and quantitative analysis is presented, uncovering the subtle nuances of the system's functioning and ultimately creating a comprehensive characterization of the properties and limitations of the adopted planning approach. As such, on a global level, we hope to contribute to better patient outcomes and lessen the workload of HCWs by making robots more reliable. One of the necessary steps towards this goal is to ensure that potential users accept the developed robotic technology. Therefore, after gathering feedback on the system's performance from the HCWs at the Sant'Andrea Hospital, a brief statistical analysis of the robot's acceptability by humans was carried out.
The rest of the paper is structured as follows. The background material and related works are described in Section <ref>. The CoHAN Planner’s architecture, including an overview of its features, components, and the scheme of communication between different modules and processes, is provided in Section <ref>. Following this, Section <ref> deals with the details of the integration of a CoHAN system into MARRtina software. The primary matter of the work is found in Section <ref>. It covers the experimental set-up and analysis of the results in various simulated human-robot scenarios. Finally, the work is summarized in Section <ref>, which also comments on potential future research to improve the system.
§ BACKGROUND
This section briefly explains the fundamentals and some related works of the Robot Operating System (ROS) 2D Navigation Stack[<http://wiki.ros.org/navigation>], in general, and the package with the Timed Elastic Band (TEB) local trajectory planner[<http://wiki.ros.org/teb_local_planner>], in particular, because this is exactly the component that has been made "human-aware" in the architecture of the CoHAN system. Then, the Human-Aware Navigation Planning problem is introduced.
§.§ ROS Navigation Stack
Both MARRtino robot software and the integrated Co-operative Human-Aware Navigation (CoHAN) planner are ROS-based. Citing its web page, ROS <cit.> is an open-source robotic middleware toolset that provides a structured communications layer above the host operating systems and is considered by many as the most general-purpose and commonly-used frameworks for developing robotic applications.
The atomic units for organizing software in ROS are packages, and collections of packages form stacks. In particular, to navigate with a ROS robot, the ROS Navigation Stack is used. The ROS Navigation Stack is meant for 2D maps: it takes as input information about odometry, sensor measurements, and a goal pose, and provides as output safe velocity commands that are sent to the mobile base.
Functionally, the Navigation Stack packages contain a set of ROS nodes and navigation-related algorithms that are implemented to move mobile robots autonomously from one point to another while avoiding obstacles. The basic structure of a Navigation Stack is represented by the three main blocks or modules:
* Mapping - to create a map of the environment;
* Localization - to localize a robot in the created map;
* Path Planning - to plan the robot path (Global Path Planning) and execute the planned path while avoiding obstacles (Local Path Planning).
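For reference, the whole stack can be exercised end-to-end with a minimal rospy client that sends a goal pose to the move_base action server; this is generic ROS usage rather than part of CoHAN or the MARRtina software, and the node name and goal coordinates are placeholders.

```python
#!/usr/bin/env python
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

def send_goal(x, y, orientation_w=1.0):
    # Connect to the move_base action server exposed by the Navigation Stack.
    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = 'map'
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation.w = orientation_w  # identity quaternion

    client.send_goal(goal)
    client.wait_for_result()
    return client.get_state()

if __name__ == '__main__':
    rospy.init_node('simple_goal_sender')
    send_goal(2.0, 1.5)
```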
§.§ Timed Elastic Band
The main focus of the newly integrated CoHAN system is the improvement of a particular component of the described ROS Navigation Stack: the local planner. Precisely, the Timed Elastic Band (TEB) is an optimal local trajectory planner. Its package, teb_local_planner, is implemented in ROS as a plugin to the default local planner of the 2D Navigation Stack. The underlying planning approach was introduced in <cit.>, and its core idea is to optimize locally at runtime the initial trajectory generated by a global planner with respect to:
* Trajectory execution time (time-optimal objective);
* Separation from the obstacles;
* Kinodynamic constraints (maximum velocities and accelerations).
The Elastic Band is a sequence of n intermediate robot configurations x_i = (x_i, y_i, β_i)^T ∈ R^2 × S^1, where (x_i, y_i) is the position and β_i the orientation of the robot in the map reference frame:
Q = {x_i}_i=0...n,  n ∈ ℕ
The Timed Elastic Band (TEB) augments this sequence with the time intervals between two consecutive configurations, i.e., n-1 time differences Δ T_i, each of which is the time needed to transition from one configuration to the next, giving a tuple of sequences:
B := (Q, τ),  τ = {Δ T_i}_i=0...n-1
Then, the optimization problem can be formulated as a real-time weighted multi-objective optimization of TEB defined in <ref> in terms of both configurations and time intervals. Let us denote f(B) as an objective function and B^* as an optimal value (problem solution), then:
f(B) = ∑_k γ_k f_k(B)
B^* = argmin_B f(B)
where γ_k are the weights and f_k are the individual objective functions capturing the different aspects.
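As a toy illustration of this weighted multi-objective cost, the sketch below evaluates f(B) over a discretized band; the three terms are simplified stand-ins for the execution-time, obstacle-separation and velocity-limit objectives of the actual teb_local_planner, and all weights and limits are illustrative.

```python
import math

def teb_cost(band, obstacles, v_max=0.5, d_min=0.4,
             w_time=1.0, w_obst=50.0, w_vel=10.0):
    """Toy weighted multi-objective cost f(B) over a timed elastic band.

    `band` is a list of (x, y, beta, dt) tuples: a pose plus the time
    interval to the next pose.
    """
    # Time-optimality term: total trajectory execution time.
    f_time = sum(dt for (_, _, _, dt) in band[:-1])

    # Obstacle-separation term: penalize clearances below d_min.
    f_obst = 0.0
    for (x, y, _, _) in band:
        for (ox, oy) in obstacles:
            d = math.hypot(x - ox, y - oy)
            f_obst += max(0.0, d_min - d) ** 2

    # Kinodynamic term: penalize translational velocities above v_max.
    f_vel = 0.0
    for (x0, y0, _, dt), (x1, y1, _, _) in zip(band[:-1], band[1:]):
        v = math.hypot(x1 - x0, y1 - y0) / max(dt, 1e-6)
        f_vel += max(0.0, v - v_max) ** 2

    return w_time * f_time + w_obst * f_obst + w_vel * f_vel
```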
§.§ Human-Aware Navigation
The TEB planner was further expanded with human-aware characteristics to get Human-Aware Timed Elastic Band (HATEB) <cit.> by incorporating human motion predictions and estimations and simultaneous planning for humans and the robot. By "human-aware characteristics", we mean that the robot's paths generated by the planner must not only be safe, legible, and optimal in terms of time and robot resource consumption, but also acceptable and look natural to humans. And Social (or Human-Aware) Navigation is a whole new branch of robotic research that emerged from the synthesis of Human-Robot Interaction (HRI) and Motion Planning to focus on learning how to build such paths.
A survey in <cit.> suggests that the Human-Aware Navigation challenge can be defined as a challenge of navigating while accounting for the constraints related to social aspects and rules and offers one of the possible approaches to the sub-categorization of the characteristics that a robot must exhibit in order to be considered navigating in a human-aware manner: compliance with human comfort, naturalness of motion, and sociability.
Human Comfort appears as the absence of annoyance and stress for humans during human-robot interaction sessions or in shared spaces. In a sense, it is linked to the concept of safety and heavily relies on proxemics, the study of how people perceive the proximity of others, illustrated in <ref>. Proxemics deals with the amount of space that people feel it is necessary to keep between themselves and others in order to feel comfortable.
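As an illustration of how proxemics can be turned into a navigation cost, the sketch below assigns a discomfort penalty that is maximal inside the intimate zone and decays through the personal zone; the radii only loosely follow Hall's zones and the amplitude is illustrative, so this is not CoHAN's actual costmap layer.

```python
import math

def proxemic_cost(robot_xy, human_xy, intimate=0.45, personal=1.2, amplitude=100.0):
    """Distance-based discomfort cost inspired by proxemic zones."""
    d = math.hypot(robot_xy[0] - human_xy[0], robot_xy[1] - human_xy[1])
    if d <= intimate:
        return amplitude                       # full penalty inside the intimate zone
    # Exponential decay through and beyond the personal zone.
    return amplitude * math.exp(-(d - intimate) / max(personal - intimate, 1e-6))
```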
Motion Naturalness is the ability of the robot to mimic the nature of human motion and elaborate human-alike navigation behavior by capturing and recreating human low-level motion patterns. The main struggle in this research area resides in the fact that robots have different abilities compared to humans, and therefore not all patterns are meant to be transferred.
Finally, Sociability is concerned with high-level decision-making and attempts to comply with social and cultural norms by either explicit modeling of known social protocols or learning them from humans as a spatial effect. The latter is especially advantageous because learning from humans allows their behavior to be not only predicted but also affected and exploited for spatial conflict resolution.
§ CO-OPERATIVE HUMAN-AWARE NAVIGATION PLANNER ARCHITECTURE
As introduced previously, the Co-operative Human-Aware Navigation (CoHAN) planner is the approach chosen to implement Social Navigation in our system. Its overall architecture is summarized in the block scheme in <ref> and described in the current section.
The red building blocks in the block scheme are the CoHAN-specific components that were added by Singamaneni et al. <cit.> to the standard ROS Navigation Stack to enrich it with human-aware properties. The HATEB Local Planner is a core component of the system, which is in essence a human-aware extension of the Timed Elastic Band (TEB) local planner, explained in Section <ref>. It includes two state variables: the Human State and the Planning State.
Human State is a part of the Human Tracking mechanism that controls the inclusion of the Human Safety and Human Visibility costmap layers, where the former adds cost for approaching humans too close and the latter penalizes surprise appearances from behind and makes the robot enter the human’s field of view from a larger distance.
Planning State is a criterion for the selection of a Human Path Prediction service, whereas a Human Path Prediction module is responsible for sending a request to a certain service in order to predict or suggest possible paths for the tracked dynamic humans.
To better understand the interconnections between the different components of architecture, let us describe the Human Tracking approach, the Human Path Prediction module, and the transition scheme between Planning Modes in more detail.
§.§ Human Tracking
The Human Tracking is provided by a module external to the Navigation Stack (top-right input in <ref>). Although all humans in the environment are tracked, only those within a Visible Region are considered visible to the robot. Furthermore, by introducing the Planning Radius (a tunable parameter), the Visible Region is further narrowed, and only humans within the Planning Radius are considered for planning.
The humans within the Planning Radius are called observable. Each observable human is classified as "static" or "dynamic," and the classification result is recorded in the Human State. Following that, the costmap plugin examines the Human State and adds the Human Safety and Human Visibility costmap layers around the static humans, as shown in <ref>.
When it comes to dynamic humans, the system detects the two nearest of them, attaches elastic bands, predicts their paths, and plans their possible trajectories until they move behind the robot or out of the Planning Radius.
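A minimal sketch of this selection and classification step is shown below; the data structure, the function name, and the 0.1 m/s velocity threshold used to separate static from dynamic humans are assumptions made purely for illustration.

```python
import math

def classify_observable_humans(humans, robot_xy, planning_radius, static_vel_thresh=0.1):
    """Keep only humans inside the Planning Radius and label them static or dynamic.

    humans : list of dicts {"id", "x", "y", "vx", "vy"} for all tracked humans.
    Returns the observable humans and the two nearest dynamic ones.
    """
    observable = []
    for h in humans:
        if math.hypot(h["x"] - robot_xy[0], h["y"] - robot_xy[1]) <= planning_radius:
            speed = math.hypot(h["vx"], h["vy"])
            observable.append({**h, "state": "dynamic" if speed > static_vel_thresh else "static"})
    # only the two nearest dynamic humans receive elastic bands and path predictions
    nearest_dynamic = sorted(
        (h for h in observable if h["state"] == "dynamic"),
        key=lambda h: math.hypot(h["x"] - robot_xy[0], h["y"] - robot_xy[1]))[:2]
    return observable, nearest_dynamic
```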
§.§ Human Path Prediction
The Human Path Prediction module is responsible for the generation of global plans for the two nearest observable dynamic humans. Once the global plans are generated, they are handed over to the HATEB Local Planner to build the local plans.
There are four different prediction services, and which one of them is currently in charge depends on the system configuration and active Planning Mode. The services are called , , and . They are described in detail in <cit.>. However, our experimentation allowed us to discover the nuances of their functioning, as reported in the following paragraph.
, , and services are called when the robot is in a mode. and are alternatives to each other, and can be activated in parallel with any of them. Activation is done manually through the configuration file. Conversely, is called automatically when the robot switches to mode. Note that , , and can all be disabled at the same time, and then is used for as well.
§.§ Planning Modes
The CoHAN system is designed to handle a variety of contexts. For that, a multi-modality is provided. The concept of multi-modality, or modality shifting, consists of a mechanism of switching between different Planning Modes depending on the current context. This concept can be related to the ideas of Qian et al. <cit.> and Mehta et al. <cit.> that utilize the Partially Observable Markov Decision Processes (POMDP) model to generate navigation control policies. However, unlike CoHAN, no situation assessment is implemented in these works.
In a nutshell, the multi-modality mechanism implemented in CoHAN is the following:
* The system takes as input the navigation goal;
* The continuous process starts:
* HATEB Local Planner assesses the human-robot scenario and determines the Human State and the Planning State;
* Depending on the value of the determined states, the planner shifts between different Planning Modes;
* The current Planning Mode is used to choose the command velocity to send to the robot’s base controller;
* The process completes when either the goal is reached, the robot is stuck, or there is a collision, in which case the recovery behavior is activated.
The available Planning Modes are , , and .
- the mode in which the elastic band is added only to the robot. This mode can be seen as a purely reactive mode. The planning system starts in and stays in it as long as there are no humans. This mode is computationally the least expensive.
- the mode in which the elastic bands are added to the two nearest observable dynamic humans and to the robot. Trajectories for the two humans and the robot are optimized simultaneously. This allows the robot to demonstrate proactive behavior. Additionally, the trajectories planned for the humans offer a possible solution to human-robot conflicts.
- the mode in which elastic bands are added to humans, and trajectories for them are predicted only if they have some velocity. This mode is less proactive, but it allows for active re-planning and is useful in crowded scenarios or when the robot cannot move due to the Entanglement Problem of the mode.
The Entanglement Problem <cit.>: HATEB assumes that humans keep moving and tries to adapt its path according to their motion. This assumption can result in an entanglement of trajectories when the human no longer moves and the robot keeps waiting for the human to move, neglecting the other possible solutions.
- the mode that is activated when the robot encounters the Full Blockage Problem of the mode. In , the robot moves backwards slowly until it finds a free space to clear the way. Once the robot clears the way, it waits for the corresponding human to complete their navigation, or for a timeout, and then proceeds to its goal.
The Full Blockage Problem: The robot is in the near vicinity of the human, and it is stuck without progressing towards the goal for a time window of a certain duration. There is no solution to the planning problem unless one of the agents completely clears the way for the other. This kind of situation commonly occurs in a very narrow corridor.
The decision-making process involved in transitioning between different modes is shown in <ref>. The shifting criteria are thoroughly explained in <cit.> and <cit.>. On a low level, they are based on a thresholding of the measured human-robot distances and human velocities depending on the states of the observable humans.
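Since the concrete mode identifiers and threshold values are not reproduced in this text, the sketch below only illustrates the kind of distance- and velocity-based thresholding that could drive such shifts. The mode labels and the 0.1 m/s velocity threshold are assumptions; the 2.5 m vicinity threshold matches the value quoted later for the back-off condition.

```python
import math

def select_planning_mode(robot_xy, observable_humans, stuck, backoff_dist=2.5, vel_thresh=0.1):
    """Illustrative shift criteria; the mode labels are placeholders, not CoHAN identifiers.

    observable_humans : list of dicts {"x", "y", "vx", "vy"} inside the Planning Radius.
    stuck             : True if the robot has not progressed towards its goal for some time.
    """
    if not observable_humans:
        return "robot-band-only"      # purely reactive: a band is attached to the robot alone
    dists = [math.hypot(h["x"] - robot_xy[0], h["y"] - robot_xy[1]) for h in observable_humans]
    moving = any(math.hypot(h["vx"], h["vy"]) > vel_thresh for h in observable_humans)
    if stuck and min(dists) < backoff_dist:
        return "back-off"             # full blockage: clear the way for the human
    if moving:
        return "dual-band"            # plan for the robot and the nearest dynamic humans together
    return "velocity-aware"           # humans have stopped: keep re-planning actively
```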
§ INTEGRATION
This section demonstrates how the CoHAN system has been integrated into MARRtina robot software.
The MARRtina robot software is a Docker-based software. It contains ROS packages representing different robot functionalities and control and visualization modules distributed among Docker images (or, as runtime instances, Docker containers) and interfaced with Python, C++, and other languages.
There are a number of available images, including the and components responsible for the Navigation Stack functioning: contains the Mapping module, and stores the executables to perform Localization and to move the robot base along the paths provided by the Path Planning. The Path Planning itself is represented by a variety of planning approaches. They are normally implemented as external systems and pulled from external repositories into an image inherited from the image. Similarly, the Co-operative Human-Aware Navigation (CoHAN) planner repository is pulled into the image. Thus, this image becomes the foundation for the overall integration process.
The Docker containers involved in the integration and testing of the CoHAN system are the following:
*
*
*
*
*
The last two containers in the list are related to the computer set-up (web server); and are the runtime instances of the and images, respectively; and is a simulator.
To launch the navigation, the containers of interest for the execution of the commands are the and the . Hence, in order to integrate and challenge a new planner within the navigation framework, these two containers are the architectural components that must be modified.
The ROS Stage simulator and the RViz visualization tool were used to model the navigation scenarios. Stage (<http://wiki.ros.org/stage>) is a standard ROS 2D simulator, and the corresponding MARRtina software package contains a Python script to automatically create and run the simulated environments with human or robot agents. Mainly, two modifications were made in the container:
* The map collection was complemented;
* The Human Tracking system was provided.
First, some of the existing maps were customized and/or augmented with semantic information, while others were created from scratch, because a large collection of maps is required to conduct extensive testing. On the one hand, the task was to model all of the intricate scenarios that the planner was originally designed to handle efficiently (e.g., doorways, hallways, and corridors) in order to verify its declared properties. On the other hand, since enhancing the robot's reliability in healthcare environments is our focus, the aim was to learn how best to account for the specificity of the domain by simulating medical contexts. <ref> shows four main maps used in experiments: labyrinth, Emergency Department, Intensive Care Unit (ICU), and Intensive Care Unit with the wall in the middle (ICU2).
Second, as we recall from Section <ref> and <ref>, there is an external system in charge of Human Tracking. According to its arrangement, to include humans in the system, they must be published on a topic following a particular message structure. This is implemented in a Python bridge script that takes as input the number of humans and then pipes a subscriber to the humans and a publisher to . The script had to be added to the to function properly.
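A simplified sketch of such a bridge script is given below. The topic names and the use of a geometry_msgs/PoseArray as the published type are stand-in assumptions, since the exact message structure expected by the tracking interface is not reproduced in this text.

```python
#!/usr/bin/env python
# Illustrative bridge: relay simulated human odometry to a single tracking topic.
import sys
import rospy
from nav_msgs.msg import Odometry
from geometry_msgs.msg import PoseArray

class HumanBridge(object):
    def __init__(self, num_humans):
        self.poses = [None] * num_humans
        # "/tracked_humans" and PoseArray are stand-ins for the actual tracking interface
        self.pub = rospy.Publisher("/tracked_humans", PoseArray, queue_size=10)
        for i in range(num_humans):
            # topic layout assumed to follow a Stage-style multi-agent convention
            rospy.Subscriber("/human_%d/odom" % (i + 1), Odometry, self.odom_cb, callback_args=i)

    def odom_cb(self, msg, idx):
        self.poses[idx] = msg.pose.pose

    def spin(self, rate_hz=10):
        rate = rospy.Rate(rate_hz)
        while not rospy.is_shutdown():
            out = PoseArray()
            out.header.stamp = rospy.Time.now()
            out.header.frame_id = "map"
            out.poses = [p for p in self.poses if p is not None]
            self.pub.publish(out)
            rate.sleep()

if __name__ == "__main__":
    rospy.init_node("human_bridge")
    HumanBridge(int(sys.argv[1]) if len(sys.argv) > 1 else 1).spin()
```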
After Human Tracking is activated and the Localization (in this case, basic ) is launched, the core component of the Navigation Stack must be started. This main executable is called . It was complemented with the necessary frame transformations and placed in the , together with new configuration files containing , Local Costmap, Global Costmap, and HATEB Local Planner's parameters. The uses HATEB as a Local Planner in the node and, besides this node, contains Human Path Prediction and Human Filtering.
§ EXPERIMENTS
Let us move on to the experiments that have been conducted to create a complete characterization of the integrated system and assert enhancement of the robot's reliability by means of a chosen Human-Aware Navigation Planning approach.
The results in various simulated human-robot contexts are presented and thoroughly analyzed qualitatively and quantitatively, followed by a statistical analysis of the system evaluation by clinicians in terms of acceptability and usability in a real environment. Precisely, the following scenarios have been modeled and evaluated:
* Visibility Test
* A human in open space
* A human and a wall
* Two humans in open space
* Door Crossing Scenario
* Wide Doorway
* Narrow Doorway
* Bed Approach
* Narrow Corridor Scenario
* A "never stopping human"
* A human who stops
* Wide Corridor Scenario
* Free Corridor
* Cluttered Corridor
* Crowded Scenario
* Free Space Crowd
* Emergency Situation
The ROS plugin was employed to tune the navigation parameters at runtime.
§.§ Qualitative Results
In all the snapshots from RViz, dark and light blue lines are the global path and elastic band (local path) of the robot, dark and light green lines are the global paths and elastic bands generated for humans, yellow arrows indicate human orientations, and the trajectories of the robot and humans are shown as colored bars or dots. They are the poses planned by HATEB Local Planner, and the color visualizes the timestamp. If the color of the predicted human pose bar is the same as the color of the robot pose bar, then they will both be at that location at the same time.
§.§.§ Visibility Test
This test considers the presence of only static humans and is intended to estimate the influence of a particular component of the system: the Human Safety and Human Visibility layers.
We start with a test of a single static human in an open space. Since the human is static, the robot is always in , and the system adds the to the costmaps. As can be seen from <ref>, sometimes the robot chooses to pass in front of (<ref>) and sometimes behind (<ref>) the human. This is explained by the fact that the cost of passing behind or in front is the same outside the safety radius (1.5 m), and since the test is performed in open space, there are cells of the same cost on both sides. In other words, this is a costmap-based implementation, and the situation is symmetric. The robot has enough space behind the human so as not to disturb him or her and can still continue to its goal. The planner forces the robot to choose the frontal side only when the back side is constrained and the robot would have to move close to the human. However, even when the robot appears from behind the human, it enters the human's field of view from a larger distance.
The explanation is confirmed in tests with one static human standing next to a wall and two static humans standing next to each other in an open-space environment. In the case of a static human and a wall, regardless of the human's orientation, if there is free space between the human and the wall, the robot navigates through this space; otherwise it navigates around the human. When there are two humans next to each other in open space, no matter if they are co-directed or counter-directed (in which case they can be considered interacting), if there is free space between them, the robot moves through this space; otherwise, it goes around. An interesting situation is shown in <ref>. Usually in Social Navigation, when two humans are facing each other, the robot should not pass between them (the interaction should not be interrupted); on the other hand, the distance between these humans here is more than 3 m.
§.§.§ Door Crossing Scenario
The Door Crossing scenario is a series of three simulated situations: Door Crossing: Wide Doorway, Door Crossing: Narrow Doorway, and a Bed Approach Test.
The first two are common situations in many human environments, including medical institutions, because they can occur at the entrance of any room. A dynamic human is simulated as moving with a constant linear velocity along a straight line, with a goal somewhere on this line behind the initial position of the robot. The Human Path Prediction service is activated.
In the Door Crossing: Wide Doorway case, the doorway is wide enough for two agents. This scenario is appropriate for testing the robot in a mode, because there is a dynamic human, and neither the Entanglement nor the Blockage Problem can happen. The test completes successfully: the robot avoids collisions and demonstrates the declared behavior. The robot always correctly switches from a to a , but the other events vary depending on the run.
During the perfect run, illustrated in <ref>, the robot is able to guess the linear motion of the human and plan its own path with a deviation maneuver to let the human pass first. In this case, the robot makes a greater effort than a human.
In the case of a less lucky run, the robot does not plan a greater effort for itself because it assumes human cooperation (i.e., that the human will move aside too). Fortunately, this does not cause collisions, since the robot finds an alternative to the waiting solution: to pass through the part of the doorway that is not occupied by the human.
A similar problem is present in the Door Crossing: Narrow Doorway scenario, where the setup is the same but the doorway is wide enough for one agent only. The tests for this scenario were also successful. However, this task is more complicated because simultaneous crossing is not possible and the process of entering the doorway requires higher precision. As a result, the robot never anticipates the human's linear motion and does not plan maneuvers to move aside. Instead, it again assumes human cooperation and hence proposes a cooperative solution. Then, at the moment right before the collision, shown in <ref>, the robot takes an avoiding action and starts moving backwards to let the human pass. The robot keeps backtracking until the doorway is free. Eventually, it re-plans and proceeds to the goal in a .
The robot's planner recommends the cooperative solution in both Door Crossing scenarios, partly due to inaccuracy in the human model but primarily due to this being a property of CoHAN by design: it assumes that both agents are interested in cooperation. The system does not predict the human's path but rather plans it based on the estimated goal and the human model. This is a proactive planning approach. It is not very accurate, but the robot's behavior is enhanced in comparison with non-social planners. The elimination of these inconsistencies points to directions for future work, such as improving goal prediction and updating the human model.
The series of experiments performed to test the robot's capabilities in a human-robot collaborative crossing context ends with a scenario that is especially relevant in the healthcare sector: the Bed Approach scenario. The map of an Intensive Care Unit with a wall in the middle of the room and beds placed along the walls is used. The Door Crossing situation is modeled by two agents competing to exit/enter the free space between two hospital beds: the human provider is exiting and the robot is entering. The goals of both agents are behind each other. Again, the service is enabled. The focus is not on the quality of the approach itself but on the behavior of the robot towards the human.
The test is successful because a robot collision never happens and, normally, a human collision does not happen either, since the robot demonstrates the expected behavior:
* The human and the robot move towards each other;
* As soon as the robot detects the human, it activates mode and derives path predictions for itself and for the human while imposing more effort on itself (<ref>);
* The more the human-robot distance decreases, the more the robot slows down;
* When the distance becomes critically small, the robot moves a bit backwards, stops, and remains in this state until the human passes it by;
* The robot resumes navigation to its goal.
§.§.§ Narrow Corridor Scenario
The Narrow Corridor Scenario can happen in any hospital corridor, and it challenges the planner with the Blockage Problem: a long corridor has to be traversed by two agents in opposite directions, and the corridor is wide enough only for a single agent; one of the agents must go back and wait for the other to cross. When one of the agents is a robot, it should back off, giving priority to the human while taking legible actions. Thus, the expected behavior of the robot is the following:
* The robot switches from to and plans a certain trajectory;
* The robot’s way is blocked by the human, and the system shifts to the mode (the is supposed to be activated when the robot is in mode in the near vicinity of the human (<2.5m), and it is stuck without progressing towards the goal);
* The robot clears the way for the human, drives away from the corridor, and waits on the side until the human crosses the robot;
* The robot resumes navigation to its goal in mode.
The dynamic human is moving with a constant linear velocity along a straight line and is a non-collaborative human (does not move back). We consider two different approaches to modeling this human's behavior: one is the human who never stops, and, alternatively, the human who moves but then stops. Both variations are interesting because they highlight certain features of the robot's behavior.
A "never-stopping human" keeps moving in the corridor even if there is a robot on the way. In real life, in most situations, human will not be "never-stopping". However, this situation can occur, for example, when a person is looking at the phone and hence does not see that there is a robot in the corridor.
The test always results in a human collision. Right before the collision, the robot slows down and attempts to move backwards, but at some point it abandons its attempts and takes a hit. This happens because the robot stays in mode from the moment it detects a dynamic human (<ref>): the condition for a shift to - the human stopping - is never satisfied. In turn, without shifting to , a shift to cannot be performed either. The negative outcome is not critical because the collision is, in fact, the human's fault. Yet, the test case demonstrates an important limitation of the CoHAN planner.
In another variation of the scenario, some time after the robot detects the dynamic human, the human fully stops. In this case, the expected behavior is not confirmed through experimentation either. The robot gets stuck either without a shift → (a shift → is performed correctly) or with a shift, but without going backwards. The problem here is that the functioning of the robot in mode is prone to errors because the mode implementation lacks robustness. This is regarded as one of the algorithm's current limitations that leaves room for improvement. For the time being, the result of the robot stopping in the narrow corridor after detecting the Blockage is considered satisfactory as opposed to the collision.
§.§.§ Wide Corridor Scenario
The test scenario builds on the Narrow Corridor Scenario, but now the corridor is large enough for two agents. Additionally, on one side of the corridor there are rooms with open doors, so it is possible for the robot to drive there and wait until the human passes. The human's goal is indicated in <ref>. Since the human intends to turn, the human agent's motion is nonlinear. The Human Path Prediction service is activated.
Once again, there are two modifications of the scenario: Wide Corridor: Free Corridor and Wide Corridor: Cluttered Corridor.
The Wide Corridor: Free Corridor challenge is usually resolved positively by the robot; the collision does not occur and the mode transitions are performed properly. In a perfect run, the cooperative solution that suggests a greater effort for a robot is planned (<ref>).
Otherwise, the cooperative solution proposes a heavier load for the human. In the majority of cases this is justified, as in <ref>, for example: the situation is close to a collision, and the robot's current velocity does not allow it to perform the maneuver of entering the free room doorway, so the robot expects this maneuver to be executed by the human. However, the system's capacity to anticipate navigation conflicts requires improvement, because the situations illustrated in <ref> could be prevented if the robot foresaw them. Additionally, the possibility of driving off into a free room is completely ignored by the robot. Presumably, the Human Path Prediction module is the part of the architecture that needs to be upgraded.
Moving on to a Wide Corridor: Cluttered Corridor, the setting is similar, but now a more realistic clinical environment is simulated. As it is investigated in <cit.>, patients are often treated in the corridors when the Emergency Department (ED) bedrooms are full. Placing patients in hallways is a way to handle an overflow of patients. Therefore, the corridors are often overcrowded and cluttered with stretchers and other medical equipment.
We model a simplified version of the Cluttered Corridor scenario where there is only one dynamic human and all the stretchers are leaned against the same wall. Only one agent at a time can pass between the stretcher and the other wall. The robot and the human start navigating from the opposite sides of the cluttered part of the corridor, and their goals are behind each other.
The test normally produces the same series of events at each run:
* Following its initial path, the robot arrives in the middle of the cluttered hallway, where it detects a human and switches from to ;
* The robot backs off all the way until it reaches the vicinity of its initial position and then comes to a halt.
This test result leaves a dual impression. The robot exhibits safe behavior that enables collision avoidance. It even gives priority to the human and does not insist on pursuing its goal; it also does not abort in the middle of the corridor but moves backward and clears the way instead. Unfortunately, the robot is neither proactive nor anticipative enough. It is not anticipative because it only begins to resolve a conflict after it has already encountered it, not in advance. The lack of proactivity is well illustrated in <ref>, where there are three options for one agent to clear the way for the other: two free spaces between the stretchers and one free room entrance. The CoHAN system proposes a deviating elastic band for the human. In <ref>, the deviation maneuver opts for the first (counting from the left) free space between the stretchers, but as the simulation proceeds, it changes to the second and third ones, respectively (always for the human).
§.§.§ Crowded Scenario
The series of experiments concludes with a Crowded Scenario, which is the most challenging for the CoHAN system. The scenario consists of two sub-scenarios invented to estimate the robot's reliability in a crowd from two different perspectives. Both scenarios are simulated in an Intensive Care Unit (ICU) environment.
The first sub-scenario covers the context of a crowd in the free space and is named Free Space Crowd. The map is a room that does not include any static semantic objects other than the beds along the walls. The humans are distributed around the room and form a chaotic crowd. There are 14 humans in total: 10 of them are dynamic and 4 are static. The dynamic humans have different constant velocities and move along a circular trajectory each (<ref>). The robot's objective is to visit all of the static humans while navigating safely and intelligently through the dynamic crowd.
The most evident positive aspect of navigation in the crowd can be observed from <ref>: the robot in is able to correctly predict the humans' paths. In fact, is the most robust Planning Mode of CoHAN.
Two other remarkable examples of intelligent robot behavior are shown in <ref> and <ref>. In <ref>, the human with a lower velocity emerges on the robot's path. The robot slows down (or stops if necessary) and adapts its velocity. Then, it starts following the human from behind until its path no longer passes through the risk zone. The social nuance is whether the human feels comfortable being followed.
In <ref>, the robot is confronted with a situation in which it is trapped between a static human and inflated static obstacles on one side and a human moving extremely slowly on the other side. The slow human temporarily blocks the robot. The robot resolves the temporary blockage through oscillation. Every time it moves forward, the robot checks if the person has moved away; if not, the robot continues to oscillate; otherwise, it proceeds to the goal.
The second sub-scenario models the situation of emergency in the ICU: the patient needs urgent intervention from the providers, e.g., he or she has a heart attack. In such circumstances, the robot's proactivity is especially important because it must not interrupt providers performing life-saving treatment on a patient. However, the desired robot behavior is difficult to provide because the robot bases its reasoning on a certain system of beliefs about the state of the environment, and this state changes abruptly in emergencies.
In the simulation, an emergency is occurring at a specific location: one of the providers is standing next to the bed of the patient with the emergency, and he or she is calling other providers to help. The scenario assumes that 10 providers suddenly start running to the provider who is calling. They form a "running crowd" next to the bed of the patient with an emergency (<ref>). Meanwhile, the robot had been following a path to some goal, but this path now passes through the emergency zone (i.e., comes into conflict with the paths of the "running crowd"). The robot's expected behavior is to avoid interfering with the crowd.
The robot operates in whenever dynamic humans enter the area limited by the Planning Radius. A robot collision never occurs. However, a human collision may occur, depending on the test run.
In lucky runs, the robot is able to re-plan and avoid safely either by passing around the crowd, as presented in the <ref>, or by following its path while adapting the speed until the "safe spot" is found and halting at this spot to wait for the crowd to move away (<ref>).
When the robot is less lucky, it gets trapped by humans with no "safe spot" found. In this case, a human collision happens: the robot slows down and comes to a halt at the point where it realizes there is no "safe spot," and one or more running humans collide with it. The issue once again uncovers a point for improvement: the robot is not anticipative enough to foresee the human collision and take look-ahead action. Additionally, this is linked to the fact that the robot derives plan predictions only for the two nearest dynamic humans, and in emergencies, more than two of them are relevant.
This final test concludes the section on qualitative results.
§.§ Quantitative Results
Five experimental scenarios have been chosen to perform the quantitative analysis, each repeated 10 times with the Co-operative Human-Aware Navigation (CoHAN) and Simple Move Base (SMB) systems, in order to compare the integrated Social Navigation planner to a non-human-aware approach such as the one found in the MARRtina software prior to integration. The results, averaged over 10 runs, are collected in <ref>.
The tested scenarios are Door Crossing: Bed Approach, Narrow Corridor: move-and-stop human, Wide Corridor: Free Corridor, Crowded Scenario: Free Space Crowd, and Crowded Scenario: Emergency Situation. In all these scenarios, both systems produced consistent results over repeated trials. However, in two of them, SMB failed to complete collision-free navigation.
The metrics involved in comparison are the accuracy (Acc), the total length of the path (PL), the total execution time (TT), and the minimum human-robot distance that the planner encountered during navigation (HRD).
Accuracy is the percentage of times a test is completed successfully. "Success" is defined as reaching the navigation goal without colliding or getting stuck, except for the Narrow Corridor, where it is redefined as a detection of the Blockage followed by abortion. This always happens when the robot is controlled with the Simple Move Base, but CoHAN distinguishes between the robot aborting and getting stuck. The navigation cannot be resumed in the latter case, and this occurs 50% of the time. In all the scenarios besides the corridors, CoHAN outperforms SMB in accuracy. In fact, SMB does not avoid human collisions at all in the Bed Approach and Emergency Situation scenarios, whereas, using the new planner, the robot sometimes gets lost in the bed inflation or is not able to find a "safe spot," but sometimes resolves the conflict. In the Free Space Crowd, the robot with SMB can only navigate safely when it is lucky. The CoHAN system, on the other hand, produces rare collisions due to a lag in costmap updates during mode switching. Finally, both approaches are reliable in the free wide corridors.
The path length is the total length of the path, computed as the sum of the distances between every sequential pair of states. When SMB tests fail, the length of the generated global path is computed instead. <ref> shows that in all test instances, SMB made the robot travel shorter distances. This is explained by HATEB using larger deviations to show its intentions early, proactive elastic bands, and drive-away maneuvers. Nonetheless, there is a specific reason for the large difference in path lengths in the Narrow Corridor scenario: the non-socially navigating robot comes to a halt as soon as it detects a blocking obstacle, whereas the CoHAN-based system, before declaring abortion or understanding that the robot is stuck, spends some time on mode switching and subsequent re-planning while oscillating. The same reason explains why CoHAN takes nearly three times as long as SMB to complete the navigation in this scenario and why the minimum human-robot distance is more than two times shorter.
Due to proactive deviation maneuvers performed by CoHAN, the total execution time taken by it is greater than that of SMB unless the human-robot co-navigation context is such that the reactive planner takes longer because the robot needs to slow down in the human vicinity for collision avoidance, as happens in Wide Corridor experiments.
Finally, when it comes to the minimum human-robot distances, HATEB's behavior varies because it performs a situation assessment and addresses each case individually. If the space is available, the integrated planner keeps a greater minimum distance from the human than SMB. Otherwise, it can also choose the strategy of slowing down and approaching closer.
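For reference, the two geometric metrics can be computed directly from the logged poses. The short sketch below follows the path-length definition given above (the sum of distances between every sequential pair of states) and assumes that the robot and human trajectories are sampled at the same timestamps; the example poses are made up.

```python
import math

def path_length(states):
    """Total path length over a sequence of (x, y) states."""
    return sum(math.hypot(b[0] - a[0], b[1] - a[1]) for a, b in zip(states, states[1:]))

def min_human_robot_distance(robot_states, human_states):
    """Smallest human-robot distance over the run (trajectories assumed time-aligned)."""
    return min(math.hypot(r[0] - h[0], r[1] - h[1]) for r, h in zip(robot_states, human_states))

robot = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.5)]
human = [(4.0, 0.0), (3.0, 0.0), (2.0, -0.5)]
print(path_length(robot), min_human_robot_distance(robot, human))
```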
In summary of the quantitative evaluation, the integrated system proves its human awareness and can be considered outperforming in all simulated scenarios, except for the Narrow Corridor, where the Simple Move Base exhibits more reasonable and less resource-consuming behavior. However, even in the latter case, the conceptual difference is not large and can be mitigated by debugging the mode.
§.§ Human Evaluation Results
In order to assess human acceptance of the assistive robot driven by the integrated system and estimate its usability in health-care facilities, 10 video demonstrations of the robot's behavior were presented to the physicians at Sant'Andrea Hospital.
The recipients were asked to rate how satisfied they were with the robot's behavior in each demonstrated scenario on a 5-point grading scale, with 1 representing absolutely unacceptable behavior and 5 representing perfection. The feedback form was filled out by 40 participants, and their average acceptance results are reported in aggregated form in <ref>.
<ref> shows that the average level of robot acceptability is high and can be rounded up to 82%, regardless of the nature or difficulty of the simulated scenario. Thus, domain experts were not predisposed to higher cautiousness within more domain-specific or chaotic settings, resulting in a positive outlook on the robot's future integration in a safety-critical environment. Furthermore, the average robot acceptability and the standard deviation from the
average computed for each estimated scenario separately confirm the hypothesis of clustered, scenario-independent results, as none of the average values differs significantly from the others.
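The per-scenario statistics discussed here (mean, standard deviation, median, and mode of the 5-point grades) can be reproduced with a few lines of code; the grades in the example below are made up and serve only to illustrate the computation, and the percentage conversion is one plausible way of expressing the mean grade.

```python
from statistics import mean, median, mode, stdev

def summarize_scenario(grades):
    """Summary of the 5-point acceptability grades collected for one scenario."""
    return {
        "mean": mean(grades),
        "std": stdev(grades),
        "median": median(grades),
        "mode": mode(grades),
        "acceptability_pct": 100.0 * mean(grades) / 5.0,
    }

# made-up grades for illustration only, not the collected data
print(summarize_scenario([4, 5, 4, 3, 4, 5, 4, 4]))
```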
It is worth noting that the lowest encountered estimate in all demonstrated scenarios was 3, indicating that none of the participating providers rated the robot's behavior as more likely to be unacceptable than acceptable. Another indicator of consistency is the median value, which is 4 in all cases and is equal to the mode value in all cases, except for the Door Crossing: Bed Approach and Wide Corridor: Cluttered Corridor, where the mode is 5. To gain more intuition on the distribution of the grades assigned by the experts to each video simulation, let us display the bar chart in <ref>.
A precise examination of the chart leads to the conclusion that the robot's behavior cannot be considered more acceptable in some cases than in others. For example, the two previously mentioned scenarios, Door Crossing: Bed Approach and Wide Corridor: Cluttered Corridor, have the highest concentration of both the minimum and maximum given estimates, 3 and 5, which does not allow these scenarios to be marked as either preferred or disregarded by clinicians.
Overall, it seems from the chart in <ref>, and from the calculated statistics in general, that even robot behaviors that confuse or inconvenience humans, such as following behind a person (Crowded Scenario: Free Space Crowd-1), stopping at a random location in the middle of the crowd (Crowded Scenario: Emergency Situation-2), or blocking the human's way (Narrow Corridor: move-and-stop human), would be tolerated by people. As a result, expert validation of the robot's performance leads to a positive conclusion about the possibility of human-robot coexistence in health-care facilities.
§ CONCLUSION AND FUTURE WORK
Human-Aware Navigation is a field that broadens the horizons of robotics research by empowering existing robotic systems with socially meaningful capabilities and contributing to the seamless integration and sustainability of robots in human environments. It proposes a wide range of challenging tasks associated with the incorporation of socially enhanced components into established navigation frameworks. One such task is the enhancement of robot reliability in the healthcare sector, which we address in this work. In our case, the socially enhanced component is the Co-operative Human-Aware Navigation (CoHAN) planning system, and the established navigation framework is the MARRtina robot software.
The results of the comparison of the integrated system with a non-social planning approach manifest an improvement in robot reliability, and the qualitative analysis demonstrates that the robot behavior became more socially compliant, which, in turn, promotes human acceptance of the robot, as confirmed by the statistical analysis of human validation of the results.
Furthermore, the work done on integration into MARRtina of a human-aware planning system opens the scope of future investigations for the community of people who are interested in using this robot for their own studies and developments in the field of Human-Aware Navigation.
Perhaps the greatest value of this work lies in the thorough examination of the planner's features. While acknowledging the limitations of the system's potential applications, it also provides intuition on the directions of future system-upgrading initiatives.
Room for improvement has been found in the resilience of the method and in the further development of its prototypical implementation. The goal selection in the service can be automated by implementing a probabilistic goal inference approach. In addition, new Human Path Prediction techniques can be embedded into the system to meet the need for improved estimation of human motion that was identified throughout the experimentation. The experimentation also suggests that the human model has to be updated. Finally, proactive and anticipative behavior on the robot's side are two particularly important aspects, and they are even more critical in medical settings. Therefore, the inclusion of new methods with a design focus on strengthening the system in these aspects would be beneficial.
Shedding more light on the vectors of the system upgrades, we plan a re-integration of the current version of the planner with a newer one <cit.>. The key motivation for doing so is to boost the robot's capability to act proactively by accounting for humans outside the Visible Region.
In addition, the system improvement challenge can be approached from a completely different angle: the human controller module can be enhanced, for instance, by creating human avatars that demonstrate rational social behaviors rather than just moving in a primitive way according to predefined rules. Insightful works on this topic are presented by Favier et al. in <cit.> and <cit.>. The authors propose a system called InHuS, which incorporates autonomous, intelligent human agents specifically designed to act and interact with a robot navigating in a simulated environment. Since this system is based on CoHAN, it would be interesting to combine it with the already integrated planner in the MARRtina software.
Finally, in order to complete a characterization of the system, we intend to deploy it on a physical robotic platform within a real-world clinical environment. As a result, new challenges will likely be identified, necessitating the development of new solutions.
| Taking into account the specificity of the application domain is one of the crucial considerations while developing a robotic system, and the chosen domain of health-care facilities is represented by a quick-paced and safety-critical environment, which is frequently overcrowded, chaotic, and understaffed. The robotics community has been researching the use of robots in hospital settings to reduce the workload of caregivers by utilizing robots for low-value duties such as medical supply delivery or patient assistance at the bedside when clinicians are not available.
Nowadays, the number of autonomous mobile robots serving in hospitals is rather low. This reveals a major gap in terms of existing approaches and is explained by the fact that the higher the desired level of autonomy, the more challenging it is to provide it while also satisfying safety criteria of primary importance for the considered application domain. Thus, the clinical environment is a unique safety-critical environment that poses specific navigation challenges for robots <cit.>, <cit.> and gives rise to particular scenarios in which the navigational conflicts between humans and robots must be resolved.
For example, a robot in an Intensive Care Unit (ICU) is likely to encounter the situation when a group of Health-Care Workers (HCWs), some of whom may have been involved in other tasks and whose predictions of behaviors may have already been derived by the robot's planner, suddenly rush to the bed of a certain high-acuity patient to perform a life-saving treatment. In this scenario, to address the challenge of performing the situation assessment and responding to the changes in dynamics of the environment due to changing human plans, robots must be more than purely reactive; they must be proactive, adaptive, anticipative, and capable of intelligent decision-making.
Besides that, robot adaptability is required, which is defined as the ability of a robot to adapt to a new environment with different characteristics. To illustrate, let us assume that in a particular hospital, the ICU is a room on the Emergency Department (ED) floor. The ICU resembles a crowded open-space kind of environment (beds are normally installed along the walls, and often there is a wall and/or nursing station in the center of the room); meanwhile, the corridors of the ED could be not so crowded but cluttered with trolleys and appliances. Therefore, while exiting the ICU to navigate in the ED hallways and vice versa, the robot must adapt to changes and exhibit different types of behavior.
In our work, we address the described challenges and also the challenge of increasing the robot's acceptability and mitigating inconsistencies between the robot's actual and expected behavior by means of Human-Aware Navigation Planning. Specifically, we integrate the Co-operative Human-Aware Navigation (CoHAN) planner designed by Singamaneni, Favier, and Alami <cit.> into a given robot software framework. The CoHAN planner enables a robot to navigate in diverse contexts while being aware of humans in the environment. The software belongs to a ROS-based differential drive robot with functionalities equivalent to those of a Turtlebot system. So far, experimentation has been carried out in simulation, but the social version of the robot in <ref>, MARRtina, is intended for deployment in a real hospital[Sant'Andrea Hospital, Roma RM, Italy] to serve as a bedside robot.
The integrated system becomes capable of performing Social Navigation as opposed to treating people simply as obstacles. The studies within the Social (or Human-Aware) Navigation field are centered on learning about various human motion patterns and relevant social aspects and developing navigation approaches that account for them. The associated research community has explored a wide range of interesting subject matters so far and developed numerous Human-Aware Navigation planners; however, these planners are mostly environment- and scenario-specific and hence do not meet the adaptability requirement indicated above. Therefore, the motivation behind choosing the CoHAN system for integration is that it is flexible, highly tunable, and easily adjustable to the various contexts, making it capable of handling complex indoor and crowded scenarios (ICU, ED).
The main contribution of this work is that, in addition to being integrated, a new planning system, enhanced with human awareness, has been thoroughly challenged, tested, and evaluated in multiple simulated human-robot scenarios and environments: ICU crowded free space, ICU with a wall in the middle, heart attack emergency in ICU, narrow hallway, free/cluttered wide ED corridor, narrow/wide door crossing scenario, patient bed approach scenario, etc. Qualitative and quantitative analysis is presented, uncovering the subtle nuances of the system's functioning and ultimately creating a comprehensive characterization of the properties and limitations of the adopted planning approach. As such, on a global level, we hope to contribute to better patient outcomes and lessen the workload of HCWs by making robots more reliable. One of the necessary steps towards this goal is to ensure that potential users accept the developed robotic technology. Therefore, after gathering feedback on the system's performance from the HCWs at the Sant'Andrea Hospital, a brief statistical analysis of the robot's acceptability by humans was carried out.
The rest of the paper is structured as follows. The background material and related works are described in Section <ref>. The CoHAN Planner's architecture, including an overview of its features, components, and the scheme of communication between different modules and processes, is provided in Section <ref>. Following this, Section <ref> deals with the details of the integration of the CoHAN system into the MARRtina software. The primary matter of the work is found in Section <ref>. It covers the experimental set-up and the analysis of the results in various simulated human-robot scenarios. Finally, the work is summarized in Section <ref>, which also comments on potential future research to improve the system. | This section briefly explains the fundamentals and some related works of the Robot Operating System (ROS) 2D Navigation Stack in general, and the package with the Timed Elastic Band (TEB) local trajectory planner in particular, because this is exactly the component that has been made "human-aware" in the architecture of the CoHAN system. Then, the Human-Aware Navigation Planning problem is introduced.
§.§ ROS Navigation Stack
Both the MARRtino robot software and the integrated Co-operative Human-Aware Navigation (CoHAN) planner are ROS-based. Citing its web page, ROS <cit.> is an open-source robotic middleware toolset that provides a structured communications layer above the host operating systems and is considered by many to be one of the most general-purpose and commonly used frameworks for developing robotic applications.
The atomic units for organizing software in ROS are packages, and collections of packages are called stacks. In particular, to navigate with a ROS robot, the ROS Navigation Stack is used. The ROS Navigation Stack is meant for 2D maps, and it takes as input odometry information, sensor measurements, and a goal pose, and provides as output safe velocity commands that are sent to the mobile base.
Functionally, the Navigation Stack packages contain a set of ROS nodes and navigation-related algorithms that are implemented to move mobile robots autonomously from one point to another while avoiding obstacles. The basic structure of a Navigation Stack is represented by the three main blocks or modules:
* Mapping - to create a map of the environment;
* Localization - to localize a robot in the created map;
* Path Planning - to plan the robot path (Global Path Planning) and execute the planned path while avoiding obstacles (Local Path Planning).
§.§ Timed Elastic Band
The main accent of the newly integrated CoHAN system is the improvement of a particular component of the described ROS Navigation Stack: the local planner. Precisely, the Timed Elastic Band (TEB) is an optimal local trajectory planner. This planner's package is called and is implemented in ROS as a plugin that can replace the default local planner of the 2D Navigation Stack. The underlying planning approach was introduced in <cit.>, and its core idea is to locally optimize, at runtime, the initial trajectory generated by a global planner with respect to:
* Trajectory execution time (time-optimal objective);
* Separation from the obstacles;
* Kinodynamic constraints (maximum velocities and accelerations).
The Elastic Band is a sequence of n intermediate robot configurations x_i = (x_i, y_i, β_i)^T ∈ R^2 × S^1, where (x_i, y_i) is the position and β_i the orientation of the robot in the map reference frame:
Q = {x_i}_{i=0...n}, n ∈ ℕ
The Timed Elastic Band (TEB) augments this sequence with the time intervals between two consecutive configurations: n-1 time differences ΔT_i, each of which is the time needed to transition from one configuration to the next, i.e., it is defined as a tuple of sequences:
B := (Q, τ), τ = {ΔT_i}_{i=0...n-1}
Then, the optimization problem can be formulated as a real-time weighted multi-objective optimization of TEB defined in <ref> in terms of both configurations and time intervals. Let us denote f(B) as an objective function and B^* as an optimal value (problem solution), then:
f(B) = ∑_k γ_k f_k(B)
B^* = arg min_B f(B)
where γ_k are the weights and f_k are the individual objective functions that capture the diverse aspects.
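To make the weighted multi-objective formulation concrete, the sketch below evaluates a toy f(B) for a band of configurations and time intervals. The three objective terms (execution time, obstacle clearance, and a velocity limit) and their weights are simplified stand-ins for the actual TEB cost terms, not the planner's real implementation.

```python
import math

def teb_objective(configs, dts, obstacles, v_max=0.5, weights=(1.0, 10.0, 5.0)):
    """Toy f(B) = sum_k gamma_k * f_k(B) for a band B = (Q, tau).

    configs   : list of (x, y, beta) robot configurations (the sequence Q)
    dts       : list of time intervals between consecutive configurations (tau)
    obstacles : non-empty list of (x, y) obstacle points
    """
    f_time = sum(dts)                                   # time-optimal objective
    f_obs = 0.0                                         # penalize small obstacle clearances
    for (x, y, _beta) in configs:
        clearance = min(math.hypot(x - ox, y - oy) for ox, oy in obstacles)
        f_obs += max(0.0, 0.5 - clearance)
    f_vel = 0.0                                         # penalize transitions above v_max
    for c0, c1, dt in zip(configs[:-1], configs[1:], dts):
        v = math.hypot(c1[0] - c0[0], c1[1] - c0[1]) / dt
        f_vel += max(0.0, v - v_max)
    g_time, g_obs, g_vel = weights
    return g_time * f_time + g_obs * f_obs + g_vel * f_vel
```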
§.§ Human-Aware Navigation
The TEB planner was further expanded with human-aware characteristics to get Human-Aware Timed Elastic Band (HATEB) <cit.> by incorporating human motion predictions and estimations and simultaneous planning for humans and the robot. By "human-aware characteristics", we mean that the robot's paths generated by the planner must not only be safe, legible, and optimal in terms of time and robot resource consumption, but also acceptable and look natural to humans. And Social (or Human-Aware) Navigation is a whole new branch of robotic research that emerged from the synthesis of Human-Robot Interaction (HRI) and Motion Planning to focus on learning how to build such paths.
A survey in <cit.> suggests that the Human-Aware Navigation challenge can be defined as a challenge of navigating while accounting for the constraints related to social aspects and rules and offers one of the possible approaches to the sub-categorization of the characteristics that a robot must exhibit in order to be considered navigating in a human-aware manner: compliance with human comfort, naturalness of motion, and sociability.
Human Comfort appears as the absence of annoyance and stress for humans during human-robot interaction sessions or in shared spaces. In a sense, it is linked to the concept of safety and heavily relies on the study of proxemics, the study of how people perceive the proximity of others, illustrated in <ref>. It is a branch of knowledge that deals with the amount of space that people feel it is necessary to set between themselves and others to feel comfortable.
Motion Naturalness is the ability of the robot to mimic the nature of human motion and to produce human-like navigation behavior by capturing and recreating human low-level motion patterns. The main struggle in this research area resides in the fact that robots have different abilities compared to humans, and therefore not all patterns are meant to be transferred.
Finally, Sociability is concerned with high-level decision-making and attempts to comply with social and cultural norms by either explicitly modeling known social protocols or learning them from humans as a spatial effect. The latter is especially advantageous because learning from humans allows their behavior to be not only predicted but also affected and exploited for spatial conflict resolution.
http://arxiv.org/abs/2409.17341v1 | 20240925203255 | Energy-Efficient & Real-Time Computer Vision with Intelligent Skipping via Reconfigurable CMOS Image Sensors | [
"Md Abdullah-Al Kaiser",
"Sreetama Sarkar",
"Peter A. Beerel",
"Akhilesh R. Jaiswal",
"Gourav Datta"
] | cs.CV | [
"cs.CV"
] |
Energy-Efficient & Real-Time Computer Vision with Intelligent Skipping via Reconfigurable CMOS Image Sensors
Md Abdullah-Al Kaiser^1,⋆^⋆,†Equally contributing authors. Sreetama Sarkar^2,⋆
Peter A. Beerel^2,†
Akhilesh R. Jaiswal^1,† Gourav Datta^2
^1University of Wisconsin-Madison, Madison, USA ^2University of Southern California, Los Angeles, USA
{sreetama,pabeerel,gdatta}@usc.edu {mkaiser8,akhilesh.jaiswal}@wisc.edu
September 28, 2024
==================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
Current video-based computer vision (CV) applications typically suffer from high energy consumption due to reading and processing all pixels in a frame, regardless of their significance. While previous works have attempted to reduce this energy by skipping input patches or pixels and using feedback from the end task to guide the skipping algorithm, the skipping is not performed during the sensor read phase. As a result, these methods can not optimize the front-end sensor energy. Moreover, they may not be suitable for real-time applications due to the long latency of modern CV networks that are deployed in the back-end. To address this challenge, this paper presents a custom-designed reconfigurable CMOS image sensor (CIS) system that improves energy efficiency by selectively skipping uneventful regions or rows within a frame during the sensor's readout phase, and the subsequent analog-to-digital conversion (ADC) phase. A novel masking algorithm intelligently directs the skipping process in real-time, optimizing both the front-end sensor and back-end neural networks for applications including autonomous driving and augmented/virtual reality (AR/VR). Our system can also operate in standard mode without skipping, depending on application needs. We evaluate our hardware-algorithm co-design framework on object detection based on BDD100K and ImageNetVID, and gaze estimation based on OpenEDS, achieving up to 53% reduction in front-end sensor energy while maintaining state-of-the-art (SOTA) accuracy.
CMOS image sensor, reconfigurable, energy-efficient, region skipping, power-gating, object detection.
§ INTRODUCTION
As the demand for edge computing in computer vision (CV) applications grows—ranging from surveillance systems <cit.> and autonomous vehicles <cit.> to smart devices—energy efficiency has emerged as a key challenge.
A major contributor to energy inefficiency in edge devices is the repetitive reading and processing of redundant frames and pixels, particularly those that do not provide useful information for the task at hand. This problem is further exacerbated in video applications that are processed at high frame rates <cit.>, where large amounts of data are handled unnecessarily.
Consequently, optimizing the input data processing at the sensor level becomes essential to enable complex CV applications efficiently at the edge.
Previous studies have made strides in addressing temporal redundancy in video processing by skipping frames or selectively processing regions with significant changes <cit.>. However, these approaches still require reading entire frames to decide which pixels or patches to process. This implies that pixel readouts are still performed for all regions, leading to continued energy consumption during the analog-to-digital (ADC) conversion process, which remains a bottleneck for energy efficiency.
Moreover, current methods often rely on the output of the previous frame’s task to predict the significance of pixels or patches <cit.>, introducing delays in processing the current frame. For instance, while the HiRISE system <cit.> compresses the high-resolution images, thereby reducing the energy incurred in processing them, it depends on feedback from downstream tasks to generate high-resolution regions for object detection. This dependency can impede real-time effectiveness and adds complexity due to the back-and-forth data transfer, which is challenging in energy-limited environments. Moreover, methods that skip large portions of pixels often suffer from performance degradation <cit.>, which is problematic for safety-critical applications.
This underscores the need for an intelligent, lightweight pixel masking algorithm capable of generating real-time region-of-interest (RoI) masks without relying on feedback from downstream tasks. In addition, hardware reconfigurability is crucial to enable the skipping of regions or pixels during the pixel readout phase of the sensor, significantly improving energy efficiency. Towards this end, we propose a low-cost vision transformer (ViT)-based intelligent mask generator network that operates independently of task feedback, ensuring compatibility with real-time constraints and optimizing both accuracy and energy use. We also integrate hardware reconfigurability into existing conventional image sensor systems with minimal overhead, including additional memory banks for binary mask storage and power-gating switches. This helps reduce the energy consumed during the sensor readout phase guided by the mask generator.
Additionally, the masking algorithm boosts the compute efficiency of the back-end neural network processing the CV application by focusing on the salient regions of interest (RoI) identified by the mask. Note that previous works have explored compressing input data at the sensor level through in-sensor computing approaches, where the initial layers of the computer vision (CV) network are processed directly within the sensor <cit.>. While these approaches offer significant energy savings, they complement our method and can be integrated alongside it to further enhance energy efficiency.
The key contributions of our work are as follows.
1) Real-time Pixel Masking Algorithm: We propose a novel lightweight pixel masking algorithm tailored for CMOS image sensors that can create a binary mask in real time based on the significance of the pixels in a scene, without depending on feedback from end tasks.
2) Reconfigurable Peripheral Hardware Integration: We develop a reconfigurable sensor hardware system that integrates with the masking algorithm, enabling various sensing operation modes, including standard and two skip modes (row-wise skip and region-wise skip). This system yields the front-end energy savings by allowing pixel readouts to be skipped, guided by the binary mask during the sensor's read phase.
3) Significant Front-end Energy Savings: Our proposed algorithm-hardware co-design framework achieves notable improvements in front-end (constituting the sensor and the pixel mask generator) energy efficiency—46% (row-wise skip), 53% (row-wise skip), and 52% (region-wise skip) for BDD100K, ImageNetVID, and OpenEDS, respectively—while maintaining state-of-the-art accuracy in autonomous driving and AR/VR applications.
§ PROPOSED METHOD
§.§ Pixel Masking Algorithm
The most commonly used technique for skipping redundant computations in a video sequence is by computing pixel-wise differences between input frames and intermediate feature maps <cit.>, followed by processing only pixels or patches that exceed a pre-determined threshold. However, this requires reading the current frame and hence does not save sensor read energy. Previous works have proposed predicting RoI, and processing the CV network on the RoI <cit.> to save overall computation. However, these methods are either expensive <cit.>, or depend on the output of the previous frames <cit.>. This implies that reading subsequent frames in the sensor side has to be stalled till the output information is obtained, and may hinder real-time deployment. Our goal is to design a lightweight network that can be implemented near the sensor and can predict regions of importance without relying on the output of the back-end model. We propose a transformer-based low-cost region mask generator network that can predict the significance of input patches based on attention scores at the input frame rate without consuming significant energy.
Mask Generator Network: Our mask generator network (MGN) is built using a single transformer block <cit.>, followed by a self-attention layer <cit.> and a linear layer. Initially, the MGN divides the input image into N patches of size p×p and embeds each patch into a vector of length L. A classification token (cls_token), which is an embedding vector of the same length, is appended to the N patch embeddings, resulting in N+1 vectors, as illustrated in Figure <ref>. A trainable positional embedding is added to each patch embedding before passing the vectors through the transformer encoder block. This encoder consists of a multi-head self-attention (MHSA) layer followed by a feed-forward network (FFN). The embedding vectors are transformed into Query (Q), Key (K), and Value (V) matrices, each of dimension [N+1, d], where d=L/H and H is the number of heads in the MHSA. Self-attention is then performed by each head, as shown in Equation <ref>. The FFN is a 2-layer multi-layer perceptron (MLP) with GELU activation <cit.>.
Attention(Q, K, V) = Softmax(QK^T/√(d)) V.
The cls_token is crucial in ViT-based image classification <cit.> as it gathers and aggregates information from different patches of the image at the classification head. This aggregation is guided by the attention scores between the cls_token and the other image patches. Specifically, the attention score, denoted as S_cls_attn, is computed through the dot product of the query vector derived from the cls_token (q_class) and the key matrix from the other patches.
S_cls_attn = q_classK^T/√(d).
S_cls_attn inherently captures the importance of each patch. Inspired by <cit.>, we utilize this attention score and feed it into a linear layer whose output dimension equals the number of image patches to generate the importance scores S_region for the patches. These scores are then passed through a Sigmoid function and thresholded using a region threshold t_reg to produce the Region Mask (Mask_reg). This mask contains binary values for each p×p region, with ones indicating regions to be processed and zeros indicating regions to be skipped. Subsequently, the row mask (Mask_row) is derived from Mask_reg using a row threshold t_row. A row is processed if the percentage of active regions in that row exceeds t_row.
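For concreteness, this mask-generation step can be sketched in PyTorch as below; the class name, the default thresholds, and the assumption that the linear head outputs one score per region (so num_patches equals grid_h × grid_w) are illustrative rather than taken from a released implementation.

import torch

class MaskHead(torch.nn.Module):
    # Linear head that maps the cls-token attention scores to per-region importance scores.
    def __init__(self, num_patches):
        super().__init__()
        self.linear = torch.nn.Linear(num_patches, num_patches)

    def forward(self, s_cls_attn, grid_h, grid_w, t_reg=0.5, t_row=0.5):
        # s_cls_attn: [num_patches] attention between the cls_token and the image patches
        s_region = torch.sigmoid(self.linear(s_cls_attn))            # importance scores in [0, 1]
        mask_reg = (s_region > t_reg).float().view(grid_h, grid_w)   # 1 = process region, 0 = skip
        # A row of regions is processed only if its fraction of active regions exceeds t_row.
        mask_row = (mask_reg.mean(dim=1) > t_row).float()            # [grid_h]
        return mask_reg, mask_row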
We compute the region mask every P frames, where P refers to the interval after which a full-frame is read and a new region mask is predicted. The identical predicted mask is used to skip readout and computation in the masked regions for the subsequent (P-1) frames.
Model Training: The MGN is trained to predict region importance based on object locations within the frame. The ground truth labels are generated from the ground truth bounding boxes or segmentation maps, constructing a 2D binary matrix where regions containing the object, either fully or partially, are marked with ones, and regions with no part of the object are marked with zeros. The network is trained using binary cross-entropy loss between the predicted region importance scores and the ground truth labels, with performance evaluated through the mean Intersection-over-Union (mIoU) metric.
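A minimal training-step sketch follows, assuming the MGN returns raw per-region logits of shape [B, N] and that the ground-truth masks are flattened to the same shape; the helper names are ours, not from the original implementation.

import torch
import torch.nn.functional as F

def mgn_train_step(mgn, images, gt_masks, optimizer):
    # gt_masks: [B, N] binary labels, 1 for regions overlapping any object, 0 otherwise
    scores = mgn(images)                                  # raw per-region importance logits, [B, N]
    loss = F.binary_cross_entropy_with_logits(scores, gt_masks.float())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def mean_iou(pred_mask, gt_mask, eps=1e-6):
    # Intersection-over-Union between the binary predicted and ground-truth region masks.
    inter = (pred_mask * gt_mask).sum()
    union = ((pred_mask + gt_mask) > 0).float().sum()
    return ((inter + eps) / (union + eps)).item()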
§.§ Reconfigurable Hardware Implementation
To achieve energy savings from sensing and ADC operations for the skipped rows and regions, as identified by our MGN, we design a reconfigurable CMOS image sensor system. This system supports three modes: standard, row-wise skip, and region-wise skip. In the standard mode, the sensor functions like a conventional system, reading each pixel row-by-row with the column-parallel ADC and transmitting the digital bit-streams off-chip for further processing. In row-wise skip mode, the system can bypass reading an entire row in a frame, while in region-wise (patch-wise) skip mode, it skips reading pixels within areas deemed insignificant by the MGN.
The proposed reconfigurable CMOS image sensor system, illustrated in Fig. <ref>(a), includes conventional components, including a pixel array (e.g., 3T or 4T), row driver, ramp generator, counter, column-parallel single-slope ADC (SSADC), output latch, bias circuits, and control circuits <cit.>. In addition to these standard blocks, our design incorporates an engine to implement the MGN (that provides the mask information to the sensor), which is illustrated in section II-A. In row-wise skip mode, the engine produces a 1D binary mask that marks significant rows with a value of 1 and insignificant rows with 0. Similarly, in region-wise skip mode, a 2D mask is generated based on pixel importance within the scene. This mask is applied to skip uneventful rows and regions for a number of frames in the row-wise skip and region-wise skip modes, respectively. Hence, our system embeds a 1D memory array of size 1×r, that is connected to the row scanner for the row-wise mode, and a 2D memory array of size c/p×r/p for the region-wise mode, to store the respective row or region masks. Note that r and c denote the number of rows and columns in the pixel array respectively, and p denotes the patch size employed in our MGN for region-wise skip.
Fig. <ref>(b) illustrates a representative diagram of the entire system. The back-side illuminated CMOS image sensor (BI-CIS) die, which includes additional memory banks to store the binary mask, is heterogeneously 3D-integrated with a digital logic chip <cit.> that implements our mask generation engine. Unlike the sensor die, which is typically fabricated in a lagging process node (e.g., 45 nm), this chip can be manufactured using an advanced process node (e.g., 7 nm), further minimizing the energy overhead of the MGN. Based on the pixel array dimension and patch size employed by the MGN, the total memory requirement to store the binary row and region masks is less than 5 kB, which can either be allocated within the BI-CIS die without significant overhead or can also be integrated into the digital logic chip. Digital bit-streams from the CIS array, along with the generated binary masks from the MGN engine, are communicated between the sensor and logic chips through fine-pitched Cu-Cu hybrid bonding <cit.>. Finally, the digital output generated by the image sensor is transmitted off-chip to a back-end processor for the downstream CV processing, depending on the selected mode (standard, row-wise skip, or region-wise skip).
Fig. <ref>(c) depicts the modified single-slope ADC (SSADC) used in our design. The ADC consists of a counter, current-steering DAC-based ramp signal generator, comparator, and latch to convert analog pixel data into digital bit-streams <cit.>. Our intelligent masking algorithm helps reduce the number of ADC conversions, thereby saving energy. Since the ramp generation circuit is shared across all column ADCs, when the row mask bit is 0 in the row-skip mode, the ramp generation circuit (counter and DAC) can be power-gated using switches S1 and S2, as shown in Fig. <ref>(c). Additionally, the ADC comparators and latch associated with the column-parallel ADC can be power-gated using switch S3 (Fig. <ref>(c)). Clock gating is also employed to further reduce energy consumption, along with resetting the latch in the row-skip mode. However, in region-wise skipping, where only some pixels in a row are read, complete power-gating of the ramp generation, and counters is not possible. Instead, we can save energy by power-gating only the comparators and column latches while keeping the ramp generation circuit and counter active.
Our ADC is designed to be reconfigurable, allowing for precise adjustments to output bit precision by modifying the ramp slope (voltage per clock cycle). By altering the reference current in the ramp generation circuit (which utilizes a pMOS cascode current mirror-based current DAC) or the reference resistance, we can effectively reconfigure the ramp slope <cit.>. In this work, we employ a 10-bit ADC, aligning with the traditional pixel bit-depth of 10 <cit.> for our CIS system. However, the precision can be adapted for different applications by adjusting the bias current or resistance and resetting the counter at specific intervals. For instance, to switch from 10-bit to 8-bit precision, the counter can be reset after 256 clock cycles. This capability enhances energy efficiency, depending on application-specific requirements.
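The counter-reset mechanism can be illustrated with a simple behavioral model of single-slope conversion; the reference voltage, function name, and loop structure are placeholders for illustration, and the sketch ignores comparator offsets and other circuit non-idealities.

def ssadc_convert(v_pixel, v_ref=1.0, n_bits=10):
    # Behavioral sketch of a single-slope ADC: count clock cycles until the ramp crosses the
    # pixel voltage. Resetting the counter earlier (e.g. after 256 cycles) yields 8-bit mode.
    max_count = 2 ** n_bits
    step = v_ref / max_count              # ramp slope: volts per clock cycle
    ramp, count = 0.0, 0
    while ramp < v_pixel and count < max_count - 1:
        ramp += step
        count += 1
    return count                          # digital code for this pixel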
Fig. <ref> illustrates the various operational modes of our proposed hardware, which are described below.
Standard Mode: Fig. <ref>(a) illustrates the standard mode of operation, where the conventional row-by-row reading method is applied. One row is activated at a time, and the analog pixel values are read by the column-parallel ADC, following the conventional procedure. In this mode, the row-mask and column-mask memories are both constantly set to 1.
Row-skip Mode: In the row-skip mode, entire rows are disabled during the read process. For example, when the row mask bit is 0, indicating an insignificant row, the row driver is power-gated, preventing the activation of the pixel transmission gates (e.g., RS<2> in Fig. <ref>(b)). Consequently, all bitlines are discharged by the tail current source of the source follower, resulting in no bitline activity, which conserves sensor energy. Additionally, the counter and ramp generation circuits, along with all column-parallel ADCs, are power-gated. The latches are reset, and all 0s are transmitted for the skipped row. For significant rows, where the row mask bit is 1, the conventional reading method is used.
Region-skip Mode: In region-skip mode, patch-wise masks for each region in the CMOS image sensor array are stored in both the column and row mask memories. During this process, the row and column masks operate concurrently. If the row mask is 0, the row remains inactive, and the column readout circuits and SSADC are power-gated to conserve the maximum energy. Conversely, if the row mask is 1, indicating that there are active pixels in the row, the column mask memory determines which column-parallel SSADCs will be active. The ADCs associated with the insignificant pixels (where the column mask is 0) will be power-gated, while significant ones (where the column mask is 1) will be read. For example, in Fig. <ref>(c), SSADCs associated with CL<1>, CL<c> are actively read, while CL<0>, CL<2> are skipped in this region-wise skip mode.
Note that in row-skip and region-skip modes, uneventful pixels are not read, resulting in an N-bit (ADC bit precision) value of 0 being transmitted for those pixels. This method preserves the same transmission protocol as traditional systems; however, asynchronous reading <cit.> can be employed to improve latency and transmission bandwidth, further reducing energy.
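The readout behavior of the two skip modes, together with the reuse of a mask over P frames, can be summarized by the following behavioral sketch; the patch size, helper names, and the convention that the mask generator returns NumPy region and row masks are assumptions for illustration only.

import numpy as np

def masked_readout(pixels, mask_reg, mask_row, mode, patch=16):
    # Behavioral sketch of the skip-mode readout; unread pixels are transmitted as 0.
    # pixels: [H, W]; mask_reg: [H//patch, W//patch] region mask; mask_row: [H//patch] row mask.
    out = np.zeros_like(pixels)
    for r in range(pixels.shape[0]):
        pr = r // patch
        if mode == "row":
            if mask_row[pr]:                          # row driver and all ADC blocks gated for skipped rows
                out[r] = pixels[r]
        else:
            if mask_reg[pr].any():                    # shared ramp/counter stay on if any region is active
                cols = np.repeat(mask_reg[pr].astype(bool), patch)
                out[r, cols] = pixels[r, cols]        # only the active column ADCs convert
    return out

def run_sequence(frames, mask_generator, mode="region", P=24):
    # A full frame is read every P frames; the predicted masks are reused for the next P-1 frames.
    outputs, mask_reg, mask_row = [], None, None
    for i, frame in enumerate(frames):
        if i % P == 0:
            mask_reg, mask_row = mask_generator(frame)   # standard-mode read + fresh mask prediction
            outputs.append(frame)
        else:
            outputs.append(masked_readout(frame, mask_reg, mask_row, mode))
    return outputs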
§ EXPERIMENTAL RESULTS
§.§ Models and Datasets
Object Detection: We evaluate our approach on two video object detection datasets: ImageNetVID <cit.> and BDD100K <cit.>. We demonstrate results using ViTDet <cit.> on ImageNetVID validation set, which consists of 639 video sequences, spanning 30 categories of objects, with each sequence containing up to 2895 frames. For BDD100K, we report results using single-head Tiny-YOLO <cit.> on 200 test samples, each with a sequence length of 200. The annotated dataset is at 5 FPS whereas the videos in BDD100K are captured at 30 FPS. We perform evaluation with P=4 at 5 FPS, which translates to P=24 at 30 FPS.
Tiny-YOLO on BDD100K is fine-tuned with the input mask for 20 epochs. For ViTDet, no additional fine-tuning is required. Our masking method is applied directly during inference, as described in <cit.>, to reduce computational overhead in the ViT back-end by processing fewer tokens.
To prevent accuracy loss, a reference tensor is stored for each transformer block from the last processed frame, ensuring consistent performance across frames while minimizing redundant computations.
Eye Tracking:
A typical eye-tracking pipeline consists of two stages: (a) feature extraction via eye segmentation and (b) gaze estimation. While gaze estimation is relatively lightweight, eye segmentation constitutes 85% of the system's overall computational cost <cit.>. We focus on the performance bottleneck—eye segmentation—as it largely dictates the accuracy of gaze estimation. Our approach is evaluated using EyeNet <cit.>, a U-Net-based eye segmentation network, applied to sequential data from the OpenEDS 2020 <cit.> dataset. This dataset includes 152 participants with diverse ethnicities and eye colors, captured at 100 FPS with a resolution of 640×400. For our experiments, we use the ground truth segmentation maps generated by <cit.>, which cover 185 video sequences totaling 27,431 frames—about 150 frames per 30-second sequence. Since the dataset is annotated at 5 FPS, we incorporate a scaling factor of 20 in the calculation of P to reflect the actual real-time processing at 100 FPS.
Mask Generator Network: The MGN consists of a single ViT layer with p=16, L=192, and an input resolution of 224×224, resulting in 1.86M parameters and 0.161 GFLOPs, where FLOPs denotes the number of floating-point operations. The initial weights for the mask generator network as well as the cls_token are restored from an ImageNet pre-trained DeiT-Tiny <cit.> model.
The MGN is trained for 20 epochs using Adam optimizer <cit.> with a learning rate of 0.001.
§.§ Performance Analysis
Table <ref> showcases the accuracy vs skip (or energy) ratios achieved using our MGN with variations in hyperparameters applied to two tasks across three datasets. We evaluate object detection using mAP-50 and eye segmentation using mIoU. The key hyperparameters governing the trade-off between accuracy and energy are P, t_reg and t_row. Increasing t_reg and t_row leads to skipping a large number of pixels and rows, which enhances energy efficiency but at the expense of mAP/mIoU. On the other hand, increasing P extends the interval between full-frame reads and mask re-computation, reducing the computational overhead of mask generation per frame but also impacting mAP/mIoU. Our MGN demonstrates the ability to skip up to 57%, 70%, and 80% pixels in masked frames on BDD100K, ImageNetVID, and OpenEDS in region-skip mode with accuracy degradation of 0.4%, 0.8%, and 2.4%, respectively. In row-skip mode, we achieve skip rates of up to 58%, 65% and 59% with accuracy degradation of 0.4%, 1.3% and 1.6% respectively. We choose skip ratios that provide an optimal trade-off and maintain mAP/mIoU close to the baseline (<1.5% for detection and <2.5% for segmentation) for front-end energy estimation (bolded in Table <ref>). Since transformer-based networks process data in a patch-wise manner, the reduction in back-end computation is roughly proportional to the skip ratio of input patches. In CNNs, this compute reduction can be achieved by focusing on the RoI. However, due to the mask shape constraints, additional regions may need to be processed, resulting in energy savings, that are bounded by the skip ratio.
Comparison with existing approaches: In Table <ref>, we compare the mAP-50 and FLOPs of the back-end CV network with our approach against existing efficient video object detection methods on ImageNetVID. While PatchNet <cit.> achieves the lowest FLOPs, it suffers from significant accuracy degradation. In contrast, our method maintains mAP-50 comparable to state-of-the-art approaches with less than 1% accuracy drop for region skipping and ∼1.3% degradation for row skipping.
Notably, unlike existing methods, which only determine which patches to process after reading the entire frame, our approach skips patches during the readout stage, making the task considerably more challenging but more energy-efficient.
Figure <ref> compares our method against current eye-tracking baselines in terms of both accuracy and front-end energy. We evaluate against EyeNet <cit.> and its light-weight variants proposed in Edgaze<cit.>. Unlike our approach, Edgaze <cit.> performs ROI prediction based on the current frame and can not skip pixels during the readout stage. As a result, we assume Edgaze consumes the same front-end energy as the baseline. Our method matches the accuracy of Edgaze while reducing the front-end energy by 58%. Additionally, our approach yields similar back-end processing energy savings as Edgaze.
§.§ Qualitative Analysis
The visualization of masks in Figures <ref> and <ref> demonstrates that MGN accurately captures the important regions at a small fraction of the cost (2% for EyeNet and 0.09% for ViTDet) of detection and segmentation networks.
§.§ Front-end Energy Analysis
To determine the front-end energy, we simulate our designed system using the GF22FDX node. We evaluate the front-end energy for the three datasets discussed above, accounting for the additional energy required for memory, data communication through the 3D integrated chip <cit.>, and the MGN engine. In our approach, a full frame is read periodically to compute the mask for skipping less relevant rows or regions in subsequent frames. Consequently, we calculate the average energy for both row-skip and region-skip modes based on the relative energy contributions from the fully read and masked frames, as depicted in Eq. <ref>. Our energy estimation technique closely follows <cit.>.
E_F = ( E_F,base + E_mem + (P-1) × E_F,mode)/ P + E_M
Here, E_F,base represents the baseline energy for a standard mode frame read, while E_mem accounts for the memory energy required to set all the row and column masks to 1 to enable the standard mode, allowing the system to read every pixel in a frame without skipping. E_F,mode denotes the energy for row-skip or region-skip modes, P denotes period after which a full frame is read, and E_M denotes the MGN energy per frame.
The 1.86M parameters in our MGN necessitate a total memory of 1.86 MB (assuming 8-bit weights), which can be integrated within the logic chip where the MGN is implemented. The energy of the MGN is estimated by accounting for both the compute operations, specifically the multiply-and-accumulate operations, and the memory access energy required to retrieve the parameters. We estimate these using our in-house circuit simulations at 22nm CMOS technology and subsequently scale the results to 7nm using technology scaling methodologies for advanced process nodes <cit.>. The E_F,mode is computed as
E_F,mode = E_mem + E_com + (e_sense,r + e_adc,r) × n_px,read + (e_sense,s + e_adc,s) × n_px,skip
Here, E_com denotes the communication energy between the CMOS BI-CIS and the digital logic die (MGN), while e_sense,r and e_sense,s denote the pixel sensing energy for conventional and skip-mode read operations, respectively. Similarly, e_adc,r and e_adc,s represent the ADC energy for conventional and skip-mode reads, respectively. n_px,read and n_px,skip denote the number of pixels with mask values of 1 and 0, respectively.
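As a sanity check, the average per-frame front-end energy defined by the two equations above can be evaluated with a small helper; all inputs are placeholders that would come from circuit simulation of a specific technology node and mask statistics of a specific dataset.

def avg_frontend_energy(e_f_base, e_mem, e_com, e_mgn,
                        e_sense_r, e_adc_r, e_sense_s, e_adc_s,
                        n_px_read, n_px_skip, P):
    # Per-frame energy of a masked frame (skip mode), then the P-frame average including
    # the periodic full-frame read and the mask generator energy.
    e_f_mode = (e_mem + e_com
                + (e_sense_r + e_adc_r) * n_px_read
                + (e_sense_s + e_adc_s) * n_px_skip)
    return (e_f_base + e_mem + (P - 1) * e_f_mode) / P + e_mgn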
Fig. <ref> displays the normalized front-end sensor energy relative to the baseline (standard mode) for both row-skip and region-skip modes. The figure illustrates that row-skip mode exhibits better energy efficiency compared to region-skip mode for the BDD100K and ImageNetVID datasets, whereas region-skip mode performs better for the OpenEDS dataset.
§.§ Row-wise vs Region-wise skipping
In the standard CIS system, rows are activated sequentially, and all pixels in a row are read in parallel using column-parallel ADCs. The row drivers and part of the SSADCs (including the counter and ramp generator) are shared across columns. In row-skip mode, row drivers and all the ADC blocks (shared and column-parallel) can be power-gated, allowing for skipping entire rows in the frame, thus maximizing energy savings. However, in region-skip mode, row drivers and shared ADC components (counter and ramp generator) must remain active to read the significant pixels in the row. Therefore, only the column-parallel components can be power-gated, leading to lower energy savings compared to row-skip mode for the same skip ratio. Note that by adding a switch between the bitline and the source-follower tail current source, energy efficiency can be further improved for skipped pixels in the region-skip mode. However, this work does not incorporate this extra switch to preserve the traditional CIS system structure. The normalized front-end sensor energy for the two modes for various skip ratios is illustrated in Fig. <ref>.
The normalized front-end sensor energy improvement is closely linked to the skip ratio. While our MGN aims for state-of-the-art accuracy at higher skip ratios, the energy savings vary between modes depending on these ratios. For example, in the BDD100K dataset, the skip ratio is 0.58 in row-skip mode and 0.57 in region-skip mode. Despite similar skip ratios, row-skip mode achieves greater energy efficiency due to larger energy savings (see the left subplot of Fig. <ref>, with P=24). Conversely, in the OpenEDS dataset, the skip ratio is 0.59 in row-skip mode and 0.80 in region-skip mode. Here, the higher skip ratio in region-skip mode results in lower energy (right subplot of Fig. <ref>, with P=160), demonstrating that the increased skip ratio in region-skip mode outweighs the energy benefits of row-skip mode. In addition, energy savings also depend on the total number of masked frames (P-1); higher frame period (P) results in greater energy reductions. Pixel array size also influences energy savings, with larger arrays offering higher energy savings by minimizing the number of ADC reads. Overall, our algorithm-hardware co-design approach is essential to maximize the skip ratio and number of masked frames while maintaining accuracy and reducing front-end energy.
§ DISCUSSIONS
This paper introduces a reconfigurable CMOS image sensor (CIS) system designed to enhance front-end energy efficiency for video-based complex CV tasks by selectively skipping rows or regions during the sensor readout phase, all while maintaining minimal overhead compared to traditional architectures. The system utilizes an intelligent masking algorithm capable of generating real-time RoI masks without relying on feedback from end tasks, making it particularly suitable for real-time applications in energy-constrained environments. We evaluate our system on autonomous driving and AR/VR applications, demonstrating significant energy reduction while maintaining high accuracy. Furthermore, our approach can be employed in edge applications that use large-format, high-resolution cameras captured at high frame rates. By integrating intelligent algorithms with hardware reconfigurability, this method offers strong potential for advancing low-power, real-time computer vision.
§ ACKNOWLEDGMENT
This work is supported in part by National Science Foundation under award CCF2319617. This work is also supported in part by gift fundings from Samsung and Intel Neuromorphic Research Lab (INRC). In particular, we would like to thank Sumit Bam Shrestha from INRC for fruitful discussions.
| As the demand for edge computing in computer vision (CV) applications grows—ranging from surveillance systems <cit.> and autonomous vehicles <cit.> to smart devices—energy efficiency has emerged as a key challenge.
A major contributor to energy inefficiency in edge devices is the repetitive reading and processing of redundant frames and pixels, particularly those that do not provide useful information for the task at hand. This problem is further exacerbated in video applications that are processed at high frame rates <cit.>, where large amounts of data are handled unnecessarily.
Consequently, optimizing the input data processing at the sensor level becomes essential to enable complex CV applications efficiently at the edge.
Previous studies have made strides in addressing temporal redundancy in video processing by skipping frames or selectively processing regions with significant changes <cit.>. However, these approaches still require reading entire frames to decide which pixels or patches to process. This implies that pixel readouts are still performed for all regions, leading to continued energy consumption during the analog-to-digital (ADC) conversion process, which remains a bottleneck for energy efficiency.
Moreover, current methods often rely on the output of the previous frame’s task to predict the significance of pixels or patches <cit.>, introducing delays in processing the current frame. For instance, while the HiRISE system <cit.> compresses the high-resolution images, thereby reducing the energy incurred in processing them, it depends on feedback from downstream tasks to generate high-resolution regions for object detection. This dependency can impede real-time effectiveness and adds complexity due to the back-and-forth data transfer, which is challenging in energy-limited environments. Moreover, methods that skip large portions of pixels often suffer from performance degradation <cit.>, which is problematic for safety-critical applications.
This underscores the need for an intelligent, lightweight pixel masking algorithm capable of generating real-time region-of-interest (RoI) masks without relying on feedback from downstream tasks. In addition, hardware reconfigurability is crucial to enable the skipping of regions or pixels during the pixel readout phase of the sensor, significantly improving energy efficiency. Towards this end, we propose a low-cost vision transformer (ViT)-based intelligent mask generator network that operates independently of task feedback, ensuring compatibility with real-time constraints and optimizing both accuracy and energy use. We also integrate hardware reconfigurability into existing conventional image sensor systems with minimal overhead, including additional memory banks for binary mask storage and power-gating switches. This helps reduce the energy consumed during the sensor readout phase guided by the mask generator.
Additionally, the masking algorithm boosts the compute efficiency of the back-end neural network processing the CV application by focusing on the salient regions of interest (RoI) identified by the mask. Note that previous works have explored compressing input data at the sensor level through in-sensor computing approaches, where the initial layers of the computer vision (CV) network are processed directly within the sensor <cit.>. While these approaches offer significant energy savings, they complement our method and can be integrated alongside it to further enhance energy efficiency.
The key contributions of our work are as follows.
1) Real-time Pixel Masking Algorithm: We propose a novel lightweight pixel masking algorithm tailored for CMOS image sensors that can create a binary mask in real time based on the significance of the pixels in a scene, without depending on feedback from end tasks.
2) Reconfigurable Peripheral Hardware Integration: We develop a reconfigurable sensor hardware system that integrates with the masking algorithm, enabling various sensing operation modes, including standard and two skip modes (row-wise skip and region-wise skip). This system yields the front-end energy savings by allowing pixel readouts to be skipped, guided by the binary mask during the sensor's read phase.
3) Significant Front-end Energy Savings: Our proposed algorithm-hardware co-design framework achieves notable improvements in front-end (constituting the sensor and the pixel mask generator) energy efficiency—46% (row-wise skip), 53% (row-wise skip), and 52% (region-wise skip) for BDD100K, ImageNetVID, and OpenEDS, respectively—while maintaining state-of-the-art accuracy in autonomous driving and AR/VR applications.
http://arxiv.org/abs/2409.17981v1 | 20240926155418 | BlinkTrack: Feature Tracking over 100 FPS via Events and Images | [
"Yichen Shen",
"Yijin Li",
"Shuo Chen",
"Guanglin Li",
"Zhaoyang Huang",
"Hujun Bao",
"Zhaopeng Cui",
"Guofeng Zhang"
] | cs.CV | [
"cs.CV"
] |
A neural network study of the phase transitions of the two-dimensional antiferromagnetic q-state Potts models on the square lattice
Fu-Jiun Jiang
-
===================================================================================================================================
§ ABSTRACT
Feature tracking is crucial for structure from motion (SFM), simultaneous localization and mapping (SLAM), object tracking, and various other computer vision tasks. Event cameras, known for their high temporal resolution and ability to capture asynchronous changes, have gained significant attention for their potential in feature tracking, especially in challenging conditions. However, event cameras lack the fine-grained texture information that conventional cameras provide, leading to error accumulation in tracking. To address this, we propose a novel framework, BlinkTrack, which integrates event data with RGB images for high-frequency feature tracking. Our method extends the traditional Kalman filter into a learning-based framework, utilizing differentiable Kalman filters in both event and image branches. This approach improves single-modality tracking, resolves ambiguities, and supports asynchronous data fusion. We also introduce new synthetic and augmented datasets to better evaluate our model. Experimental results indicate that BlinkTrack significantly outperforms existing event-based methods, exceeding 100 FPS with preprocessed event data and 80 FPS with multi-modality data.
§ INTRODUCTION
Feature tracking aims to estimate the trajectories of query points from a reference timestamp over subsequent periods. It serves as the cornerstone for many computer vision tasks, including structure from motion <cit.>, simultaneous localization and mapping (SLAM) <cit.>, and object tracking <cit.>.
In recent years, the success of event cameras <cit.> has garnered significant attention in the research community. Event cameras are innovative sensors that asynchronously detect changes in a scene at a very high temporal resolution, capturing events as they occur rather than recording frames at fixed intervals. This unique characteristic enables event cameras to perform feature tracking at high frequencies, even under challenging lighting conditions or with fast-moving objects.
Nevertheless, event cameras are unable to capture the detailed, fine-grained texture information that conventional cameras provide, which can lead to error accumulation and degrade tracking performance. Therefore, in this paper, we integrate the complementary information from event cameras and standard cameras for efficient and robust feature tracking.
To achieve our objective, we must address two primary challenges: (i) Event cameras and standard cameras do not operate in a synchronized manner, with standard cameras capturing at fixed fps (e.g., 30fps) and event cameras detecting changes asynchronously. This mismatch can lead to ambiguity in tracking positions when fusing these two signals. (ii) Another significant challenge is to seamlessly integrate the complementary data from both modalities while minimizing noise interference.
Previous learning-based methods for feature tracking, such as Deep-EV-Tracker <cit.>, naively combine event data and RGB images by initializing the tracking position with results from another modality, which leads to only limited improvements. In other fields <cit.>, attention-based <cit.> modules have been explored for alignment and fusion. While these methods can be effective, they do not meet the efficiency requirements for high-frame-rate feature tracking.
On the other hand, traditional techniques such as the Kalman filter <cit.>, offer efficient tools for the fusion of asynchronous data. However, they usually require careful hand-crafted parameter tuning and still do not achieve the performance levels as recent learning-based methods.
Based on these observations, we propose a novel framework that can achieve high-frame-rate feature tracking (over 100 FPS) by leveraging the event data and RGB images, dubbed BlinkTrack.
Our method is inspired by a traditional technique, i.e., the Kalman filter, but extends it to a learning-based framework. Specifically, our framework consists of an event branch and an image branch.
Both branches employ differentiable Kalman filters. They learn to predict uncertainty from new measurements and incorporate these measurements into feature tracking through end-to-end training.
The characteristics inherited from the differentiable Kalman filter offer several advantages. First, it improves the single-modality tracker by learning the optimal state. Second, it helps resolve ambiguities and incorrect measurements caused by occlusions. Third, it naturally supports the fusion of asynchronous measurements from different modalities.
In addition to the Kalman filter, our event branch and RGB branch are well-designed, achieving a balance of high precision and efficiency.
However, during training and evaluation, we found that the current datasets are too simple. As a result, they do not fully allow our model to reach its potential and fail to provide a thorough evaluation. To address this, we first generated a more complicated synthetic dataset for training our model. Then, we augmented two existing evaluation datasets with more occlusions.
The experiments show that the proposed tracker significantly outperforms existing event-based methods and is much more robust to occlusions. Additionally, it can run at over 100 FPS.
Our contributions can be outlined as follows.
First, we propose an efficient Kalman-filter-based framework that can achieve state-of-the-art performance while running at over 100 FPS.
Second, we generate new datasets for training and evaluating event-based feature tracking.
Lastly, extensive experiments show that the proposed methods outperform existing methods by a large margin and are much more robust in handling occlusions.
§ RELATED WORK
Frame-based Feature Tracking
Frame-based feature tracking methods develop rapidly for their wide application. Harley et al.<cit.> and Doersch et al.<cit.> were pioneers in this area, introducing new datasets and networks. Harley et al. proposed the FlyingThings++<cit.> based on FlyingThings<cit.>, while Doersch et al. introduced Kubric<cit.>. Both datasets are synthetic and the proposed networks, PIPs<cit.> and TAP-Net<cit.>, are trained on them. PointOdyssey<cit.> significantly improves the training dataset with an automatic scene generate pipeline and extends PIPs to PIPs+<cit.> and PIPs++<cit.> which is more robust in long-term tracking. Context-PIPs<cit.> uses context feature near tracked points to achieve more stable tracking while CoTracker<cit.> tracks almost dense points and sharing motions between nearby points.
While frame-based methods suffer from these hardware limitations, event cameras offer high temporal resolution and lightweight data, making them well suited to such extreme scenes.
Event-based Feature Tracking
Event cameras have been used to increase tracking robustness in challenging conditions in recent years. ICP<cit.> is widely used in early traditional methods<cit.>, which can also be fused with standard cameras<cit.>. Recently, asynchronous trackers have been developed<cit.>; among them, AMH<cit.> and HASTE<cit.> explore multiple hypotheses using pure event data, and EKLT<cit.> makes use of warped color patch gradients to track features. To better fit the motion, Bézier curves and B-splines are applied to optimize trajectories by Seok et al.<cit.> and Chui et al.<cit.>, while eCDT<cit.> and Wang et al.<cit.> use event clusters and blobs to represent a feature. Inspired by recent data-driven event-based optical flow estimation methods<cit.>, Deep-EV-Tracker<cit.> is the first to apply neural networks to event-based feature tracking. Given the absence of a dataset that includes both event data and corresponding ground truth trajectories, Deep-EV-Tracker uses MultiFlow<cit.>, with trajectories computed from ground truth optical flow.
Our event module follows Deep-EV-Tracker and improves its network architecture design. We also follow MultiFlow to generate our own synthetic dataset, which is the first event-based dataset on feature tracking.
Kalman Filter
The Kalman filter has been widely used for object tracking<cit.> and optical flow<cit.> because it builds an explicit motion model, which benefits the tracking task. The differentiability of the Kalman filter also makes it compatible with neural networks. Bao et al.<cit.> add a Kalman filter to existing optical flow methods, gaining a performance improvement. The Kalman filter also holds the advantage of fusing different signals, which is used by Wang et al.<cit.> to reconstruct video from LDR images and event data.
Our work employs differentiable Kalman filters to supervise our learning-based network and combine predictions from the event module and color module.
§ BLINKTRACK
Feature tracking usually takes a reference frame and a query point 𝐩_ref∈ℝ^2 as input, and aims to track this point in subsequent T timestamps to get estimation 𝐩̂ = {𝐩̂_0, 𝐩̂_1, …, 𝐩̂_T-1}.
An overview of our method BlinkTrack is presented in Fig. <ref>. It consists of an event module and an image module. Both modules are trained using the differentiable Kalman filter. In Sec. <ref>,<ref>,<ref>,<ref> we introduce how we design the event module and how we integrate it with the differentiable Kalman filter.
Then in Sec. <ref> we talk about the design of the color module.
Finally, we introduce how to supervise the training in Sec. <ref>.
§.§ Pyramid Feature Extraction
From the reference grayscale image 𝐈_ref at 𝐩_ref, we extract the reference patch 𝐏_evt_ref with size P_evt× P_evt.
Then taking it as the reference, the event module predicts the relative feature displacement by comparing it with the event patch 𝐏_evt_j, where the latter one is extracted from the event stream according to last predicted position 𝐩_j-1.
In practice, we use two shallow convolution neural networks to encode the reference patch and the event patch, respectively.
Then we construct a two-level feature pyramid by applying average pooling to the patch features, as inspired by DPVO <cit.>.
The smaller-scale feature maps are designed to offer more detailed information for the small displacements in high-frame-rate event data, while the larger-scale feature maps provide broader context that helps resist unstable tracking, especially on similar features.
Compared with Deep-EV-Tracker <cit.>, the design of the two-level feature pyramid allows for a larger perception field with a limited increase in budget.
Then, we compute the correlation 𝐂_evt_j between the reference feature vector, extracted from the center of the reference feature map 𝐅_evt_ref, and the event patch feature map 𝐅_evt_j, in order to explicitly compare the similarity between the reference feature and the event feature.
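A minimal sketch of this correlation is given below, assuming both encoders output [C, H, W] feature maps and that the reference vector is taken at the patch centre; the scaling by the square root of the channel dimension is our own normalization choice for illustration.

import torch

def patch_correlation(f_ref, f_evt):
    # f_ref, f_evt: [C, H, W] feature maps from the reference and event encoders (shapes illustrative).
    c, h, w = f_ref.shape
    ref_vec = f_ref[:, h // 2, w // 2]                       # reference feature vector at the patch centre
    corr = (f_evt * ref_vec[:, None, None]).sum(dim=0)       # dot-product similarity at every location
    return corr / c ** 0.5                                   # [H, W] correlation map C_evt_j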
§.§ LSTM Displacement Predictor
Previous methods <cit.> have demonstrated that the design of LSTM provides significant benefits to event-based tasks due to its efficiency in extracting temporal features and producing smooth results. Since temporal context information is beneficial to estimation accuracy, we also employ LSTM in our tracking module to convey information between frames.
Specifically, to better facilitate information sharing across frames, we propose a dual LSTM module for feature and motion information delivery.
The feature LSTM module employs a ConvLSTM Block<cit.> and several standard convolutional layers, transforming the concatenated 𝐅_evt_ref, 𝐅_evt_j and 𝐂_evt_j into a displacement feature vector 𝐟_evt_j. The displacement LSTM module maintains a hidden displacement feature vector 𝐡_j-1 across frames to utilize displacement insights from preceding frames. Each displacement feature vector 𝐟_evt_j is fused with the preceding 𝐡_j-1 through an MLP, obtaining a merged displacement vector 𝐦_j. This vector is processed through a gating layer to form the current hidden displacement feature vector 𝐡_j, which is then passed to a linear predictor to produce the displacement prediction Δ𝐩̂.
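The gated fusion of the displacement feature with the carried-over hidden state can be sketched as follows; the feature dimension, the exact gating form, and the module name are illustrative simplifications of the dual-LSTM design described above.

import torch
import torch.nn as nn

class DisplacementHead(nn.Module):
    # Sketch of the displacement branch: fuse the current displacement feature with the hidden
    # state from previous frames, then regress the 2D displacement.
    def __init__(self, dim=128):
        super().__init__()
        self.fuse = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())  # MLP producing m_j
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())   # gating layer producing h_j
        self.pred = nn.Linear(dim, 2)                                  # linear displacement predictor

    def forward(self, f_evt_j, h_prev):
        m_j = self.fuse(torch.cat([f_evt_j, h_prev], dim=-1))
        h_j = self.gate(m_j) * m_j
        return self.pred(h_j), h_j          # displacement prediction and updated hidden state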
§.§ Uncertainty Predictor
This module predicts the uncertainty (or noise) associated with each measurement, which can then be incorporated into the update step of the Kalman filter.
Unlike the previous displacement predictor, this module does not employ LSTM because the uncertainty may change drastically and does not follow temporal smoothness. Such cases occur, for example, during occlusions.
Besides, general uncertainty ranges from 0 to positive infinity, covering too wide a distribution, which is not stable for learning.
As a result, we learn an approximation: the probability of visibility, which ranges from 0 to 1. Then the probability is remapped using a parabola function and finally extended to Σ̂_evt_j ∈ℝ^2×2.
Specifically, the predictor takes all 𝐅_evt_ref, 𝐅_evt_j and 𝐂_evt_j as input, which is concatenated and convoluted into an uncertainty feature vector 𝐟_uncert_j. A linear layer followed by a Softmax function is then applied to generate the occlusion probability and visible probability (p̂_occ, p̂_vis).
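A possible form of the remapping from visibility probability to measurement covariance is sketched below; the parabolic shape follows the description above, but the sigma range and constants are assumptions, since the exact mapping is not specified here.

import torch

def visibility_to_covariance(p_vis: float, sigma_min: float = 0.5, sigma_max: float = 8.0) -> torch.Tensor:
    # Low visibility -> large measurement noise, so the Kalman update trusts the motion model instead.
    sigma = sigma_min + (sigma_max - sigma_min) * (1.0 - p_vis) ** 2   # parabolic remapping
    return sigma * torch.eye(2)                                        # Sigma_evt_j in R^{2x2}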
§.§ Kalman Filter
The Kalman filter<cit.> is composed of two fundamental steps: prediction and update. Based on the current state estimate and the process model, the prediction step predicts the state and the uncertainty at the next time step. It is formulated as:
x̂_k|k-1 = F x̂_k-1|k-1,
P_k|k-1 = F P_k-1|k-1 F^T + Q.
Here, x̂_k-1|k-1 is the current state estimate, x̂_k|k-1 is the predicted state estimate, P_k-1|k-1 is the current state covariance,, P_k|k-1 is the predicted covariance F is the state-transition model, and Q is the process noise covariance.
The update step incorporates the new measurement to refine the state estimate and its uncertainty. It is formulated as:
y_k = z_k - H x̂_k|k-1,
S_k = H P_k|k-1 H^T + R,
K_k = P_k|k-1 H^T S_k^-1,
x̂_k|k = x̂_k|k-1 + K_k y_k,
P_k|k = (I - K_k H) P_k|k-1,
where y_k is the measurement residual (innovation), z_k is the new measurement, x̂_k|k is the updated state estimate, S_k is the residual covariance, R is the measurement noise covariance, P_k|k is the updated covariance, H is the observation model, K_k is the Kalman gain, and I is the identity matrix.
The measurement z_k and the measurement noise covariance R are the predictions from our network.
The process of kalman filter is fully differentiable. As a result, it allows the network to learn to minimize error by incorporating correct measurements and preventing interference from noise.
In our implementation, we assume a simple constant-velocity model. The detailed values can be found in the supplementary material.
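A minimal differentiable predict-and-update step under the constant-velocity assumption is sketched below in PyTorch; the state layout, time step dt, and process noise q are illustrative, while z and R stand for the network's displacement measurement and its learned covariance.

import torch

def kalman_step(x, P, z, R, dt=1.0, q=1e-2):
    # One predict + update step with a constant-velocity model; state x = [px, py, vx, vy].
    # Every operation is differentiable, so gradients flow back into the networks producing z and R.
    F_mat = torch.eye(4)
    F_mat[0, 2] = dt
    F_mat[1, 3] = dt
    H = torch.zeros(2, 4)
    H[0, 0] = 1.0
    H[1, 1] = 1.0
    Q = q * torch.eye(4)
    # Prediction
    x_pred = F_mat @ x
    P_pred = F_mat @ P @ F_mat.T + Q
    # Update with the measurement z and its learned covariance R
    y = z - H @ x_pred
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ torch.linalg.inv(S)
    x_new = x_pred + K @ y
    P_new = (torch.eye(4) - K @ H) @ P_pred
    return x_new, P_new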
§.§ Color Frame Relocalization Module
Event cameras are not good at measuring fine-grained texture information like traditional cameras, which can easily lead to error accumulation and track loss. To improve long-term tracking performance and minimize cumulative error, we developed a lightweight Color Frame Relocalization Module. The design of our color module is inspired by recent work on point tracking, such as PIPs<cit.>. However, these works usually run slowly, negating the low latency advantage of event cameras, so we simplified the architecture design, and our module can operate at over 50 FPS. Specifically, we utilize a Pyramid Encoder<cit.> to encode the image and achieve a much larger receptive field than that of the event module. Then, a pyramid correlation map is calculated to capture multi-scale information. Finally, an iterative updater takes the embedded feature vector and correlation map to predict displacement and uncertainty, which are subsequently incorporated into the Kalman filter to obtain the final prediction. The detailed design of the color module is provided in the supplementary material.
§.§ Supervision
Jointly training the event and color modules would be inefficient and consume too many computational resources, so we chose to train the two modules separately. To ensure direct prediction accuracy and training stability, we first train our event module without the Uncertainty Predictor <ref> and Kalman filter <ref> on our synthetic dataset, MultiTrack (see Sec. <ref>). As no existing dataset for event-based feature tracking currently offers visibility labels, our MultiTrack provides ground-truth visibility to help supervise our full module, including the Uncertainty Predictor and Kalman filter, while our color module is also trained on this dataset.
Event Displacement Supervision
Since the synthetic dataset provides ground-truth track positions, we apply the L1 distance between the predicted displacement Δ𝐩̂ and the ground-truth displacement Δ𝐩 at each frame to construct the loss.
We only gather loss for those ground truth points in our largest pyramid patch with radius r, which is in the receptive field. Moreover, we follow <cit.> to augment the patch from each frame with scale, translation, and rotation.
ℒ_d̂îŝp̂ = ||Δ𝐩̂ - Δ𝐩||_1 if ||Δ𝐩||_1 < r, and 0 otherwise.
Event Uncertainty Supervision
To preserve the displacement prediction, we freeze the entire displacement prediction pipeline, including the Pyramid Feature Encoder <ref> and the LSTM Displacement Predictor <ref>.
We supervise the uncertainty predictor on both displacement loss and uncertainty loss.
The Kalman filter is added to make use of the predicted uncertainty, and the displacement loss is applied to the Kalman filter prediction Δ𝐩̃.
For uncertainty, we apply a cross-entropy loss on the predicted probability p̂ = (p̂_occ, p̂_vis) and the ground-truth visibility status v, because visibility is a strong cue for uncertainty.
We also add a weighting parameter w_1 = 2 to balance the two losses.
ℒ_d̃ĩs̃p̃ = ||Δ𝐩̃ - Δ𝐩||_1 if ||Δ𝐩||_1 < r, and 0 otherwise.
ℒ_uncert = CrossEntropy(p̂, v)
ℒ_event = ℒ_d̃ĩs̃p̃ + w_1ℒ_uncert
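A compact sketch of this combined event-module loss is given below, assuming batched displacements and two-way visibility logits; the patch radius r, the tensor shapes, and the function name are illustrative.

import torch
import torch.nn.functional as F

def event_module_loss(dp_pred, dp_gt, vis_logits, vis_gt, r=15.0, w1=2.0):
    # dp_pred: Kalman-filtered displacements [B, 2]; dp_gt: ground-truth displacements [B, 2]
    # vis_logits: [B, 2] (occluded, visible) scores; vis_gt: [B] integer visibility labels
    in_field = (dp_gt.norm(p=1, dim=-1) < r).float()              # supervise only points inside the patch
    l_disp = (in_field * (dp_pred - dp_gt).abs().sum(dim=-1)).mean()
    l_uncert = F.cross_entropy(vis_logits, vis_gt)
    return l_disp + w1 * l_uncert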
The supervision for color module is provided in the supplementary material.
§ MULTITRACK, EC-OCC AND EDS-OCC
Since there is no dataset with event data that contains occluded tracks, and ground-truth visibility can hardly be labeled manually, we introduce a pipeline (see Fig. <ref>) that can synthesize color images, events, and tracks with occlusion and visibility labels under adjustable parameters, or generate these data from an existing dataset, requiring only raw color images and given pixel trajectories.
§.§ MultiTrack
For the generated dataset, following MultiFlow<cit.>, a background image is chosen from the Flickr 30k Dataset<cit.>, while foreground images come in two types. The first type consists of objects with transparent backgrounds from Google Scanned Objects<cit.>; the other type consists of random polygons or smoothed shapes filled with random image textures, whose shapes are inspired by AutoFlow<cit.> and whose textures come from COCO2014<cit.>. With random translation, rotation, and scale applied to each image, the foreground images are overlaid on the background images, producing synthetic high-frame-rate images, instance maps, and dense tracks with visibility labels. The high-frame-rate images are fed into DVS-Voltmeter<cit.> to generate synthetic event data, and they are also sampled at a normal frame rate to obtain color images. Using this pipeline, we generate MultiTrack as our training data.
§.§ EC-occ, EDS-occ
When applying occlusion to an existing dataset, in order to acquire high-frame-rate images, we interpolate the color images using FILM<cit.>, which iterates 4 times and inserts 15 frames for each color frame interval. With these high-frame-rate background images, the same procedure is applied to create occlusions on the existing datasets, obtaining occluded event data from DVS-Voltmeter<cit.> and the visibility of the given tracks from the instance maps.
EC<cit.> and EDS<cit.> are occluded and processed into EC-occ and EDS-occ. However, some hold that synthetic events exhibit significant differences from events captured by real event cameras since sensor noise is not modeled. We therefore conduct an experiment by also generating events from interpolated but non-occluded images, EC-syn and EDS-syn, to compare real events against our synthetic events. Tab. <ref> shows only a slight difference in the evaluation metrics between real and synthetic data, which indicates that the synthetic data are credible for evaluation.
§ EXPERIMENT
Dataset
We test our method on two commonly used real-captured datasets, the Event Camera dataset (EC)<cit.> and the Event-aided Direct Sparse Odometry dataset (EDS)<cit.>. EC provides 24 Hz APS frames and events at a resolution of 240 × 180, recorded by a DAVIS240C camera<cit.>, while EDS contains 640 × 480 data captured with a beam-splitter setup. The events from EC and EDS are pre-processed into SBT-Max<cit.> with intervals of 10 ms and 5 ms, so that the input event data arrive at frequencies of 100 Hz and 200 Hz, respectively. These datasets do not originally offer ground truth for feature tracking, and <cit.> extend them by computing the ground truth through KLT<cit.> tracking and triangulation. We also test on our synthetic datasets with occlusion, EC-occ and EDS-occ, introduced in Sec. <ref>. Another widely used event-based dataset, DSEC<cit.>, is also used in the qualitative comparison to evaluate performance on large outdoor scenes.
Baselines
We conduct experiments against the state-of-the-art method Deep-EV-Tracker<cit.>, which tracks features over event frames using a grayscale image reference with smaller patches and produces no uncertainty estimates. We also compare with EKLT<cit.>, which has a pipeline similar to Deep-EV-Tracker but does not use a neural network. Additionally, HASTE<cit.>, which requires no color image and uses pure event data, is included. Moreover, to demonstrate our novel multi-modality combination technique, we combine Deep-EV-Tracker and KLT<cit.>, a classic and popular color-frame tracker, by replacing the initial position with the other module's prediction, as mentioned in <cit.>.
Metric
EC<cit.> and EDS<cit.> only provide ground-truth trajectories at 24 Hz and 75 Hz. To evaluate the high-rate event predictions at 100 Hz and 200 Hz, we interpolate the event predictions into continuous trajectories and sample points at those timestamps that have ground-truth data, following the setting in <cit.>.
We evaluate feature age, defined as the duration for which a feature is tracked before its distance from the ground truth exceeds a specified threshold, and expected feature age, which is calculated as the product of feature age and the ratio of stable tracks. These two metrics are common in event-based feature tracking benchmarks<cit.>. We also evaluate accuracy through δ_avg<cit.>, the average fraction of points tracked within 1, 2, 4, 8, and 16 pixels, which is widely used in point tracking benchmarks. The detailed definitions can be found in previous methods <cit.>.
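For reference, these metrics can be computed as follows; the error threshold used for feature age and the function names are illustrative, as the exact values follow the benchmark protocol of prior work.

import numpy as np

def delta_avg(pred, gt, thresholds=(1, 2, 4, 8, 16)):
    # Average fraction of tracked points whose error is below each pixel threshold; pred/gt: [T, 2].
    err = np.linalg.norm(pred - gt, axis=-1)
    return float(np.mean([(err < t).mean() for t in thresholds]))

def feature_age(pred, gt, dt, max_err=5.0):
    # Time a feature stays within max_err pixels of the ground truth before diverging.
    err = np.linalg.norm(pred - gt, axis=-1)
    bad = np.nonzero(err > max_err)[0]
    alive = bad[0] if bad.size else len(err)
    return alive * dt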
Implementation Details
As mentioned in Sec. <ref>, our event module is pre-trained on the MultiTrack dataset on 30000 feature tracks from 2000 sequences employing a continual learning approach <cit.>. We use ADAM<cit.> optimizer with a learning rate of 1 × 10^-4 and 140000 training steps in total with batch size 32. To stabilize training at the beginning, the input sequences are clipped to 4 frames, which would increase to 12 and 23 after 80000 and 120000 steps following <cit.>. Then we train our uncertainty predictor on the same dataset with the same hyperparameters. At the same time, we train our color module on MultiTrack which is generated for color image training with 10000 sequences using ADAMW<cit.> with a learning rate of 5 × 10^-4. Each sequence is regarded as a batch, which is set to size 16, and we sample 24 tracks from each track. The sequence length is fixed to 10 in all 50000 training steps. We run supervision on two NVIDIA RTX3090 24GB GPUS for 48 hours and experiments on one NVIDIA RTX3090 GPU and Intel(R) Xeon(R) Gold 6139M CPU.
§.§ Experiments Results
Quantitative Experiment
As reported in Tab. <ref>, our method achieves the best performance on EC and EDS using merely the event module. With the Kalman filter and color module, we attain an even larger margin, though we must emphasize that the original EC and EDS are all static scenes with almost no occluded trajectories, which do not fully reveal the superiority of our approach.
Besides, the comparison between with and without Kalman filter reveals the effectiveness of the Kalman filter.
To better demonstrate the superiority of the Kalman filter, experiments are conducted on the datasets with occlusion, EC-occ and EDS-occ (see Tab. <ref>). The better performance of the modules with the Kalman filter proves its ability to stabilize tracking, especially in occluded situations, since the occluded points show the largest improvement.
When the point is occluded, the uncertainty would increase, and the Kalman filter would trust its explicit motion state rather than the untrustworthy network output.
Both experiments support the argument that multimodality should be combined carefully. Simply combining two modules by replacing initial points does not produce desirable results and can even yield worse outcomes than using a single module. However, with the assistance of the Kalman filter, we observed a significant improvement, demonstrating that the Kalman filter is an effective method for combining multimodality modules.
To explain in more depth, uncertainty increases in scenarios where event cameras or traditional cameras are challenged, such as extreme lighting or motion for traditional cameras or low latent features for event cameras. In these situations, the biased prediction has a slight influence on the final prediction when passed through the Kalman filter. This process naturally leverages the advantages of both event and color cameras, balancing predictions from the two different sources.
Qualitative Comparison
We visualize the tracks estimated by our methods, along with those from other methods, in Fig. <ref>. As shown, all methods exhibit apparent errors after occlusion, but our methods result in fewer lost tracks. The effectiveness of the color relocalization module is evident, as it facilitates relocalization after occlusion, whereas the other methods struggle and often fail post-occlusion. In conclusion, our methods provide the longest and most stable estimated tracks, demonstrating their robustness in occlusion situations.
Runtime Analysis
Our event module takes less than 9 ms to track one event patch across one preprocessed event frame on EC<cit.> and EDS<cit.> using an Nvidia RTX3090. The introduction of the Kalman filter adds less than 1 ms to the budget. As a result, the integration of the event module and Kalman filter enables precise predictions at frequencies exceeding 100Hz. Additionally, our image module can operate efficiently with the Kalman filter at frequencies above 50Hz, making it efficient enough for low-frequency image relocalization.
§.§ Ablations
To evaluate the distinct contributions of each integrated network component on tracking accuracy and stability, we conducted a series of ablation studies on real data EC<cit.> and EDS<cit.> and occluded data EC-occ and EDS-occ outlined in Tab. <ref>.
Feature Pyramid
We use a feature pyramid to account for both the receptive field and accuracy. We experiment with replacing the pyramid feature encoder with a single patch, as done in <cit.>, which performs worse because a single patch carries less information, indicating that the pyramid contributes both to tracking accuracy and to robustness under occlusion.
LSTM Module
Our event module employs two LSTM modules, designed for delivering feature and displacement information. We tested versions with one of the LSTM modules removed and a version with no LSTM modules. The results show that removing any LSTM module leads to performance loss, demonstrating the contribution of both LSTM modules.
Correlation Vector
Deep-EV-Tracker<cit.> uses the U-Net bottleneck vector as the reference vector to correlate with the target feature map, whereas we argue that, for symmetry, the reference vector should come from the reference feature map. We test both choices and find that the vector taken from the reference feature map performs better.
Dataset
In this experiment, we train our event module and Deep-EV-Tracker<cit.> on the MultiTrack and MultiFlow<cit.> datasets and compare the average performance of the two methods across the two datasets. Although we view MultiTrack as a supplement to MultiFlow rather than a complete replacement, the results show that training on our MultiTrack dataset still outperforms training on MultiFlow on the average metrics. This suggests that MultiTrack allows the modules to better realize their potential.
§ CONCLUSION
In this paper, we presented BlinkTrack, a novel framework for high-frame-rate feature tracking that leverages the strengths of both event data and RGB images. By integrating a differentiable Kalman filter within a learning-based architecture, BlinkTrack effectively addresses the challenges of asynchronous data fusion and improves tracking performance under occlusions. Our extensive experiments, supported by newly generated and augmented datasets, show that BlinkTrack significantly outperforms existing methods in terms of robustness and speed, achieving state-of-the-art performance while running at over 100 FPS.
These results underscore the potential of our approach for advanced feature tracking applications, establishing a new benchmark for future research in this field.
Limitations. Since the event module and color image module are trained separately, their fusion performance could be hindered. In our future work, we will explore training these two modules jointly with more resources.
| Feature tracking aims to estimate the trajectories of query points from a reference timestamp over subsequent periods. It serves as the cornerstone for many computer vision tasks, including structure from motion <cit.>, simultaneous localization and mapping (SLAM) <cit.>, and object tracking <cit.>.
In recent years, the success of event cameras <cit.> has garnered significant attention in the research community. Event cameras are innovative sensors that asynchronously detect changes in a scene at a very high temporal resolution, capturing events as they occur rather than recording frames at fixed intervals. This unique characteristic enables event cameras to perform feature tracking at high frequencies, even under challenging lighting conditions or with fast-moving objects.
Nevertheless, the event cameras are unable to capture detailed fine-grained texture information like conventional cameras, which can lead to error accumulation and inhibit tracking performance. Therefore, in this paper, we leverage an integration of the valuable information from event cameras with that of standard cameras for efficient and powerful feature tracking.
To achieve our objective, we must address two primary challenges: (i) Event cameras and standard cameras do not operate in a synchronized manner, with standard cameras capturing at fixed fps (e.g., 30fps) and event cameras detecting changes asynchronously. This mismatch can lead to ambiguity in tracking positions when fusing these two signals. (ii) Another significant challenge is to seamlessly integrate the complementary data from both modalities while minimizing noise interference.
Previous learning-based methods for feature tracking, such as Deep-EV-Tracker <cit.>, naively combine event data and RGB images by initializing the tracking position with results from another modality, which leads to only limited improvements. In other fields <cit.>, attention-based <cit.> modules have been explored for alignment and fusion. While these methods can be effective, they do not meet the efficiency requirements for high-frame-rate feature tracking.
On the other hand, traditional techniques such as the Kalman filter <cit.>, offer efficient tools for the fusion of asynchronous data. However, they usually require careful hand-crafted parameter tuning and still do not achieve the performance levels as recent learning-based methods.
Based on these observations, we propose a novel framework that can achieve high-frame-rate feature tracking (over 100 FPS) by leveraging the event data and RGB images, dubbed BlinkTrack.
Our method is inspired by a traditional technique, i.e., the Kalman filter, but extends it to a learning-based framework. Specifically, our framework consists of an event branch and an image branch.
Both branches employ the differentiable Kalman filters. They learn to predict uncertainty from new measurements and incorporate these measurements into feature tracking through end-to-end training.
The characteristics inherited from the differentiable Kalman filter offer several advantages. First, it improves the single-modality tracker by learning the optimal state. Second, it helps resolve ambiguities and incorrect measurements caused by occlusions. Third, it naturally supports the fusion of asynchronous measurements from different modalities.
In addition to the Kalman filter, our event branch and RGB branch are well-designed, achieving a balance of high precision and efficiency.
However, during training and evaluation, we found that the current datasets are too simple. As a result, they do not fully allow our model to reach its potential and fail to provide a thorough evaluation. To address this, we first generated a more complicated synthetic dataset for training our model. Then, we augmented two existing evaluation datasets with more occlusions.
The experiments show that the proposed tracker significantly outperforms existing event-based methods and is much more robust to occlusions. Additionally, it can run at over 100 FPS.
Our contributions can be outlined as follows.
First, we propose an efficient Kalman-filter-based framework that can achieve state-of-the-art performance while running at over 100 FPS.
Second, we generate new datasets for training and evaluating event-based feature tracking.
Lastly, extensive experiments show that the proposed methods outperform existing methods by a large margin and are much more robust in handling occlusions. | Frame-based Feature Tracking
Frame-based feature tracking methods have developed rapidly owing to their wide range of applications. Harley et al.<cit.> and Doersch et al.<cit.> were pioneers in this area, introducing new datasets and networks. Harley et al. proposed FlyingThings++<cit.> based on FlyingThings<cit.>, while Doersch et al. introduced Kubric<cit.>. Both datasets are synthetic, and the proposed networks, PIPs<cit.> and TAP-Net<cit.>, are trained on them. PointOdyssey<cit.> significantly improves the training data with an automatic scene generation pipeline and extends PIPs to PIPs+<cit.> and PIPs++<cit.>, which are more robust in long-term tracking. Context-PIPs<cit.> uses context features near the tracked points to achieve more stable tracking, while CoTracker<cit.> tracks nearly dense points and shares motion between nearby points.
While frame-based methods suffer from hardware limitations, event cameras offer high temporal resolution and lightweight data, which makes them well suited to these extreme scenes.
Event-based Feature Tracking
In recent years, event cameras have been used to increase tracking robustness in challenging conditions. ICP<cit.> is widely used in early traditional methods<cit.>, which can also be fused with a standard camera<cit.>. More recently, asynchronous trackers have been developed<cit.>; among them, AMH<cit.> and HASTE<cit.> explore multi-hypothesis tracking using pure event data, and EKLT<cit.> makes use of warped color patch gradients to track features. To better fit the motion, Bézier curves and B-splines are applied to optimize trajectories by Seok et al.<cit.> and Chui et al.<cit.>, while eCDT<cit.> and Wang et al.<cit.> use event clusters and blobs to represent a feature. Inspired by recent data-driven event-based optical flow estimation methods<cit.>, Deep-EV-Tracker<cit.> is the first to apply neural networks to event-based feature tracking. Given the absence of a dataset that includes both event data and corresponding ground-truth trajectories, Deep-EV-Tracker uses MultiFlow<cit.>, with trajectories computed from ground-truth optical flow.
Our event module follows Deep-EV-Tracker and improves its network architecture design. We also follow MultiFlow to generate our own synthetic dataset, which is the first event-based dataset on feature tracking.
Kalman Filter
The Kalman filter has been widely used for object tracking<cit.> and optical flow<cit.> because it builds an explicit motion model, which benefits the tracking task. Its differentiability also makes it compatible with neural networks. Bao et al.<cit.> apply a Kalman filter to existing optical flow methods and obtain improved performance. The Kalman filter is also well suited to fusing different signals, which Wang et al.<cit.> exploit to reconstruct video from LDR images and event data.
Our work employs differentiable Kalman filters to supervise our learning-based network and combine predictions from the event module and color module.
http://arxiv.org/abs/2409.17975v1 | 20240926155046 | Simulation-Based Inference Benchmark for LSST Weak Lensing Cosmology | [
"Justine Zeghal",
"Denise Lanzieri",
"François Lanusse",
"Alexandre Boucaud",
"Gilles Louppe",
"Eric Aubourg",
"Adrian E. Bayer",
"The LSST Dark Energy Science Collaboration"
] | astro-ph.CO | [
"astro-ph.CO",
"astro-ph.IM"
] |
Université Paris Cité, CNRS, Astroparticule et Cosmologie, F-75013 Paris, France
Université Paris Cité, Université Paris-Saclay, CEA, CNRS, AIM, F-91191, Gif-sur-Yvette, France
Université Paris-Saclay, Université Paris Cité, CEA, CNRS, AIM, 91191, Gif-sur-Yvette, France
University of Liège, Liège, Belgium
Université Paris Cité, CNRS, CEA, Astroparticule et Cosmologie, F-75013 Paris, France
Sony Computer Science Laboratories - Rome, Joint Initiative CREF-SONY, Centro Ricerche Enrico Fermi, Via Panisperna 89/A, 00184, Rome, Italy
Center for Computational Astrophysics, Flatiron Institute, 162 5th Ave, New York, NY, 10010, USA
Department of Astrophysical Sciences, Princeton University, Peyton Hall, 4 Ivy Lane, Princeton, NJ 08544, USA
Standard cosmological analysis, which relies on two-point statistics, fails to extract the full information embedded in cosmological data. This limits our ability to constrain cosmological parameters with precision. With the willingness to use modern analysis techniques to match the power of upcoming telescopes, recent years have seen a paradigm shift from analytical likelihood-based to simulation-based inference. However, such methods require a large number of costly simulations.
We focus on full-field inference, which is considered the optimal form of inference as it enables recovery of cosmological constraints from simulations without any loss of cosmological information. Our objective is to review and benchmark several ways of conducting full-field inference to gain insight into the number of simulations required for each method. Specifically, we make a distinction between explicit inference methods that require an explicit form of the likelihood such that it can be evaluated and thus sampled through sampling schemes and implicit inference methods that can be used when only an implicit version of the likelihood is available through simulations.
Moreover, it is crucial for explicit full-field inference to use a differentiable forward model. Similarly, we aim to discuss the advantages of having differentiable forward models for implicit full-field inference.
We use the https://github.com/DifferentiableUniverseInitiative/sbi_lens package, which provides a fast and differentiable log-normal forward model that can generate convergence maps at the quality expected for the tenth year of LSST. This fast forward model enables us to compare explicit and implicit full-field inference with and without gradients. The former is achieved by sampling the forward model through the No-U-Turn Sampler (NUTS). The latter starts by compressing the data into sufficient statistics and uses the Neural Likelihood Estimation (NLE) algorithm and the one augmented with gradient (∂NLE) to learn the likelihood distribution and then sample the posterior distribution.
We perform a full-field analysis on LSST Y10-like simulated weak-lensing log-normal convergence maps, where we constrain (Ω_c, Ω_b, σ_8, h_0, n_s, w_0). We demonstrate that explicit full-field and implicit full-field inference yield consistent constraints. Explicit full-field inference requires 630 000 simulations with our particular sampler, which corresponds to 400 independent samples. Implicit full-field inference requires at most 101 000 simulations, split into 100 000 simulations to build neural-based sufficient statistics (this number of simulations is not fine-tuned) and 1 000 simulations to perform the inference itself. Additionally, while differentiability is very useful for explicit full-field inference, we show that, for this specific case, our way of exploiting the gradients does not significantly help implicit full-field inference.
Simulation-Based Inference Benchmark for LSST Weak Lensing Cosmology
Justine Zeghal 1
Denise Lanzieri 2,6
François Lanusse 3,7
Alexandre Boucaud 1
Gilles Louppe 4
Eric Aubourg 5
Adrian E. Bayer 8,7
The LSST Dark Energy Science Collaboration
Received XXXX; accepted XXXX
================================================================================================================================================================================================================================================================
§ INTRODUCTION
Understanding the cause of the observed accelerated expansion of the Universe is currently a major topic in cosmology. The source of this acceleration has been dubbed Dark Energy, but its nature is still unknown. Dark Energy cannot be directly observed but several observational probes can be used to understand better its characteristics, with weak gravitational lensing, in which background galaxies are sheared by foreground matter, being one of the most powerful. This phenomenon is sensitive to both the geometry of the Universe and the growth of structure, which both depend on the cosmological parameters of the dark energy model. Many photometric galaxy surveys such as CFHTLenS <cit.>, KiDS <cit.>, DES <cit.>, and HSC <cit.>, have already demonstrated its constraining power on the matter density Ω_m and fluctuation amplitude σ_8 parameters. Upcoming weak lensing surveys (LSST <cit.>, Roman <cit.>, Euclid <cit.>) are expected to be larger and deeper allowing us to refine our estimations even further.
In cosmological inference, a significant challenge lies in the absence of an analytic likelihood p(x| θ) to recover cosmological parameters θ from the data x. Most of the mathematical inference frameworks proposed to overcome this problem are based on two-stage inference: compression of the data into summary statistics t=f(x) and then Bayesian inference to obtain the posterior p(θ| t). The most famous one is the two-point statistics analysis (e.g., <cit.>). It uses as a summary statistic t the two-point correlation function or its analog in Fourier space, the power spectrum. Then, the inference part of the analysis is performed using the corresponding analytic Gaussian likelihood p(t| θ), which is sampled through Markov Chain Monte Carlo (MCMC). On large scales, the Universe remains close to a Gaussian field and the 2-point function is a near sufficient statistic to extract cosmological information. However, on small scales, where non-linear evolution gives rise to a highly non-Gaussian field, this summary statistic is not sufficient anymore.
At a time when future surveys will access small scales, we need to investigate summary statistics that can capture non-Gaussianities. This has led to a new class of statistics, known as higher-order statistics, including, for example, lensing peak counts (e.g., <cit.>), 3-point statistics (e.g., <cit.>) and machine learning compression (e.g., <cit.>), all with varying degrees of signal extraction power. Most of the time, no analytical model p(t|θ) exists and these statistics are usually assumed to be Gaussian distributed, leading to potentially biased inference or inaccurate uncertainty estimation. On top of that, since no analytical function t = g(θ) to map cosmological parameters to the summary statistic exists, the inference part requires a large number of very costly simulations x ∼ p(x|θ) (with p(x|θ) a simulator) to compute the summary statistics t = f(x). This is in addition to the number of simulations already required to compute the covariance matrix.
Full-field inference (e.g., <cit.>) aims to perform inference from simulations without any loss of information. This means no loss of information coming from a compression step and no loss of information coming from assumptions on the likelihood function employed for inference. Hence, the quality of the learned posterior is solely tied to the forward model's accuracy. This paper focuses on this particular kind of inference.
Depending on the nature of the forward model p(x|θ), one can either perform explicit inference or implicit inference. The former refers to inference methods that can be used when the likelihood function p(x = x_0|θ) can be evaluated for different θ and sampling schemes can thus be employed. The latter can be used when only an implicit version of the likelihood is available through a set of simulations (θ, x). Implicit inference, also known as likelihood-free inference or simulation-based inference, commonly recasts the inference problem as a neural density optimization problem where the distribution is learned and can be evaluated for all θ and x. There exist different flavors of implicit inference: one aims to learn the likelihood function p(x |θ) (e.g., <cit.>) or the likelihood ratio r(θ, x) = p(x |θ) / p(x) (e.g., <cit.>). This learned likelihood or likelihood ratio can then be evaluated and sampling methods can be used to get the posterior p(θ | x). Others choose to directly
approximate the posterior distribution p(θ | x) (e.g., <cit.>).
Explicit inference applied in the context of full-field inference is known as Bayesian Hierarchical/Forward Modeling. Because of the high complexity and dimension of the field-based likelihood, sampling schemes guided by the gradient information ∇_θlog p(x| θ) are typically used to explore the parameter space in a more efficient way. This motivates the development of differentiable forward models p(x|θ).
Naturally, we could ask, could these gradients also help implicit inference methods for full-field inference? Specifically, <cit.> and <cit.> proposed implicit inference methods to leverage the gradient information from the forward model while approximating the likelihood, the likelihood ratio, or the posterior distribution. They showed that this additional information helps to constrain the target distribution and thus improve sample efficiency.
In summary, within the context of LSST Y10, this paper aims to answer the following questions:
* Is the differentiability of the forward model a useful asset for full-field implicit inference?
* Which methods allow full-field inference with the fewest simulations?
To meet the full-field criterion, we focus our benchmark analysis on two inference strategies:
* Explicit full-field inference: we sample our forward model through the use of the Hamiltonian Monte Carlo (HMC) sampling method. Specifically, we use the No-U-Turn (NUTS) algorithm.
* Implicit full-field inference: after compressing the simulations into sufficient statistics, we compare the Neural Likelihood Estimation (NLE) and Neural Likelihood Estimation augmented with gradients (∂NLE).
For the implicit inference strategy, maps are compressed using an optimal neural compression approach: we train a Convolutional Neural Network (CNN) by maximizing the mutual information between the cosmological parameters and the summary statistic (e.g., <cit.>; see <cit.> for a review of optimal neural compression strategies). In this study, we separate the compression process from the inference process and concentrate solely on the number of simulations necessary for inference. We will explain why in <ref>.
We use the same forward model to benchmark the different inference strategies and use the same fiducial data x_0. Our forward model is a differentiable field-based likelihood that can be evaluated and can generate simulations such that both approaches explicit and implicit can be performed. Specifically, it is a log-normal model that produces LSST Y10-like weak lensing convergence maps. The cosmological parameters θ that we aim to constrain are (Ω_c, Ω_b, σ_8, h_0, n_s, w_0). The forward model can be found in https://github.com/DifferentiableUniverseInitiative/sbi_lens.
We start by introducing our lensing forward modeling in <ref>. In <ref> we introduce our Bayesian inference framework. Then in <ref> we present the metric used to benchmark the different inference approaches. We then describe in <ref> the explicit inference approach and present the results. It is followed by the implicit inference approaches both with and without gradients and the corresponding results in <ref>. Finally, we conclude in <ref>.
§ THE LENSING FORWARD MODEL
Due to the non-linear growth of structures in the universe, the cosmological density field is expected to be highly non-Gaussian. Therefore, log-normal fields, which account for non-Gaussianities[<ref> quantifies the amount of non-Gaussianity in our model compared to Gaussian simulations.], provide a fast representation of the late-time 2D convergence field <cit.>.
For our study, we use https://github.com/DifferentiableUniverseInitiative/sbi_lens’s JAX-based differentiable forward model introduced in <cit.> to generate log-normal convergence maps at LSST Y10 quality. In this section, we recall the log-normal forward model of <cit.>.
§.§ Log-Normal modeling
Given a Gaussian field κ_g fully characterized by its correlation function ξ^ij_g (i and j denoting the i-th and j-th source redshift bins see <ref>) we parametrize the log-normal field κ_ln as
κ_ln=e^κ_g-λ,
with λ an additional parameter that makes the log-normal field more flexible than its corresponding Gaussian field. This parameter is called the "shift" or “minimum value" and depends on the cosmology.
Hence, the field is no longer only described by its correlation function.
Note that this log-normal transformation leads to the following modification of the correlation function:
ξ^ij_ln = λ_i λ_j (e^ξ^ij_g-1).
To ensure that the log-normal field shares the same correlation function as its Gaussian analog we apply the following correction
f(ξ^ij) = log[ ξ^ij/λ_i λ_j+1 ],
which also makes the correlation function independent of the choice of the shift parameter. However, the shift parameter has to be carefully set as it is related to the skewness of κ_ln. It can be computed from simulations using matching moments <cit.> or by using perturbation theory <cit.>.
Finally, the correlation function is related to the power spectrum by
C^ij_ln(ℓ)=2π∫_0^π dθsinθP_ℓ(cosθ)ξ^ij_ln(θ),
with P_ℓ the Legendre polynomial of order ℓ. In Fourier space, the covariance of κ_ln is diagonal and defined as:
⟨κ̃^(i)_ln (ℓ) κ̃^*(j)_ln(ℓ')⟩ =C^ij_ln(ℓ)δ^K(ℓ-ℓ').
§.§ 's log-normal forward model
's forward model is structured as follows (see <ref>):
first, we define the prior p(θ) over the cosmological parameters (Ω_c, Ω_b, σ_8, n_s, w_0, h_0) (see <ref>). Given a cosmology from the prior, we compute the corresponding nonlinear power spectrum C_ℓ, g using https://github.com/DifferentiableUniverseInitiative/jax_cosmo<JAX-COSMO> <cit.> that we project on two-dimensional grids of the size of the final mass map. For this cosmology, we also compute the cosmology-dependent shift parameter λ using
https://github.com/OliverFHD/CosMomentum<CosMomentum> <cit.>. To ensure that the log-normal field preserves the power spectrum C_ℓ, g we apply the correction on the correlation function from <ref>.
Then, we convolve the Gaussian latent variables
z with the corrected two-dimensional power spectrum:
κ̂_g = Ẑ·Σ^1/2,
with Ẑ denoting the Fourier transform of the latent variables z and Σ^1/2 the square root of the covariance matrix
Σ = (C_ℓ^ij)_1≤ i≤ 5, 1 ≤ j ≤ 5,
C_ℓ^ij denotes the corrected and projected nonlinear power spectrum. To compute the square root of the covariance matrix, we perform an eigenvalue decomposition of Σ :
Σ=Q Λ Q^T,
with Q the eigenvectors and Λ the eigenvalues of the symmetric matrix Σ.
This allows us to compute the square root efficiently as
Σ^1/2=QΛ^1/2Q^T.
Finally, we
build the log-normal field κ_ln as described by <ref>. An example of log-normal convergence maps is shown in <ref>.
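The sampling step of this forward pass can be summarized by the following JAX sketch. It assumes that the corrected, projected tomographic power spectra and the shift parameters have already been computed for the cosmology drawn from the prior, and it omits the normalization conventions, flat-sky projection details, and noise model of the actual implementation.

import jax
import jax.numpy as jnp

def matrix_sqrt(cov):
    # Σ^1/2 = Q Λ^1/2 Q^T via the eigen-decomposition described above.
    evals, evecs = jnp.linalg.eigh(cov)
    sqrt_evals = jnp.sqrt(jnp.clip(evals, 0.0, None))
    return evecs @ (sqrt_evals[..., :, None] * jnp.swapaxes(evecs, -1, -2))

def lognormal_maps(key, cov_fourier, shifts, npix=256):
    # cov_fourier: (npix, npix, n_bins, n_bins) corrected power spectra on the
    # Fourier grid; shifts: (n_bins,) shift parameters λ_i, both precomputed.
    n_bins = shifts.shape[0]
    z = jax.random.normal(key, (npix, npix, n_bins))          # latent white noise
    z_hat = jnp.fft.fft2(z, axes=(0, 1))                      # Fourier transform
    kg_hat = jnp.einsum("xyij,xyj->xyi", matrix_sqrt(cov_fourier), z_hat)
    kappa_g = jnp.fft.ifft2(kg_hat, axes=(0, 1)).real         # Gaussian field κ_g
    return jnp.exp(kappa_g) - shifts                          # κ_ln = e^κ_g - λ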
§.§ LSST Y10 settings
According to the central limit theorem, we assume LSST Y10 observational noise to be Gaussian as we expect a high number of galaxies per pixel. Hence, the shear noise per pixel is given by zero-mean Gaussian whose standard deviation is
σ^2_n= σ_e^2/N_s,
where σ_e = 0.26 is the per component shape standard deviation as defined in the LSST DESC Science Requirement Document (SRD, <cit.>), and N_s is the number of source galaxies per bin and pixel, computed using n_gal=27 arcmin^-2 the galaxy number density (as in LSST DESC SRD) and A_pix≈ 5.49 arcmin^2 the pixel area.
The convergence field is related to the shear field through the Kaiser Squires operator <cit.>. As this operator is unitary, it preserves the noise of the shear field, therefore the convergence noise is also given by <ref>.
Our convergence map, x, is a 256 × 256 pixels map that covers an area of 10 × 10 deg^2 in five tomographic redshift bins with an equal number of galaxies (see <ref>).
The redshift distribution is modeled using the parametrized Smail distribution <cit.>:
n(z) ∝ z^2 exp[-(z/z_0)^α],
with z_0=0.11 and α=0.68, and we assume a photometric redshift error σ_z=0.05(1+z) (again following the LSST DESC SRD).
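As an illustration, the per-pixel noise level and the source redshift distribution follow directly from these settings; the short sketch below reproduces the numbers quoted above, with the pixel area derived from the 10 × 10 deg^2 field sampled on a 256 × 256 grid.

import numpy as np

sigma_e = 0.26                          # per-component shape noise (LSST DESC SRD)
n_gal = 27.0                            # galaxy number density [arcmin^-2]
n_bins = 5                              # tomographic bins with equal galaxy counts
pix_area = (10 * 60 / 256) ** 2         # ~5.49 arcmin^2 per pixel
N_s = n_gal * pix_area / n_bins         # galaxies per pixel and per bin
sigma_n = sigma_e / np.sqrt(N_s)        # pixel noise standard deviation (equation above)

def smail_nz(z, z0=0.11, alpha=0.68):
    # Unnormalized Smail distribution n(z) ∝ z^2 exp[-(z/z0)^alpha].
    return z ** 2 * np.exp(-(z / z0) ** alpha)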
§ BAYESIAN INFERENCE
In this section, we introduce our Bayesian inference framework enabling us to distinguish between implicit and explicit (full-field) inference more clearly in this paper.
Given a priori knowledge p(θ) about the parameters θ and information provided by data x linked to the parameters via the likelihood function p(x | θ), we are able to recover the parameters θ that might have led to this data. This is summarized by Bayes' theorem:
p(θ |x) = p(x |θ)p(θ)/p(x),
with p(θ|x) the posterior distribution of interest and p(x) = ∫ p(x|θ)p(θ)dθ the evidence.
However, physical forward models are typically of the form p(x| θ, z) involving additional variables z known as latent variables. The presence of these latent variables make the link between the data x and the parameters θ not straightforward as x is now the result of a transformation involving two random variables
θ and z. Since the forward model depends on latent variables, we need to compute the marginal likelihood to perform inference, i.e.
p(x | θ) = ∫ p(x |θ, z)p(z | θ)dz,
which is typically intractable when z is of high dimension. As a result, the marginal likelihood p(x | θ) cannot be evaluated and explicit inference techniques that rely on explicit likelihood such as MCMC or variational inference cannot be directly applied on the marginal likelihood p(x | θ). For this reason, this marginal likelihood is often assumed to be Gaussian yielding to inaccurate estimation of the true posterior. Full-field inference instead aims to consider the exact distribution of the data x or the sufficient statistics t.
§ INFERENCE QUALITY EVALUATION
To quantify the quality of inference and thus benchmark all the inference algorithms, a performance metric has to be carefully chosen. Several metrics exist, each offering varying levels of precision, and are usually chosen according to the knowledge we have about the true posterior (i.e. if we have access to the probability density function of the true distributions, its samples, or only the fiducial data or fiducial parameters).
We choose to take the 160 000 posterior samples obtained through explicit full-field inference as our ground truth and use: the Classifier 2-Sample Tests (C2ST, ).
This decision is based on the understanding that the explicit full-field approach, which relies on sampling schemes, should theoretically converge to the true posterior distribution within the limit of a large number of samples.
The convergence analysis of the MCMC, along with the large number of samples (160 000), indicates that the explicit full-field inference posterior has converged. Additionally, we confirm this by visually comparing in <ref> the marginals of those fully converged samples obtained through explicit full-field inference (black) to the marginals of the posterior obtained through implicit inference (blue). Although we present implicit inference performed with only 1 000 simulations in <ref>, it's worth noting that we ran the implicit inference method with over 1 000 simulations and it was consistently in agreement with this explicit posterior.
Since we are using samples from the explicit approach as our ground truth, 2-sample tests, specifically the C2ST is the most powerful metric to be used to compare two distributions (according to <cit.> benchmark).
A two-sample test is a statistical method that tests whether samples X ∼ P and Y ∼ Q are sampled from the same distribution. For this, one can train a binary classifier f to discriminate between X (label 0) and Y (label 1) and then compute the C2ST statistic
t̂ = 1/N_test∑_i=1^N_test𝕀[𝕀(f(z_i) > 1/2) = l_i ],
where {(x_i, 0)}_i=1^N ∪{(y_i, 1)}_i=1^N =: {(z_i, l_i)}_i=1^2N and N_test denotes the number of samples not used during the classifier training. If P = Q, the classifier fails to distinguish the two samples and thus the C2ST statistic remains at chance level (C2ST = 0.5). On the other hand, if P and Q are so different that the classifier perfectly matches the right label, C2ST = 1.
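In practice, the C2ST statistic can be estimated with any binary classifier; the sketch below uses a small scikit-learn MLP and a held-out split, which are illustrative choices rather than the exact classifier configuration used in this work.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def c2st(samples_p, samples_q, seed=0):
    # Accuracy of a classifier trained to tell samples_p (label 0) from
    # samples_q (label 1); 0.5 means the two sets are indistinguishable.
    X = np.concatenate([samples_p, samples_q])
    y = np.concatenate([np.zeros(len(samples_p)), np.ones(len(samples_q))])
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=seed)
    clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=seed)
    clf.fit(X_tr, y_tr)
    return clf.score(X_te, y_te)        # the C2ST statistic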
In practice, in our 6 dimensional inference problem, we find this metric very sensitive, and the two distributions considered converged in <ref> result in a C2ST of 0.6.
Therefore, to make a fair comparison between all inference methods we benchmark all the methods with the same metric, the C2ST metric, and choose to fix a threshold of 0.6.
§ EXPLICIT INFERENCE
§.§ Sampling the forward model
When the forward model is explicit, which means that the joint likelihood p(x | θ, z)
can be evaluated, it is possible to sample it directly through MCMC bypassing the computation of the intractable marginal likelihood p(x | θ).
Unlike sampling the marginal likelihood, this necessitates sampling both the parameters of interest θ as well as all latent variables z involved in the forward model:
p(θ, z | x) ∝ p(x | θ, z) p(z | θ) p(θ),
and to marginalize over the latent variables z afterward to get the posterior distribution p(θ |x).
As the latent variables are usually high-dimensional they require a large number of sampling steps to make the MCMC converge. Therefore, Hamilton Monte Carlo (HMC, ), which can efficiently explore the parameter space thanks to gradient information, is usually used for such high-dimensional posteriors. However, this requires the explicit likelihood to be differentiable.
Note that, for each step, the forward model needs to be called, which can make this approach costly in practice as generating one simulation can take a very long time. This would also be true in cases where the marginal likelihood can be evaluated but since the latent variables z do not have to be sampled, the parameter space is smaller and the MCMC does not need as many steps.
§.§ Explicit full-field inference constraints
' differentiable joint likelihood is:
p(x | θ, z) = 𝒩(κ_ln(θ, z), σ^2_n),
with κ_ln the convergence map that depends on the cosmology θ and the latent variables z.
Given that the observational noise is uncorrelated across tomographic redshift bins and pixels, we can express the log-likelihood of the observed data x_0 as:
ℒ(θ, z) = constant - ∑_i^N_pix∑_j^N_bins[κ_ln^i,j(θ, z)-x_0^i,j]^2/2σ_n^2.
By construction p(z | θ) is independent of the cosmology θ, hence the log posterior we aim to sample is:
log p(θ, z | x = x_0) ∝ℒ(θ, z) + log p(z) + log p(θ),
with p(z) a reduced centered Gaussian and p(θ) as in <ref>.
We use a HMC scheme to sample <ref>. Specifically, we use the No-U-Turn sampler (NUTS, ) from https://github.com/pyro-ppl/numpyro <cit.> that efficiently proposes new relevant samples using the derivatives of the distribution we sample from, namely: ∇_θ, zlog p(θ, z | x = x_0).
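Schematically, the sampling setup looks as follows in numpyro. Here simulate_kappa stands in for the differentiable log-normal forward pass, and sigma_n, prior_loc, prior_scale and the observed map x_0 are assumed to be defined; the Gaussian priors, latent shape, warm-up length and number of samples are placeholders rather than the actual configuration.

import jax
import jax.numpy as jnp
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS

def make_model(simulate_kappa, sigma_n, prior_loc, prior_scale, latent_shape):
    # Joint model p(x | θ, z) p(z) p(θ) described above.
    def model(x_obs):
        theta = numpyro.sample("theta", dist.Normal(prior_loc, prior_scale))
        z = numpyro.sample("z", dist.Normal(jnp.zeros(latent_shape), 1.0))
        kappa = simulate_kappa(theta, z)
        numpyro.sample("obs", dist.Normal(kappa, sigma_n), obs=x_obs)
    return model

kernel = NUTS(make_model(simulate_kappa, sigma_n, prior_loc, prior_scale,
                         latent_shape=(256, 256, 5)),
              max_tree_depth=6)                 # tree depth quoted in the text
mcmc = MCMC(kernel, num_warmup=500, num_samples=5000)
mcmc.run(jax.random.PRNGKey(0), x_obs=x_0)
theta_samples = mcmc.get_samples()["theta"]     # the z samples are simply discarded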
Given the fixed observed convergence map x_0, <ref> shows the posterior constraints on Ω_c, Ω_b, σ_8, n_s, w_0, h_0. As explained in <ref>, we consider this posterior of 160 000 samples converged as it yields the same constraint as our implicit full-field approach. Therefore, we consider these 160 000 samples as our ground truth.
§.§ How many simulations for explicit full-field inference?
We now conduct a study to assess the minimum number of simulations needed to get a good approximation of the posterior distribution p(θ | x=x_0). In other words, we try to assess the minimum number of simulations needed to have converged MCMC chains and a good representation of the posterior distribution.
Since there is no robust metric to estimate the convergence of MCMCs, and because we aim to compare all inference methods with the same metric, we use the C2ST metric to assess the minimum number of simulations required to have converged chains.
We proceed as follows: given the fully converged chains of 160 000 posterior samples from <ref>,
for each number of simulation N, we take the first N samples and compute the C2ST metric comparing those samples to the 160 000 ones from the fully converged chains.
The C2ST metric is based on the training of a classifier to distinguish between two populations under the cross-entropy loss and thus requires an equal number of samples of the two distributions.
We use a Kernel Density Estimator (KDE) <cit.> to fit the samples enabling us to generate the required number of samples to compare the two distributions. Note that KDEs, Gaussian filters, or smoothing are always used to visualize distribution samples using contour plots, thus motivating our approach. In addition, the distribution of interest is a 6 dimension unimodal and almost Gaussian distribution making it easy to fit through KDE. We use a Gaussian kernel and adjust the bandwidth to align with the contour plots shown by GetDist, as highlighted in <ref>.
Note that samples and simulations are not the same thing. During each step, the proposal of the MCMC suggests a pair of parameters θ_1 and z_1 and produces a corresponding simulation x ∼ p(x| θ = θ_1, z=z_1). The MCMC keeps only the parameters θ_1 and z_1 if it yields a plausible x according to the likelihood function evaluated on the observation x_0, and plausible θ_1 and z_1 according to their priors. The sample is the pair θ and z that is kept by the MCMC.
Specifically, to get one posterior sample using the NUTS algorithm we need 2 × N simulations with N denoting the number of leapfrog steps. Indeed, the proposal of HMC methods is based on Hamiltonian equations which are discretized using the leapfrog integrator:
r^t+ϵ/2 = r^t + (ϵ / 2) ∇_αlog p(α^t | x_0),
α^t+ϵ = α^t + ϵ M^-1 r^t+ϵ/2,
r^t+ϵ = r^t+ϵ/2 + (ϵ / 2) ∇_αlog p(α^t+ϵ | x_0),
with ϵ the step size, M the mass matrix, α^ t correspond to the position of (θ, z) at time t, and r^ t denote the values of the random momentum at time t ∈ [0, N]. After N leapfrog steps, the total number of log probability evaluations is N. As each gradient requires the cost of two simulations (one to evaluate the primal values during the forward pass and one to evaluate gradients backward in the reverse mode of automatic differentiation), the total number of simulations is 2 × N. In our case, we find that the NUTS algorithm requires 2×(2^6 -1) = 126 simulations (always reaching the maximum depth of the tree which is set to 6) to generate one sample.
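This cost accounting follows directly from the leapfrog integrator, sketched below in JAX: every evaluation of ∇_α log p(α | x_0) requires one forward and one backward pass through the simulator, i.e., roughly two simulations.

import jax

def leapfrog(log_prob, alpha, r, eps, n_steps, inv_mass):
    # One Hamiltonian trajectory following the equations above.
    grad = jax.grad(log_prob)
    r = r + 0.5 * eps * grad(alpha)             # initial half momentum step
    for i in range(n_steps):
        alpha = alpha + eps * inv_mass * r      # full position step
        if i < n_steps - 1:
            r = r + eps * grad(alpha)           # full momentum step
    r = r + 0.5 * eps * grad(alpha)             # final half momentum step
    return alpha, r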
<ref> shows the convergence results of our explicit full-field inference as a function of the number of simulations and the effective sample size. According to the threshold of C2ST= 0.6 that we have chosen in <ref>, this study suggests that 630 000 simulations for our sampler corresponding to 400 independent samples are enough to have converged MCMC chains. The number of independent samples is estimated using the effective sample size (ess) lower bound estimate from https://github.com/tensorflow/probabilityTensorFlow Probability <cit.>.
In addition, <ref> shows the explicit posterior constraints obtained for different simulation budgets, and <ref> shows the evolution of the mean and standard deviation of the posteriors as a number of simulations.
Note that the C2ST metric is sensitive to higher-order correlations, but if one only cares about marginals, the explicit inference posterior can be considered converged with only 63 000 simulations (corresponding to 24 independent samples) as shown by the combination of contour plots <ref> and <ref>.
These results are not a strong statement about explicit inference in general as we do not investigate other sampling schemes and preconditioning schemes (this study is left for future work). However, the NUTS algorithm is one of the state-of-the-art samplers and has already been used in various full-field studies <cit.>. But it is important to note that there exist other powerful HMC schemes such as the Microcanonical Langevin Monte Carlo (MCLMC) <cit.> that might perform with fewer simulations and has also been used in full-field studies <cit.>.
Regardless of the sampling scheme used, we suggest that readers refer to the effective sample size values to translate the results to their sampler.
§ IMPLICIT INFERENCE
Although explicit full-field inference offers a promising framework for performing rigorous Bayesian inference it comes with the downside of requiring an explicit likelihood. Additionally, sampling from the joint likelihood even with HMC schemes can be very challenging and require a large number of simulations. Instead, implicit inference has emerged as a solution to tackle the inference problem without relying on having an explicit likelihood.
These techniques rely on implicit likelihoods, more commonly known as simulators. A simulator is a stochastic process that takes as input the parameter space θ∼ p(θ) and returns a random simulation x. It does not require the latent process of the simulator to be explicit.
Comparably to sampling the forward model, given an observation x_0, to build the posterior p(θ | x=x_0),
one can simulate a large range of θ_i and accept the
parameters that verify |x_i - x_0| < ϵ with ϵ a fixed threshold.
This is the idea behind Approximate Bayesian Computation (ABC) method
(e.g. ).
This method used to be the traditional way to do implicit inference but its
poor scalability with dimension encouraged the community to develop new techniques. In particular, the introduction of machine learning leading to neural implicit inference methods has been shown to perform better.
These neural-based methods cast the inference problem into an optimization task,
where the goal is to find the set of parameters φ so that the neural parametric
model best describes the data. Then, the posterior is approximated using this
surrogate model evaluated on the given observation.
Implicit inference has already been successfully applied to cosmic shear analyses. For instance, <cit.> and <cit.> applied it to two-point statistics rather than using the standard explicit inference method assuming a Gaussian likelihood. Similarly, to bypass this traditional Gaussian likelihood assumption, <cit.> applied implicit inference to the power spectra, peak counts, and neural summary statistics.
In this section, we introduce the NLE method, its augmented version with gradient ∂NLE, and we present the benchmark results.
The Neural Ratio Estimation (NRE), and Neural Posterior Estimation (NPE) as well as sequential methods are described in <ref> and the benchmark results of (S)NLE, (S)NPE and (S)NRE can be found in <ref>.
In this section, we chose to focus our study on the NLE method as our comparison of the three main implicit inference methods (see <ref>) suggests that NLE and NPE are the ones that perform the best. We chose not to use the NPE method as the augmented gradient version of NPE <cit.> requires specific neural architectures that proved to be more simulation-costly.
§.§ Learning the Likelihood
Neural Likelihood Estimation (NLE) aims to learn the marginal likelihood
p_φ(x | θ) from a set of parameters and corresponding
simulations (θ, x)_i=1..N. Thanks to the development
of new architectures in the neural density estimator field, this can be
achieved by using conditional Normalizing Flows (NFs) <cit.>.
Conditional NFs are parametric models p_φ that take as input
(θ, x) and return a probability density p_φ(x | θ),
which can be evaluated and/or sampled.
To find the optimal parameters φ̂ which makes
p_φ(x | θ) best describe the data, one trains the NF
so that the approximate distribution p_φ(x | θ) is
the closest to the unknown distribution p(x | θ). To quantify this, we use the forward Kullback–Leibler divergence D_KL(. || .). The D_KL is positive, and equal to zero if and only if the two distributions are the same, motivating the following optimization scheme:
φ̂ = min_φ𝔼_p(θ) [ D_KL(p(x | θ) || p_φ(x | θ)) ]
= min_φ𝔼_p(θ) [𝔼_p(x | θ)[ log(p(x | θ)/p_φ(x | θ)) ] ]
= min_φ𝔼_p(θ) [𝔼_p(x | θ)[ log(p(x | θ)) ] ]_constant w.r.t φ
- 𝔼_p(θ) [𝔼_p(x | θ)[ log(p_φ(x | θ)) ] ]
= min_φ - 𝔼_p(θ) [𝔼_p(x | θ)[ log(p_φ(x | θ)) ] ]
leading to the loss
ℒ = 𝔼_p(θ, x)[ - log p_φ (x | θ) ],
which does not require evaluation of the true target distribution p(x | θ) anymore. To compute this loss, only a set of simulations (θ, x) ∼ p(θ, x) obtained by first generating parameters from the prior θ_i ∼ p(θ) and then generating the corresponding simulation x_i ∼ p(x|θ = θ_i) through the simulator, are needed. Note that the approximated likelihood, under the loss of <ref>, is learned for every combination (θ, x) ∼ p(x, θ) at once.
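As an illustration of this optimization, the following JAX sketch trains a toy conditional density estimator under the loss above; a diagonal Gaussian whose mean and log-standard deviation are affine in θ stands in for the conditional normalizing flow, and all names and hyperparameters are illustrative.

import jax
import jax.numpy as jnp
import optax

def init_params(key, dim_t=6, dim_theta=6):
    k1, k2 = jax.random.split(key)
    return {"W_mu": 0.01 * jax.random.normal(k1, (dim_theta, dim_t)),
            "b_mu": jnp.zeros(dim_t),
            "W_ls": 0.01 * jax.random.normal(k2, (dim_theta, dim_t)),
            "b_ls": jnp.zeros(dim_t)}

def log_prob(params, t, theta):
    # log q_φ(t | θ) for a diagonal Gaussian conditioned on θ.
    mu = theta @ params["W_mu"] + params["b_mu"]
    log_std = theta @ params["W_ls"] + params["b_ls"]
    return jnp.sum(-0.5 * ((t - mu) / jnp.exp(log_std)) ** 2
                   - log_std - 0.5 * jnp.log(2.0 * jnp.pi), axis=-1)

def nll_loss(params, t_batch, theta_batch):
    # Negative log-likelihood loss, estimated over a batch of simulations.
    return -jnp.mean(log_prob(params, t_batch, theta_batch))

optimizer = optax.adam(1e-3)
params = init_params(jax.random.PRNGKey(0))
opt_state = optimizer.init(params)

@jax.jit
def train_step(params, opt_state, t_batch, theta_batch):
    loss, grads = jax.value_and_grad(nll_loss)(params, t_batch, theta_batch)
    updates, opt_state = optimizer.update(grads, opt_state)
    return optax.apply_updates(params, updates), opt_state, loss

Once trained, the learned log p_φ(t_0 | θ) is combined with the log prior and handed to an MCMC sampler, as described next.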
Given observed data x_0, the approximated posterior
p̂(θ | x=x_0) ∝ p_φ̂(x=x_0 | θ) p(θ) is then obtained by using an MCMC with the following log probability: log p_φ̂(x=x_0 | θ) + log p(θ). This MCMC step makes NLE (and NRE) less amortized and slower than the NPE method, which directly learns the posterior distribution p_φ(θ | x) for every pair (θ, x) ∼ p(x, θ) and only needs to be evaluated on the desired observation x_0 to get the approximated posterior p_φ(θ | x = x_0). However, it is less challenging than using an MCMC scheme to sample the forward model in the explicit inference framework. Indeed, now one only has to sample the learned marginal likelihood p_φ(x | θ) (or learned likelihood ratio), not the joint likelihood of the forward model p(x | θ, z).
§.§ NLE augmented with gradients
Although there are methods to reduce the number of simulations, such as sequential approaches (see <ref>), they still treat the simulator as a black box. As underlined by <cit.>, the emergence of probabilistic programming languages makes it easier to open this black box (making the implicit likelihood explicit) and extract additional information such as the gradient of the simulation. In particular, <cit.> noticed that they can compute the joint score ∇_θlog p(x,z | θ) as the sum of the scores of all the latent transformations encounter in the differentiable simulator:
∇_θlog p(x,z | θ) = ∇_θlog p(x | θ, z) + ∇_θlog p(z | θ)
= ∇_θlog p(x | θ, z) + ∑_i^N ∇_θlog p(z_i | z_1 ... z_i -1, θ).
The most important result: through the use of the classical mean squared error (MSE) loss (also known as score matching (SM) loss)
ℒ_ SM = 𝔼_p(x,z,θ)[ ∥∇_θlog p(x,z | θ) - ∇_θlog p_φ(x | θ) ∥_2^2 ],
they showed how to link this joint score to the intractable marginal score ∇_θlog p(x | θ). As explained in <ref>, ℒ_ SM is minimized by
𝔼_p(z |x,θ)[ ∇_θ log p(x, z |θ) ]
and can be derived as
𝔼_p(z |x,θ)[ ∇_θ log p(x, z |θ) ]
= 𝔼_p(z |x,θ)[ ∇_θ log p(z|x , θ) ] + ∇_θ log p(x | θ)
= 𝔼_p(z| x,θ)[ ∇_θ p(z|x , θ)/p(z|x , θ)] + ∇_θ log p(x | θ)
= ∫∇_θ p(z|x , θ) dz + ∇_θ log p(x | θ)
= ∇_θ ∫ p(z|x , θ) dz + ∇_θ log p(x | θ)
= ∇_θ log p(x | θ).
This loss learns how the probability of x given θ changes according to θ and thus can be combined with the traditional negative log-likelihood loss (<ref>) to help the neural density estimator learn the marginal likelihood with fewer simulations. The NF now learns p_φ(x | θ) from (θ, x, ∇_θlog p (x,z | θ))_i = 1..N under the combined loss:
ℒ = ℒ_ NLL + λ ℒ_ SM,
with λ a hyper-parameter that has to be fine-tuned according to the task at hand.
<cit.> called this method SCore-Augmented Neural Density Approximates Likelihood (SCANDAL) we choose to rename it ∂NLE for clarity in our paper.
Equivalently, other quantities such as the joint likelihood ratio r(x,z | θ_0, θ_1) = p(x,z | θ_0) / p(x,z | θ_1) and the joint posterior gradients ∇_θlog p(θ | x, z)
can be used to help learning the likelihood ratio <cit.> and the posterior <cit.> respectively.
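Reusing the toy estimator and nll_loss from the NLE sketch above, the combined ∂NLE objective can be written as follows; the score-matching term compares the simulator-provided joint scores to the gradient of the learned conditional log-density with respect to θ, and λ is left as a free hyper-parameter.

import jax
import jax.numpy as jnp

def score_matching_loss(params, t_batch, theta_batch, joint_scores):
    # Per-example gradient of the learned conditional log-density w.r.t. θ.
    model_score = jax.vmap(jax.grad(lambda th, t: log_prob(params, t, th)))(
        theta_batch, t_batch)
    return jnp.mean(jnp.sum((joint_scores - model_score) ** 2, axis=-1))

def dnle_loss(params, t_batch, theta_batch, joint_scores, lam=1.0):
    # L = L_NLL + λ L_SM; λ must be tuned for the problem at hand.
    return (nll_loss(params, t_batch, theta_batch)
            + lam * score_matching_loss(params, t_batch, theta_batch, joint_scores))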
§.§ Compression procedure
In this section, we provide a brief summary of the compression procedure we perform to build sufficient statistics. A more detailed description and comparison of compression procedures applied in the context of weak-lensing full-field implicit inference can be found in <cit.>.
Based on the benchmark results of <cit.>, we choose to use the Variational Mutual Information Maximisation (VMIM, ) neural compression. This compression builds summary statistics t = F_φ(x) by
maximizing the mutual information I(t, θ) between the parameters of interest θ and the summary statistics t. More precisely, the mutual information is defined as
I(t, θ) = 𝔼_p(t, θ) [log p(θ | t )] + H(θ),
where H denotes the entropy. Replacing the summary statistics t by the neural network F_φ and the intractable posterior p(θ| t) by a variational distribution p_ψ(θ| t) to be optimized jointly with the compressor, we get the following variational lower bound <cit.>:
I(t, θ) ≥𝔼_p(x, θ) [log p_ψ(θ | F_φ(x) )] + H(θ).
Hence, by training the neural network F_φ jointly with a variational distribution (typically a NF) p_ψ under the loss
ℒ_VMIM = - 𝔼_p(x, θ) [logp_ψ(θ | F_φ(x) )],
we can, by construction and within the limits of the flexibility of F_φ and p_ψ, build summary statistics t that contain the maximum amount of information about θ embedded in the data x. As equality is approached, the maximization of the mutual information yields sufficient statistics such that p(θ | x) = p(θ | t).
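A minimal version of this joint optimization is sketched below with flax; the CNN compressor and the diagonal-Gaussian head standing in for the variational distribution p_ψ are deliberately simplified and do not reproduce the architecture detailed in <ref>.

import jax
import jax.numpy as jnp
import flax.linen as nn

class Compressor(nn.Module):
    # Toy CNN F_φ mapping an (H, W, n_bins) convergence map to 6 summaries.
    @nn.compact
    def __call__(self, x):
        for feat in (16, 32, 64):
            x = nn.relu(nn.Conv(feat, kernel_size=(3, 3), strides=(2, 2))(x))
        x = x.reshape((x.shape[0], -1))
        return nn.Dense(6)(x)

class VariationalPosterior(nn.Module):
    # Stand-in for the NF p_ψ(θ | t): a diagonal Gaussian head.
    @nn.compact
    def __call__(self, theta, t):
        h = nn.relu(nn.Dense(64)(t))
        mu, log_std = nn.Dense(6)(h), nn.Dense(6)(h)
        return jnp.sum(-0.5 * ((theta - mu) / jnp.exp(log_std)) ** 2
                       - log_std - 0.5 * jnp.log(2.0 * jnp.pi), axis=-1)

compressor, posterior = Compressor(), VariationalPosterior()

def vmim_loss(params, x_batch, theta_batch):
    # L_VMIM = -E[log p_ψ(θ | F_φ(x))]; minimizing it maximizes the MI lower bound.
    t = compressor.apply(params["F"], x_batch)
    return -jnp.mean(posterior.apply(params["q"], theta_batch, t))

key = jax.random.PRNGKey(0)
params = {"F": compressor.init(key, jnp.ones((1, 256, 256, 5))),
          "q": posterior.init(key, jnp.ones((1, 6)), jnp.ones((1, 6)))}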
As proof that in our particular case, these summary statistics extract all the information embedded in our convergence maps, and thus are sufficient statistics, we show in <ref> that the contours obtained using this compression and NLE implicit inference technique allow us to recover the explicit full-field constraints.
We used 100 000 simulations for the compression part and did not investigate the question of the minimum number of simulations required.
Although we use a large number of simulations to train our compressor, we can produce near-optimal summary statistics without training a neural network, which eliminates the need for additional simulations. As an example, <cit.> shows that they can produce summary statistics using scattering transforms that result in constraints similar to those obtained by building summary statistics using a CNN trained under mean absolute error (MAE) loss. While it is not guaranteed that these scattering transform coefficients provide sufficient statistics required to perform full-field inference, we hope that advances in transfer learning will allow us to propose new compression schemes that need very few simulations. This is left for future work.
Details regarding our compressor architecture can be found in
<ref>.
§.§ Results
For this study, we use the NLE algorithm as in <cit.>, and the ∂NLE method introduced by <cit.> to leverage gradient information. All approaches share the same NF architecture and sampling scheme (all details can be found in <ref> <ref>).
We benchmark the previously presented implicit inference methods on our 's log-normal LSST Y10-like forward model. The goal of this inference problem is to constrain the following cosmological parameters: Ω_c, Ω_b, σ_8, n_s, w_0, h_0 given a fiducial convergence map x_0. Our fiducial map x_0 is the same for all the benchmarked methods in the paper.
This benchmark aims to find the inference method that can achieve a given posterior quality (C2ST = 0.6) with the minimum number of simulations, for this the procedure is the following:
* Starting from the entire dataset, we compress the tomographic convergence maps x of 256 × 256 × 5 pixels into 6-dimension sufficient statistics. We use the VMIM neural compression as described in <ref>.
* From this compressed dataset, we then pick a number of simulations and approximate the posterior distribution using NLE and ∂NLE methods.
* Then we evaluate the approximated posterior against the fully converged explicit full-field posterior (our ground truth) using the C2ST metric.
The C2ST convergence results are displayed in <ref>. In Appendix we provide additional convergence results, <ref> shows the posterior contours evolution obtained through NLE and <ref>, <ref>, <ref> depicts the evolution of the mean and standard deviation of the approximated posterior as a number of simulation.
We find that unlike previous results <cit.>, the gradients ∇_θlog p(x,z | θ) do not provide additional information enabling a reduction in the number of simulations. Indeed, <ref> shows similar convergence curves for the ∂NLE method (yellow) and NLE method (black).
This issue arises as we attempt to constrain the gradients of the learned marginal distribution p_φ(x | θ) by using the joint gradients ∇_θlog p(x,z | θ) from the simulator. Indeed, the benefit of these joint gradients depends on their "level of noise". In other words, their benefit depends on how much they vary compared to the marginal gradients. To visually exhibit this gradient stochasticity, we consider the gradients of a 2 dimensional posterior ∇_θlog p(θ | x) and the joint gradients ∇_θlog p(θ | x, z) provided by the simulator. By definition, the gradients should align with the distribution, as seen in the left panel of <ref>. As demonstrated in the middle panel of <ref>, the gradients we obtain from the simulator are directed towards p(θ | x, z), which differs from p(θ | x) = ∫ p(θ | x, z)p(z|x)dz. The stochasticity of the gradients relies on the standard deviation of p(z|x) and how much p(θ | x, z) "moves" according to z.
As a result, instead of the gradients field being displayed in the left panel of <ref>, we end up with the gradients field depicted in the right panel.
To confirm this claim, we learn from the simulator's gradients ∇_θlog p(x,z | θ) the marginal ones ∇_θlog p(x | θ). For this, we use a neural network (the architecture can be found in <ref> <ref>) that we train under the following MSE loss function:
ℒ_ MarginalSM = 𝔼_p(x,z,θ)[ ∥∇_θlog p(x,z | θ) - g_φ(x,θ) ∥_2^2 ].
This loss is almost the same as <ref> except that instead of using a NF to approximate ∇_θlog p(x | θ) and then take its gradients, we train a neural network to approximate the gradients values given θ and x. This loss is minimized by g_φ(x,θ) = ∇_θlog p(x | θ) (as explained in <ref>) allowing us to learn the intractable marginal gradients from simulations.
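Concretely, this regression can be implemented with a small MLP trained under this loss, as in the self-contained sketch below; the architecture and the input dimensions (6 summary statistics concatenated with 6 parameters) are illustrative.

import jax
import jax.numpy as jnp

def init_mlp(key, dims=(12, 64, 64, 6)):
    keys = jax.random.split(key, len(dims) - 1)
    return [(jax.random.normal(k, (a, b)) / jnp.sqrt(a), jnp.zeros(b))
            for k, a, b in zip(keys, dims[:-1], dims[1:])]

def mlp(params, x):
    for W, b in params[:-1]:
        x = jax.nn.relu(x @ W + b)
    W, b = params[-1]
    return x @ W + b

def marginal_score_loss(params, t_batch, theta_batch, joint_scores):
    # MSE regression of the noisy joint scores onto g_φ(t, θ); its minimizer
    # is the marginal score ∇_θ log p(t | θ).
    pred = mlp(params, jnp.concatenate([t_batch, theta_batch], axis=-1))
    return jnp.mean(jnp.sum((joint_scores - pred) ** 2, axis=-1))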
We then use these marginal gradients in the ∂NLE method (blue curve) and show that those gradients help to reduce the number of simulations.
What we have demonstrated here is that the stochasticity of our LSST Y10-like simulator dominates the gradient information, and thus the ∂NLE method does not help to perform inference with fewer simulations.
We could have used a method to denoise the gradients. Specifically, <cit.> introduced Marginal Unbiased Score Expansion (MUSE), a way of computing marginal gradients from simulations, and proposed a frequentist and Bayesian approach for parameter inference that leverages this quantity. In our case, the ∂NLE with marginal gradients converges with ∼ 400 simulations while ∂NLE with gradients from the simulator converges with 2 times more simulations. Hence, to be beneficial, computing marginal gradient should take less than 2 simulations which is not feasible with MUSE as it requires at least 10 simulations to have an "acceptable" estimation of the marginal gradient <cit.>.
§ CONCLUSION AND DISCUSSION
Full-field inference is the optimal form of inference as it aims to perform inference without any loss of information. This kind of inference is based on a simulation model known as a simulator, forward model, or Bayesian hierarchical model in cases where the model is hierarchical.
There are two ways of conducting full-field inference from this forward model: through explicit or implicit inference.
The first way can be applied when the forward model is explicit. This means that the field-based joint likelihood p(x = x_0|θ,z) can be evaluated and thus sampled through sampling schemes such as MCMC. The second one can be used when only simulations are available, in this case, it is said that the likelihood is implicit. While it is possible to perform implicit inference directly at the pixel level by feeding the maps to the neural density estimator <cit.>, it is usually more robust and careful to decompose it into two steps: first performing a lossless compression and then performing the implicit inference on this sufficient statistics. Specifically, in this work, sufficient statistics are built using an optimal neural-based compression based on the maximization of the mutual information I(θ, t) between the cosmological parameters θ and the summary statistics t. But other compression schemes, requiring fewer or zero simulations, could be used while still offering very good quality summary statistics <cit.>. Additionally, the advent of transfer learning could offer a way for performing compression with fewer simulations; this is left for future work.
This work aimed to answer the following questions: which full-field inference methods require the minimum number of simulations? Is differentiability useful for implicit full-field inference?
To answer these questions, we have introduced a benchmark that compares various methods to perform weak lensing full-field inference. For our benchmark, we used the differentiable forward model implemented in the sbi_lens package, which can generate log-normal convergence maps at the quality expected for the tenth year of LSST. We evaluated the performance of several inference strategies by assessing the constraints on (Ω_c, Ω_b, σ_8, h_0, n_s, w_0), specifically using the C2ST metric.
We found the following results:
* Explicit and implicit full-field inference yield the same constraints. However, according to the C2ST metric and the threshold of C2ST = 0.6, the explicit full-field inference requires 630 000 simulations (corresponding to 400 independent samples). In contrast, the implicit inference approach requires 101 000 simulations split into 100 000 simulations for compression and 1 000 for inference. Note that we arbitrarily used 100 000 simulations for the compression part and did not explore the question of performing compression with a minimum number of simulations.
Hence, 101 000 simulations is an upper bound of the number of simulations actually required to perform implicit full-field inference in this particular problem.
* The C2ST is sensitive to higher-order correlations that cannot be seen by looking at the marginals or first moments, making it a good metric for comparing distributions. However, as we mostly care about those marginals, it is worth noting that, judging from the combination of the contour plots in <ref> and the first-moment convergence plots in <ref>, the explicit inference can be considered "converged" with 63 000 simulations (corresponding to 24 independent samples), as emphasized by <ref>, which corresponds to C2ST=0.76; likewise, the implicit inference performed through NLE can be considered converged with 101 000 simulations (1 000 for inference and 100 000 to build sufficient statistics), as shown in <ref>, which corresponds to C2ST=0.6.
* For implicit inference, we exploited the simulator's gradient using the SCANDAL method proposed by <cit.>. Our study indicates that the gradients contain a significant noise level due to the latent variable's behavior, which makes it difficult to achieve convergence with fewer simulations. Note that the effectiveness of such gradient-based methods depends on the specific problem at hand. These methods can still be useful in scenarios where the noise level is not significant. This has been demonstrated in studies such as <cit.> and <cit.>. It is also important to keep in mind that there may be other ways to leverage the differentiability of simulators and encourage further research in this area. Finally, note that methods to denoise the gradients exist <cit.> but, in our specific case, the gain compared to the number of simulations that this method requires is not significant.
It is worth noting that, for each explicit inference simulation budget, the C2ST is calculated against fully converged explicit inference samples, resulting in a value that can reach almost 0.5. For implicit inference, the C2ST is also computed against the fully converged explicit inference samples. Both methods should produce the same constraints, but due to slight differences in the posterior approximation, the C2ST for implicit inference cannot go below 0.6. Hence, we consider a value of 0.6 as indicating convergence (see <ref>).
It is important to mention that in most real-world physical inference problems, such a metric cannot be used, as it requires comparing the approximated posterior to the true one. Instead, for implicit inference, coverage tests <cit.> should be used to assess the quality of the posterior. For explicit full-field inference, although diagnostics exist, it is very difficult to verify whether the MCMC has explored the entire space. If possible, the safest option would be to compare the two full-field approaches, as they should yield the same posterior. Implicit inference is likely the easiest to use in such a scenario because it does not require modeling the very complicated latent process of the forward model and can be performed even in multimodal regimes. Explicit inference, in contrast, has to sample the latent process of the forward model, and the more dimensions there are, the more time it needs to explore the entire parameter space. In addition, it can fail in the case of multimodal distributions, as it can get stuck in local maxima and never converge. However, for implicit inference, too few simulations can result in an overconfident posterior approximation, as shown in <ref>. Therefore, within the limit of a reasonable number of simulations, the implicit inference method should be the easiest to use.
Finally, we discuss some limitations of our setting. We chose to use a fast log-normal model that enables us to investigate various approaches for this benchmark. While this model accounts for additional non-Gaussianity (as illustrated in <ref>), it is not as realistic as expensive N-body simulations. Moreover, we did not include any systematics. However, we are optimistic that our findings will be relevant for realistic weak lensing inference. Additionally, even though these numerical results depend on our particular inference problem, we do not expect our conclusion regarding the comparison of implicit and explicit inference to change when using a more realistic gravity model, but it will be interesting to confirm this in future work.
The explicit inference results are not a strong statement, as we did not explore other sampling and preconditioning schemes (which is left for future work). Our sampler choice for the benchmark was motivated by the fact that the NUTS algorithm is a state-of-the-art sampler and has been extensively used in full-field studies <cit.>. However, there exist other sampling schemes, such as the powerful Microcanonical Langevin Monte Carlo (MCLMC) <cit.>, that might require fewer simulations and have been applied in full-field studies <cit.>. Meanwhile, we recommend that the reader refer to the effective sample size values to translate the results to their sampler of choice.
We use the NLE implicit method for our study as, according to our benchmark results of <ref>, it is the one that performs best. NPE provides comparable results but necessitates using the ∂NPE method of <cit.> to leverage gradient information. Since the NPE method aims to learn the posterior directly, this method requires the NF to be differentiable. However, the smooth NF architecture <cit.> that <cit.> used was computationally too expensive for our needs. We also experimented with continuous normalizing flows trained under a negative log-likelihood loss but found that they took a very long time to train.
This paper has undergone internal review in the LSST Dark Energy Science Collaboration. The authors would like to express their sincere gratitude to the internal reviewers, Alan Heavens and Adrian Bayer, for their valuable feedback, insightful comments, and suggestions, which helped to significantly improve the quality of this work. They also extend their thanks to Benjamin Remy for his contributions through countless discussions and helpful comments on the paper. Additionally, they appreciate the constructive feedback provided by Martin Kilbinger and Sacha Guerrini.
JZ led the project, contributed to brainstorming, developed the code, and wrote the paper.
DL contributed to brainstorming and code development, particularly in developing the forward model, and reviewed the paper.
FL initiated the project and contributed through mentoring, brainstorming, code development, and paper reviews.
AB contributed mentoring, brainstorming, code development, and paper reviews.
GL and EA provided mentoring, participated in brainstorming, and contributed to reviewing the paper. AB contributed to the review of the paper and participated in brainstorming the metric used for explicit inference.
The DESC acknowledges ongoing support from the Institut National de
Physique Nucléaire et de Physique des Particules in France; the
Science & Technology Facilities Council in the United Kingdom; and the
Department of Energy, the National Science Foundation, and the LSST
Corporation in the United States. DESC uses resources of the IN2P3
Computing Center (CC-IN2P3–Lyon/Villeurbanne - France) funded by the
Centre National de la Recherche Scientifique; the National Energy
Research Scientific Computing Center, a DOE Office of Science User
Facility supported by the Office of Science of the U.S. Department of
Energy under Contract No. DE-AC02-05CH11231; STFC DiRAC HPC Facilities,
funded by UK BEIS National E-infrastructure capital grants; and the UK
particle physics grid, supported by the GridPP Collaboration. This
work was performed in part under DOE Contract DE-AC02-76SF00515.
This work was supported by the Data Intelligence Institute of Paris (diiP), and IdEx Université de Paris (ANR-18-IDEX-0001).
This work was granted access to the HPC/AI resources of IDRIS under the allocations 2023-AD010414029 and AD011014029R1 made by GENCI.
This work used the following packages: <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.> and <cit.>.
§ LOG-NORMAL SIMULATIONS
The following plot demonstrates that log-normal simulations can mimic the non-Gaussian behavior of late-time fields: the constraints obtained from the full-field approach (sampling the forward model) are much tighter than those from the standard power spectrum analysis.
§ IMPLICIT INFERENCE BENCHMARK
§.§ Methods
§.§.§ Learning the Likelihood Ratio
Neural Ratio Estimation (NRE) is based on the well-known likelihood ratio test. The idea is to test whether x has been generated by θ_0 or θ_1 through the following quantity:
r(x | θ_0, θ_1) = p(x | θ_0)/p(x | θ_1).
Using the Likelihood Ratio Trick this test can be cast as a binary classification problem where we train a classifier d_φ to learn the probability that x has been generated by θ_0:
d_φ(x) = p(y = 1 | x) = p(x | θ_0) /p(x |θ_0) + p(x |θ_1),
r(x | θ_0, θ_1) = d_φ(x)/1 - d_φ(x),
with the two labels y = 0 and y = 1 corresponding respectively to x ∼ p(x | θ_1) and x ∼ p(x | θ_0).
Finally, this is generalized to all possible parameters θ by defining the label y = 0 as (x, θ) ∼ p(x) p(θ) and the label y = 1 corresponding to (x, θ) ∼ p(x, θ). This means that now the classifier learns
d_φ(x, θ) = p(x, θ) /p(x) p(θ) + p(x, θ) = p( θ | x)/p( θ | x) + p(θ),
leading to the following likelihood ratio
r(x , θ) = d_φ(x, θ)/1 - d_φ(x, θ) = p(θ | x)/p(θ).
<cit.> generalized this binary classification into a K multi-class classification and showed performance improvement when K > 2.
Similarly to NLE, given observed data x_0, the approximate posterior is then obtained by sampling from r(x_0, θ) p(θ), e.g., with MCMC.
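A minimal sketch of this binary (K = 2) classification view is given below, with an illustrative toy simulator; it is not the configuration used in our benchmark.

```python
import torch
import torch.nn as nn

def simulator(theta):                              # toy simulator, illustrative only
    return theta + 0.5 * torch.randn_like(theta)

prior = torch.distributions.Normal(0.0, 1.0)

d_phi = nn.Sequential(nn.Linear(2, 128), nn.ReLU(),
                      nn.Linear(128, 128), nn.ReLU(),
                      nn.Linear(128, 1))           # outputs the logit of d_phi(x, theta)
opt = torch.optim.Adam(d_phi.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    theta = prior.sample((256, 1))
    x = simulator(theta)
    joint = torch.cat([x, theta], dim=1)                           # y = 1: (x, theta) ~ p(x, theta)
    marginal = torch.cat([x, theta[torch.randperm(256)]], dim=1)   # y = 0: (x, theta) ~ p(x) p(theta)
    logits = d_phi(torch.cat([joint, marginal]))
    labels = torch.cat([torch.ones(256, 1), torch.zeros(256, 1)])
    opt.zero_grad(); bce(logits, labels).backward(); opt.step()

# The classifier logit equals log r(x, theta) = log p(theta | x) - log p(theta);
# the posterior for an observation x_0 is then sampled from r(x_0, theta) p(theta), e.g. with MCMC.
```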
§.§.§ Learning the Posterior
Neural Posterior Estimation (NPE) aims to directly learn the posterior distribution. Similarly to NLE, NPE is based on neural density estimators such as NFs, whose goal is to learn p_φ(θ | x) from a set of parameters and corresponding simulations (θ, x)_i=1..N. This can be done by using a conditional NF and minimizing D_KL:
φ̂ = min_φ𝔼_p(x) [ D_KL(p(θ |x) || p_φ(θ |x)) ],
ℒ = 𝔼_p(θ, x)[ - log p_φ (θ |x) ].
Note that, unlike NLE and NRE, for NPE no MCMC is needed to get samples from the posterior. This approach is very convenient if one has to evaluate the posterior distribution for different observations as it only requires a new evaluation of the learned model p_φ(θ | x).
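The sketch below illustrates the NPE training objective; for brevity, a conditional Gaussian head replaces the conditional NF used in practice, and the toy simulator is purely illustrative.

```python
import torch
import torch.nn as nn

def simulator(theta):                              # toy simulator, illustrative only
    return theta + 0.5 * torch.randn_like(theta)

prior = torch.distributions.Normal(0.0, 1.0)

# Conditional density estimator p_phi(theta | x); a diagonal Gaussian head is used here
# for brevity, while a conditional normalizing flow (e.g. a MAF) plays this role in practice.
net = nn.Sequential(nn.Linear(1, 128), nn.ReLU(), nn.Linear(128, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    theta = prior.sample((256, 1))                 # (theta, x) ~ p(theta, x)
    x = simulator(theta)
    mu, log_sigma = net(x).chunk(2, dim=1)
    q = torch.distributions.Normal(mu, log_sigma.exp())
    loss = -q.log_prob(theta).mean()               # L = E[-log p_phi(theta | x)]
    opt.zero_grad(); loss.backward(); opt.step()

# For an observation x_0, posterior samples are drawn directly from p_phi(theta | x_0):
# unlike NLE and NRE, no MCMC is required.
```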
§.§ Sequentially refined posterior
In most cases the prior p(θ) is significantly broader than the posterior p(θ | x = x_0), making it unnecessary to sample the entire parameter space.
Instead, we would like to sample from a proposal p̃(θ) which targets the most relevant regions. The question arises: how to choose this proposal p̃(θ) if we know neither the posterior's location nor its size?
Starting from the prior, sequential methods offer a way to iteratively select this proposal by using the previous posterior approximation as the new relevant area and consequently refining the posterior at each iteration.
Each of the methods described above (NPE, NLE, and NRE) can be sequentially adjustable, however, there are some specificities to bear in mind:
both SNLE <cit.> and SNRE <cit.> necessitate a sampling method or variational inference <cit.> at the end of each iteration to obtain the new parameters θ.
SNPE <cit.> usually requires a costly correction of the approximated posterior since minimizing the loss from <ref> under the proposal p̃(θ) now leads to
p̃(θ |x) = p(θ |x) p̃(θ) p(x)/p(θ) p̃(x).
§.§ Results
To benchmark (S)NLE, (S)NPE, and (S)NRE methods, we use the same benchmark procedure as the one presented in <ref>.
We use the sbi package (https://sbi-dev.github.io/sbi/) for the (S)NPE, (S)NLE, and (S)NRE methods. We choose to rely on the sbi developers' expertise and use the package's default settings, but optimizing the architectures would be interesting future work. For now, more detail about the implementation of these algorithms can be found in <ref> <ref>.
Our numerical results in <ref> suggest that the NPE and NLE methods perform best. The results also show that the sequential methods outperform their non-sequential analogs. In particular, we find that SNPE and SNLE are the methods to favor, as they achieve a posterior quality of C2ST = 0.6 with only 1 000 simulations.
§ MSE MINIMIZATION
In this section, we demonstrate that the following loss function
ℒ = 𝔼_p(x,z,θ)[ ∥ g(x, θ, z) - g_φ(x,θ) ∥_2^2 ],
is minimized ∀ (x,θ) ∼ p(x,θ) by
g_φ(x,θ) = 𝔼_p(z | x,θ)[ g(x, θ, z) ].
This proof is inspired by <cit.>.
The optimal parameters of a neural network are typically chosen such that the following gradient vanishes:
∂ℒ/∂φ = ∂/∂φ 𝔼_p(x,z,θ)[ ∥ g(x, θ, z) - g_φ(x,θ) ∥_2^2 ]
= ∂/∂ g_φ 𝔼_p(x,z,θ)[ ∥ g(x, θ, z) - g_φ(x,θ) ∥_2^2 ] ×∂ g_φ/∂φ.
Since g_φ is, by construction, very unlikely to have vanishing derivatives with respect to its parameters, it follows that
∂/∂ g_φ 𝔼_p(x,z,θ)[ ∥ g(x, θ, z) - g_φ(x,θ) ∥_2^2 ] =0.
Thanks to the Leibniz integral rule, we can exchange the gradient and the integrals such that
𝔼_p(x,z,θ)[ ∂/∂ g_φ∥ g(x, θ, z) - g_φ(x,θ) ∥_2^2 ] = 0
𝔼_p(x,z,θ)[ -2 g(x, θ, z) + 2 g_φ(x,θ)] = 0
𝔼_p( x,θ) [ -2 𝔼_p(z| x,θ)[ g(x, θ, z)] + 2 g_φ(x,θ)] = 0.
As 𝔼_p(x,z,θ)[∥ g(x, θ, z) - g_φ(x,θ) ∥_2^2] is convex with respect to g_φ, it has a unique minimum that is reached when
g_φ(x,θ) = 𝔼_p(z | x,θ)[ g(x, θ, z) ].
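A quick numerical illustration of this result, for a single (x, θ) pair and an arbitrary choice of g, is given below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fix a single (x, theta) pair and draw latents z ~ p(z | x, theta).
theta, x = 0.7, 1.3
z = rng.normal(size=100_000)
g = theta + z + 0.1 * x * z**2          # arbitrary choice of g(x, theta, z)

# The constant c minimising E[(g - c)^2] is the conditional mean E_{p(z|x,theta)}[g(x, theta, z)].
grid = np.linspace(g.mean() - 1.0, g.mean() + 1.0, 201)
mse = [np.mean((g - c) ** 2) for c in grid]
print(grid[int(np.argmin(mse))], g.mean())   # the two values agree up to the grid resolution
```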
§ ADDITIONAL INFORMATION ON EXPERIMENTS
Code for the compressor, the forward model, and the explicit full-field analysis is available at https://github.com/DifferentiableUniverseInitiative/sbi_lens. All code related to the benchmark of implicit inference techniques is available at https://github.com/LSSTDESC/sbi_bm_lens/tree/main.
§.§ Compressor Architecture
To compress the convergence maps of 5 × 256 × 256 pixels into a 6-dimensional summary statistic, we used a residual neural network (ResNet) <cit.> architecture, specifically a ResNet-18, trained under the VMIM loss function as described in <ref>.
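The sketch below illustrates the idea of this compression, assuming a PyTorch implementation with a diagonal Gaussian variational posterior; the actual implementation (available in sbi_lens) differs in framework and details.

```python
import torch
import torch.nn as nn
import torchvision

# Compressor: ResNet-18 mapping a (5, 256, 256) stack of tomographic convergence maps
# to a 6-dimensional summary statistic t = f(x).
resnet = torchvision.models.resnet18(weights=None)
resnet.conv1 = nn.Conv2d(5, 64, kernel_size=7, stride=2, padding=3, bias=False)
resnet.fc = nn.Linear(resnet.fc.in_features, 6)

# Variational distribution q(theta | t); a diagonal Gaussian head is used here for brevity,
# any conditional density estimator (e.g. an NF) can be plugged in instead.
head = nn.Sequential(nn.Linear(6, 128), nn.ReLU(), nn.Linear(128, 12))

opt = torch.optim.Adam(list(resnet.parameters()) + list(head.parameters()), lr=1e-4)

def vmim_loss(x, theta):
    t = resnet(x)                                   # summary statistic
    mu, log_sigma = head(t).chunk(2, dim=1)
    q = torch.distributions.Normal(mu, log_sigma.exp())
    # Maximising E[log q(theta | t)] maximises a variational lower bound on I(theta; t).
    return -q.log_prob(theta).sum(dim=1).mean()

# Usage with a batch of simulations (shapes: x (B, 5, 256, 256), theta (B, 6)):
# loss = vmim_loss(x_batch, theta_batch); opt.zero_grad(); loss.backward(); opt.step()
```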
§.§ Neural Network Architecture to learn marginal gradients
To learn the marginal gradients ∇_θlog p(x | θ) from the stochastic joint ones ∇_θlog p(x, z | θ) provided by the simulator, we used a neural network with 2 layers of 256 hidden units and Leaky ReLU activation functions. To test that we learned the correct marginal gradients, we compared them against the gradients of a conditional NF trained with 10^5 simulations under the NLE loss.
§.§ NLE and ∂ NLE Architectures
For this study, the NF architecture remains fixed; only the inputs change: 1) we used only simulations; 2) we used simulations and the gradients of the simulator; 3) we used simulations and the learned marginal gradients. Our conditional NF is a RealNVP <cit.>
with 4 coupling layers. Scale and shift parameters are learned using a neural network with 2 layers of 128 hidden units each, with SiLU activation functions. To get the posterior from the learned likelihood, we used the NUTS sampler.
The epistemic uncertainty is approximated by training 7 NFs.
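As an illustration, one such conditional affine coupling block could look as follows (a PyTorch sketch following the layer sizes above; the actual implementation may differ):

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One RealNVP affine coupling block acting on the 6-d summary x, conditioned on theta."""
    def __init__(self, dim=6, cond_dim=6, hidden=128):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half + cond_dim, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),   # scale and shift for the second half
        )

    def forward(self, x, theta):
        a, b = x[:, :self.half], x[:, self.half:]
        s, m = self.net(torch.cat([a, theta], dim=1)).chunk(2, dim=1)
        b = b * torch.exp(s) + m                        # affine transform of the second half
        log_det = s.sum(dim=1)                          # contribution to log|det J|
        return torch.cat([a, b], dim=1), log_det

# Stacking four such blocks (permuting dimensions in between) and maximising
# log p_phi(x | theta) via the change-of-variables formula yields the NLE likelihood estimator.
```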
§.§ Standard Implicit Inference Architectures
To compare all the implicit inference techniques, we used the sbi package (https://sbi-dev.github.io/sbi/) for the (S)NPE, (S)NLE, and (S)NRE methods.
For the sequential approach, the simulation budget was split across 5 rounds. To approximate the epistemic uncertainty we trained 5 NFs for each simulation budget.
(S)NLE -
We used the NLE and SNLE algorithms of <cit.>. In line with previous works <cit.>, our neural density estimator is a Masked Autoregressive Flow (MAF) <cit.> with 5 autoregressive layers, each with two hidden layers of 50 units, and Tanh activation functions. Still in line with previous works, we used Slice Sampling schemes to recover the posterior distribution. Note that this is not the most efficient MCMC method for exploring high-dimensional or multi-modal spaces. However, since our parameter space is 6-dimensional and almost Gaussian, this scheme works very well.
(S)NPE -
We used the NPE algorithm as formulated in <cit.>, but with a MAF as the neural density estimator instead of a Mixture Density Network (MDN). For the SNPE algorithm we used Automatic Posterior Transformation (APT) by <cit.>. In line with previous works, our neural density estimator is a MAF with 5 autoregressive layers, each with two hidden layers of 50 units, and Tanh activation functions. For APT, to compute the atomic proposal, we used M = 10 atoms. The computational complexity of APT is O(M^2) and, as underlined by <cit.>, more atoms are very demanding in terms of memory. In addition, unlike <cit.>, we found a difference in training time between M = 10 and M = 100 atoms.
Even though APT outperforms previous sequential NPE methods <cit.>, as reported by the APT paper itself <cit.> and <cit.>, this algorithm can suffer from leakage of posterior mass outside the prior support. To overcome this issue <cit.> introduced Truncated Sequential Neural Posterior Estimation (TSNPE).
(S)NRE -
We used the NRE algorithm as in <cit.> with K = 10 classes.
In line with previous works <cit.>, the K multi-class classifier is a residual neural network with two residual blocks of 50 hidden units and ReLU activation functions. Still in line with previous works, we used Slice Sampling schemes to recover the posterior distribution.
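For illustration, a non-sequential NLE run with sbi follows the pattern sketched below; class names and keyword arguments vary between sbi versions, and the prior bounds and data shown are placeholders rather than our actual configuration.

```python
import torch
from sbi.inference import SNLE
from sbi.utils import BoxUniform

# Box prior over the six cosmological parameters (bounds are placeholders).
prior = BoxUniform(low=torch.zeros(6), high=torch.ones(6))

# theta: (N, 6) parameters drawn from the prior; x: (N, 6) compressed summaries t = f(map).
theta, x = torch.rand(1000, 6), torch.rand(1000, 6)   # placeholders for real simulations

inference = SNLE(prior=prior, density_estimator="maf")
inference.append_simulations(theta, x).train()
posterior = inference.build_posterior()                # MCMC (slice sampling) under the hood
samples = posterior.sample((1000,), x=torch.rand(6))   # x_0: observed compressed summary
```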
§ ADDITIONAL CONVERGENCE PLOTS
We provide additional results showing the convergence of the inference methods. <ref> shows the evolution of the contours of the implicit inference posteriors approximated with the NLE method.
<ref>, <ref>, <ref> and <ref> show the evolution of the approximated mean and standard deviation of the posteriors obtained with NLE, ∂NLE with joint and marginal gradients, and the explicit inference method. <ref> shows the evolution of the contours of the explicit inference posterior. Finally, <ref> displays the KDE approximation used to compute the C2ST metric for the explicit inference method.
| Understanding the cause of the observed accelerated expansion of the Universe is currently a major topic in cosmology. The source of this acceleration has been dubbed Dark Energy, but its nature is still unknown. Dark Energy cannot be directly observed but several observational probes can be used to understand better its characteristics, with weak gravitational lensing, in which background galaxies are sheared by foreground matter, being one of the most powerful. This phenomenon is sensitive to both the geometry of the Universe and the growth of structure, which both depend on the cosmological parameters of the dark energy model. Many photometric galaxy surveys such as CFHTLenS <cit.>, KiDS <cit.>, DES <cit.>, and HSC <cit.>, have already demonstrated its constraining power on the matter density Ω_m and fluctuation amplitude σ_8 parameters. Upcoming weak lensing surveys (LSST <cit.>, Roman <cit.>, Euclid <cit.>) are expected to be larger and deeper allowing us to refine our estimations even further.
In cosmological inference, a significant challenge lies in the absence of an analytic likelihood p(x| θ) to recover cosmological parameters θ from the data x. Most of the mathematical inference frameworks proposed to overcome this problem are based on two-stage inference: compression of the data into summary statistics t=f(x) and then Bayesian inference to obtain the posterior p(θ| t). The most famous one is the two-point statistics analysis (e.g., <cit.>). It uses as a summary statistic t the two-point correlation function or its analog in Fourier space, the power spectrum. Then, the inference part of the analysis is performed using the corresponding analytic Gaussian likelihood p(t| θ), which is sampled through Markov Chain Monte Carlo (MCMC). On large scales, the Universe remains close to a Gaussian field and the 2-point function is a near sufficient statistic to extract cosmological information. However, on small scales, where non-linear evolution gives rise to a highly non-Gaussian field, this summary statistic is not sufficient anymore.
At a time when future surveys will access small scales, we need to investigate summary statistics that can capture non-Gaussianities. This has led to a new class of statistics, known as higher-order statistics, including, for example, lensing peak counts (e.g., <cit.>), 3-point statistics (e.g., <cit.>) and machine learning compression (e.g., <cit.>), all with varying degrees of signal extraction power. Most of the time, no analytical model p(t|θ) exists and these statistics are usually assumed to be Gaussian distributed, leading to potentially biased inference or inaccurate uncertainty estimation. On top of that, since no analytical function t = g(θ) to map cosmological parameters to the summary statistic exists, the inference part requires a large number of very costly simulations x ∼ p(x|θ) (with p(x|θ) a simulator) to compute the summary statistics t = f(x). This is in addition to the number of simulations already required to compute the covariance matrix.
Full-field inference (e.g., <cit.>) aims to perform inference from simulations without any loss of information. This means no loss of information coming from a compression step and no loss of information coming from assumptions on the likelihood function employed for inference. Hence the quality of the learned posterior is solely tied to the forward model's accuracy. This paper focuses on this particular kind of inference.
Depending on the nature of the forward model p(x|θ), one can either perform explicit inference or implicit inference. The former refers to inference methods that can be used when the likelihood function p(x = x_0|θ) can be evaluated for different θ and sampling schemes can thus be employed. The latter can be used when only an implicit version of the likelihood is available through a set of simulations (θ, x). Implicit inference, also known as likelihood-free inference or simulation-based inference, commonly recasts the inference problem as a neural density optimization problem where the distribution is learned and can be evaluated for all θ and x. There exist different flavors of implicit inference: one aims to learn the likelihood function p(x |θ) (e.g., <cit.>) or the likelihood ratio r(θ, x) = p(x |θ) / p(x) (e.g., <cit.>). This learned likelihood or likelihood ratio can then be evaluated and sampling methods can be used to get the posterior p(θ | x). Others choose to directly approximate the posterior distribution p(θ | x) (e.g., <cit.>).
Explicit inference applied in the context of full-field inference is known as Bayesian Hierarchical/Forward Modeling. Because of the high complexity and dimension of the field-based likelihood, sampling schemes guided by the gradient information ∇_θlog p(x| θ) are typically used to explore the parameter space in a more efficient way. This motivates the development of differentiable forward models p(x|θ).
Naturally, we could ask, could these gradients also help implicit inference methods for full-field inference? Specifically, <cit.> and <cit.> proposed implicit inference methods to leverage the gradient information from the forward model while approximating the likelihood, the likelihood ratio, or the posterior distribution. They showed that this additional information helps to constrain the target distribution and thus improve sample efficiency.
In summary, within the context of LSST Y10, this paper aims to answer the following questions:
* Is the differentiability of the forward model a useful asset for full-field implicit inference?
* Which methods allow full-field inference with the fewest simulations?
To meet the full-field criterion, we focus our benchmark analysis on two inference strategies:
* Explicit full-field inference: we sample our forward model through the use of the Hamiltonian Monte Carlo (HMC) sampling method. Specifically, we use the No-U-Turn (NUTS) algorithm.
* Implicit full-field inference: after compressing the simulations into sufficient statistics, we compare the Neural Likelihood Estimation (NLE) and Neural Likelihood Estimation augmented with gradients (∂NLE).
For the implicit inference strategy, maps are compressed using an optimal neural compression approach: we train a Convolutional Neural Network (CNN) by maximizing the mutual information between the cosmological parameters and the summary statistic (e.g., <cit.>; see <cit.> for a review of optimal neural compression strategies). In this study, we separate the compression process from the inference process and concentrate solely on the number of simulations necessary for inference. We will explain why in <ref>.
We use the same forward model to benchmark the different inference strategies and use the same fiducial data x_0. Our forward model is a differentiable field-based likelihood that can be evaluated and can generate simulations, such that both the explicit and implicit approaches can be performed. Specifically, it is a log-normal model that produces LSST Y10-like weak lensing convergence maps. The cosmological parameters θ that we aim to constrain are (Ω_c, Ω_b, σ_8, h_0, n_s, w_0). The forward model is implemented in the publicly available sbi_lens package.
We start by introducing our lensing forward modeling in <ref>. In <ref> we introduce our Bayesian inference framework. Then in <ref> we present the metric used to benchmark the different inference approaches. We then describe in <ref> the explicit inference approach and present the results. It is followed by the implicit inference approaches both with and without gradients and the corresponding results in <ref>. Finally, we conclude in <ref>. | null | null | null | null | null |
http://arxiv.org/abs/2409.17683v1 | 20240926094927 | Zero- and Few-shot Named Entity Recognition and Text Expansion in Medication Prescriptions using ChatGPT | ["Natthanaphop Isaradech", "Andrea Riedel", "Wachiranun Sirikul", "Markus Kreuzthaler", "Stefan Schulz"] | cs.CL | ["cs.CL", "cs.AI"]
§ ABSTRACT
Introduction: Medication prescriptions are often in free text and include a mix of two languages, local brand names, and a wide range of idiosyncratic formats and abbreviations. Large language models (LLMs) have shown promising ability to generate text in response to input prompts. We use ChatGPT 3.5 to automatically structure and expand medication statements in discharge summaries and thus make them easier to interpret for people and machines.
Methods: Named Entity Recognition (NER) and Text Expansion (EX) are used in a zero- and few-shot setting with different prompt strategies. 100 medication statements were manually annotated and curated. NER performance was measured using strict and partial matching. For the EX task, two experts interpreted the results by assessing semantic equivalence between original and expanded statements. Model performance was measured by precision, recall, and F1 score.
Results: For NER, the best-performing prompt reached an average F1 score of 0.94 in the test set. For EX, the few-shot prompt showed superior performance among other prompts, with an average F1 score of 0.87.
Conclusion: Our study demonstrates good performance for NER and EX tasks in free-text medication statements using ChatGPT. Compared to a zero-shot baseline, a few-shot approach prevented the system from hallucinating, which would be unacceptable when processing safety-relevant medication data.
§ INTRODUCTION
Prescribing drugs has a high impact on patient safety. In many places, handwritten prescriptions are still common, and only when a discharge summary is written, medication information is registered in electronic health records (EHRs). Here, physicians tend to use compact language and particularly abbreviations. They mix up brands with ingredient names and skip units ("ASA 100") and dose form information (“tablet”, “suspension”, “eyedrops”). For route of administration, medication frequency, and time patterns, a broad range of different styles is used. Abbreviated Latin terms such as “tds” (three times a day) and “qPM” (once in the evening) are common in some jurisdictions, but completely unknown in others where “1-1-1” corresponds to “tds”, and “0-0-1” to “qPM”. Wherever drug prescriptions are done in narrative form, their interpretation - particularly beyond their narrow context of use - remains a challenge. Ideally, a cryptic “ASA 100 qPM” should be automatically transformed into a clear “Acetylsalicylic acid 100 milligrams oral tablet, to be taken orally every afternoon”.
English is the lingua franca in scholarly communications, but EHRs mostly use the official languages of the respective jurisdiction. Yet there are exceptions, such as in Arabic or Asian countries, which have not developed sophisticated medical terminology in their languages to an extent that would suffice for clinical documentation and therefore use English. International healthcare teams, especially in the Middle East, communicate in English. English is the interlingua of multilingual countries such as India and Malaysia. It is therefore unsurprising that many flavors of non-standard medical English have developed, which present challenges both for native and second-language speakers. Beyond affecting communication, we expect medical content in "Asian Englishes" to also pose difficulties for machine processing of medical language, because tools and resources have been trained with mainstream English EHR content and with publications polished and standardized by editors.
This paper focuses on healthcare documentation in Thailand, which has undertaken a concerted effort to stimulate computerized physician order entry (CPOE) to achieve high-quality prescriptions <cit.>. Nevertheless, most hospitals still depend on written prescriptions, which subsequently require manual input into EHRs <cit.>. Narrative medication statements are still used for drug reconciliation and communication <cit.>. Such expressions in "Thai English" blend two languages and character sets, and use a variety of styles and formats depending on the physician's preferences, such as the merging of trade names and ingredients <cit.>, the use of abbreviated names and routes <cit.>, and the omission of dosage forms <cit.>. E.g., consider "Thyrosit (50) 0.5x1 o ac" followed by a Thai abbreviation: "Thyrosit" is a trade name for levothyroxine, and "(50)" indicates the strength. The unit (here micrograms) is omitted as there is no levothyroxine preparation at the milligram level. "0.5x1" indicates the action of consuming half ("0.5") of the dosage once ("x1") each day. Finally, "o ac" together with the Thai abbreviation comprises a mixture of Thai, English, and Latin abbreviations denoting routes and timings: "o" represents "per os (po)" (by mouth), "ac" means "ante cibum" (before meals), and the Thai abbreviation stands for "Monday to Friday".
It is foreseeable that considerable problems arise whenever we aim to automatically extract information from such free-text instructions, although over the last decade machine learning-based approaches have gained popularity in clinical information extraction thanks to the ever-improving performance of human language technologies <cit.>. Large language models (LLMs) have attracted immense interest after the release of ChatGPT in 2022 <cit.>. Their capacity for handling and analyzing large-scale text and generating content in response to input prompts makes them highly promising for a wide range of applications, from clinical named entity recognition (NER) <cit.> to encoding clinical knowledge <cit.>. Moreover, LLMs have vastly benefited from transfer learning, where their pre-existing knowledge is leveraged and fine-tuned with minimal labeled data or instructions, making them highly adaptable. This contributed to the popularity of zero- to few-shot learning paradigms: for each new class, one-shot learning uses just one labeled example, few-shot learning uses a limited set thereof, whereas in zero-shot learning no labeled data are provided at all <cit.>. The capabilities of LLMs can be customized, improved, or refined by a set of instructions called a prompt <cit.>. To effectively communicate with LLMs, prompt engineering strategies have turned out to be fundamental <cit.>.
Against this background, we focus on the idiosyncratic and compact nature of medication statements in Thai medical records and formulate our research questions as follows:
* To what extent is ChatGPT 3.5 able to restructure and normalize medication statements?
* Does this depend on prompting patterns?
* What are the weaknesses of this approach regarding patient safety?
§ METHODS
§.§ Overview
Our study consists of a named entity recognition (NER) phase and a text expansion (EX) phase, as shown in Figure <ref>. The initial step involves the human annotation of medication statements with the entity types Medication, Strength, Unit, Mode, and Instructions, as well as with manually expanded text, done by a domain expert. Prompts are then optimized by utilizing the NER-annotated training dataset (see below). Following this, prompt validation and selection occur on a separate dataset. The selected prompt is then evaluated on a designated test dataset. In the same way, leveraging the outcomes derived from the NER prompts, we design and optimize another set of prompts for the EX task, using EX-annotated training data. Finally, we select the best-performing EX prompt on the validation set and assess its performance on the test dataset.
§.§ Dataset
Nineteen discharge summaries were randomly selected from 6,000 summaries of the internal medicine department at Maharaj Nakorn Chiang Mai Hospital (2018–2022), authored by internists, residents, or 6th-year medical students. From these, 100 medication prescriptions were manually extracted, annotated by a physician, and curated by another physician. The data were then randomly divided into a training set, a validation set, and a test set with a 25:25:50 split, as shown in Table <ref>.
§.§ NER Annotation
The annotations were divided into two separate sections, one for NER annotation and one for EX annotation. For NER, the following five entity types were distinguished:
Medication. A brand, a substance, or a set of substances that make up a single drug product, depending on how it was mentioned in the prescription. Example: “Paracetamol”, “ASA”, “Thyrosit”, “Sulfamethoxazole/Trimethoprim”.
Strength. The amount of the substance. Given a prescription, "Paracetamol (500) po q 4-6 hr", "500" represents the strength of the substance, and the unit "milligram" is omitted.
Unit. The unit in which the strength is measured, such as milligrams (“mg”) or micrograms (“mcg”). The unit is often omitted, but can be inferred from the context.
Mode. The route for administering the medication, e.g. “po” (per oral), “opc” (per oral and postprandial).
Instructions. A narrative explanation of how to take the medication, including Quantity of Dose Form, Dose Form, Relation to Meal, Frequency, and Other. For example, "po pc q 4-6 hr" means consuming the medication by the oral route (po; per oral), after a meal (pc; postprandial), every 4-6 hours (q 4-6 hr).
§.§ EX Annotation
For the EX annotation, the expansion types of the text were classified into eight categories. The latter five of them, viz. Quantity of Dose Form, Dose Form, Relation to Meal, Frequency, and Others extend the Instructions entity type:
Active ingredients. Substituting the text in the Medication entity type by the corresponding active ingredient(s). E.g., converting “paracet” to “paracetamol” or “Thyrosit” to “levothyroxine”.
Unit. Expanding the short form Unit to a long-form, e.g. “milligram” for “m.g.” and “mg”.
Mode. Expanding the Mode entity type into a non-abbreviated description of the route of administration, such as “po” to “oral”.
Instructions Dose Form. Describing the form of drug dose, such as tablet, inhaler, or droplet.
Instructions Quantity of Dose Form. Expanding information related to the quantity associated with the Dose Form expansion type, expressed without abbreviations or ambiguous terms; e.g., in "Take Simvastatin 40 mg 0.5 tablet once daily", "0.5" indicates the Quantity of Dose Form.
Instructions Relation to Meal. Describing the relation of the medication intake related to meals from the Instructions entity type, including “before meals”, “after meals”, and “before bed”.
Instructions Frequency. Expanding details regarding how often the medication should be taken, e.g. “once daily”, “once weekly”, or “twice daily”.
Instructions Other. Capturing any additional information in Instructions that do not fit into the EX categories, such as the duration of the prescription, or conditional instructions such as “take when experiencing palpitations”.
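To make the interplay of the NER and EX layers concrete, a single statement could be represented as follows; the dictionary structure and field names are our own illustration and do not reflect the annotators' actual file format.

```python
# One annotated prescription represented as a Python dict (illustrative format only).
example = {
    "text": "Simvas(40) 1x1 po pc for 3 mos.",
    "ner": {
        "Medication": "Simvas",
        "Strength": "40",
        "Unit": None,                      # omitted in the original; milligram is inferred
        "Mode": "po",
        "Instructions": "1x1 pc for 3 mos.",
    },
    "expansion": {
        "Active ingredients": "Simvastatin",
        "Unit": "milligram",
        "Mode": "oral",
        "Quantity of Dose Form": "1",
        "Dose Form": "tablet",
        "Relation to Meal": "after meals",
        "Frequency": "once daily",
        "Other": "for 3 months",
    },
}
```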
§.§ Prompt engineering
Our approach to zero- and few-shot prompt design was based on combinations of prompt patterns, as recently suggested <cit.>. We first created prompts using the training dataset and assessed the ChatGPT outcomes by applying these prompts to the validation set. We then refined the prompt combinations to enhance performance, guided by the results from the validation set. Once we achieved satisfactory results, we selected the most effective prompt based on the validation set and evaluated it on the test dataset. For the NER task, we selected the following patterns for our prompt design:
Persona. Assigning a role provides context and direction, enabling the LLM to adopt a specific identity and tailor its responses accordingly.
Template. Assigning the LLM a precise template to adjust its output to a specific format.
Few-shot. Assigning the LLM prescription examples from the annotated training dataset for the LLM to understand what the output looks like. In our case, we simply give the example of annotated NER in tabular format.
The prompts were structured based on combinations of patterns.
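The exact prompts are listed in Appendices <ref> and <ref>, and in our experiments they were pasted manually into the ChatGPT interface. Purely for illustration, the following sketch shows how a Persona + Template + few-shot combination could be assembled and sent programmatically, assuming the openai Python SDK; the wording, model name, and example are illustrative.

```python
from openai import OpenAI   # assumes the openai Python SDK; wording, model name and example are illustrative

client = OpenAI()

persona = "You are a clinical pharmacist structuring free-text medication prescriptions."
template = ("Return a table with the columns Medication, Strength, Unit, Mode, Instructions "
            "for the prescription below. Leave a cell empty if the information is absent.")
few_shot = ("Example: 'Simvas(40) 1x1 po pc for 3 mos.' -> "
            "Simvas | 40 | | po | 1x1 pc for 3 mos.")

prescription = "Thyrosit (50) 0.5x1 o ac"

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": f"{template}\n{few_shot}\nPrescription: {prescription}"},
    ],
)
print(response.choices[0].message.content)
```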
§.§.§ NER Prompt Patterns
For the Named Entity Recognition (NER) task, six different prompt combinations are considered:
Prompt A. This prompt combines Persona and Template patterns.
Prompt B. This prompt uses only the Template pattern.
Prompt C. This prompt uses the Template pattern along with five examples of correct NER recognition.
Prompt D. This prompt combines the Persona and Template patterns, including five examples of correct NER recognition.
Prompt E. This prompt uses the Template pattern with ten examples of correct NER recognition.
Prompt F. This prompt combines the Persona and Template patterns with ten examples of correct NER recognition.
§.§.§ EX Prompt Patterns
For the EX task, three different prompt combinations are considered:
Prompt 1. This is a plain prompt command that does not follow any specific prompt pattern.
Prompt 2. This prompt combines Persona and Template patterns.
Prompt 3. This prompt combines Persona and Template patterns along with five examples of correct NER recognition.
The details of the prompts are in Appendices <ref> and <ref>.
To gather the NER and EX results, we manually collected the ChatGPT outputs and organized them in a spreadsheet. For every prompt, a new thread was created to keep ChatGPT's responses independent. Where no response was generated, we left the corresponding cell empty.
§.§ Evaluation
Precision (P), recall (R), and averaged F1 scores were used to evaluate the model performance. For NER, scores were computed using strict and partial matching criteria. Strict matching required identical spans and a correct entity type. Partial matching only required overlapping of spans and correct entity types. The evaluation of the EX task was based on inspection by two domain experts. They carefully reviewed each result and assessed where semantic equivalence could be asserted, according to their interpretation of the intended meaning of the prescription statement in the context of their expertise.
Figure <ref> depicts how we analyze text. For the original “Simvas(40) 1x1 po pc for 3 mos.,” our NER annotation labels spans and entity types at the token level, excluding the special characters “()[]x*”. In the EX phase, annotators expand the components Active ingredients, Units, Mode, and Instructions. For Instructions, annotators expand terms based on five classes: Quantity of dose form, Dose form, Relation to Meal, Frequency, and Other.
In the evaluation phase, ChatGPT’s generated NER output is compared with our annotations using strict and partial matching. E.g., if “Simvas(40),” is identified as medication, it would be false for strict matching and true for partial matching (because the strength “(40)” is a separate entity type). Regarding ChatGPT's EX output, each expanded class is compared for semantic equivalence. A response “by mouth” instead of the gold standard annotation “oral” is considered true because the meaning is judged identical by the experts. For Instructions, five additional categories assess the output quality. Evaluators carefully examine each category by comparing the expanded Instructions against the gold standard. E.g., if ChatGPT expanded “take 1 tablet once daily for 3 months after a meal”, we would compare “1”, “tablet”, “once daily”, “after a meal”, and “for three months” with the gold standard Quantity of dose form, Dose form, Frequency, Relation to Meal, and Other, respectively.
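A minimal sketch of the strict and partial matching logic is shown below (simplified, e.g., without de-duplicating multiple predictions that hit the same gold span); the span offsets refer to the example statement above.

```python
def spans_match(pred, gold, mode="strict"):
    """pred, gold: (start, end, entity_type) character spans, end-exclusive."""
    if pred[2] != gold[2]:
        return False
    if mode == "strict":
        return pred[0] == gold[0] and pred[1] == gold[1]
    return pred[0] < gold[1] and gold[0] < pred[1]          # partial: any overlap

def prf1(pred_spans, gold_spans, mode="strict"):
    tp = sum(any(spans_match(p, g, mode) for g in gold_spans) for p in pred_spans)
    precision = tp / len(pred_spans) if pred_spans else 0.0
    recall = tp / len(gold_spans) if gold_spans else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# "Simvas(40) 1x1 po pc for 3 mos."
gold = [(0, 6, "Medication"), (7, 9, "Strength"), (15, 17, "Mode")]
pred = [(0, 10, "Medication"), (7, 9, "Strength"), (15, 17, "Mode")]  # "Simvas(40)" over-extends the span
print(prf1(pred, gold, "strict"), prf1(pred, gold, "partial"))
```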
§ RESULTS
§.§ Zero- and few-shot named-entity recognition
The NER results are shown in Tables <ref> and <ref>. Prompts A and B showed good performance, with average F1 scores of 0.72 and 0.74, respectively, based on partial matching. Prompts C, D, E, and F demonstrated better overall performance than A or B. However, even though prompt C generally had a better overall F1 than prompt A, it exhibited the poorest Unit detection of all prompts, despite using the same prompt pattern with more examples. Prompt D showed a similar issue, as it failed to recognize Instructions effectively. Prompts E and F showed superior performance compared to the others, with average F1 scores of 0.94 and 0.90, respectively (partial matching). Both prompts proved promising for medical NER tasks. Nevertheless, we decided to prioritize prompt E, as it had the highest average F1 and better NER performance on Unit in partial matching.
The results of the NER task on the test dataset are shown in Table <ref>. Overall, ChatGPT, guided by prompt E, achieved good performance for the NER task, with an average F1 score over all entity types of 0.79 and 0.92 based on strict and partial matching, respectively. However, the test dataset findings highlighted that Unit continues to be the poorest-performing entity type in NER. Compared to the validation dataset, the performance metrics for the test dataset are only slightly inferior, indicating that ChatGPT generalizes well when identifying entity types.
§.§ Zero- and few-shot text expansion and manual assessment
After obtaining the metrics from the NER prompts, we passed the ChatGPT output of the best-performing prompt E to the EX phase of our experiment. The NER output was converted to a tabular format so that the text could be pasted into the ChatGPT chatbox. The EX performance results are shown in Table <ref>. On the validation dataset, Prompt 3 had the best overall performance on term expansion in all categories except Unit, with an average F1 score of 0.87. Prompts 2 and 3 were generally better than Prompt 1, as they were given more examples of medication prescriptions. Prompt 1, being the simplest EX prompt, was not capable of expanding any terms in the Mode class at all. Prompt 3 was then selected for test dataset evaluation; the results showed an average F1 of 0.77, a decrease of 0.10 compared to the validation set, because the metrics for Unit and [Instructions] Relation to Meal dropped considerably.
§ DISCUSSION
§.§ Comparison to previous work
Early attempts to use ChatGPT for clinical NLP were made by Hu et al., but its performance lagged behind that of the supervised BioClinicalBERT model for the extraction of treatments mentioned in discharge summaries <cit.>. However, they did not distinguish drug treatments from other medical treatments. Therefore, our work appears to be the first to use ChatGPT for specific and detailed medication statement analysis.
Comparing our NER results (mean F1 score 0.92) to an earlier study using CRFs (mean F1 score 0.80) shows a clear improvement <cit.>. In that study, medication details were improperly delineated whenever punctuation patterns such as brackets did not follow the norm <cit.>. Problems like this can mostly be seen as resolved by deep learning; accordingly, the unusual bracketing of strength (e.g., "ASA (500)") did not constitute an obstacle.
More interesting is the comparison with MT-NER-PMB (2021), a recent non-LLM baseline that outperformed other BERT models. Comparing our partial-matching F1 scores to theirs, we obtained slightly better detection for Medication (0.99 vs. 0.96), Mode (1.00 vs. 0.95), and Strength (0.99 vs. 0.98). However, our study separates Strength and Unit; therefore, the relatively low F1 score for Unit (0.51) is not included in the Strength calculation <cit.>.
That our medication statements often did not contain units of measurement (constituting so-called zero-width annotations in the gold standard, cf. Fig. <ref>) led to low precision for Prompt 1, because it repeated the NER task by adding the found abbreviation to the text. The results suggest that the addition of prompt patterns in prompts 2 and 3 solved this.
Compared to traditional machine learning, it was astonishing to observe how few training examples produced a considerable benefit. This highlights the importance of good prompt engineering to optimally exploit the content of an LLM. However, we cannot explain the poor result for Unit in NER, whereas the even worse performance of Unit in EX (such as from "mg" to "Milligram") may result from the fact that unit symbols are hardly ever expanded, even in scholarly or popular publications or drug leaflets.
§.§ Error Analysis
An important result of our study was that incremental LLM prompting with repeated human assessment improved the results: the NER task achieved an F1 of 0.79 and 0.92 on the test set for strict and partial matching, respectively, with the recognition of units of measurement (which were often omitted) showing the poorest results. The Unit expansion also performed poorly in the EX task, achieving an F1 of only 0.07, against an average of 0.77 over all expansion types. The expansion of abbreviated drug names and brand names to active ingredients achieved a precision of 1.00, as did the expansion of the mode of administration, which shows that the results did not deteriorate compared to the original text. It is important to highlight that, from a patient safety point of view, precision is more important than recall. A low recall means that the system did not improve the quality of the medication statement, whereas any suboptimal precision value bears the risk that the content of the statement is distorted.
Unclear short forms are a well-known problem in clinical texts <cit.>. Abbreviation of drugs often follows local jargon. Interestingly, only one abbreviation ("ASA") was correctly resolved, whereas the others were not resolved at all ("MFM" for metformin, "MTV" for multivitamins, and "K" for potassium). Regarding ChatGPT's risk of hallucination, it is remarkable that the system opted for non-resolution instead of wrong resolution. While with prompt D the abbreviation "ORS" for "oral rehydration solution" was not even detected as Medication during the NER validation phase, with prompt E it was detected in NER on both the validation and test sets, but it was not expanded in the test EX phase. Despite being a common abbreviation in our dataset, its complexity is increased by the mixture of languages and documentation styles. Our results for EX show a drastic increase in F1 for the expansion type Mode from Prompt 1 without any prompt patterns (F1 score 0.00) to Prompt 2 with Persona and Template added (F1 score 0.94). With Prompt 2, ChatGPT detected all abbreviations of "oral" in all samples, but still tried to translate each abbreviation literally, e.g., "opc" as "oral, by consumption" or "po" as "per mouth". By giving 5 examples with the "Few-shot" prompt pattern in Prompt 3, ChatGPT resolved, e.g., "po", "o", "opc", and "po ac" as "oral". Even "oac", which was not given as an example, was now correctly resolved as "oral" and not as "oral administration". The changed handling of Mode also leads to an increase of 0.11 in F1 for the expansion type Relation to Meal in Prompt 3, because ChatGPT can now categorize the subsequent information. In contrast to the quality improvement of ChatGPT's output for Mode, its F1 slightly decreased by 0.03. Adding clearly defined, case-specific prompt patterns for foreign-language abbreviations is an essential step towards obtaining the expected results.
Even though most of our medication statements are in English, some Thai brand names such as "HIDIL" expose the non-English vocabulary limitations of ChatGPT. For the expansion type Active Ingredients, the best results were obtained when the active ingredient was already mentioned in the prescription and detected in the NER phase; medication brand names were less often transformed into the active ingredient, e.g., instead of expanding "HIDIL" to gemfibrozil, ChatGPT inserted the brand name "HIDIL". This parallels a similar finding when querying ChatGPT with the Chinese medication "Motrin" (ibuprofen), which was mistakenly linked to aspirin when retrieving information related to adverse drug reactions <cit.>.
§.§ Limitations
Hallucinations are a relevant issue in ChatGPT <cit.>. Here, we obtained the remarkable result that only one instance of a hallucinated response could be identified in the test data (n=50). The daily dosage instruction of an antipsychotic drug was interpreted correctly, but ChatGPT added an "as needed" statement on top, albeit for a small dose that is not expected to cause harm.
Given the original text prescription "Ridperidone (1) 0.5x1 po hs", ChatGPT returned "0.5 Tablet oral at bedtime; 0.25 Tablet oral as needed for agitation" for the Instructions EX task instead of "0.5 Tablet Oral before bed Once daily", which is the correct answer.
The size of the dataset must also be mentioned. Particularly, the assessment of rare but important outcomes such as the occurrence and the severity of hallucinations would have required much more data. Another limitation was the restriction to ChatGPT 3.5. We could have compared it to other LLMs, but our focus has been on the comparison of different prompting strategies rather than different models.
§ CONCLUSIONS AND OUTLOOK
Converting medication information into a standardized format is expected to improve patient safety by making it easier to interpret by both humans and machines. This study demonstrated good performance in the task of entity recognition (NER) and entity expansion (EX) in free-text medication statements using ChatGPT 3.5, in comparison to related work. The data, taken from Thai medical records, exhibited particularities in style and format and used a mixture of English and Thai, including local drug product names. We were able to demonstrate good performance in NER and EX. Compared to a zero-shot baseline, a 10-shot approach prevented the system from hallucinating, which would be unacceptable when processing highly safety-relevant medication data.
Our study works on anonymized text snippets of real-world data. Further tests and analyses of large real-world patient datasets, including for training, are necessary before ChatGPT can be used across a variety of disciplines and text document types. For the future use of real-world patient data, ethical and data safety issues in particular need to be discussed <cit.>. A good alternative would be an on-premise LLM deployment. Like Hu et al. with ChatGPT version 3, we also experienced "a significant degree of randomness" when repeatedly running identical prompts with version 3.5. In our study, the input sequence length was even longer than the one presented in the publication of Hu et al. <cit.>.
As we are currently at the very beginning of the LLM era, we expect even better performance in the future. Medication-related statements are a particularly challenging use case because their overly compact and idiosyncratic style constitutes a risk for human and machine misinterpretation with unforeseeable impacts on patient safety on the one hand. On the other hand, a similar risk may occur from even minor hallucinations and imprecisions when expanded and normalized by AI methods. Finally, medication styles drastically vary across languages and jurisdictions. We therefore advocate investing particular efforts in the creation of multilingual, multicultural, and multi-specialty medication benchmarks.
§ DATA ACCESS
The dataset was approved by the Research Ethics Committee, Faculty of Medicine, Chiang Mai University, Thailand: COM-256509305, Research-ID: 9305.
§ ACKNOWLEDGEMENT
AR’s fellowship was granted by the German BMBF within the DAAD IFI programme. NI’s fellowship was granted by OeAD-GmbH/MPC in cooperation with ASEA-UNINET, financed by the Austrian BMBWF. This manuscript is part of a thesis for NI’s Ph.D. in Digital Health, Faculty of Medicine, Chiang Mai University.
§ CONFLICT OF INTEREST
Nothing to declare.
10
url<#>1urlprefixURL href#1#2#2 #1#1
theera-ampornpunt_thai_2011
N. Theera-Ampornpunt, Thai hospitals' adoption of information technology: a theory development and nationwide survey (Dec. 2011). http://conservancy.umn.edu/handle/11299/162267
prakob_retrospective_2023
J. Prakob, P. Maturapanee, W. Pianmongkon, M. Nenpang, A Retrospective Study of Prescribing Error Associated with The Computerized Prescribing System and The Handwritten Prescribing System Among Prescriptions in a Private Hospital, Chiang Mai Province, The Bangkok Medical Journal 19 (2023) 85–91. doi:10.31524/bkkmedj.2023.21.003
sanguansak_impact_2012
T. Sanguansak, M. G. Morley, Y. Yospaiboon, A. Lorch, B. Hedt, K. Morley, The impact of preprinted prescription forms on medication prescribing errors in an ophthalmology clinic in northeast Thailand: a non-randomised interventional study, BMJ Open 2 (1) (2012) e000539. doi:10.1136/bmjopen-2011-000539
chiewchantanakit_effectiveness_2020
D. Chiewchantanakit, A. Meakchai, N. Pituchaturont, P. Dilokthornsakul, T. Dhippayom, The effectiveness of medication reconciliation to prevent medication error: A systematic review and meta-analysis, Research in Social and Administrative Pharmacy 16 (7) (2020) 886–894. doi:10.1016/j.sapharm.2019.10.004
noauthor_rational_nodate
Rational Use of Medicines in the ASEAN Region. https://asean.org/book/rational-use-of-medicines-in-the-asean-region/
salmasi_medication_2015
S. Salmasi, T. M. Khan, Y. H. Hong, L. C. Ming, T. W. Wong, Medication Errors in the Southeast Asian Countries: A Systematic Review, PLoS ONE 10 (9) (2015) e0136545. doi:10.1371/journal.pone.0136545
hahn_medical_2020
U. Hahn, M. Oleynik, Medical Information Extraction in the Age of Deep Learning, Yearbook of Medical Informatics 29 (1) (2020) 208–220. doi:10.1055/s-0040-1702001
jin_large_2023
B. Jin, G. Liu, C. Han, M. Jiang, H. Ji, J. Han, Large Language Models on Graphs: A Comprehensive Survey, arXiv:2312.02783 (Dec. 2023). doi:10.48550/arXiv.2312.02783
dave_chatgpt_2023
T. Dave, S. A. Athaluri, S. Singh, ChatGPT in medicine: an overview of its applications, advantages, limitations, future prospects, and ethical considerations, Frontiers in Artificial Intelligence 6 (2023) 1169595. doi:10.3389/frai.2023.1169595
vaswani_attention_2023
A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, I. Polosukhin, Attention Is All You Need, arXiv:1706.03762 (Aug. 2023). doi:10.48550/arXiv.1706.03762
ramachandran_extracting_2023
G. K. Ramachandran, K. Lybarger, Y. Liu, D. Mahajan, J. J. Liang, C.-H. Tsou, M. Yetisgen, Ö. Uzuner, Extracting medication changes in clinical narratives using pre-trained language models, Journal of Biomedical Informatics 139 (2023) 104302. doi:10.1016/j.jbi.2023.104302
sivarajkumar_healthprompt_2022
S. Sivarajkumar, Y. Wang, HealthPrompt: A Zero-shot Learning Paradigm for Clinical Natural Language Processing, arXiv:2203.05061 (Mar. 2022). doi:10.48550/arXiv.2203.05061
singhal_large_2023
K. Singhal, S. Azizi, T. Tu, S. S. Mahdavi, J. Wei, H. W. Chung, N. Scales, A. Tanwani, H. Cole-Lewis, S. Pfohl, P. Payne, M. Seneviratne, P. Gamble, C. Kelly, A. Babiker, N. Schärli, A. Chowdhery, P. Mansfield, D. Demner-Fushman, B. Agüera y Arcas, D. Webster, G. S. Corrado, Y. Matias, K. Chou, J. Gottweis, N. Tomasev, Y. Liu, A. Rajkomar, J. Barral, C. Semturs, A. Karthikesalingam, V. Natarajan, Large language models encode clinical knowledge, Nature 620 (7972) (2023) 172–180. doi:10.1038/s41586-023-06291-2
kadam_review_2020
S. Kadam, V. Vaidya, Review and Analysis of Zero, One and Few Shot Learning Approaches, in: A. Abraham, A. K. Cherukuri, P. Melin, N. Gandhi (Eds.), Intelligent Systems Design and Applications, Advances in Intelligent Systems and Computing, Springer International Publishing, Cham, 2020, pp. 100–112. doi:10.1007/978-3-030-16657-1_10
zhou_large_2023
Y. Zhou, A. I. Muresanu, Z. Han, K. Paster, S. Pitis, H. Chan, J. Ba, Large Language Models Are Human-Level Prompt Engineers, arXiv:2211.01910 (Mar. 2023). doi:10.48550/arXiv.2211.01910
white_prompt_2023
J. White, Q. Fu, S. Hays, M. Sandborn, C. Olea, H. Gilbert, A. Elnashar, J. Spencer-Smith, D. C. Schmidt, A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT, arXiv:2302.11382 (Feb. 2023). doi:10.48550/arXiv.2302.11382
hu_zero-shot_2023
Y. Hu, I. Ameer, X. Zuo, X. Peng, Y. Zhou, Z. Li, Y. Li, J. Li, X. Jiang, H. Xu, Zero-shot Clinical Entity Recognition using ChatGPT (2023). doi:10.48550/arXiv.2303.16416
tao_prescription_2017
C. Tao, M. Filannino, Ö. Uzuner, Prescription extraction using CRFs and word embeddings, Journal of Biomedical Informatics 72 (2017) 60–66. doi:10.1016/j.jbi.2017.07.002
wang_future_2023
H. Wang, Y. J. Ding, Y. Luo, Future of ChatGPT in Pharmacovigilance, Drug Safety 46 (8) (2023) 711–713. doi:10.1007/s40264-023-01315-2
narayanan_contextual_2022
S. Narayanan, K. Mannam, P. Achan, M. V. Ramesh, P. V. Rangan, S. P. Rajan, A contextual multi-task neural approach to medication and adverse events identification from clinical text, Journal of Biomedical Informatics 125 (2022) 103960. doi:10.1016/j.jbi.2021.103960
kreuzthaler_unsupervised_2016
M. Kreuzthaler, M. Oleynik, A. Avian, S. Schulz, Unsupervised Abbreviation Detection in Clinical Narratives, in: A. Rumshisky, K. Roberts, S. Bethard, T. Naumann (Eds.), Proceedings of the Clinical Natural Language Processing Workshop (ClinicalNLP), Osaka, Japan, 2016, pp. 91–98. https://aclanthology.org/W16-4213
fink_potential_2023
M. A. Fink, A. Bischoff, C. A. Fink, M. Moll, J. Kroschke, L. Dulz, C. P. Heußel, H.-U. Kauczor, T. F. Weber, Potential of ChatGPT and GPT-4 for Data Mining of Free-Text CT Reports on Lung Cancer, Radiology 308 (3) (2023) e231362. doi:10.1148/radiol.231362
lee_utilizing_2023
S.-W. Lee, W.-J. Choi, Utilizing ChatGPT in clinical research related to anesthesiology: a comprehensive review of opportunities and limitations, Anesthesia and Pain Medicine 18 (3) (2023) 244–251. doi:10.17085/apm.23056
sallam_chatgpt_2023
M. Sallam, ChatGPT Utility in Healthcare Education, Research, and Practice: Systematic Review on the Promising Perspectives and Valid Concerns, Healthcare 11 (6) (2023) 887. doi:10.3390/healthcare11060887
beltrami_consulting_2023
E. J. Beltrami, J. M. Grant-Kels, Consulting ChatGPT: Ethical dilemmas in language model artificial intelligence, Journal of the American Academy of Dermatology (2023). doi:10.1016/j.jaad.2023.02.052
§ PROMPTS USED IN NER TASK FOR CHATGPT 3.5
Prompt A
You are now a Named Entity Recognition Model. I will give you a list of narrative drug prescriptions. Please slice the narrative text based on the Entity Types you detect and organize it as a table record with the columns: Medication ET, Strength ET, Unit ET, Quantity of Dose Form per intake ET, Dose Form ET, Mode ET, Timing ET, Frequency ET, Duration ET, Instructions ET (Dose, Frequency, Duration) without changing anything in the narrative prescription. For missing values, leave the cell blank.
Prompt B
I will give you a list of narrative drug prescriptions. Please slice the narrative text based on the Entity Types you detect and organize it as a table record with the columns: Medication ET, Strength ET, Unit ET, Quantity of Dose Form per intake ET, Dose Form ET, Mode ET, Timing ET, Frequency ET, Duration ET, Instructions ET (Dose, Frequency, Duration) without changing anything in the narrative prescription. For missing values, leave the cell blank.
Prompt C
I will give you a list of narrative drug prescriptions. Please slice the narrative text based on the Entity Types you detect and organize it as a table record with the columns: Medication ET, Strength ET, Unit ET, Quantity of Dose Form per intake ET, Dose Form ET, Mode ET, Timing ET, Frequency ET, Duration ET, Instructions ET (Dose, Frequency, Duration) without changing anything in the narrative prescription. For missing values, leave the cell blank.
Here are some examples that you can study with:
* Xarator (40) 1/2x1 opc, Xarator, 40, mg, 1/2, tablet, opc, opc, 1/2x1, , 1/2x1 opc
* Douzabox 1x2 opc, Douzabox, , , 1, tablet, opc, opc, 1x2, , 1x2 opc
* omeprazole (20) 1x1 po ac, omeprazole, 20, mg, 1, tablet, po, ac, 1x1, , 1x1 po ac
* Thyrosit (50) 0.5x1 o ac [Thai text], Thyrosit, 50, mcg, 0.5, tablet, o, ac, 0.5x1, [Thai text], 0.5x1 o ac [Thai text]
* Mevalotin Pretect (40) 1x1 o pc, Mevalotin Pretect, 40, mg, 1, tablet, o, pc, 1x1, , 1x1 o pc
Prompt D
You are now a Named Entity Recognition Model. I will give you a list of narrative drug prescriptions. Please slice the narrative text based on the Entity Types you detect and organize it as a table record with the columns: Medication ET, Strength ET, Unit ET, Quantity of Dose Form per intake ET, Dose Form ET, Mode ET, Timing ET, Frequency ET, Duration ET, Instructions ET (Dose, Frequency, Duration) without changing anything in the narrative prescription. For missing values, leave the cell blank.
Here are some examples that you can study with:
* Xarator (40) 1/2x1 opc, Xarator, 40, mg, 1/2, tablet, opc, opc, 1/2x1, , 1/2x1 opc
* Douzabox 1x2 opc, Douzabox, , , 1, tablet, opc, opc, 1x2, , 1x2 opc
* omeprazole (20) 1x1 po ac, omeprazole, 20, mg, 1, tablet, po, ac, 1x1, , 1x1 po ac
* Thyrosit (50) 0.5x1 o ac [Thai text], Thyrosit, 50, mcg, 0.5, tablet, o, ac, 0.5x1, [Thai text], 0.5x1 o ac [Thai text]
* Mevalotin Pretect (40) 1x1 o pc, Mevalotin Pretect, 40, mg, 1, tablet, o, pc, 1x1, , 1x1 o pc
Prompt E
I will give you a list of narrative drug prescriptions. Please slice the narrative text based on the Entity Types you detect and organize it as a table record with the columns: Medication ET, Strength ET, Unit ET, Quantity of Dose Form per intake ET, Dose Form ET, Mode ET, Timing ET, Frequency ET, Duration ET, Instructions ET (Dose, Frequency, Duration) without changing anything in the narrative prescription. For missing values, leave the cell blank.
Here are some examples that you can study with:
* Xarator (40) 1/2x1 opc, Xarator, 40, mg, 1/2, tablet, opc, opc, 1/2x1, , 1/2x1 opc
* Douzabox 1x2 opc, Douzabox, , , 1, tablet, opc, opc, 1x2, , 1x2 opc
* omeprazole (20) 1x1 po ac, omeprazole, 20, mg, 1, tablet, po, ac, 1x1, , 1x1 po ac
* Thyrosit (50) 0.5x1 o ac [Thai text], Thyrosit, 50, mcg, 0.5, tablet, o, ac, 0.5x1, [Thai text], 0.5x1 o ac [Thai text]
* Mevalotin Pretect (40) 1x1 o pc, Mevalotin Pretect, 40, mg, 1, tablet, o, pc, 1x1, , 1x1 o pc
* Ridperidone (1) 0.5x1 po hs, Ridperidone, 1, mg, 0.5, tablet, po, hs, 0.5x1, , 0.5x1 po hs
* Cotrimazole 2 tab po od, Cotrimazole, , , 2, tablet, po, , od, , 2 tab po od
* Trazodone (50) 1x1 po hs, Trazodone, 50, mg, 1, tablet, po, hs, 1x1, , 1x1 po hs
* LoRANTA (100) 1x1, LoRANTA, 100, mg, 1, tablet, , , 1x1, , 1x1
* Nuelin SR (200) 1x2, Nuelin SR, 200, mg, 1, tablet, , , 1x2, , 1x2
Prompt F
You are now a Named Entity Recognition Model. I will give you a list of narrative drug prescriptions. Please slice the narrative text based on the Entity Types you detect and organize it as a table record with the columns: Medication ET, Strength ET, Unit ET, Quantity of Dose Form per intake ET, Dose Form ET, Mode ET, Timing ET, Frequency ET, Duration ET, Instructions ET (Dose, Frequency, Duration) without changing anything in the narrative prescription. For missing values, leave the cell blank.
Here are some examples that you can study with:
* Xarator (40) 1/2x1 opc, Xarator, 40, mg, 1/2, tablet, opc, opc, 1/2x1, , 1/2x1 opc
* Douzabox 1x2 opc, Douzabox, , , 1, tablet, opc, opc, 1x2, , 1x2 opc
* omeprazole (20) 1x1 po ac, omeprazole, 20, mg, 1, tablet, po, ac, 1x1, , 1x1 po ac
* Thyrosit (50) 0.5x1 o ac [Thai text], Thyrosit, 50, mcg, 0.5, tablet, o, ac, 0.5x1, [Thai text], 0.5x1 o ac [Thai text]
* Mevalotin Pretect (40) 1x1 o pc, Mevalotin Pretect, 40, mg, 1, tablet, o, pc, 1x1, , 1x1 o pc
* Ridperidone (1) 0.5x1 po hs, Ridperidone, 1, mg, 0.5, tablet, po, hs, 0.5x1, , 0.5x1 po hs
* Cotrimazole 2 tab po od, Cotrimazole, , , 2, tablet, po, , od, , 2 tab po od
* Trazodone (50) 1x1 po hs, Trazodone, 50, mg, 1, tablet, po, hs, 1x1, , 1x1 po hs
* LoRANTA (100) 1x1, LoRANTA, 100, mg, 1, tablet, , , 1x1, , 1x1
* Nuelin SR (200) 1x2, Nuelin SR, 200, mg, 1, tablet, , , 1x2, , 1x2
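For illustration, the following sketch (Python) shows one way such few-shot prompts could be assembled programmatically before being pasted into the ChatGPT chat interface (or sent through an API). It is only an illustrative reconstruction: the helper function and the closing "Here are the prescriptions:" line are our assumptions and were not part of the study protocol, where the prompt text was entered manually.

```python
# Columns exactly as listed in prompts A-F above.
COLUMNS = ("Medication ET, Strength ET, Unit ET, Quantity of Dose Form per intake ET, "
           "Dose Form ET, Mode ET, Timing ET, Frequency ET, Duration ET, "
           "Instructions ET (Dose, Frequency, Duration)")

INSTRUCTION = (
    "I will give you a list of narrative drug prescriptions. Please slice the narrative text "
    "based on the Entity Types you detect and organize it as a table record with the columns: "
    + COLUMNS +
    " without changing anything in the narrative prescription. For missing values, leave the cell blank."
)

def build_prompt(examples, prescriptions, persona=False):
    """Assemble a prompt in the style of prompt E (persona=True approximates prompt F)."""
    lines = []
    if persona:
        lines.append("You are now a Named Entity Recognition Model.")
    lines.append(INSTRUCTION)
    if examples:
        lines.append("Here are some examples that you can study with:")
        lines.extend(f"* {ex}" for ex in examples)
    lines.append("Here are the prescriptions:")  # assumed closing line, not taken from the study
    lines.extend(f"* {rx}" for rx in prescriptions)
    return "\n".join(lines)
```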
§ PROMPTS USED IN EX TASK
Prompt 1
I will give you a table of medication data in csv format.
Please translate and expand the information in columns named Medication ET, Unit ET, Mode ET, Instruction ET (Dose,Frequency,Duration)
Here is the table that I want you to translate and expand: Original Text,Medication ET,Unit ET,Mode ET,"Instructions ET (Dose, Frequency, Duration)"
* Amlodipine (10) 1x1 opc: Amlodipine, mg, opc, 1x1 opc
* ORS: ORS, , , ,
* paracet (500) 1 tab po prn q 4-6 hr: paracet, mg, po, 1 tab po prn q 4-6 hr
* Ramipril (2.5) 0.5x1 po hs: Ramipril, , po, 0.5x1 po hs
* VitD2 (20000) 1 tab po weekly: VitD2, , po, 1 tab po weekly
* Omeprazole (20) 1x1 oac: Omeprazole, mg, o, 1x1 oac
* Filgrastim 300 sc od start day 3-12: Filgrastim, , sc, 300 sc od start day 3-12
* Hidil Cap(300) 1x1: Hidil Cap, , , , 1x1
* Eucor Tab(20) 1x1: Eucor Tab, , , , 1x1
* Keflex(500) 1x4 opc: Keflex, , opc, 1x4 opc
* Rosuvastatin 20 1 x 1 po hs: Rosuvastatin, , po, 1x1 po hs
* Norfloxacin (400) 1x2 po ac: Norfloxacin, , po, 1x2 po ac
* Lorazepam (0.5) 1 tab po prn hs: Lorazepam, mg, po, 1 tab po prn hs
* Senokot 2 tabs po prn constipation hs: Senokot, , po, 2 tabs po prn constipation hs
* Ferli-6 1 tab po bid pc: Ferli-6, , po, 1 tab po bid pc
* Senokot 2 tab po hs: Senokot, , po, 2 tabs po hs
* Vit B co 1*3 po pc: Vit B co, , po, 1*3 po pc
* PROgraf Cap(1 ) 2*2 po pc: PROgraf Cap, , po, 2*2 po pc
* Ursolin (250) 1*3 po pc: Ursolin, , po, 1*3 po pc
* Paracetamol (500) 1 tab po prn q 4-6 hr: Paracetamol, mg, po, 1 tab po prn q 4-6 hr
* Mucotic(600) 1 [Thai text] opc: Mucotic, , opc, 1 sachet opc
* Clopidogrel(75) 1*1 po pc: Clopidogrel, , po, 1*1 po pc
* Mydocalm 1x3 po pc: Mydocalm, , po, 1x3 po pc
* Bisoprolol(2.5) 1/2x1 po pc: Bisoprolol, , po, 1/2x1 po pc
* Omeprazole(20) 1x1 po ac: Omeprazole, , po, 1x1 po ac
Prompt 2
You are now a medication interpretator. I will give you a table of medication data in csv format.
Please normalize the information in columns named Medication ET, Unit ET, Mode ET, Instruction ET (Dose, Frequency, Duration) based on the following instructions:
Original Text: This is the original narrative text of the medication prescription.
Medication ET: This is the medication entity type. Please interpret and put the results in a new table named "Active Ingredients EX". In case of multiple possible entries in the fields of Active Ingredients, separate the entries by ";" in the same cell.
Unit ET: This is a unit entity type. Please interpret this and put the result in a new table named "Unit EX".
Mode ET: This is an intake route entitype type of the medication. Please interpret this and put the result in "Mode EX" column.
Instructions ET (Dose,Frequency,Duration): This is an instruction entity type. Please interpret this and put the result in "Instructions (Dose,Frequency,Duration) EX" column.
Here is the table that I want you to transform:
Original Text,Medication ET,Unit ET,Mode ET,"Instructions ET (Dose, Frequency, Duration)"
* Amlodipine (10) 1x1 opc: Amlodipine, mg, opc, 1x1 opc
* ORS: ORS, , , ,
* paracet (500) 1 tab po prn q 4-6 hr: paracet, mg, po, 1 tab po prn q 4-6 hr
* Ramipril (2.5) 0.5x1 po hs: Ramipril, , po, 0.5x1 po hs
* VitD2 (20000) 1 tab po weekly: VitD2, , po, 1 tab po weekly
* Omeprazole (20) 1x1 oac: Omeprazole, mg, o, 1x1 oac
* Filgrastim 300 sc od start day 3-12: Filgrastim, , sc, 300 sc od start day 3-12
* Hidil Cap(300) 1x1: Hidil Cap, , , , 1x1
* Eucor Tab(20) 1x1: Eucor Tab, , , , 1x1
* Keflex(500) 1x4 opc: Keflex, , opc, 1x4 opc
* Rosuvastatin 20 1 x 1 po hs: Rosuvastatin, , po, 1x1 po hs
* Norfloxacin (400) 1x2 po ac: Norfloxacin, , po, 1x2 po ac
* Lorazepam (0.5) 1 tab po prn hs: Lorazepam, mg, po, 1 tab po prn hs
* Senokot 2 tabs po prn constipation hs: Senokot, , po, 2 tabs po prn constipation hs
* Ferli-6 1 tab po bid pc: Ferli-6, , po, 1 tab po bid pc
* Senokot 2 tab po hs: Senokot, , po, 2 tabs po hs
* Vit B co 1*3 po pc: Vit B co, , po, 1*3 po pc
* PROgraf Cap(1 ) 2*2 po pc: PROgraf Cap, , po, 2*2 po pc
* Ursolin (250) 1*3 po pc: Ursolin, , po, 1*3 po pc
* Paracetamol (500) 1 tab po prn q 4-6 hr: Paracetamol, mg, po, 1 tab po prn q 4-6 hr
* Mucotic(600) 1 [Thai text] opc: Mucotic, , opc, 1 sachet opc
* Clopidogrel(75) 1*1 po pc: Clopidogrel, , po, 1*1 po pc
* Mydocalm 1x3 po pc: Mydocalm, , po, 1x3 po pc
* Bisoprolol(2.5) 1/2x1 po pc: Bisoprolol, , po, 1/2x1 po pc
* Omeprazole(20) 1x1 po ac: Omeprazole, , po, 1x1 po ac
The end output should compile all results into one unified table.
Prompt 3
You are now a medication interpretator. I will give you a table of medication data in csv format.
Please normalize the information in column named Medication ET, Unit ET, Mode ET, Instruction ET (Dose,Frequency,Duration) based on the following instructions:
Original Text: This is the original narrative text of medication prescription.
Medication ET: This is the medication entity types. Please interpret and put the results in a new table named "Active Ingredients EX". In case of multiple possible entries in the fields of Active Ingredients, separate the entries by ";" in the same cell.
Unit ET: This is a unit entity type. Please interpret this and and put the result in a new table named "Unit EX". For example, "mg" should be "milligram."
Mode ET: This is an intake route entitype type of the medication. Please interpret this and put the result in "Mode EX" column. For example, "po" should be "oral."
Instructions ET (Dose,Frequency,Duration): This is an instruction entity type. Please interpret this and put the result in "Instructions (Dose,Frequency,Duration) EX" column. For example, "1*1 po pc" should be translated into "1 tablet oral after meal once daily"
Here is the table that I want you to transform:
Original Text,Medication ET,Unit ET,Mode ET,"Instructions ET (Dose, Frequency, Duration)"
* Amlodipine (10) 1x1 opc: Amlodipine, mg, opc, 1x1 opc
* ORS: ORS, , , ,
* paracet (500) 1 tab po prn q 4-6 hr: paracet, mg, po, 1 tab po prn q 4-6 hr
* Ramipril (2.5) 0.5x1 po hs: Ramipril, , po, 0.5x1 po hs
* VitD2 (20000) 1 tab po weekly: VitD2, , po, 1 tab po weekly
* Omeprazole (20) 1x1 oac: Omeprazole, mg, o, 1x1 oac
* Filgrastim 300 sc od start day 3-12: Filgrastim, , sc, 300 sc od start day 3-12
* Hidil Cap(300) 1x1: Hidil Cap, , , , 1x1
* Eucor Tab(20) 1x1: Eucor Tab, , , , 1x1
* Keflex(500) 1x4 opc: Keflex, , opc, 1x4 opc
* Rosuvastatin 20 1 x 1 po hs: Rosuvastatin, , po, 1x1 po hs
* Norfloxacin (400) 1x2 po ac: Norfloxacin, , po, 1x2 po ac
* Lorazepam (0.5) 1 tab po prn hs: Lorazepam, mg, po, 1 tab po prn hs
* Senokot 2 tabs po prn constipation hs: Senokot, , po, 2 tabs po prn constipation hs
* Ferli-6 1 tab po bid pc: Ferli-6, , po, 1 tab po bid pc
* Senokot 2 tab po hs: Senokot, , po, 2 tabs po hs
* Vit B co 1*3 po pc: Vit B co, , po, 1*3 po pc
* PROgraf Cap(1 ) 2*2 po pc: PROgraf Cap, , po, 2*2 po pc
* Ursolin (250) 1*3 po pc: Ursolin, , po, 1*3 po pc
* Paracetamol (500) 1 tab po prn q 4-6 hr: Paracetamol, mg, po, 1 tab po prn q 4-6 hr
* Mucotic(600) 1 [Thai text] opc: Mucotic, , opc, 1 sachet opc
* Clopidogrel(75) 1*1 po pc: Clopidogrel, , po, 1*1 po pc
* Mydocalm 1x3 po pc: Mydocalm, , po, 1x3 po pc
* Bisoprolol(2.5) 1/2x1 po pc: Bisoprolol, , po, 1/2x1 po pc
* Omeprazole(20) 1x1 po ac: Omeprazole, , po, 1x1 po ac
Here are some examples that you can look up to:
Original Text,Medication ET,Unit ET,Mode ET,"Instructions ET (Dose, Frequency, Duration)",Active Ingredient EX,Unit EX,Mode EX,"Instructions (Dose, Frequency, Duration) EX"
* Xarator (40) 1/2x1 opc:
* Original Text: Xarator (40) 1/2x1 opc
* Medication ET: Xarator
* Unit ET: mg
* Mode ET: opc
* Instructions ET (Dose, Frequency, Duration): 1/2x1 opc
* Active Ingredient EX: Simvastatin
* Unit EX: milligram
* Mode EX: Oral
* Instructions (Dose, Frequency, Duration) EX: 1 Tablet Oral after meal Once daily
* Douzabox 1x2 opc:
* Original Text: Douzabox 1x2 opc
* Medication ET: Douzabox
* Unit ET:
* Mode ET: opc
* Instructions ET (Dose, Frequency, Duration): 1x2 opc
* Active Ingredient EX: Douzabox
* Unit EX:
* Mode EX: Oral
* Instructions (Dose, Frequency, Duration) EX: 1 Tablet Oral after meal Twice daily
* omeprazole (20) 1x1 po ac:
* Original Text: omeprazole (20) 1x1 po ac
* Medication ET: omeprazole
* Unit ET: mg
* Mode ET: po
* Instructions ET (Dose, Frequency, Duration): 1x1 po ac
* Active Ingredient EX: Omeprazole
* Unit EX: milligram
* Mode EX: Oral
* Instructions (Dose, Frequency, Duration) EX: 1 Tablet Oral before meal Once daily
* Thyrosit (50) 0.5x1 o ac [Thai text]:
* Original Text: Thyrosit (50) 0.5x1 o ac [Thai text]
* Medication ET: Thyrosit
* Unit ET: mcg
* Mode ET: o
* Instructions ET (Dose, Frequency, Duration): 0.5x1 o ac [Thai text]
* Active Ingredient EX: Thyrosit
* Unit EX: microgram
* Mode EX: Oral
* Instructions (Dose, Frequency, Duration) EX: 0.5 Tablet Oral before meal Once daily [Thai text]
* Mevalotin Pretect (40) 1x1 o pc:
* Original Text: Mevalotin Pretect (40) 1x1 o pc
* Medication ET: Mevalotin Pretect
* Unit ET: mg
* Mode ET: o
* Instructions ET (Dose, Frequency, Duration): 1x1 o pc
* Active Ingredient EX: Pravastatin
* Unit EX: milligram
* Mode EX: Oral
* Instructions (Dose, Frequency, Duration) EX: 1 Tablet Oral after meal Once daily
The end output should compile all results into one unified table.
| Prescribing drugs has a high impact on patient safety. In many places, handwritten prescriptions are still common, and only when a discharge summary is written, medication information is registered in electronic health records (EHRs). Here, physicians tend to use compact language and particularly abbreviations. They mix up brands with ingredient names and skip units ("ASA 100") and dose form information (“tablet”, “suspension”, “eyedrops”). For route of administration, medication frequency, and time patterns, a broad range of different styles is used. Abbreviated Latin terms such as “tds” (three times a day) and “qPM” (once in the evening) are common in some jurisdictions, but completely unknown in others where “1-1-1” corresponds to “tds”, and “0-0-1” to “qPM”. Wherever drug prescriptions are done in narrative form, their interpretation - particularly beyond their narrow context of use - remains a challenge. Ideally, a cryptic “ASA 100 qPM” should be automatically transformed into a clear “Acetylsalicylic acid 100 milligrams oral tablet, to be taken orally every afternoon”.
English is the lingua franca in scholarly communications, but EHRs mostly use the official languages of the respective jurisdiction. Yet there are exceptions, such as in Arabic or Asian countries, which have not developed sophisticated medical terminology in their languages to an extent that would suffice for clinical documentation and therefore use English. International healthcare teams, especially in the Middle East, communicate in English. English is the interlingua of multilingual countries such as India and Malaysia. It is therefore unsurprising that many flavors of non-standard medical English have developed, which present challenges both for native and second-language speakers. Beyond affecting communication, we expect medical content in "Asian Englishes" to pose also difficulties for machine processing of medical language, because tools and resources have been trained with mainstream English EHR content and with publications polished and standardized by editors. In a similar vein, controlled vocabularies such as ICD-10 or SNOMED CT do not account for non-mainstream English term variants.
This paper focuses on healthcare documentation in Thailand, which has undertaken a concerted effort to stimulate computerized physician order entry (CPOE) to achieve high-quality prescriptions <cit.>. Nevertheless, most hospitals still depend on written prescriptions, which subsequently require manual input into EHRs <cit.>. Narrative medication statements are still used for drug reconciliation and communication <cit.>. Such expressions in “Thai English” blend two languages and character sets, and use a variety of styles and formats depending on the physician's preferences, such as the merging of trade names and ingredients <cit.>, the use of abbreviated names and routes <cit.>, and the omission of dosage forms <cit.>. E.g., in “Thyrosit (50) 0.5x1 o ac
[Thai text]
”, “Thyrosit” is a trade name for levothyroxine, and “(50)” indicates the strength. The unit (here micrograms) is omitted as there is no levothyroxine preparation at the milligram level. “0.5x1” indicates the action of consuming half (“0.5”) of the dosage once (“x1”) each day. Finally, “o ac
[Thai text]
”, comprises a mixture of Thai, English, and Latin abbreviations denoting routes and timings. “o” represents “per os (po)” (by mouth) and “ac” means “ante cibum” (before meals).
[Thai text]
is a Thai abbreviation for “Monday to Friday”. It is foreseeable that considerable problems arise whenever we aim to automatically extract information from such free-text instructions, although over the last decade machine learning-based approaches have gained popularity in clinical information extraction thanks to the ever-improving performance of human language technologies <cit.>. Large-language models (LLMs) have attracted immense interest after the release of ChatGPT in 2022 <cit.>. Their capacity for handling and analyzing large-scale text and generating content in response to input prompts makes them highly promising for a wide range of applications, from clinical name entity recognition (NER) <cit.> to encoding clinical knowledge <cit.>. Moreover, LLMs have vastly benefited from transfer learning where their pre-existing knowledge is leveraged and fine-tuned with minimal labeled data or instructions, making them highly adaptable. This contributed to the popularity of zero to few-shot learning paradigms: for each new class, one-shot learning uses just one labeled example, few-shot learning uses a limited set thereof, whereas in zero-shot learning no labeled data are provided at all <cit.>. The capabilities of LLMs can be customized, improved, or refined by a set of instructions called a prompt <cit.>. To effectively communicate with LLMs, prompt engineering strategies have turned out to be fundamental <cit.>.
Against this background, we focus on the idiosyncratic and compact nature of medication statements in Thai medical records and formulate our research questions as follows:
* To which extent is ChatGPT 3.5 able to restructure and normalize medication statements?
* Does this depend on prompting patterns?
* What are the weaknesses of this approach regarding patient safety? | null | null | §.§ Zero- and few-shot named-entity recognition
The NER results are shown in Table <ref> and <ref>. Prompts A and B showed good performance with an average F1 of 0.72 and 0.74, respectively, based on partial matching. Prompts C, D, E, and F demonstrated better overall performance than A or B. However, even though prompt C generally had a better overall F1 than prompt A, it exhibited the poorest Unit detection of all prompts, even with the same prompt pattern and more examples. Prompt D showed a similar issue, as it failed to recognize Instructions effectively. Prompts E and F showed superior performance compared to the others, with an average F1 of 0.94 and 0.90, respectively (partial matching). Both prompts proved promising for medical NER tasks. Nevertheless, we decided to prioritize prompt E, as it had the highest average F1 and better NER performance on Unit in partial matching.
The results of the NER task on the test dataset are shown in Table <ref>. Overall, ChatGPT, guided by prompt E, performed well on the NER task, with an average F1 score over all entity types of 0.79 and 0.92 based on strict and partial matching, respectively. However, the test dataset findings highlighted that Unit continues to be the poorest-performing entity type in NER. The performance metrics for the test dataset are only slightly inferior to those for the validation dataset, indicating that ChatGPT generalizes well when identifying entity types.
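For reference, the sketch below (Python) illustrates one plausible way to score a single entity-type column under strict and partial matching. The exact matching rules used in the study are defined by the annotation guideline, so the helper functions here are assumptions rather than the study's actual evaluation code.

```python
def strict_match(pred: str, gold: str) -> bool:
    # exact string agreement after trimming whitespace
    return pred.strip() == gold.strip()

def partial_match(pred: str, gold: str) -> bool:
    # one annotation contained in the other counts as a partial hit
    p, g = pred.strip().lower(), gold.strip().lower()
    return bool(p) and bool(g) and (p in g or g in p)

def f1_score(preds, golds, match) -> float:
    tp = sum(1 for p, g in zip(preds, golds) if p and g and match(p, g))
    fp = sum(1 for p, g in zip(preds, golds) if p and not (g and match(p, g)))
    fn = sum(1 for p, g in zip(preds, golds) if g and not (p and match(p, g)))
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0

# toy example: one entity type (Strength) scored both ways
gold = ["50", "", "20", "400"]
pred = ["50", "", "20 mg", ""]
print(f1_score(pred, gold, strict_match), f1_score(pred, gold, partial_match))
```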
§.§ Zero- and few-shot text expansion and manual assessment
After obtaining metrics from the NER prompts, we passed the ChatGPT output of the best-performing prompt E to the EX phase of our experiment. The NER output was converted to a tabular format in order to paste the text into the ChatGPT chatbox. The EX performance results are shown in Table <ref>. On the validation dataset, Prompt 3 had the best overall performance on term expansion in all categories except Unit, with an average F1 score of 0.87. Prompts 2 and 3 were generally better, as they were given more examples of medication prescriptions. Prompt 1, being the simplest EX prompt, was not capable of expanding any terms in the Mode class at all. Prompt 3 was then selected for the test dataset evaluation, and the results showed an average F1 of 0.77, a decrease of 0.10 compared to the validation set, because the metrics for Unit and [Instructions] Relation to Meal dropped considerably.
Early attempts to use ChatGPT for clinical NLP were made by Hu et al., but its performance lagged behind that of the supervised BioClinicalBERT model, regarding the extraction of treatments mentioned in discharge summaries <cit.>. However, they did not distinguish drug treatments from other medical treatments. Therefore, it seems that our work is the first to use ChatGPT for specific and detailed medication statement analysis.
Comparing our NER results (mean F1 score 0.92) to an earlier study using CRFs (mean F1 score 0.80) shows a clear superiority <cit.>. In that study, medication details were improperly delineated whenever punctuation patterns such as brackets did not follow the norm <cit.>. However, problems like this can mostly be seen as resolved by deep learning. Accordingly, the unusual bracketing of strength (e.g., "ASA (500)") did not constitute any obstacle.
More interesting is the comparison with MT-NER-PMB (2021), a recent non-LLM baseline that outperformed other BERT models. Comparing our partial-matching F1 scores with theirs, we obtained slightly better detection for Medication (0.99 vs. 0.96), Mode (1.00 vs. 0.95), and Strength (0.99 vs. 0.98). However, our study separates Strength and Unit; therefore, the relatively low F1 score of 0.51 for Unit is not integrated into the calculation for Strength <cit.>.
That our medication statements often did not contain units of measurement (constituting so-called zero-width annotations in the gold standard, cf. Fig. <ref>) led to a low precision in Prompt 1, because it repeated the NER task by adding the detected abbreviation to the text. The results suggest that the addition of prompt patterns in prompts 2 and 3 solved this.
Compared to traditional machine learning, it was astonishing to observe how few training examples produced a considerable benefit. This highlights the importance of good prompt engineering to optimally exploit the content of an LLM. However, we cannot explain the poor result for Unit in NER, whereas the even worse performance of Unit in EX (such as from "mg" to "Milligram") may result from the fact that unit symbols are hardly ever expanded, even in scholarly or popular publications or drug leaflets.
§.§ Error Analysis
An important result of our study was that incremental LLM prompting with repeated human assessment improved the results: the NER task achieved an F1 of 0.79 and 0.92 on the test set for strict and partial matching, respectively, with the recognition of units of measurement (which were often omitted) showing the poorest results. The Unit expansion also performed poorly in the EX task, achieving an F1 of only 0.07, against 0.77 across all tasks. The expansion of abbreviated drug names and brand names to active ingredients achieved a precision of 1.00, as did the expansion of the mode of administration, which shows that the result has in no way deteriorated compared to the original text. It is important to highlight that, from a patient safety point of view, precision is more important than recall. A low recall means that the system did not improve the quality of the medication statement, whereas any suboptimal precision value bears the risk that the content of the statement is distorted.
Unclear short forms are a well-known problem in clinical texts <cit.>. Drug abbreviations often follow local jargon. Interestingly, only one abbreviation ("ASA") was correctly resolved, whereas the others were not resolved at all (e.g., "MFM" for metformin, "MTV" for multivitamins, and "K" for potassium). Regarding ChatGPT’s risk of hallucination, it is remarkable that the system opted for non-resolution rather than wrong resolution. While in prompt D the abbreviation "ORS" for "oral rehydration solution" was not even detected as Medication during the NER validation phase, in prompt E it was detected in NER on both validation and test data, but it was not expanded in the test EX phase. Even though it is a common abbreviation in our data set, the complexity increased due to the mixture of languages and documentation styles. Our results for EX show a drastic increase in F1 for the expansion type Mode from Prompt 1 without any prompt patterns (F1 score: 0.00) to Prompt 2 with Persona and Template added (F1 score: 0.94). In Prompt 2, ChatGPT detected all abbreviations of "oral" in all samples, but still tried to translate each abbreviation literally, e.g., "opc" as "oral, by consumption" or "po" as "per mouth". By giving 5 examples with the "Few-shot" prompt pattern in Prompt 3, ChatGPT recognized, e.g., "po", "o", "opc", and "po ac" as "oral". Even "oac", which was not given as an example, was now correctly defined as "oral" and not as "oral administration". The changed definition of Mode also leads to an increase of F1 by 0.11 for the expansion type Relation to Meal in Prompt 3, because ChatGPT can now categorize the subsequent information. In contrast to this qualitative improvement of ChatGPT’s output for Mode, its F1 score slightly decreased by 0.03. Adding clearly defined, case-specific prompt patterns for foreign-language abbreviations is an essential step toward producing the expected results.
Even though most of our medication statements are in English, some Thai brand names such as "HIDIL" reveal the non-English vocabulary limitations of ChatGPT. For the expansion type Active Ingredients, the main positive results concern drugs already recorded as active ingredients in the NER phase; brand names, in contrast, were less often mapped to the active ingredient, e.g., instead of recognizing "HIDIL" as Gemfibrozil, ChatGPT simply re-inserted the brand name "HIDIL". This parallels a similar finding when querying ChatGPT about the Chinese medication "Motrin" (ibuprofen), which was mistakenly linked to aspirin when retrieving information related to adverse drug reactions <cit.>.
§.§ Limitations
Hallucinations are a relevant issue in ChatGPT <cit.>. Here, we obtained the remarkable result that only one instance of a hallucinated response could be identified in the test data (n=50). The daily dosage instruction of an antipsychotic drug was interpreted correctly, but ChatGPT added an "as needed" statement on top; however, this concerned a small dose that is not expected to cause harm.
Given the original text prescription, “Ridperidone (1) 0.5x1 po hs”, ChatGPT gave us “0.5 Tablet oral at bedtime; 0.25 Tablet oral as needed for agitation” for instructions EX task instead of “0.5 Tablet Oral before bed Once daily” which is the correct answer.
The size of the dataset must also be mentioned. Particularly, the assessment of rare but important outcomes such as the occurrence and the severity of hallucinations would have required much more data. Another limitation was the restriction to ChatGPT 3.5. We could have compared it to other LLMs, but our focus has been on the comparison of different prompting strategies rather than different models. | null |
http://arxiv.org/abs/2409.17413v1 | 20240925224824 | Setpoint Tracking and Disturbance Attenuation for Gas Pipeline Flow Subject to Uncertainties using Backstepping | [
"Bhathiya Rathnayake",
"Anatoly Zlotnik",
"Svetlana Tokareva",
"Mamadou Diagne"
] | math.OC | [
"math.OC",
"93C20, 76N25, 93D15"
] |
Setpoint Tracking and Disturbance Attenuation for Gas Pipeline Flow Subject to Uncertainties using Backstepping
Bhathiya Rathnayake^1, Anatoly Zlotnik^2, Svetlana Tokareva^2, and Mamadou Diagne^3
^1B. Rathnayake is with the Department of Electrical and Computer Engineering, University of California San Diego, USA,
[email protected]
^2A. Zlotnik and S. Tokareva are with the Applied Mathematics and Plasma Physics Group, Los Alamos National Laboratory, USA
^3M. Diagne is with the Department of Mechanical and Aerospace Engineering, University of California San Diego, USA
April 2024
§ ABSTRACT
In this paper, we consider the problem of regulating the outlet pressure of gas flowing through a pipeline subject to uncertain and variable outlet flow. Gas flow through a pipe is modeled using the coupled isothermal Euler equations, with the Darcy–Weisbach friction model used to account for the loss of gas flow momentum. The outlet flow variation is generated by a periodic linear dynamic system, which we use as a model of load fluctuations caused by varying customer demands. We first linearize the nonlinear equations around the equilibrium point and obtain a 2 × 2 coupled hyperbolic partial differential equation (PDE) system expressed in canonical form. Using an observer-based PDE backstepping controller, we demonstrate that the inlet pressure can be manipulated to regulate the outlet pressure to a setpoint, thus compensating for fluctuations in the outlet flow. Furthermore, we extend the observer-based controller to the case when the outlet flow variation is uncertain within a bounded set. In this case, the controller is also capable of regulating the outlet pressure to a neighborhood of the setpoint by manipulating the inlet pressure, even in the presence of uncertain fluctuations in the outlet flow. We provide numerical simulations to demonstrate the performance of the controller.
§ INTRODUCTION
In recent years, significant attention has been directed to study the system dynamics in gas pipeline networks. Various modeling approaches were introduced and explored for gas network flow, and were examined from analytical and numerical perspectives <cit.>. The primary engineering challenge is to compensate for the decrease in pressure along the direction of flow that is caused by friction at the pipe's inner surface. The one-dimensional isothermal Euler equations, incorporating Darcy–Weisbach friction model, form a 2 × 2 system of coupled partial differential equations (PDEs) that represent the relevant conservation laws. These equations adequately describe gas flow dynamics along the pipe, accounting for gas momentum loss in the wave- and shock-free physical regime <cit.>.
The operation of natural gas transport networks presents complex challenges within the domain of control theory. The field of boundary stabilization of PDEs offers a rich theoretical framework to address these challenges <cit.>. Lyapunov-based control <cit.>, optimal control <cit.>, and PDE backstepping <cit.> are among the control approaches that have been applied to gas flow control in pipelines using PDE models. Methods that responsively compensate for unanticipated variations in boundary conditions caused by unscheduled changes in consumption, while maintaining pressure above minimum requirements, compel ongoing research.
In this paper, we examine the control of gas flowing through a pipeline subject to fluctuating outlet flow that we suppose is generated by a periodic linear dynamic system, which we use as a model of variable consumer demand. By appropriately selecting the parameters of the dynamic system, a desired periodic fluctuation profile can be achieved. First, we linearize the nonlinear isothermal Euler equations including Darcy-Weisbach friction term around an equilibrium solution and express the system in canonical form, resulting in a coupled 2 × 2 hyperbolic PDE. We then employ an observer-based PDE backstepping controller, adapted from <cit.>, to regulate the outlet pressure to a setpoint by manipulating the inlet pressure, thus compensating for variations in outlet flow. Furthermore, we extend the observer-based controller to account for uncertain yet bounded fluctuations in the outlet flow. The derived controller manipulates the inlet pressure to regulate the outlet pressure to a neighborhood of the setpoint.
§ PRELIMINARIES
Isothermal gas flow along a pipe, in a wave- and shock-free regime, can adequately be described by the one-dimensional isothermal Euler equations <cit.>:
ρ_t+(ρ u)_x =0,
(ρ u)_t+(p+ρ u^2)_x =-λ/2Dρ u| u|-ρ gh',
for all x∈(0,ℓ) and t>0, where
p(t,x)=ZRTρ(t,x)=σ^2ρ(t,x),
for all x∈[0,ℓ] and t>0,
with the boundary conditions
ρ(t,0) =U(t),
ρ(t,ℓ)u(t,ℓ) =φ_L+s(t),
for all t>0. In equations (<ref>)-(<ref>), the variables ρ(t,x), u(t,x), and p(t,x) represent the density, velocity, and pressure profiles of the gas, respectively, and h(x) is the elevation of the pipe. The parameters involved are the internal pipe diameter D, the pipe length ℓ, the gravitational acceleration g, and the speed of sound σ=√(ZRT) in the gas, where Z, R, and T are the gas compressibility factor, specific gas constant, and absolute temperature, respectively. We use the ideal gas equation of state, so that controlling density corresponds directly to controlling pressure. We consider U(t) in equation (<ref>) to be the control, which represents the density at the discharge of a compressor located at the pipe inlet. The flow leaving the pipe outlet at x=ℓ is φ_L>0 in equation (<ref>), and s(t) in (<ref>) represents the variation in the outlet flow with respect to the nominal value φ_ L.
The outlet flow variation s(t) takes the form
s(t) = CX(t),
where X(t)∈ℝ^n× 1,
Ẋ(t) = AX(t),
C∈ℝ^1× n, and A∈ℝ^n× n. The initial condition of X(t) is in the set
X(0) ∈{X_0| CX_0=s(0)}.
Assumption <ref> is intended to model pipe outlet flow variation s(t) to represent cyclic consumer energy demands. Periodic signals with specific amplitudes and periods can be constructed or approximated via (<ref>)-(<ref>) by choosing A∈ℝ^n× n, C, and n, appropriately.
In the wave- and shock-free regime, the gas advection term (ρ u^2)_x in (<ref>) is often ignored since it is very small when compared with the pressure gradient p_x <cit.>. Further, we assume that the pipeline is level, which allows the gravity term ρ gh' in (<ref>) to be ignored. Let us define
φ(t,x):=ρ(t,x)u(t,x),
for all x∈ [0,ℓ] and t≥ 0. Here, φ(t,x) is the mass flux per area. This allows us to rewrite the system (<ref>)-(<ref>) as
ρ_t+φ_x =0,
φ_t+σ^2ρ_x =-λ/2Dφ|φ|/ρ,
for all x∈(0,ℓ) and t>0 with the boundary conditions
ρ(t,0) =U(t),
φ(t,ℓ) =φ_L+s(t),
for all t>0.
§.§ Equilibrium Steady-State Solution
The system (<ref>)-(<ref>) with U(t):=U_⋆>0 and s(t)≡ 0 allows a continuum of equilibrium solutions, namely
φ(x) ≡φ_⋆=φ_L>0,
ρ(x) ≡ρ_⋆(x)=√((U_⋆)^2-λφ_L^2/σ^2D x), ∀ x∈[0,ℓ].
As evident from equation (<ref>), the equilibrium control input U_⋆>0 should be chosen such that
U_⋆> √(λφ_L^2ℓ/σ^2D).
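As an illustration, the following sketch (Python) evaluates the equilibrium profile (<ref>) and checks the feasibility condition (<ref>); the parameter values are those adopted later in the simulation section and are repeated here purely as assumptions.

```python
import numpy as np

# Illustrative parameter values (those used later in the simulation section), treated as assumptions here.
lam, sigma, D, ell = 0.011, 378.0, 0.5, 25e3   # friction factor, speed of sound [m/s], diameter [m], length [m]
phi_L, U_star = 289.0, 46.0                    # outlet mass flux [kg/m^2/s], inlet density setpoint [kg/m^3]

# Feasibility condition (<ref>)
U_min = np.sqrt(lam * phi_L**2 * ell / (sigma**2 * D))
assert U_star > U_min, "equilibrium inlet density too low for the requested outlet flow"

def rho_star(x):
    """Equilibrium density profile (<ref>) for x in [0, ell]."""
    return np.sqrt(U_star**2 - lam * phi_L**2 * x / (sigma**2 * D))

print(f"rho_star(0) = {rho_star(0.0):.2f} kg/m^3, rho_star(ell) = {rho_star(ell):.2f} kg/m^3")
```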
§.§ Linearization
Let us define
δρ(t,x) :=ρ(t,x)-ρ_⋆(x),
δφ(t,x) :=φ(t,x)-φ_L,
δ U(t) :=U(t)-U_⋆,
for all x∈[0,ℓ] and t≥ 0, where (ϕ_ L,ρ_⋆(x),U_⋆) is the equilibrium. Further, because we consider small perturbations around the equilibrium (φ_L,ρ_⋆(x),U_⋆) with φ_L>0, ρ_⋆(x)>0 for all x∈[0,ℓ], and U_⋆>0 chosen as in equation (<ref>), let us assume that φ(t,x)>0 for all t> 0 and x∈ [0,ℓ]. Then, linearizing the system (<ref>)-(<ref>) with s(t)≡ 0 around the equilibrium (φ_L,ρ_⋆(x),U_⋆), we obtain
δρ_t+δφ_x =0,
δφ_t+σ^2δρ_x =λ_1(x)δρ-λ_2(x)δφ,
for all x∈(0,ℓ),t>0, where
λ_1(x):=λφ_L^2/2D(ρ_⋆(x))^2 and λ_2(x):=λφ_L/Dρ_⋆(x),
for all x∈(0,ℓ) with boundary conditions
δρ(t,0) =δ U(t),
δφ(t,ℓ) =0,
for all t>0.
§.§ Transforming the System (<ref>)-(<ref>) into Canonical Form
Let us consider the following change of coordinates:
x̅=(ℓ-x)/ℓ,
and the change of variables
v(t,(ℓ-x)/ℓ)
=√(ρ_⋆(x)/ρ_⋆(ℓ))e^σ/φ_L(ρ_⋆(x)-ρ_⋆(ℓ))(1/2δρ(t,x)-1/2σδφ(t,x)),
w(t,(ℓ-x)/ℓ)
=√(ρ_⋆(x)/ρ_⋆(ℓ))e^-σ/φ_L(ρ_⋆(x)-ρ_⋆(ℓ))(1/2δρ(t,x)+1/2σδφ(t,x)),
for all x∈[0,ℓ] and t≥ 0. One can show that (v,w) satisfy
v_t+ σ/ℓv_x̅ =-μ_1(x̅)w,
w_t- σ/ℓw_x̅ =μ_2(x̅)v(t,x̅),
for all x̅∈ (0,1) and t>0, with the boundary conditions
v(t,0) =w(t,0),
w(t,1) = -r_1v(t,1)+r_2δ U(t),
for all t>0. In equations (<ref>) and (<ref>), μ_1(x̅) and μ_2(x̅) are given by
μ_1(x̅)=[λφ^2_L/4σ D (ρ_⋆(ℓ(1-x̅)))^2
-λφ_L/2Dρ_⋆(ℓ(1-x̅))]e^2σ/φ_L(ρ_⋆(ℓ(1-x̅))-ρ_⋆(ℓ)),
μ_2(x̅)=[λφ^2_L/4σ D (ρ_⋆(ℓ(1-x̅)))^2
+λφ_L/2Dρ_⋆(ℓ(1-x̅))]e^-2σ/φ_L(ρ_⋆(ℓ(1-x̅))-ρ_⋆(ℓ)),
for all x̅∈(0,1). In equation (<ref>), r_1 and r_2 are given by
r_1 =e^-2σ/φ_L(ρ_⋆(0)-ρ_⋆(ℓ)),
r_2 =√(ρ_⋆(0)/ρ_⋆(ℓ))e^-σ/φ_L(ρ_⋆(0)-ρ_⋆(ℓ)).
The signs of the transport speed indicate that the variable v represents information that travels from left to right and w represents information that travel from right to left. The system (<ref>)-(<ref>) is well-posed with boundary conditions on the left and right specified for v and w, respectively.
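A minimal sketch of evaluating the canonical-form coefficients (<ref>)-(<ref>) numerically is given below, using the same illustrative parameter values as in the earlier sketch (again treated as assumptions).

```python
import numpy as np

# Canonical-form coefficients mu_1, mu_2, r_1, r_2 under the assumed parameter values.
lam, sigma, D, ell, phi_L, U_star = 0.011, 378.0, 0.5, 25e3, 289.0, 46.0

def rho_star(x):
    return np.sqrt(U_star**2 - lam * phi_L**2 * x / (sigma**2 * D))

def mu1(x_bar):
    x = ell * (1.0 - x_bar)              # physical coordinate corresponding to x_bar
    r = rho_star(x)
    return (lam * phi_L**2 / (4.0 * sigma * D * r**2) - lam * phi_L / (2.0 * D * r)) \
        * np.exp(2.0 * sigma / phi_L * (r - rho_star(ell)))

def mu2(x_bar):
    x = ell * (1.0 - x_bar)
    r = rho_star(x)
    return (lam * phi_L**2 / (4.0 * sigma * D * r**2) + lam * phi_L / (2.0 * D * r)) \
        * np.exp(-2.0 * sigma / phi_L * (r - rho_star(ell)))

r1 = np.exp(-2.0 * sigma / phi_L * (rho_star(0.0) - rho_star(ell)))
r2 = np.sqrt(rho_star(0.0) / rho_star(ell)) * np.exp(-sigma / phi_L * (rho_star(0.0) - rho_star(ell)))
print(r1, r2)
```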
The outlet flow fluctuation s(t)=CX(t) is such that s(t)≪ϕ_L.
In light of Assumption <ref>, when outlet flow fluctuation is present, the boundary condition (<ref>) is modified as
δφ(t,ℓ)=CX(t).
With this modification, we rewrite the linearized system in canonical form
v_t+ σ/ℓv_x̅ =-μ_1(x̅)w,
w_t- σ/ℓw_x̅ =μ_2(x̅)v,
for all x̅∈ (0,1) and t>0, with the boundary conditions
v(t,0) =w(t,0)-1/σCX(t),
w(t,1) = -r_1v(t,1)+r_2δ U(t),
for all t>0, where μ_1(x̅), μ_2(x̅), r_1, and r_2 are as in equations (<ref>)-(<ref>), respectively, and X(t) satisfies eq. (<ref>).
§.§ Problem Formulation
Our goal is to regulate the variation in outlet density, δρ(t, ℓ) → 0, despite the presence of outlet flow fluctuations, δφ(t, ℓ) = CX(t). Note that by equations (<ref>) and (<ref>),
δρ(t,ℓ) = v(t,0)+w(t,0).
Therefore, regulation of outlet density variation δρ(t, ℓ) → 0 is equivalent to v(t, 0) → -w(t, 0).
§ REGULATION OF OUTLET DENSITY
§.§ When outlet flow varies according to Equations (<ref>)-(<ref>)
Similar to the approach in an earlier study <cit.>, we use backstepping boundary control to achieve v(t,0) → -w(t,0) subject to Assumption <ref> using the boundary measurement v(t,1).
Let us consider the following observer
v̂_t+ σ/ℓv̂_x̅ =-μ_1(x̅)ŵ+p_1(x̅)(v(t,1)-v̂(t,1)),
ŵ_t- σ/ℓŵ_x̅ =μ_2(x̅)v̂+p_2(x̅)(v(t,1)-v̂(t,1)),
for all x̅∈ (0,1) and t>0,
v̂(t,0) =ŵ(t,0)-1/σCX(t),
ŵ(t,1) = -r_1v(t,1)+r_2δ U(t),
for all t>0, where p_1(x̅) and p_2(x̅) are observer gains chosen as
p_1(x̅) =-σ/ℓP^11(x̅,1),
p_2(x̅) =-σ/ℓP^21(x̅,1),
for all x̅∈ (0,1) with P^11 and P^21 satisfying
P^11_x̅(x̅,ξ) + P^11_ξ(x̅,ξ) = -ℓ/σμ_1(x̅) P^21(x̅,ξ),
P^21_x̅(x̅,ξ) - P^21_ξ(x̅,ξ) = -ℓ/σμ_2(x̅) P^11(x̅,ξ),
with boundary conditions
P^11(0,ξ) = P^21(0,ξ),
P^21(x̅, x̅) = -ℓ/2σμ_2(x̅).
The coupled hyperbolic PDEs (<ref>)-(<ref>) operate in the triangular domain {0≤x̅≤ξ≤ 1}. Further, let us choose the control input δ U(t) as
δ U(t) = r_1/r_2v(t,1)+
1/r_2∫_0^1K^21(1,ξ)v̂(t,ξ)dξ
+1/r_2∫_0^1K^22(1,ξ)ŵ(t,ξ)dξ+1/r_2KX(t),
for all t>0, where K∈ℝ^1× n,K^21,K^22 are control gains satisfying
K = 1/2σCe^Aℓ/σ-1/σ∫_0^1 K^21(τ,0)Ce^Aℓ/σ(1-τ)dτ,
and
K^21_x̅(x̅,ξ) - K^21_ξ(x̅,ξ) = ℓ/σμ_2(ξ) K^22(x̅,ξ),
K^22_x̅(x̅,ξ) + K^22_ξ(x̅,ξ) = - ℓ/σμ_1(ξ) K^21(x̅,ξ),
with boundary conditions
K^21(x̅, x̅) = -ℓ/2σμ_2(x̅),
K^22(x̅, 0) = K^21(x̅, 0).
The coupled hyperbolic PDEs (<ref>)-(<ref>) operate in the triangular domain {0≤ξ≤x̅≤ 1}. Then, we can obtain the following result:
[adapted from <cit.>]
Let the observer gains p_1(x̅),p_2(x̅) be chosen as in equations (<ref>)-(<ref>) and the control input δ U(t) be chosen as (<ref>)-(<ref>). Then, v(t,0)= -w(t,0) i.e δρ(t,ℓ)=0 for all t≥ 3ℓ/σ.
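Evaluating the feedback (<ref>) requires the row vector K in (<ref>). The sketch below approximates K by trapezoidal quadrature, reading the prefactor of the first term as 1/(2σ); the callable K21_at_zero, which returns K^21(τ,0), is assumed to come from a separate numerical solution of the kernel equations (<ref>)-(<ref>) and is not implemented here.

```python
import numpy as np
from scipy.linalg import expm

def compute_K(A, C, sigma, ell, K21_at_zero, n_quad=200):
    """Approximate K = C e^{A l/sigma}/(2 sigma) - (1/sigma) int_0^1 K^21(tau,0) C e^{A l/sigma (1-tau)} d tau.

    A            : (n, n) system matrix of the demand model
    C            : 1-D array of length n
    K21_at_zero  : callable tau -> K^21(tau, 0), assumed precomputed elsewhere
    """
    n = A.shape[0]
    taus = np.linspace(0.0, 1.0, n_quad + 1)
    integrand = np.zeros((len(taus), n))
    for i, tau in enumerate(taus):
        integrand[i] = K21_at_zero(tau) * (C @ expm(A * ell / sigma * (1.0 - tau)))
    integral = np.trapz(integrand, taus, axis=0)
    return C @ expm(A * ell / sigma) / (2.0 * sigma) - integral / sigma
```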
§.§ When the outlet flow variation is subject to uncertainties
In the previous subsection, outlet density variation δρ(t,ℓ) is regulated to zero in finite time when outlet flow is varying. The outlet flow variation is captured by s(t)=CX(t) with the dynamics of X(t) given by equation (<ref>). Here, we suppose that the flow variation s(t) is subject to an unknown but bounded disturbance. That is, the outlet flow is given by
φ(t,ℓ) = φ_ L+d(t),
where
d(t) = s(t) + ε(t),
with s(t) satisfying equations (<ref>) and (<ref>), and ε(t) is an unknown but bounded disturbance.
The unknown disturbance ε(t)∈ C^1(ℝ_+) is bounded. That is, there exists a constant M>0 such that
|ε(t)|≤ M,
for all t≥ 0. Furthermore, it holds that M≪φ_ L.
In light of Assumption <ref>, when outlet flow fluctuation with uncertainties is present, the boundary condition (<ref>) is modified as follows:
δφ(t,ℓ)=CX(t)+ε(t).
With this modification, we rewrite the linearized system in canonical form
v_t+ σ/ℓv_x̅ =-μ_1(x̅)w,
w_t- σ/ℓw_x̅ =μ_2(x̅)v,
for all x̅∈ (0,1) and t>0, with the boundary conditions
v(t,0) =w(t,0)-1/σCX(t)-1/σε(t),
w(t,1) = -r_1v(t,1)+r_2δ U(t),
for all t>0, where μ_1(x̅), μ_2(x̅), r_1, and r_2 are given by equations (<ref>)-(<ref>), respectively, and X(t) is generated by the system (<ref>).
Let us consider the observer
v̂_t+ σ/ℓv̂_x̅ =-μ_1(x̅)ŵ+p_1(x̅)(v(t,1)-v̂(t,1)),
ŵ_t- σ/ℓŵ_x̅ =μ_2(x̅)v̂+p_2(x̅)(v(t,1)-v̂(t,1)),
for all x̅∈ (0,1) and t>0, with
v̂(t,0) =ŵ(t,0)-1/σCX̂(t),
ŵ(t,1) = -r_1v(t,1)+r_2δ U(t),
for all t>0, and
Ẋ̂̇(t)=AX̂(t)+ e^Aℓ/σ H (v(t,1)-v̂(t,1)),
for all t>0. Let the observer gains H∈ℝ^n× 1,p_1(x̅), and p_2(x̅) be chosen such that A+1/σHC is hurwitz,
p_1(x̅) =-1/σCe^Aℓ/σH-σ/ℓP^11(x̅,1)
+1/σ∫_x̅^1P^11(x̅,ξ)Ce^A ℓ/σ(1-ξ)Hdξ,
p_2(x̅) =-σ/ℓP^21(x̅,1)+1/σ∫_x̅^1P^21(x̅,ξ)Ce^A ℓ/σ(1-ξ)Hdξ,
where P^11 and P^21 are solutions to the system (<ref>)-(<ref>). Further, let the control input δ U(t) be chosen as
δ U(t) = r_1/r_2v(t,1)+
1/r_2∫_0^1K^21(1,ξ)v̂(t,ξ)dξ
+1/r_2∫_0^1K^22(1,ξ)ŵ(t,ξ)dξ+1/r_2KX̂(t),
where K is given by eq. (<ref>), and K^21,K^22 satisfy equations (<ref>)-(<ref>). Then, we can obtain the following result.
Let the observer gain vector H∈ℝ^n× 1 in (<ref>) is chosen such that A+1/σHC is hurwitz. Further, let the observer gain functions p_1(x̅) and p_2(x̅) in equations (<ref>) and (<ref>) be set as equations (<ref>)-(<ref>), and let the control input δ U(t) be chosen as equation (<ref>). Then, δρ(t,ℓ) converges to a neighborhood of the origin as t→∞, i.e.,
lim_t→∞|δρ(t,ℓ)|≤M̅,
for some M̅>0.
Proof. The proof contains two parts: I) obtaining an expression for δρ(t,ℓ) valid for all t≥ℓ/σ; and II) obtaining an upper-bound for the limit lim_t→∞|δρ(t,ℓ)|.
I) An expression for δρ(t,ℓ) valid for all t≥ℓ/σ
Define
ṽ(t,x̅) := v(t,x̅)-v̂(t,x̅),
w̃(t,x̅) := w(t,x̅)-ŵ(t,x̅),
X̃(t) :=X(t)-X̂(t),
for all x̅∈ [0,1] and t≥ 0. Then, we can rewrite the control input δ U(t) chosen in equation (<ref>) as,
δ U(t) = r_1/r_2v(t,1)+
1/r_2∫_0^1K^21(1,ξ)v(t,ξ)dξ
+1/r_2∫_0^1K^22(1,ξ)w(t,ξ)dξ+1/r_2V(t),
where
V(t):= KX(t)-∫_0^1K^21(1,ξ)ṽ(t,ξ)dξ
-∫_0^1K^22(1,ξ)w̃(t,ξ)dξ-KX̃(t).
Consider the following backstepping transformations:
α(t,x̅) = v(t,x̅)-∫_0^x̅K^11(x̅,ξ)v(t,ξ)dξ
-∫_0^x̅K^12(x̅,ξ)w(t,ξ)dξ,
β(t,x̅) = w(t,x̅)-∫_0^x̅K^21(x̅,ξ)v(t,ξ)dξ
-∫_0^x̅K^22(x̅,ξ)w(t,ξ)dξ,
defined in the triangular domain {0≤ξ≤x̅≤ 1}, where K^21 and K^22 satisfy equations (<ref>)-(<ref>), and K^11 and K^12 satisfy
K^11_x̅(x̅,ξ) + K^11_ξ(x̅,ξ) = - ℓ/σμ_2(ξ) K^12(x̅,ξ),
K^12_x̅(x̅,ξ) - K^12_ξ(x̅,ξ) = ℓ/σμ_1(ξ) K^11(x̅,ξ),
with boundary conditions
K^11(x̅, 0) = K^12(x̅, 0),
K^12(x̅, x̅) = -ℓ/2σμ_1(x̅),
in the domain {0≤ξ≤x̅≤ 1}. Subject to the backstepping transformations (<ref>) and (<ref>), the system (<ref>)-(<ref>) is transformed to the system
α_t+σ/ℓα_x̅ =1/ℓK^11(x̅,0)CX(t)+1/ℓK^11(x̅,0)ε(t),
β_t-σ/ℓβ_x̅ =1/ℓK^21(x̅,0)CX(t)+1/ℓK^21(x̅,0)ε(t),
for all x̅∈(0,1) and t>0, and the boundary conditions
α(t,0) =β(t,0)-1/σCX(t)-1/σε(t),
β(t,1) =V(t),
for all t>0. Using the same line of reasoning as in proof of Lemma 3 and Theorem 4 of <cit.>, we can show that
α(t,0)=-β(t,0)+2V(t-ℓ/σ)
+2/σ∫_0^1K^21(τ,0)Ce^-Aℓ/στdτ X(t)-C/σX(t)
+2/σ∫_0^1K^21(τ,0)ε(t-ℓ/στ)dτ-1/σε(t),
for all t≥ℓ/σ. Considering (<ref>), we can rewrite (<ref>) as
α(t,0) = -β(t,0)+2KX(t-ℓ/σ)
-2∫_0^1K^21(1,ξ)ṽ(t-ℓ/σ)dξ
-2∫_0^1K^22(1,ξ)w̃(t-ℓ/σ,ξ)dξ-2KX̃(t-ℓ/σ)
+2/σ∫_0^1K^21(τ,0)Ce^-Aℓ/στdτ X(t)-C/σX(t)
+2/σ∫_0^1K^21(τ,0)ε(t-ℓ/στ)dτ-1/σε(t).
However, by the semigroup property of the system (<ref>), we have that X(t-ℓ/σ)=e^-Aℓ/σX(t). Using this fact and rearranging the terms of equation (<ref>), we can obtain
α(t,0) =-β(t,0)+2Ke^-Aℓ/σX(t)
+2/σ∫_0^1K^21(τ,0)Ce^-Aℓ/στdτ X(t)-C/σX(t)
-2∫_0^1K^21(1,ξ)ṽ(t-ℓ/σ)dξ
-2∫_0^1K^22(1,ξ)w̃(t-ℓ/σ,ξ)dξ-2KX̃(t-ℓ/σ)
+2/σ∫_0^1K^21(τ,0)ε(t-ℓ/στ)dτ-1/σε(t),
for all t≥ℓ/σ. Recalling that K is chosen as in equation (<ref>), we can simplify (<ref>) to obtain that
α(t,0) = -β(t,0)-2∫_0^1K^21(1,ξ)ṽ(t-ℓ/σ)dξ
-2∫_0^1K^22(1,ξ)w̃(t-ℓ/σ,ξ)dξ-2KX̃(t-ℓ/σ)
+2/σ∫_0^1K^21(τ,0)ε(t-ℓ/στ)dτ-1/σε(t),
for all t≥ℓ/σ. It follows from equations (<ref>) and (<ref>) that
δρ(t,ℓ) = v(t,0)+w(t,0).
However, equations (<ref>) and (<ref>) imply that α(t,0)=v(t,0) and β(t,0)=w(t,0), and thus equation (<ref>) yields
δρ(t,ℓ) = -2∫_0^1K^21(1,ξ)ṽ(t-ℓ/σ,ξ)dξ
-2∫_0^1K^22(1,ξ)w̃(t-ℓ/σ,ξ)dξ-2KX̃(t-ℓ/σ)
+2/σ∫_0^1K^21(τ,0)ε(t-ℓ/στ)dτ-1/σε(t),
for all t≥ℓ/σ.
In order to analyse lim_t→∞|δρ(t,ℓ)|, we examine lim_t→∞ṽ(t), lim_t→∞w̃(t), and lim_t→∞X̃(t) as follows.
II) An upper-bound for lim_t→∞|δρ(t,ℓ)|
Considering equations (<ref>), (<ref>)-(<ref>), and (<ref>)-(<ref>), we can show that the observer errors (ṽ,w̃,X̃) satisfy
ṽ_t+σ/ℓṽ_x̅ =-μ_1(x̅)w̃-p_1(x̅)ṽ(t,1),
w̃_t- σ/ℓw̃_x̅ =μ_2(x̅)ṽ-p_2(x̅)ṽ(t,1),
for all x̅∈ (0,1) and t>0, with the boundary conditions
ṽ(t,0) =w̃(t,0)-1/σCX̃(t)-1/σε(t),
w̃(t,1) =0,
for all t>0, and
Ẋ̃̇(t) = AX̃(t)-e^Aℓ/σHṽ(t,1),
for all t>0. Consider the backstepping transformations
ṽ(t,x̅) = α̃(t,x̅)-∫_x̅^1P^11(x̅,ξ)α̃(t,ξ)dξ
-∫_x̅^1P^12(x̅,ξ)β̃(t,ξ)dξ,
w̃(t,x̅) = β̃(t,x̅)-∫_x̅^1P^21(x̅,ξ)α̃(t,ξ)dξ
-∫_x̅^1P^22(x̅,ξ)β̃(t,ξ)dξ,
defined in the triangular domain {0≤x̅≤ξ≤ 1}, where P^11 and P^21 are solutions to equations (<ref>)-(<ref>), and P^21 and P^22 satisfy
P^12_x̅(x̅,ξ) - P^12_ξ(x̅,ξ) = -ℓ/σμ_1(x̅) P^22(x̅,ξ),
P^22_x̅(x̅,ξ) + P^22_ξ(x̅,ξ) = - ℓ/σμ_2(x̅) P^12(x̅,ξ),
with boundary conditions
P^12(x̅, x̅) = -ℓ/2σμ_1(x̅),
P^22(0,ξ) = P^12(0,ξ).
Subject to the observer error backstepping transformations (<ref>) and (<ref>), and the observer gain functions p_1(x̅),p_2(x̅) chosen as in equations (<ref>) and (<ref>), the observer error system is transformed to
α̃_t(t,x̅)+σ/ℓα̃_x̅(t,x̅) =1/σCe^Aℓ/σ(1-x̅)Hα̃(t,1),
β̃_t(t,x̅)-σ/ℓβ̃_x̅(t,x̅) =0,
for all x̅∈ (0,1) and t>0 and the boundary conditions
α̃(t,0) =β̃(t,0)-1/σCX̃(t)-1/σε(t),
β̃(t,1) =0,
for all t>0, and
Ẋ̃̇(t)=AX̃(t)-e^Aℓ/σHα̃(t,1),
for all t>0. Notice that β̃[t]=0 for all t≥ℓ/σ. Therefore, it holds that
α̃_t(t,x̅)+σ/ℓα̃_x̅(t,x̅) =1/σCe^Aℓ/σ(1-x̅)Hα̃(t,1),
α̃(t,0) =-1/σCX̃(t)-1/σε(t),
for all x̅∈ (0,1) and for all t≥ℓ/σ, and
Ẋ̃̇(t)=AX̃(t)-e^Aℓ/σHα̃(t,1),
for all t>0. Define
γ(t,x̅):=α̃(t,x̅)+1/σCe^-Aℓ x̅/σX̃(t),
for all x̅∈ (0,1) and for all t≥ℓ/σ. Then, we can show that
γ_t(t,x̅)+σ/ℓγ_x̅(t,x̅) =0,
for all x̅∈(0,1) and t≥ℓ/σ and the boundary conditions
γ(t,0) =-1/σε(t),
for all t≥ℓ/σ. Further, we have that
Ẋ̃̇(t)= (A+1/σe^Aℓ/σHCe^-Aℓ/σ)X̃(t)-e^Aℓ/σHγ(t,1)
= e^Aℓ/σ(A+1/σHC)e^-Aℓ/σX̃(t)-e^Aℓ/σHγ(t,1),
for all t≥ℓ/σ.
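For completeness, the transport equation for γ stated above can be verified directly from its definition; this short check (added here, not in the original derivation) uses the equations for α̃ and dX̃(t)/dt given above:
γ_t+σ/ℓγ_x̅ = α̃_t+σ/ℓα̃_x̅ + 1/σCe^-Aℓ x̅/σ dX̃(t)/dt - 1/σCAe^-Aℓ x̅/σX̃(t)
= 1/σCe^Aℓ/σ(1-x̅)Hα̃(t,1) + 1/σCe^-Aℓ x̅/σ(AX̃(t)-e^Aℓ/σHα̃(t,1)) - 1/σCAe^-Aℓ x̅/σX̃(t) = 0,
since A commutes with e^-Aℓ x̅/σ and e^-Aℓ x̅/σe^Aℓ/σ=e^Aℓ/σ(1-x̅).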
Let
Ã = e^Aℓ/σ(A+1/σHC)e^-Aℓ/σ.
The gain vector H is chosen such that A+1/σHC is Hurwitz. Therefore, Ã is also Hurwitz.
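Indeed, since Ã is obtained from A+1/σHC by a similarity transformation, the two matrices share the same characteristic polynomial (a one-line check added here):
det(λ I-Ã)=det(e^Aℓ/σ(λ I-A-1/σHC)e^-Aℓ/σ)=det(λ I-A-1/σHC),
so every eigenvalue of Ã is an eigenvalue of A+1/σHC and hence has negative real part.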
Referring to Proposition 3.2 of <cit.>, we can obtain the input-to-state stability results
‖γ[t]‖≤ e^-(t-5ℓ/2σ)‖γ[ℓ/σ]‖+√(1/2σℓ)e^3ℓ/2σmax_ℓ/σ≤τ≤ t|ε(τ)|,
|γ(t,1)|≤ e^-(t-3ℓ/2σ)max_0≤x̅≤ 1|γ(ℓ/σ,x̅)|+1/σe^2ℓ/σmax_ℓ/σ≤τ≤ t|ε(τ)|,
for all t≥ℓ/σ, and that
‖X̃(t)‖≤ Ω e^-υ(t-ℓ/σ)‖X̃(ℓ/σ)‖
+Ω/υ e^‖ A‖ℓ/σ‖ H‖max_ℓ/σ≤τ≤ t|γ (τ,1)|,
where υ is the smallest absolute value of the real parts of the eigenvalues of the Hurwitz matrix A+1/σHC. Therefore, as t→∞, we have that
lim_t→∞‖γ[t]‖=√(1/2σℓ)e^3ℓ/2σmax_ℓ/σ≤τ≤∞|ε(τ)|≤√(1/2σℓ)e^3ℓ/2σ M,
lim_t→∞|γ(t,1)| = 1/σe^2ℓ/σmax_ℓ/σ≤τ≤∞|ε(τ)|≤ e^2ℓ/σ M/σ,
lim_t→∞‖X̃(t)‖ = Ω/υ e^‖ A‖ℓ/σ‖ H‖max_ℓ/σ≤τ≤∞|γ (τ,1)|
≤Ω/υσ e^‖ A‖ℓ/σ‖ H‖ e^2ℓ/σ M.
From equation (<ref>), we can obtain that
‖α̃[t]‖ ≤‖γ[t]‖ + 1/σ‖ C‖ e^‖ A‖ℓ/σ‖X̃(t)‖,
for all t≥ℓ/σ. Therefore, considering equations (<ref>) and (<ref>), we can show that
‖α̃[t]‖≤√(1/2σℓ)e^3ℓ/2σ M+Ω/σ^2υ‖ C‖ e^‖ A‖2ℓ/σ‖ H‖ e^2ℓ/σ M,
as t→∞. Recall that β̃[t]=0 for all t≥ℓ/σ. Therefore, considering the observer error backstepping transformations (<ref>) and (<ref>), we can obtain that
‖ṽ[t] ‖≤P̃_11‖α̃[t]‖, and ‖w̃[t]‖≤P̃_21‖α̃[t]‖,
for all t≥ℓ/σ, where
P̃_11= 1+(∫_0^1∫_x̅^1(P^11(x̅,ξ))^2dξ dx̅)^1/2,
P̃_21= (∫_0^1∫_x̅^1(P^21(x̅,ξ))^2dξ dx̅)^1/2.
Further, considering equation (<ref>), we can obtain
|δρ(t,ℓ)|≤ 2max_0≤ξ≤ 1| K^21(1,ξ)|·‖ṽ[t-ℓ/σ]‖
+2max_0≤ξ≤ 1| K^22(1,ξ)|·‖w̃[t-ℓ/σ]‖+M/σ
+2‖ K‖·‖X̃[t-ℓ/σ]‖+2/σmax_(0≤ξ≤ 1)| K^21(ξ,0)| M ,
for all t≥ℓ/σ.
Then, using equations (<ref>)-(<ref>), (<ref>)-(<ref>), and (<ref>), we can show that |δρ(t,ℓ)|→M̅ as t→∞, for some M̅>0. Explicitly,
|δρ(t,ℓ)|≤ 2max_0≤ξ≤ 1| K^21(1,ξ)|P̃_11(√(σ/4ℓ)e^5ℓ/2σ M
+Ω/συ‖ C‖ e^‖ A‖ℓ/σ e^‖ A‖ℓ/σ‖ H‖ e^3ℓ/σ M)
+2max_0≤ξ≤ 1| K^22(1,ξ)|P̃_21(√(σ/4ℓ)e^5ℓ/2σ M
+Ω/συ‖ C‖ e^‖ A‖ℓ/σ e^‖ A‖ℓ/σ‖ H‖ e^3ℓ/σ M)
+2‖ K‖Ω/υ e^‖ A‖ℓ/σ‖ H‖ e^3ℓ/σ M
+2/σmax_0≤ξ≤ 1| K^21(ξ,0)| M+M/σ,
as t→∞. This completes the proof. □
§ NUMERICAL SIMULATIONS
We demonstrate the observer-based boundary control method developed above for setpoint tracking and disturbance attenuation in a simple gas pipeline model. Motivated by previous studies <cit.>, we use the plant parameters λ = 0.011, σ = 378 m/s, D = 0.5 m, ℓ = 25× 10^3 m, φ_ L = 289 kg/m^2/s, and U_⋆ = 46 kg/m^3.
Let A and C be
A = [ 0 1; -(2π/6× 3600)^2 0 ],
C =[ 1 0 ].
Further, let the initial conditions be chosen as
ρ(0,x) = ρ_⋆(x),
ϕ(0,x) = ϕ_ L,
X(0) = [ 0; 0.1ϕ_ L/3600 ].
Then, considering the solution of the system (<ref>) and (<ref>), we can show that s(t) is given by
s(t) = 0.6φ_ L/2πsin(2π t/6× 3600).
Fig. <ref> shows the evolution of s(t). The initial conditions for the observer (<ref>)-(<ref>) are chosen as
v̂[0]≡ 0, ŵ[0]≡ 0.
Below, we conduct numerical simulations for two cases: A) when the outlet flow varies according to equations (<ref>)-(<ref>) as discussed in Section <ref>; and B) when the outlet flow variation is subject to uncertainties as in Section <ref>.
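As a rough illustration only (not part of the original study), the design quantities above can be assembled numerically. The sketch below builds A and C, picks an observer gain H by pole placement so that A+1/σHC is Hurwitz, and evaluates s(t); the use of scipy and the chosen pole locations are our own assumptions.

import numpy as np
from scipy.signal import place_poles

# Plant and exosystem parameters quoted in the text (SI units).
sigma, phi_L = 378.0, 289.0
omega = 2 * np.pi / (6 * 3600)            # 6-hour period of the demand fluctuation

A = np.array([[0.0, 1.0],
              [-omega**2, 0.0]])
C = np.array([[1.0, 0.0]])

# Observer gain H such that A + (1/sigma) H C is Hurwitz.  This is the dual of a
# standard state-feedback placement; the pole locations are an illustrative choice.
poles = np.array([-1e-3, -2e-3])
L = place_poles(A.T, C.T, poles).gain_matrix.T   # A - L C has the requested poles
H = -sigma * L                                   # then A + (1/sigma) H C = A - L C
print("eig(A + H C / sigma):", np.linalg.eigvals(A + H @ C / sigma))

# Outlet-flow fluctuation s(t) as given above.
t = np.linspace(0.0, 24 * 3600.0, 2001)
s = 0.6 * phi_L / (2 * np.pi) * np.sin(omega * t)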
§.§ When outlet flow varies according to (<ref>)-(<ref>)
In Figs. <ref> and <ref>, we illustrate the evolution of density and flow at three locations along the pipe. Fig. <ref> depicts the scenario where δ U(t) = 0, while Fig. <ref> shows the case where δ U(t) is given by equation (<ref>). As seen in Fig. <ref>, the outlet density is regulated to the steady-state value even in the presence of outlet flow fluctuations. In contrast, Fig. <ref> shows that the outlet density fluctuates due to the presence of outlet flow disturbances.
In Fig. <ref>, we show the norms of the control gains (see (<ref>)) versus pipe length. The value √(∫(K^22(1,ξ))^2dξ) increases faster than linearly, indicating that the proposed control design is better suited for shorter pipes.
§.§.§ Over a short time horizon (100 s) and with a short pipe (ℓ = 10 km)
Let A and C be chosen as
A = [ 0 1; -(2π/12)^2 0 ], C=[ 1 0 ].
Further, let the initial conditions be chosen as
ρ(0,x) = ρ_⋆(x)+0.01ρ_⋆(0)sin(π x/ℓ),
ϕ(0,x) = ϕ_ L+0.1ϕ_ Lsin(π x/ℓ),
X(0) = [ 0; 0.1ϕ_ L ].
§.§ When the outlet flow variation is subject to uncertainties
Here, we assume that the outlet flow variation is subject to an unknown yet bounded disturbance, ε(t). For simulation purposes, we take
ε(t) = 0.001s^3(t).
In Fig. <ref>, we show the evolution of the disturbed outlet flow. The initial conditions of the observer X̂(t) are chosen as
X̂(0) = [ 0 0 ]^T.
In Figs. <ref> and <ref>, we illustrate the evolution of density in the presence of disturbed outlet flow fluctuation. Fig. <ref> depicts the scenario where δ U(t) = 0, while Fig. <ref> shows the case where δ U(t) is given by (<ref>). As seen in Fig. <ref>, the outlet density is regulated to a neighborhood of the steady-state value even in the presence of disturbed outlet flow fluctuations. In contrast, Fig. <ref> shows that the outlet density fluctuates due to the presence of outlet flow disturbances.
§ CONCLUSIONS
In this study, we develop an output feedback regulation approach to suppress disturbances from a setpoint in the outlet pressure of gas flowing through a pipeline subject to fluctuating outlet flow. We model the variation in consumption as outlet flow fluctuations generated by a periodic linear dynamic system. The nonlinear isothermal Euler equations including the Darcy-Weisbach friction model are linearized and expressed in canonical form as a coupled 2 × 2 hyperbolic PDE. Using an observer-based PDE backstepping controller, we regulate the outlet pressure to a setpoint, even in the presence of outlet flow variations, by manipulating the inlet pressure. Additionally, we extended the controller to handle bounded uncertainties in the outlet flow fluctuations, ensuring regulation of the outlet pressure variation to a neighborhood of the setpoint. The numerical results have validated the effectiveness of the proposed control strategy, and the approach can be extended to gas pipeline networks.
§ ACKNOWLEDGEMENTS
This study was supported by the LDRD program project “Stochastic Finite Volume Method for Robust Optimization of Nonlinear Flows” at Los Alamos National Laboratory. Research was performed at Los Alamos National Laboratory under the auspices of the National Nuclear Security Administration of the U.S. Department of Energy under Contract No. 89233218CNA000001. Report No. LA-UR-24-29852.
| In recent years, significant attention has been directed to study the system dynamics in gas pipeline networks. Various modeling approaches were introduced and explored for gas network flow, and were examined from analytical and numerical perspectives <cit.>. The primary engineering challenge is to compensate for the decrease in pressure along the direction of flow that is caused by friction at the pipe's inner surface. The one-dimensional isothermal Euler equations, incorporating Darcy–Weisbach friction model, form a 2 × 2 system of coupled partial differential equations (PDEs) that represent the relevant conservation laws. These equations adequately describe gas flow dynamics along the pipe, accounting for gas momentum loss in the wave- and shock-free physical regime <cit.>.
The operation of natural gas transport networks presents complex challenges within the domain of control theory. The field of boundary stabilization of PDEs offers a rich theoretical framework to address these challenges <cit.>. Lyapunov-based control <cit.>, optimal control <cit.>, and PDE backstepping <cit.> are among the control approaches that have been applied to gas flow control in pipelines using PDE models. Methods that responsively compensate for unanticipated variations in boundary conditions caused by unscheduled changes in consumption, while maintaining pressure above minimum requirements, compel ongoing research.
In this paper, we examine the control of gas flowing through a pipeline subject to fluctuating outlet flow that we suppose is generated by a periodic linear dynamic system, which we use as a model of variable consumer demand. By appropriately selecting the parameters of the dynamic system, a desired periodic fluctuation profile can be achieved. First, we linearize the nonlinear isothermal Euler equations including Darcy-Weisbach friction term around an equilibrium solution and express the system in canonical form, resulting in a coupled 2 × 2 hyperbolic PDE. We then employ an observer-based PDE backstepping controller, adapted from <cit.>, to regulate the outlet pressure to a setpoint by manipulating the inlet pressure, thus compensating for variations in outlet flow. Furthermore, we extend the observer-based controller to account for uncertain yet bounded fluctuations in the outlet flow. The derived controller manipulates the inlet pressure to regulate the outlet pressure to a neighborhood of the setpoint. | null | null | null | null | null |
http://arxiv.org/abs/2409.17062v1 | 20240925162024 | Entanglement Hamiltonian and effective temperature of non-Hermitian quantum spin ladders | [
"Pei-Yun Yang",
"Yu-Chin Tzeng"
] | quant-ph | [
"quant-ph"
] |
Department of Physics, National Taiwan University, Taipei 106319, Taiwan
[email protected]
Department of Electrophysics and Center for Theoretical and Computational Physics, National Yang Ming Chiao Tung University, Hsinchu 300093, Taiwan
§ ABSTRACT
Quantum entanglement plays a crucial role not only in understanding Hermitian many-body systems but also in offering valuable insights into non-Hermitian quantum systems. In this paper, we analytically investigate the entanglement Hamiltonian and entanglement energy spectrum of a non-Hermitian spin ladder using perturbation theory in the biorthogonal basis. Specifically, we examine the entanglement properties between coupled non-Hermitian quantum spin chains. In the strong coupling limit (J_rung≫1), first-order perturbation theory reveals that the entanglement Hamiltonian closely resembles the single-chain Hamiltonian with renormalized coupling strengths, allowing for the definition of an ad hoc temperature.
Our findings provide new insights into quantum entanglement in non-Hermitian systems and offer a foundation for developing novel algorithms, such as applying finite-temperature Density Matrix Renormalization Group (DMRG) to non-Hermitian quantum systems.
Entanglement Hamiltonian and effective temperature of non-Hermitian quantum spin ladders
Yu-Chin Tzeng (曾郁欽)
September 28, 2024
========================================================================================
§ INTRODUCTION
Quantum entanglement is a foundational concept that significantly enhances our understanding of many-body physics by elucidating quantum correlations between subsystems. Entanglement reveals connections that lie beyond classical physics. <cit.>
Suppose the total Hamiltonian H=H_A+H_B+H_AB is written as the sum of the subsystem Hamiltonians H_A and H_B and their interaction H_AB.
To further analyze the ground-state entanglement properties, the reduced density matrix ρ_A=Tr_B|ψ_0⟩⟨ψ_0| usually becomes an essential tool, where |ψ_0⟩ is the normalized ground-state of the total Hamiltonian H, and the partial trace is performed over the degrees of freedom of subsystem B. One common measure for quantifying entanglement between subsystems A and B is the von Neumann entanglement entropy, defined by S_von=-∑_i ω_ilnω_i, where ω_i is the ith eigenvalue of ρ_A.
For example, in gapless systems where the low-energy theory is described by conformal field theory, the ground-state entanglement entropy exhibits a logarithmic scaling behavior with respect to subsystem size. This scaling allows for extraction of the central charge, a key parameter of the conformal field theory, which serves as an indicator of the phase transition's universality class <cit.>.
For gapped systems, the ground-state entanglement entropy follows an area law, meaning it is proportional to the size of subsystem's boundary.
The entanglement energy spectrum provides a detailed view of the quantum correlations between subsystems, with the entanglement entropy serving as a condensed summary of the information contained within this spectrum.
The entanglement energy ξ_i is the ith energy eigenvalue of a hypothetical Hamiltonian, called the entanglement Hamiltonian H_E, which is defined by regarding the reduced density matrix as a thermal density matrix of the entanglement Hamiltonian at unit temperature, ρ_A=e^-H_E/Z, where Z is the partition function that ensures Tr[ρ_A]=1.
Although the exact form of H_E is generally unknown, its eigenvalues can be obtained by taking logarithm on the eigenvalues of the reduced density matrix, ξ_i=-lnω_i-ln Z. The entanglement energy spectrum provides deep insights into topological systems and can be considered as a kind of `fingerprint' of these systems. <cit.> This `fingerprint' means that even when the entire system is divided into two halves, a topological system generates a gapless edge state, and this signature can be observed in the low-energy portion of the entanglement energy spectrum. <cit.>
For example, in 2-dimensional topological systems such as the fractional quantum Hall systems, the low-energy portion of the momentum-resolved entanglement spectrum exhibits the same state counting as the low-energy spectrum of the edge Hamiltonian. This relationship is known as the renowned Li-Haldane conjecture <cit.> or the edge-entanglement spectrum correspondence <cit.>.
Interesting phenomena related to the entanglement Hamiltonian H_E can also arise when the subsystem B is considered as an ancilla system copied from the subsystem A, and introducing strong enough interaction H_AB to create a nearly maximally entangled state between A and B. <cit.>
For example, in an antiferromagnetic spin ladder where the rung coupling is much stronger than the leg coupling, the ground-state forms multiple rung-singlets, resulting in a nearly maximally entangled state between the two legs. <cit.>
In contrast to the Li-Haldane conjecture in topological systems, the entire entanglement spectrum has some similarity to the energy spectrum of subsystem A. Remarkably, under carefully selected parameters, the entanglement Hamiltonian H_E≈β H_A can be proportional to the Hamiltonian of subsystem A by a constant β.
In other words, the finite temperature properties of an isolated system A can be approximated by the reduced density matrix ρ_A obtained from the ground-state of an enlarged system at zero temperature,
ρ_A≈1/Zexp[-β H_A],
where β is the inverse temperature as a function of system parameters.
This development allows the finite-temperature Density Matrix Renormalization Group (DMRG) based on matrix product states to evolve into a robust numerical method for finite-temperature strongly correlated systems. <cit.>
In this paper, we show that Eq. (<ref>) remains valid for non-Hermitian Hamiltonians. This implies that the ancilla trick used in finite-temperature algorithms <cit.> is also expected to be effective for non-Hermitian quantum systems.
In the next section, we briefly introduce non-Hermitian quantum mechanics and describe the non-Hermitian Hamiltonian for the spin-1/2 ladder.
§ NON-HERMITIAN HAMILTONIAN
Non-Hermitian systems have become an important multidisciplinary field of study, <cit.> spanning photonics, <cit.> condensed matter physics, <cit.> and quantum information science. <cit.> Unlike traditional quantum systems, which are governed by Hermitian Hamiltonians that ensure real eigenvalues and physically observable energy levels, non-Hermitian systems are described by Hamiltonians that do not necessarily satisfy this condition. This leads to complex eigenvalues and unique physical phenomena, e.g. exceptional points <cit.> and non-Hermitian skin effect. <cit.>
An exceptional point (EP) in non-Hermitian systems occurs when two or more eigenvalues and their corresponding eigenvectors coalesce, rendering the Hamiltonian non-diagonalizable. This leads to unique phenomena, such as enhanced sensitivity to external changes <cit.> and a negative divergence in the real part of the fidelity susceptibility <cit.> or quantum metric <cit.>, which measures how the ground state changes under perturbations.
The non-Hermitian skin effect arises in systems with nonreciprocal coupling. <cit.> This nonreciprocity means that particles or excitations have different probabilities of hopping forward versus backward. As a result, even in the absence of an external field or disorder, the system can exhibit an accumulation of states at one end under open boundary condition, leading to the Hermitian bulk – non-Hermitian boundary correspondence. <cit.>
Mathematically, nonreciprocal coupling is often introduced into a system's Hamiltonian through asymmetric hopping terms. <cit.> For instance, in a one-dimensional lattice model, the hopping amplitude from site j to site j+1 might differ from the amplitude from site j+1 to site j. This asymmetry results in a complex band structure with eigenvalues that can form loops in the complex plane, leading to the skin effect. <cit.>
In this paper, we study the following non-Hermitian Hamiltonian H=H_A+H_B+H_AB for spin-1/2 ladder with the nonreciprocal coupling.
H_A =J_leg∑_j=1^N[1/2(e^Ψ S_j,A^+S_j+1,A^-+e^-ΨS_j,A^-S_j+1,A^+)+Δ S_j,A^zS_j+1,A^z],
H_B =J_leg∑_j=1^N[1/2(e^Ψ S_j,B^+S_j+1,B^-+e^-ΨS_j,B^-S_j+1,B^+)+Δ S_j,B^zS_j+1,B^z],
H_AB =J_rung∑_j=1^N[1/2(e^Φ S_j,A^+S_j,B^-+e^-ΦS_j,A^-S_j,B^+)+Δ S_j,A^zS_j,B^z],
where N denotes the number of rungs, and Φ and Ψ are real parameters that control the nonreciprocal coupling between the legs and within each leg, respectively. J_rung and J_leg represent the coupling strengths between the legs and within the legs. Lastly, Δ denotes the XXZ anisotropy strength. Periodic boundary conditions are assumed.
In the Hermitian limit, Φ=Ψ=0, the ground-state phase diagram has been studied in the literature. <cit.> We will mainly focus on the entanglement Hamiltonian in the rung-singlet phase.
The schematic representation of the non-Hermitian spin-1/2 ladder is shown in Fig. <ref>.
Due to the non-Hermitian nature of the system, where H^†≠ H, the time evolution of the wavefunctions is driven by both H and H^† simultaneously. This results in the following time evolution equations: ∂/∂ t|φ^R(t)⟩=-iH|φ^R(t)⟩, and ∂/∂ t|φ^L(t)⟩=-iH^†|φ^L(t)⟩. Note that ℏ≡1 is set. Consequently, in analogy to standard linear algebra, the eigenvectors of a non-Hermitian Hamiltonian are generalized into biorthogonal left and right eigenvectors, satisfying the following eigenvalue equations: H^†|ψ_n^L⟩=E_n^*|ψ_n^L⟩ and H|ψ_n^R⟩=E_n|ψ_n^R⟩, with the biorthonormal condition: ⟨ψ_n^L|ψ_m^R⟩=δ_nm. <cit.>
In non-Hermitian quantum mechanics, observables are defined through the expectation value ⟨ψ^L|O|ψ^R⟩, involving both left and right eigenvectors. This biorthogonal framework naturally extends to the definition of the reduced density matrix in a bipartite system. For a system divided into subsystems A and B, the reduced density matrix (RDM) of the ground-state for subsystem A is defined as
ρ_A=Tr_B|ψ^R_0⟩⟨ψ^L_0|.
Note that Tr[ρ_A]=1, since ⟨ψ_0^L|ψ_0^R⟩=1.
This biorthogonal definition of the RDM immediately raises a key issue: the RDM itself becomes non-Hermitian, meaning that its eigenvalues, ω_i, are generally complex. Even when the total Hamiltonian has PT symmetry and the ground-state energy is real <cit.>, the eigenvalues of the RDM in typical cases remain complex. As a result, appropriate definitions of generic entanglement entropy of both von Neumann type and Rényi type have been proposed by Tu, Tzeng and Chang to account for these complex eigenvalues. <cit.>
S_TTC =-∑_iω_iln|ω_i|,
S_TTC^(n) =1/1-nln(∑_iω_i|ω_i|^n-1)
These entropies, Eq. (<ref>), effectively capture the negative central charge in non-Hermitian critical systems through the logarithmic scaling. <cit.>
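As a small illustration (our own sketch, not part of the paper), these generic entropies are straightforward to evaluate once the complex eigenvalues ω_i of the biorthogonal reduced density matrix are known; the function name and the sample eigenvalues below are hypothetical.

import numpy as np

def generic_entropies(omega, n=2):
    # Generic von Neumann and n-th Renyi entropies for complex RDM eigenvalues,
    # following S = -sum_i w_i ln|w_i| and S^(n) = ln(sum_i w_i |w_i|^(n-1)) / (1-n).
    omega = np.asarray(omega, dtype=complex)
    s_vn = -np.sum(omega * np.log(np.abs(omega)))
    s_renyi = np.log(np.sum(omega * np.abs(omega) ** (n - 1))) / (1 - n)
    return s_vn, s_renyi

# Eigenvalues of a PT-symmetric RDM are real or come in complex-conjugate pairs, with sum 1.
print(generic_entropies([0.6, 0.2 + 0.1j, 0.2 - 0.1j]))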
The entanglement Hamiltonian H_E defined by ρ_A=e^-H_E/Z is also non-Hermitian, and the entanglement energy ξ_i is complex in general. The real part of the entanglement energy can be obtained directly as Re[ξ_i] = -ln|ω_i|-ln Z, and the entanglement entropy Eq. (<ref>) can be seen as the expectation value of the real part of the entanglement energy, S_TTC=∑_iω_iRe[ξ_i]+ln Z.
[In PT-symmetric non-Hermitian systems, if the bipartition does not break the PT symmetry, the reduced density matrix remains PT-symmetric, meaning that its eigenvalues ω_i are either real or come in complex conjugate pairs. Consequently, the partition function Z is real.]
However, the imaginary part of the entanglement energy cannot be easily determined, as the logarithmic function becomes multi-valued. To gain further insight into this complexity, in the following section, we directly derive the entanglement Hamiltonian of the non-Hermitian spin-1/2 ladder using perturbation theory for the rung-singlet phase, where the ground state is adiabatically connected to the limit case of J_rung≫1.
§ ENTANGLEMENT HAMILTONIAN
§.§ Perturbation Theory
The non-Hermitian spin-1/2 ladder Hamiltonian is given by Eq.(<ref>).
In the limit of J_rung≫1, the interaction between legs A and B at each rung j defines the unperturbed Hamiltonian H_0=H_AB=∑_j=1^Nh_0^j,
where
h_0^j=J_rung[1/2(
e^ΦS_j,A^+S_j,B^-+e^-ΦS_j,A^-S_j,B^+)
+Δ S_j,A^zS_j,B^z],
and the Hamiltonians of the legs A and B, H_1=H_A+H_B, are treated as perturbation.
The left and right eigenvectors of the single-rung Hamiltonian Eq.(<ref>) are
|s_j^x⟩ =1/√(2)( e^σ(x)Φ|↑⟩
_j,A|↓⟩_j,B-|↓⟩_j,A|↑⟩
_j,B),
|t_j^+x⟩ =|t_j^+⟩ =|↑⟩_j,A|↑⟩_j,B,
|t_j^0x⟩ =1/√(2)( e^σ(x)Φ|↑⟩_j,A|↓⟩_j,B+|↓⟩_j,A|↑⟩_j,B),
|t_j^-x⟩ =|t_j^-⟩ =|↓⟩_j,A|↓⟩_j,B,
where x=L, R labels the `left' and `right', respectively, and
σ(x) =
+1, x=R,
-1, x=L.
Let |ψ_0^L(0)⟩ and |ψ_0^R(0)⟩ denote the left and right ground states of the unperturbed Hamiltonian H_0, with the corresponding ground state energy E_0^(0)=-NJ_rung(1/2+Δ/4). Here, we assume Δ>-1.
The ground state is a product of singlet states on each rung,
|ψ_0^x(0)⟩=
⊗_j
|s_j^x⟩,
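The rung eigenstates listed above are easy to verify numerically. The following sketch is our own check (parameter values are arbitrary); it builds h_0^j in the basis (|↑↑⟩,|↑↓⟩,|↓↑⟩,|↓↓⟩) and confirms the biorthogonal singlet pair and the quoted rung ground-state energy -J_rung(1/2+Δ/4).

import numpy as np

J_rung, Delta, Phi = 1.0, 0.7, 0.4      # illustrative values

sp = np.array([[0., 1.], [0., 0.]])      # S^+ for a single spin-1/2
sm = sp.T
sz = np.diag([0.5, -0.5])

h0 = J_rung * (0.5 * (np.exp(Phi) * np.kron(sp, sm) + np.exp(-Phi) * np.kron(sm, sp))
               + Delta * np.kron(sz, sz))

# Right and left singlets in the basis (|uu>, |ud>, |du>, |dd>)
sR = np.array([0., np.exp(Phi), -1., 0.]) / np.sqrt(2)
sL = np.array([0., np.exp(-Phi), -1., 0.]) / np.sqrt(2)
E_s = -J_rung * (0.5 + Delta / 4)        # quoted singlet energy per rung

print(np.allclose(h0 @ sR, E_s * sR))             # |s^R> is a right eigenvector
print(np.allclose(h0.conj().T @ sL, E_s * sL))    # |s^L> is a left eigenvector
print(np.isclose(sL.conj() @ sR, 1.0))            # biorthonormality <s^L|s^R> = 1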
Using first-order perturbation theory, the corrected ground state can be written as,
|ψ_0^x⟩≈ |ψ_0^x(0)⟩ + |ψ_0^x(1)⟩,
where the left and right first-order correction terms are,
⟨ψ_0^L(1)| = ∑_j=1^N∑_n≠0⟨ψ_0^L(0)|H_1|ψ_n^jR(0)
⟩/E_0^(0)-E_n^(0)⟨ψ_n^jL(0)|
=J_leg/4J_rung∑_j=1^N[2e^-Φe^-Ψ/1+Δ...⟨ t_j^+|⟨ t_j+1^-|...
+2e^-Φe^Ψ/1+Δ...⟨ t_j^-|⟨ t_j+1^+|...
-Δ...⟨ t_j^0L|⟨ t_j+1^0L|...],
|ψ_0^R(1)⟩ =∑_j=1^N∑_n≠0|ψ_n^jR(0)⟩⟨ψ_n^jL(0)|H_1
|ψ_0^R(0)⟩/E_0^(0)-E_n^(0)
=J_leg/4J_rung∑_j=1^N[
2e^Φe^Ψ/1+Δ...|t_j^+⟩|t_j+1^-⟩...
+2e^Φe^-Ψ/1+Δ...|t_j^-⟩|t_j+1^+⟩...
-Δ...|t_j^0R⟩|t_j+1^0R⟩...].
where the dots represent the singlet states on the remaining rungs.
|ψ_n^jL(0)⟩ and |ψ_n^jR(0)⟩ denote the excited states of the unperturbed Hamiltonian H_0=H_AB, and E_n^(0) are the corresponding energies. Specifically, the excited states with non-zero contributions to the corrections are
|ψ_1^j(0)⟩ =...|t_j^+⟩|t_j+1^-⟩...,
|ψ_2^j(0)⟩ =...|t_j^-⟩|t_j+1^+⟩...,
|ψ_3^jx(0)⟩ =...|t_j^0x⟩|t_j+1^0x⟩...,
where x=L,R,
and the corresponding eigenenergies are
E_1^(0)=E_2^(0) =J_rung[ ( 1+Δ)-N( 1/2+1/4Δ)],
E_3^(0) =J_rung[2-N( 1/2+1/4Δ)].
The reduced density matrix ρ_A defined in Eq. (<ref>) can be approximated as,
ρ_A ≈ρ_A^(0)+ρ_A^(1)
=Tr_B(|ψ_0^R(0)⟩⟨ψ_0^L(0)|
+ |ψ_0^R(1)⟩⟨ψ_0^L(0)|
+ |ψ_0^R(0)⟩⟨ψ_0^L(1)|)
=1/2^N( 1-4J_leg/J_rung (1+Δ)∑_j=1^N[ 1/2( e^ΨS_j,A^+S_j+1,A^-.
.+e^-ΨS_j,A^-S_j+1,A^+) +1/2( Δ+Δ^2)
S_j,A^zS_j+1,A^z] )
Thus, the reduced density matrix can be written in terms of the Hamiltonian of subsystem A,
ρ_A ≈1/Zexp[-βH̃_A]
=1/Z(1-βH̃_A+1/2!β^2H̃_A^2-1/3!β^3H̃_A^3+⋯)
Comparing Eq. (<ref>) with Eq. (<ref>), we obtain
the ad hoc inverse temperature
β=4/(1+Δ)·1/J_rung≪1,
and the Hamiltonian of the subsystem A
H̃_A
= J_leg∑_j=1^N[1/2(e^ΨS_j,A^+S_j+1,A^-+e^-ΨS_j,A^-S_j+1,A^+)
+Δ̃S_j,A^zS_j+1,A^z],
which is in the form of XXZ interaction with a renormalized parameter Δ̃=1/2(Δ+Δ^2). The partition function is Z=Tr[exp(-βH̃_A)]=2^N.
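The first-order result can be checked against exact diagonalization on a small ladder. The sketch below is our own numerical sanity check (not from the paper); the system size, parameter values, and the use of dense diagonalization are arbitrary choices. It builds the non-Hermitian ladder, forms the biorthogonal reduced density matrix of leg A, and compares it with exp(-βH̃_A)/2^N.

import numpy as np
from scipy.linalg import eig, expm

sp = np.array([[0., 1.], [0., 0.]])   # S^+
sm = sp.T.copy()                      # S^-
sz = np.diag([0.5, -0.5])             # S^z

def site_op(op, site, n_sites):
    # Embed a single-site operator at position `site` into the 2**n_sites-dimensional space.
    out = np.array([[1.0]])
    for k in range(n_sites):
        out = np.kron(out, op if k == site else np.eye(2))
    return out

def xxz_bond(i, j, n, J, Delta, theta):
    # J [ (1/2)(e^theta S_i^+ S_j^- + e^-theta S_i^- S_j^+) + Delta S_i^z S_j^z ]
    return J * (0.5 * (np.exp(theta) * site_op(sp, i, n) @ site_op(sm, j, n)
                       + np.exp(-theta) * site_op(sm, i, n) @ site_op(sp, j, n))
                + Delta * site_op(sz, i, n) @ site_op(sz, j, n))

N, J_leg, J_rung, Delta, Psi, Phi = 3, 1.0, 50.0, 0.5, 0.2, 0.3   # illustrative
n = 2 * N                        # sites 0..N-1 = leg A, N..2N-1 = leg B (periodic legs)
H = np.zeros((2**n, 2**n))
for j in range(N):
    H += xxz_bond(j, (j + 1) % N, n, J_leg, Delta, Psi)            # leg A
    H += xxz_bond(N + j, N + (j + 1) % N, n, J_leg, Delta, Psi)    # leg B
    H += xxz_bond(j, N + j, n, J_rung, Delta, Phi)                 # rung j

w, vl, vr = eig(H, left=True, right=True)
g = np.argmin(w.real)                        # ground state = smallest real part
psiR, psiL = vr[:, g], vl[:, g]
psiR = psiR / (psiL.conj() @ psiR)           # biorthonormal: <psiL|psiR> = 1
MR = psiR.reshape(2**N, 2**N)                # row index: leg A, column index: leg B
ML = psiL.reshape(2**N, 2**N)
rhoA = MR @ ML.conj().T                      # Tr_B |psiR><psiL|

beta = 4.0 / ((1.0 + Delta) * J_rung)        # ad hoc inverse temperature
Delta_eff = 0.5 * (Delta + Delta**2)         # renormalized anisotropy
HA = np.zeros((2**N, 2**N))
for j in range(N):
    HA += xxz_bond(j, (j + 1) % N, N, J_leg, Delta_eff, Psi)
rho_th = expm(-beta * HA) / 2**N

print("Tr rho_A              =", np.trace(rhoA))
print("|| rho_A - thermal ||  =", np.linalg.norm(rhoA - rho_th))
print("|| rho_A ||            =", np.linalg.norm(rhoA))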
§.§ Discussion
We make some remarks regarding the derivation:
For the spin ladder Eq. (<ref>) considered in this paper, the renormalized anisotropy parameter remains unchanged, i.e., Δ̃=Δ, when Δ=1 or 0. In these specific cases, the entanglement Hamiltonian is exactly equal to the subsystem Hamiltonian, H̃_A = H_A.
When Ψ=Φ=0, our results are consistent with those from earlier studies <cit.>, reproducing the known Hermitian case. Even when non-Hermitian couplings are introduced with non-zero Ψ and Φ, the overall behavior remains remarkably similar, confirming that the methods and conclusions from the Hermitian regime can be successfully extended to non-Hermitian systems without major deviations.
In the general non-Hermitian case, if one wishes to ensure that H̃_A=H_A, a simple approach is to choose the inter-subsystem Hamiltonian H_AB as a Hermitian and isotropic Heisenberg interaction.
H_AB=J_rung∑_j=1^NS⃗_j,A·S⃗_j,B
This allows the reduced density matrix to reflect the exact form of the subsystem Hamiltonian without any parameter renormalization.
A particularly interesting scenario arises when H_A and H_B are both Hermitian, while only the inter-subsystem coupling H_AB is non-Hermitian, specifically with parameters Ψ=0 and Φ≠0. In this case, despite both the total Hamiltonian and the reduced density matrix being non-Hermitian, all the entanglement energies remain real. This situation is quite rare and demonstrates an unusual interplay between Hermitian and non-Hermitian components in the system.
§ CONCLUSION
The entanglement Hamiltonian, whose eigenvalue spectrum is known as the entanglement energy spectrum<cit.>, plays a crucial role in revealing quantum correlations between subsystems in many-body systems. Understanding its analytical form is essential for gaining deeper insights into the nature of quantum entanglement and for facilitating entanglement Hamiltonian tomography <cit.>. However, obtaining the entanglement Hamiltonian is often challenging, especially in non-Hermitian systems, where it can be complex and non-Hermitian itself. While studies on non-interacting systems, such as the non-Hermitian Su-Schrieffer-Heeger (SSH) model <cit.>, have made progress in deriving the entanglement Hamiltonian, the challenge is even more pronounced in interacting systems, where many-body effects add significant complexity. Although the real part of the entanglement energy can be derived from the eigenvalues of the reduced density matrix as Re[ξ_i]=-ln|ω_i|-ln Z,
capturing the full entanglement Hamiltonian remains crucial for exploring the deeper properties of quantum many-body systems.
In this paper, we explore the entanglement Hamiltonian of a non-Hermitian spin ladder system using perturbation theory, providing an analytical approach to this difficult problem. Remarkably, we find that the entanglement Hamiltonian in the non-Hermitian case can be approximated by the Hamiltonian of subsystem A, indicating that the thermal density matrix of an isolated non-Hermitian system A in equilibrium can be derived by the partial trace of an enlarged system. This suggests that the ancilla trick applied for developing finite-temperature Density Matrix Renormalization Group (DMRG) method <cit.>, can be extended to non-Hermitian many-body systems as well. Our work offers new insights into the study of quantum entanglement in non-Hermitian systems, potentially facilitating the development of advanced numerical algorithms for investigating their finite-temperature behavior.
Although non-Hermitian systems exhibit many phenomena absent in Hermitian systems, such as exceptional points (EP) and the non-Hermitian skin effect, many features of Hermitian systems persist in non-Hermitian counterparts when appropriately generalized. For instance, the entanglement entropy in critical systems still follows a logarithmic scaling <cit.>, fidelity susceptibility diverges near phase transitions or EPs <cit.>, and machine learning methods can be transferred from Hermitian to non-Hermitian systems <cit.>. In our work on entanglement Hamiltonians, we extend the Hermitian case to non-Hermitian systems and find that, for nearly maximally entangled states, the results remain consistent with the Hermitian case.
YCT is grateful to Chia-Yi Ju, Po-Yao Chang and Gunnar Möller for many invaluable discussions.
We thank the National Center for High-performance Computing (NCHC) of National Applied Research Laboratories (NARLabs) in Taiwan for providing computational and storage resources.
YCT is grateful for the support from the National Science and Technology Council (NSTC) of Taiwan under grant No. 113-2112-M-A49-015-MY3.
[1] L. Amico, R. Fazio, A. Osterloh, and V. Vedral, Entanglement in many-body systems, Rev. Mod. Phys. 80, 517 (2008). https://doi.org/10.1103/RevModPhys.80.517
[2] N. Laflorencie, Quantum entanglement in condensed matter systems, Physics Reports 646, 1 (2016). https://doi.org/10.1016/j.physrep.2016.06.008
[3] P. Calabrese and J. Cardy, Entanglement entropy and quantum field theory, J. Stat. Mech.: Theo. Exp. 2004, P06002 (2004). https://doi.org/10.1088/1742-5468/2004/06/P06002
[4] P. Calabrese and J. Cardy, Entanglement entropy and conformal field theory, J. Phys. A: Math. Theo. 42, 504005 (2009). https://doi.org/10.1088/1751-8113/42/50/504005
[5] H. Li and F. D. M. Haldane, Entanglement spectrum as a generalization of entanglement entropy: Identification of topological order in non-Abelian fractional quantum Hall effect states, Phys. Rev. Lett. 101, 010504 (2008). https://doi.org/10.1103/PhysRevLett.101.010504
[6] A. Chandran, M. Hermanns, N. Regnault, and B. A. Bernevig, Bulk-edge correspondence in entanglement spectra, Phys. Rev. B 84, 205136 (2011). https://doi.org/10.1103/PhysRevB.84.205136
[7] X.-L. Qi, H. Katsura, and A. W. W. Ludwig, General relationship between the entanglement spectrum and the edge state spectrum of topological quantum states, Phys. Rev. Lett. 108, 196402 (2012). https://doi.org/10.1103/PhysRevLett.108.196402
[8] W. W. Ho, L. Cincio, H. Moradi, D. Gaiotto, and G. Vidal, Edge-entanglement spectrum correspondence in a nonchiral topological phase and Kramers-Wannier duality, Phys. Rev. B 91, 125119 (2015). https://doi.org/10.1103/PhysRevB.91.125119
[9] T. V. Zache, C. Kokail, B. Sundar, and P. Zoller, Entanglement spectroscopy and probing the Li-Haldane conjecture in topological quantum matter, Quantum 6, 702 (2022). https://doi.org/10.22331/q-2022-04-27-702
[10] A. E. Feiguin and S. R. White, Finite-temperature density matrix renormalization using an enlarged Hilbert space, Phys. Rev. B 72, 220401(R) (2005). https://doi.org/10.1103/PhysRevB.72.220401
[11] F. Verstraete, J. J. García-Ripoll, and J. I. Cirac, Matrix product density operators: Simulation of finite-temperature and dissipative systems, Phys. Rev. Lett. 93, 207204 (2004). https://doi.org/10.1103/PhysRevLett.93.207204
[12] M. Zwolak and G. Vidal, Mixed-state dynamics in one-dimensional quantum lattice systems: A time-dependent superoperator renormalization algorithm, Phys. Rev. Lett. 93, 207205 (2004). https://doi.org/10.1103/PhysRevLett.93.207205
[13] D. Poilblanc, Entanglement spectra of quantum Heisenberg ladders, Phys. Rev. Lett. 105, 077202 (2010). https://doi.org/10.1103/PhysRevLett.105.077202
[14] J. I. Cirac, D. Poilblanc, N. Schuch, and F. Verstraete, Entanglement spectrum and boundary theories with projected entangled-pair states, Phys. Rev. B 83, 245134 (2011). https://doi.org/10.1103/PhysRevB.83.245134
[15] I. Peschel and M. Chung, On the relation between entanglement and subsystem Hamiltonians, Europhysics Letters 96, 50006 (2011). https://doi.org/10.1209/0295-5075/96/50006
[16] A. M. Läuchli and J. Schliemann, Entanglement spectra of coupled s=1/2 spin chains in a ladder geometry, Phys. Rev. B 85, 054403 (2012). https://doi.org/10.1103/PhysRevB.85.054403
[17] R. Lundgren, Y. Fuji, S. Furukawa, and M. Oshikawa, Entanglement spectra between coupled Tomonaga-Luttinger liquids: Applications to ladder systems and topological phases, Phys. Rev. B 88, 245137 (2013). https://doi.org/10.1103/PhysRevB.88.245137
[18] S. Predin, Entanglement spectrum of the degenerative ground state of Heisenberg ladders in a time-dependent magnetic field, Europhysics Letters 119, 57003 (2017). https://doi.org/10.1209/0295-5075/119/57003
[19] H. Fujita, Y. O. Nakagawa, S. Sugiura, and M. Oshikawa, Construction of Hamiltonians by supervised learning of energy and entanglement spectra, Phys. Rev. B 97, 075114 (2018). https://doi.org/10.1103/PhysRevB.97.075114
[20] W. Zhu, Z. Huang, and Y.-C. He, Reconstructing entanglement Hamiltonian via entanglement eigenstates, Phys. Rev. B 99, 235109 (2019). https://doi.org/10.1103/PhysRevB.99.235109
[21] Z. Yan and Z. Y. Meng, Unlocking the general relationship between energy and entanglement spectra via the wormhole effect, Nat. Commun. 14, 2360 (2023). https://doi.org/10.1038/s41467-023-37756-7
[22] C. M. Bender and S. Boettcher, Real spectra in non-Hermitian Hamiltonians having PT symmetry, Phys. Rev. Lett. 80, 5243 (1998). https://doi.org/10.1103/PhysRevLett.80.5243
[23] C. M. Bender, Making sense of non-Hermitian Hamiltonians, Rep. Prog. Phys. 70, 947 (2007). https://doi.org/10.1088/0034-4885/70/6/R03
[24] D. C. Brody, Biorthogonal quantum mechanics, J. Phys. A: Math. Theo. 47, 035305 (2013). https://doi.org/10.1088/1751-8113/47/3/035305
[25] H. Hodaei, A. U. Hassan, S. Wittek, H. Garcia-Gracia, R. El-Ganainy, D. N. Christodoulides, and M. Khajavikhan, Enhanced sensitivity at higher-order exceptional points, Nature 548, 187 (2017). https://doi.org/10.1038/nature23280
[26] N. Hatano and D. R. Nelson, Vortex pinning and non-Hermitian quantum mechanics, Phys. Rev. B 56, 8651 (1997). https://doi.org/10.1103/PhysRevB.56.8651
[27] N. Hatano and D. R. Nelson, Non-Hermitian delocalization and eigenfunctions, Phys. Rev. B 58, 8384 (1998). https://doi.org/10.1103/PhysRevB.58.8384
[28] Y.-T. Tu, Y.-C. Tzeng, and P.-Y. Chang, Rényi entropies and negative central charges in non-Hermitian quantum systems, SciPost Phys. 12, 194 (2022). https://doi.org/10.21468/SciPostPhys.12.6.194
[29] M. Fossati, F. Ares, and P. Calabrese, Symmetry-resolved entanglement in critical non-Hermitian systems, Phys. Rev. B 107, 205153 (2023). https://doi.org/10.1103/PhysRevB.107.205153
[30] P.-Y. Chang, J.-S. You, X. Wen, and S. Ryu, Entanglement spectrum and entropy in topological non-Hermitian systems and nonunitary conformal field theory, Phys. Rev. Res. 2, 033069 (2020). https://doi.org/10.1103/PhysRevResearch.2.033069
[31] C.-T. Hsieh and P.-Y. Chang, Relating non-Hermitian and Hermitian quantum systems at criticality, SciPost Phys. Core 6, 062 (2023). https://doi.org/10.21468/SciPostPhysCore.6.3.062
[32] L. Herviou, N. Regnault, and J. H. Bardarson, Entanglement spectrum and symmetries in non-Hermitian fermionic non-interacting models, SciPost Phys. 7, 069 (2019). https://doi.org/10.21468/SciPostPhys.7.5.069
[33] L. Herviou, J. H. Bardarson, and N. Regnault, Defining a bulk-edge correspondence for non-Hermitian Hamiltonians via singular-value decomposition, Phys. Rev. A 99, 052118 (2019). https://doi.org/10.1103/PhysRevA.99.052118
[34] C.-Y. Ju, A. Miranowicz, G.-Y. Chen, and F. Nori, Non-Hermitian Hamiltonians and no-go theorems in quantum information, Phys. Rev. A 100, 062118 (2019). https://doi.org/10.1103/PhysRevA.100.062118
[35] C.-Y. Ju, A. Miranowicz, F. Minganti, C.-T. Chan, G.-Y. Chen, and F. Nori, Einstein's quantum elevator: Hermitization of non-Hermitian Hamiltonians via a generalized vielbein formalism, Phys. Rev. Res. 4, 023070 (2022). https://doi.org/10.1103/PhysRevResearch.4.023070
[36] C.-Y. Ju, A. Miranowicz, Y.-N. Chen, G.-Y. Chen, and F. Nori, Emergent parallel transport and curvature in Hermitian and non-Hermitian quantum mechanics, Quantum 8, 1277 (2024). https://doi.org/10.22331/q-2024-03-13-1277
[37] M.-A. Miri and A. Alù, Exceptional points in optics and photonics, Science 363, eaar7709 (2019). https://doi.org/10.1126/science.aar7709
[38] A. Li, H. Wei, M. Cotrufo, W. Chen, S. Mann, X. Ni, B. Xu, J. Chen, J. Wang, S. Fan, et al., Exceptional points and non-Hermitian photonics at the nanoscale, Nature Nanotechnology 18, 706 (2023). https://doi.org/10.1038/s41565-023-01408-0
[39] Y.-C. Tzeng, C.-Y. Ju, G.-Y. Chen, and W.-M. Huang, Hunting for the non-Hermitian exceptional points with fidelity susceptibility, Phys. Rev. Res. 3, 013015 (2021). https://doi.org/10.1103/PhysRevResearch.3.013015
[40] Y.-T. Tu, I. Jang, P.-Y. Chang, and Y.-C. Tzeng, General properties of fidelity in non-Hermitian quantum systems with PT symmetry, Quantum 7, 960 (2023). https://doi.org/10.22331/q-2023-03-23-960
[41] R. A. Henry and M. T. Batchelor, Exceptional points in the Baxter-Fendley free parafermion model, SciPost Phys. 15, 016 (2023). https://doi.org/10.21468/SciPostPhys.15.1.016
[42] C.-Y. Ju and F.-H. Huang, Quantum state behavior at exceptional points and quantum phase transitions, arXiv:2403.16503 [quant-ph] (2024). https://arxiv.org/abs/2403.16503
[43] S. Yao and Z. Wang, Edge states and topological invariants of non-Hermitian systems, Phys. Rev. Lett. 121, 086803 (2018). https://doi.org/10.1103/PhysRevLett.121.086803
[44] Y.-C. Wang, J.-S. You, and H.-H. Jen, A non-Hermitian optical atomic mirror, Nat. Commun. 13, 4598 (2022). https://doi.org/10.1038/s41467-022-32372-3
[45] K. Kawabata, T. Numasawa, and S. Ryu, Entanglement phase transition induced by the non-Hermitian skin effect, Phys. Rev. X 13, 021007 (2023). https://doi.org/10.1103/PhysRevX.13.021007
[46] D.-J. Zhang, Q.-H. Wang, and J. Gong, Quantum geometric tensor in PT-symmetric quantum mechanics, Phys. Rev. A 99, 042104 (2019), https://doi.org/10.1103/PhysRevA.99.042104; Time-dependent PT-symmetric quantum mechanics in generic non-Hermitian systems, Phys. Rev. A 100, 062121 (2019), https://doi.org/10.1103/PhysRevA.100.062121
[47] Y.-M. R. Hu, E. A. Ostrovskaya, and E. Estrecho, Generalized quantum geometric tensor in a non-Hermitian exciton-polariton system, Opt. Mater. Express 14, 664 (2024). https://doi.org/10.1364/OME.497010
[48] F. Schindler, K. Gu, B. Lian, and K. Kawabata, Hermitian bulk – non-Hermitian boundary correspondence, PRX Quantum 4, 030315 (2023). https://doi.org/10.1103/PRXQuantum.4.030315
[49] K. Hijii, A. Kitazawa, and K. Nomura, Phase diagram of S=1/2 two-leg XXZ spin-ladder systems, Phys. Rev. B 72, 014449 (2005). https://doi.org/10.1103/PhysRevB.72.014449
[50] Note: In PT-symmetric non-Hermitian systems, if the bipartition does not break the PT symmetry, the reduced density matrix remains PT-symmetric, meaning that its eigenvalues ω_i are either real or come in complex conjugate pairs. Consequently, the partition function Z is real.
[51] C. Kokail, R. van Bijnen, A. Elben, B. Vermersch, and P. Zoller, Entanglement Hamiltonian tomography in quantum simulation, Nature Physics 17, 936 (2021). https://doi.org/10.1038/s41567-021-01260-w
[52] F. Rottoli, M. Fossati, and P. Calabrese, Entanglement Hamiltonian in the non-Hermitian SSH model, J. Stat. Mech.: Theo. Exp. 2024, 063102 (2024). https://doi.org/10.1088/1742-5468/ad4860
[53] S. Sayyad and J. L. Lado, Transfer learning from Hermitian to non-Hermitian quantum many-body physics, J. Phys.: Cond. Mat. 36, 185603 (2024). https://doi.org/10.1088/1361-648X/ad22f8
| Quantum entanglement is a foundational concept that significantly enhances our understanding of many-body physics by elucidating quantum correlations between subsystems. The entanglement reveals concealed connections beyond classical physics. <cit.>
Suppose the total Hamiltonian H=H_A+H_B+H_AB is written as summation of the subsystem Hamiltonians H_A and H_B, and their interaction H_AB.
To further analyze the ground-state entanglement properties, the reduced density matrix ρ_A=Tr_B|ψ_0⟩⟨ψ_0| usually becomes an essential tool, where |ψ_0⟩ is the normalized ground-state of the total Hamiltonian H, and the partial trace is performed on tracing the degrees of freedom of subsystem B. One common measure for quantifying entanglement between the subsystem A and B is the von Neumann entanglement entropy, defined by S_von=-∑_i ω_ilnω_i, where ω_i is the ith eigenvalue of ρ_A.
For example, in gapless systems where the low-energy theory is described by conformal field theory, the ground-state entanglement entropy exhibits a logarithmic scaling behavior with respect to subsystem size. This scaling allows for extraction of the central charge, a key parameter in the conformal field theory, which serves as an indicator of the phase transition's universality class. <cit.>.
For gapped systems, the ground-state entanglement entropy follows an area law, meaning it is proportional to the size of subsystem's boundary.
The entanglement energy spectrum provides a detailed view of the quantum correlations between subsystems, with the entanglement entropy serving as a condensed summary of the information contained within this spectrum.
The entanglement energy ξ_i is the ith energy eigenvalue of a hypothetical Hamiltonian, called entanglement Hamiltonian H_E, which is defined by regarding the reduced density matrix as a thermal density matrix of the entanglement Hamiltonian at unity temperature, ρ_A=e^-H_E/Z. Where Z is the partition function that ensures Tr[ρ_A]=1.
Although the exact form of H_E is generally unknown, its eigenvalues can be obtained by taking logarithm on the eigenvalues of the reduced density matrix, ξ_i=-lnω_i-ln Z. The entanglement energy spectrum provides deep insights into topological systems and can be considered as a kind of `fingerprint' of these systems. <cit.> This `fingerprint' means that even when the entire system is divided into two halves, a topological system generates a gapless edge state, and this signature can be observed in the low-energy portion of the entanglement energy spectrum. <cit.>
For example in the 2-dimensional topological systems, such as the fractional quantum Hall systems, the low-energy portion of momentum-resolved entanglement spectrum presents the same state counting with the low-energy spectrum of the edge Hamiltonian. This relationship is known as the renowned Li-Haldane conjecture <cit.> or the edge-entanglement spectrum correspondence. <cit.>
Interesting phenomena related to the entanglement Hamiltonian H_E can also arise when the subsystem B is considered as an ancilla system copied from the subsystem A, and introducing strong enough interaction H_AB to create a nearly maximally entangled state between A and B. <cit.>
For example, in an antiferromagnetic spin ladder where the rung coupling is much stronger than the leg coupling, the ground-state forms multiple rung-singlets, resulting in a nearly maximally entangled state between the two legs. <cit.>
In contrast to the Li-Haldane conjecture in topological systems, the entire entanglement spectrum has some similarity to the energy spectrum of subsystem A. Remarkably, under carefully selected parameters, the entanglement Hamiltonian H_E≈β H_A can be proportional to the Hamiltonian of subsystem A by a constant β.
In other words, the finite temperature properties of an isolated system A can be approximated by the reduced density matrix ρ_A obtained from the ground-state of an enlarged system at zero temperature,
ρ_A≈1/Zexp[-β H_A],
where β is the inverse temperature as a function of system parameters.
This development allows the finite-temperature Density Matrix Renormalization Group (DMRG) based on matrix product states to evolve into a robust numerical method for finite-temperature strongly correlated systems. <cit.>
In this paper, we show that Eq. (<ref>) remains valid for non-Hermitian Hamiltonians. This implies that the ancilla trick used in finite-temperature algorithms <cit.> is also expected to be effective for non-Hermitian quantum systems.
In the next section, we briefly introduce non-Hermitian quantum mechanics and describe the non-Hermitian Hamiltonian for the spin-1/2 ladder. | null | null | null | null | The entanglement Hamiltonian, whose eigenvalue spectrum is known as the entanglement energy spectrum<cit.>, plays a crucial role in revealing quantum correlations between subsystems in many-body systems. Understanding its analytical form is essential for gaining deeper insights into the nature of quantum entanglement and for facilitating entanglement Hamiltonian tomography <cit.>. However, obtaining the entanglement Hamiltonian is often challenging, especially in non-Hermitian systems, where it can be complex and non-Hermitian itself. While studies on non-interacting systems, such as the non-Hermitian Su-Schrieffer-Heeger (SSH) model <cit.>, have made progress in deriving the entanglement Hamiltonian, the challenge is even more pronounced in interacting systems, where many-body effects add significant complexity. Although the real part of the entanglement energy can be derived from the eigenvalues of the reduced density matrix as Re[ξ_i]=-ln|ω_i|-ln Z,
capturing the full entanglement Hamiltonian remains crucial for exploring the deeper properties of quantum many-body systems.
In this paper, we explore the entanglement Hamiltonian of a non-Hermitian spin ladder system using perturbation theory, providing an analytical approach to this difficult problem. Remarkably, we find that the entanglement Hamiltonian in the non-Hermitian case can be approximated by the Hamiltonian of subsystem A, indicating that the thermal density matrix of an isolated non-Hermitian system A in equilibrium can be derived by the partial trace of an enlarged system. This suggests that the ancilla trick applied for developing finite-temperature Density Matrix Renormalization Group (DMRG) method <cit.>, can be extended to non-Hermitian many-body systems as well. Our work offers new insights into the study of quantum entanglement in non-Hermitian systems, potentially facilitating the development of advanced numerical algorithms for investigating their finite-temperature behavior.
Although non-Hermitian systems exhibit many phenomena absent in Hermitian systems, such as exceptional points (EP) and the non-Hermitian skin effect, many features of Hermitian systems persist in non-Hermitian counterparts when appropriately generalized. For instance, the entanglement entropy in critical systems still follows a logarithmic scaling <cit.>, fidelity susceptibility diverges near phase transitions or EPs <cit.>, and machine learning methods can be transferred from Hermitian to non-Hermitian systems <cit.>. In our work on entanglement Hamiltonians, we extend the Hermitian case to non-Hermitian systems and find that, for nearly maximally entangled states, the results remain consistent with the Hermitian case.
YCT is grateful to Chia-Yi Ju, Po-Yao Chang and Gunnar Möller for many invaluable discussions.
We thank the National Center for High-performance Computing (NCHC) of National Applied Research Laboratories (NARLabs) in Taiwan for providing computational and storage resources.
YCT is grateful for the support from the National Science and Technology Council (NSTC) of Taiwan under grant No. 113-2112-M-A49-015-MY3.
[Amico et al.(2008)Amico,
Fazio, Osterloh, and Vedral]Amico:RMP2008
author author L. Amico, author R. Fazio,
author A. Osterloh, and author V. Vedral, title title Entanglement in many-body systems, journal journal Rev. Mod. Phys. volume 80, pages 517 (year 2008)NoStop
[Laflorencie(2016)]Laflorencie:review
author author N. Laflorencie, title title Quantum entanglement
in condensed matter systems, journal
journal Physics Reports volume 646, pages 1 (year 2016)NoStop
[Calabrese and Cardy(2004)]Calabrese_Cardy_2004
author author P. Calabrese and author J. Cardy, title title Entanglement entropy and
quantum field theory, journal
journal J. Stat. Mech.: Theo. Exp. volume 2004, pages P06002 (year
2004)NoStop
[Calabrese and Cardy(2009)]Calabrese_Cardy_2009
author author P. Calabrese and author J. Cardy, title title Entanglement entropy and
conformal field theory, journal
journal J. Phys. A: Math. Theo. volume
42, pages 504005 (year 2009)NoStop
[Li and Haldane(2008)]Li-Haldane
author author H. Li and author F. D. M. Haldane, title title Entanglement spectrum as
a generalization of entanglement entropy: Identification of topological order
in non-Abelian fractional quantum Hall effect states, journal
journal Phys. Rev. Lett. volume 101, pages 010504 (year 2008)NoStop
[Chandran et al.(2011)Chandran, Hermanns, Regnault, and Bernevig]Bernevig
author author A. Chandran, author M. Hermanns,
author N. Regnault, and author B. A. Bernevig, title title Bulk-edge correspondence in
entanglement spectra,
journal journal Phys. Rev. B volume 84, pages 205136 (year
2011)NoStop
[Qi et al.(2012)Qi,
Katsura, and Ludwig]Qi2012
author author X.-L. Qi, author H. Katsura, and author A. W. W. Ludwig, title title General relationship between the
entanglement spectrum and the edge state spectrum of topological quantum
states, journal journal Phys. Rev. Lett. volume 108, pages 196402 (year
2012)NoStop
[Ho et al.(2015)Ho,
Cincio, Moradi, Gaiotto, and Vidal]Vidal2015
author author W. W. Ho, author L. Cincio, author H. Moradi, author
D. Gaiotto, and author
G. Vidal, title title Edge-entanglement spectrum correspondence in a nonchiral topological
phase and Kramers-Wannier duality, journal journal Phys. Rev. B volume 91, pages 125119 (year 2015)NoStop
[Zache et al.(2022)Zache,
Kokail, Sundar, and Zoller]Quantum
author author T. V. Zache, author C. Kokail,
author B. Sundar, and author P. Zoller, title
title Entanglement spectroscopy and probing the Li-Haldane
conjecture in topological quantum matter, journal journal Quantum volume 6, pages
702 (year 2022)NoStop
[Feiguin and White(2005)]Feiguin_2005
author author A. E. Feiguin and author S. R. White, title title Finite-temperature density
matrix renormalization using an enlarged Hilbert space, journal journal Phys. Rev. B volume 72, pages 220401(R) (year 2005)NoStop
[Verstraete et al.(2004)Verstraete, García-Ripoll, and Cirac]ancilla:Verstraete
author author F. Verstraete, author J. J. García-Ripoll, and author J. I. Cirac, title title Matrix product
density operators: Simulation of finite-temperature and dissipative
systems, journal journal Phys. Rev. Lett. volume 93, pages 207204 (year
2004)NoStop
[Zwolak and Vidal(2004)]ancilla:Vidal
author author M. Zwolak and author G. Vidal, title title Mixed-state dynamics in
one-dimensional quantum lattice systems: A time-dependent superoperator
renormalization algorithm, journal
journal Phys. Rev. Lett. volume 93, pages 207205 (year 2004)NoStop
[Poilblanc(2010)]Poilblanc2010
author author D. Poilblanc, title title Entanglement spectra of
quantum Heisenberg ladders, journal
journal Phys. Rev. Lett. volume 105, pages 077202 (year 2010)NoStop
[Cirac et al.(2011)Cirac,
Poilblanc, Schuch, and Verstraete]PEPS
author author J. I. Cirac, author D. Poilblanc,
author N. Schuch, and author F. Verstraete, title
title Entanglement spectrum and boundary theories with projected
entangled-pair states,
journal journal Phys. Rev. B volume 83, pages 245134 (year
2011)NoStop
[Peschel and Chung(2011)]Peschel_2011
author author I. Peschel and author M. Chung, title title On the relation between
entanglement and subsystem Hamiltonians, journal journal Europhysics Letters volume 96, pages 50006 (year 2011)NoStop
[Läuchli and Schliemann(2012)]Lauchli_2012
author author A. M. Läuchli and author J. Schliemann, title title Entanglement spectra
of coupled s=1/2 spin chains in a ladder geometry, journal journal Phys. Rev. B volume 85, pages 054403 (year 2012)NoStop
[Lundgren et al.(2013)Lundgren, Fuji, Furukawa, and Oshikawa]Rex2013
author author R. Lundgren, author Y. Fuji,
author S. Furukawa, and author M. Oshikawa, title title Entanglement spectra between coupled
Tomonaga-Luttinger liquids: Applications to ladder systems and
topological phases,
journal journal Phys. Rev. B volume 88, pages 245137 (year
2013)NoStop
[Predin(2017)]Predin_2017
author author S. Predin, title title Entanglement spectrum of
the degenerative ground state of Heisenberg ladders in a time-dependent
magnetic field,
journal journal Europhysics Letters volume 119, pages 57003 (year
2017)NoStop
[Fujita et al.(2018)Fujita,
Nakagawa, Sugiura, and Oshikawa]MachineLearning
author author H. Fujita, author Y. O. Nakagawa, author S. Sugiura, and author M. Oshikawa, title title Construction of Hamiltonians by
supervised learning of energy and entanglement spectra, journal journal Phys. Rev. B volume 97, pages 075114 (year 2018)NoStop
[Zhu et al.(2019)Zhu,
Huang, and He]Reconstruct
author author W. Zhu, author Z. Huang, and author Y.-C. He, title title Reconstructing entanglement Hamiltonian via
entanglement eigenstates, journal journal Phys. Rev. B volume 99, pages 235109 (year 2019)NoStop
[Yan and Meng(2023)]QMC
author author Z. Yan and author Z. Y. Meng, title title Unlocking the general relationship
between energy and entanglement spectra via the wormhole effect, journal journal Nat. Commun. volume 14, pages 2360 (year 2023)NoStop
[Bender and Boettcher(1998)]Bender_1998
author author C. M. Bender and author S. Boettcher, title title Real spectra in
non-Hermitian Hamiltonians having PT symmetry, journal journal Phys. Rev. Lett. volume 80, pages 5243 (year 1998)NoStop
[Bender(2007)]Bender_2007
author author C. M. Bender, title title Making sense of
non-Hermitian Hamiltonians, journal journal Rep. Prog. Phys. volume 70, pages 947 (year 2007)NoStop
[Brody(2013)]Brody_2014
author author D. C. Brody, title title Biorthogonal quantum
mechanics,
journal journal J. Phys. A: Math. Theo. volume 47, pages 035305 (year 2013)NoStop
[Hodaei et al.(2017)Hodaei,
Hassan, Wittek, Garcia-Gracia, El-Ganainy, Christodoulides, and Khajavikhan]EP2017
author author H. Hodaei, author A. U. Hassan,
author S. Wittek, author H. Garcia-Gracia, author
R. El-Ganainy, author
D. N. Christodoulides, and author M. Khajavikhan, title title Enhanced sensitivity at higher-order exceptional
points, journal
journal Nature volume 548, pages 187 (year 2017)NoStop
[Hatano and Nelson(1997)]Hatano-Nelson_1997
author author N. Hatano and author D. R. Nelson, title title Vortex pinning and
non-Hermitian quantum mechanics, journal journal Phys. Rev. B volume 56, pages 8651 (year 1997)NoStop
[Hatano and Nelson(1998)]Hatano-Nelson_1998
author author N. Hatano and author D. R. Nelson, title title Non-Hermitian
delocalization and eigenfunctions, journal journal Phys. Rev. B volume 58, pages 8384 (year 1998)NoStop
[Tu et al.(2022)Tu,
Tzeng, and Chang]Tu2022
author author Y.-T. Tu, author Y.-C. Tzeng, and author P.-Y. Chang, title title Rényi entropies and negative central charges in
non-Hermitian quantum systems, journal
journal SciPost Phys. volume 12, pages 194 (year 2022)NoStop
[Fossati et al.(2023)Fossati, Ares, and Calabrese]symm-res
author author M. Fossati, author F. Ares, and author P. Calabrese, title title Symmetry-resolved entanglement in critical
non-Hermitian systems, journal journal Phys. Rev. B volume 107, pages 205153 (year 2023)NoStop
[Chang et al.(2020)Chang,
You, Wen, and Ryu]Poyao
author author P.-Y. Chang, author J.-S. You,
author X. Wen, and author S. Ryu, title
title Entanglement spectrum and entropy in topological
non-Hermitian systems and nonunitary conformal field theory, journal
journal Phys. Rev. Res. volume 2, pages 033069 (year 2020)NoStop
[Hsieh and Chang(2023)]hsieh2023relating
author author C.-T. Hsieh and author P.-Y. Chang, title title Relating non-Hermitian
and Hermitian quantum systems at criticality, journal
journal SciPost Phys. Core volume 6, pages 062 (year 2023)NoStop
[Herviou et al.(2019a)Herviou, Regnault, and Bardarson]Bardarson_2019
author author L. Herviou, author N. Regnault, and author J. H. Bardarson, title title Entanglement spectrum and symmetries
in non-Hermitian fermionic non-interacting models, journal
journal SciPost Phys. volume 7, pages 069 (year 2019a)NoStop
[Herviou et al.(2019b)Herviou, Bardarson, and Regnault]SVD
author author L. Herviou, author J. H. Bardarson, and author N. Regnault, title title Defining a bulk-edge
correspondence for non-Hermitian Hamiltonians via singular-value
decomposition,
journal journal Phys. Rev. A volume 99, pages 052118 (year
2019b)NoStop
[Ju et al.(2019)Ju,
Miranowicz, Chen, and Nori]Ju2019
author author C.-Y. Ju, author A. Miranowicz,
author G.-Y. Chen, and author F. Nori, title title Non-Hermitian Hamiltonians and no-go theorems
in quantum information, journal journal Phys. Rev. A volume 100, pages 062118 (year 2019)NoStop
[Ju et al.(2022)Ju,
Miranowicz, Minganti, Chan,
Chen, and Nori]Ju2022
author author C.-Y. Ju, author A. Miranowicz,
author F. Minganti, author C.-T. Chan, author
G.-Y. Chen, and author
F. Nori, title title Einstein's quantum elevator: Hermitization of non-Hermitian
Hamiltonians via a generalized vielbein formalism, journal
journal Phys. Rev. Res. volume 4, pages 023070 (year 2022)NoStop
[Ju et al.(2024)Ju,
Miranowicz, Chen, Chen, and Nori]Ju2024
author author C.-Y. Ju, author A. Miranowicz,
author Y.-N. Chen, author G.-Y. Chen, and author F. Nori, title
title Emergent parallel transport and curvature in Hermitian
and non-Hermitian quantum mechanics, journal journal Quantum volume 8, pages
1277 (year 2024)NoStop
[Miri and Alù(2019)]EP:review2019
author author M.-A. Miri and author A. Alù, title title Exceptional points in optics and
photonics, journal journal Science volume
363, pages eaar7709 (year 2019)NoStop
[Li et al.(2023)Li,
Wei, Cotrufo, Chen,
Mann, Ni, Xu, Chen, Wang, Fan et al.]EP:review2023
author author A. Li, author H. Wei, author M. Cotrufo, author
W. Chen, author S. Mann, author X. Ni, author B. Xu, author J. Chen, author
J. Wang, author S. Fan, et al., title title Exceptional points and non-Hermitian photonics at the nanoscale, journal
journal Nature Nanotechnology volume
18, pages 706 (year 2023)NoStop
[Tzeng et al.(2021)Tzeng,
Ju, Chen, and Huang]Tzeng2021
author author Y.-C. Tzeng, author C.-Y. Ju,
author G.-Y. Chen, and author W.-M. Huang, title title Hunting for the non-Hermitian exceptional points
with fidelity susceptibility, journal
journal Phys. Rev. Res. volume 3, pages 013015 (year 2021)NoStop
[Tu et al.(2023)Tu,
Jang, Chang, and Tzeng]Tu2023
author author Y.-T. Tu, author I. Jang, author P.-Y. Chang, and author Y.-C. Tzeng, title
title General properties of fidelity in non-Hermitian quantum
systems with PT symmetry, journal journal Quantum volume 7, pages
960 (year 2023)NoStop
[Henry and Batchelor(2023)]Batchelor
author author R. A. Henry and author M. T. Batchelor, title title Exceptional points in
the Baxter-Fendley free parafermion model, journal
journal SciPost Phys. volume 15, pages 016 (year 2023)NoStop
[Ju and Huang(2024)]Ju:arxiv
author author C.-Y. Ju and author F.-H. Huang, title Quantum state
behavior at exceptional points and quantum phase transitions (year 2024), arXiv:2403.16503
[quant-ph] NoStop
[Yao and Wang(2018)]ZhongWang2018
author author S. Yao and author Z. Wang, title title Edge states and topological invariants
of non-Hermitian systems, journal
journal Phys. Rev. Lett. volume 121, pages 086803 (year 2018)NoStop
[Wang et al.(2022)Wang,
You, and Jen]wang2022non
author author Y.-C. Wang, author J.-S. You, and author H.-H. Jen, title title A non-Hermitian optical atomic mirror, journal journal Nat. Commun. volume 13, pages 4598 (year 2022)NoStop
[Kawabata et al.(2023)Kawabata, Numasawa, and Ryu]NHSE:PRX2023
author author K. Kawabata, author T. Numasawa, and author S. Ryu, title title Entanglement phase transition induced by the
non-Hermitian skin effect, journal journal Phys. Rev. X volume 13, pages 021007 (year 2023)NoStop
[Zhang et al.(2019a)Zhang, Wang, and Gong]DaJian2019a
author author D.-J. Zhang, author Q.-H. Wang, and author J. Gong, title title Quantum geometric tensor in PT-symmetric quantum
mechanics, journal journal Phys. Rev. A volume
99, pages 042104 (year
2019a)NoStop;
title title Time-dependent PT-symmetric quantum mechanics in
generic non-Hermitian systems, journal journal Phys. Rev. A volume 100, pages 062121 (year 2019b)NoStop
[Hu et al.(2024)Hu,
Ostrovskaya, and Estrecho]Hu:24
author author Y.-M. R. Hu, author E. A. Ostrovskaya, and author E. Estrecho, title title Generalized quantum
geometric tensor in a non-Hermitian exciton-polariton system, journal journal Opt. Mater. Express volume 14, pages 664 (year 2024)NoStop
[Schindler et al.(2023)Schindler, Gu, Lian, and Kawabata]Schindler_2023
author author F. Schindler, author K. Gu,
author B. Lian, and author K. Kawabata, title
title Hermitian bulk – non-Hermitian boundary
correspondence,
journal journal PRX Quantum volume 4, pages 030315 (year
2023)NoStop
[Hijii et al.(2005)Hijii,
Kitazawa, and Nomura]xxzladder:PRB
author author K. Hijii, author A. Kitazawa, and author K. Nomura, title title Phase diagram of S=1/2
two-leg XXZ spin-ladder systems, journal journal Phys. Rev. B volume 72, pages 014449 (year 2005)NoStop
[Note1()]Note1
note In PT-symmetric non-Hermitian systems, if the bipartition
does not break the PT symmetry, the reduced density matrix remains
PT-symmetric, meaning that its eigenvalues ω _i are either real or
come in complex conjugate pairs. Consequently, the partition function Z is
real.Stop
[Kokail et al.(2021)Kokail,
van Bijnen, Elben, Vermersch, and Zoller]tomography
author author C. Kokail, author R. van Bijnen,
author A. Elben, author B. Vermersch, and author P. Zoller, title
title Entanglement Hamiltonian tomography in quantum
simulation, journal journal Nature Physics volume
17, pages 936 (year 2021)NoStop
[Rottoli et al.(2024)Rottoli, Fossati, and Calabrese]Calabrese_2024
author author F. Rottoli, author M. Fossati, and author P. Calabrese, title title Entanglement Hamiltonian in the
non-Hermitian SSH model, journal journal J. Stat. Mech.: Theo. Exp. volume 2024, pages 063102 (year 2024)NoStop
[Sayyad and Lado(2024)]Sayyad_2024
author author S. Sayyad and author J. L. Lado, title title Transfer learning from
Hermitian to non-Hermitian quantum many-body physics, journal journal J. Phys.: Cond. Mat. volume 36, pages 185603 (year 2024)NoStop |
http://arxiv.org/abs/2409.17940v1 | 20240926151355 | Intrinsic statistical regularity of topological charges revealed in dynamical disk model | [
"Ranzhi Sun",
"Zhenwei Yao"
] | cond-mat.soft | [
"cond-mat.soft",
"cond-mat.mtrl-sci",
"cond-mat.stat-mech"
] |
[][email protected]
School of Physics and Astronomy, and Institute of Natural
Sciences, Shanghai Jiao Tong University, Shanghai 200240, China
§ ABSTRACT
Identifying ordered structures hidden in the packings of particles is a
common scientific question in multiple fields. In this work, we investigate
the dynamical organizations of a large number of initially randomly packed
repulsive particles confined on a disk under the Hamiltonian dynamics by
the recently developed algorithm called the random batch method. This
algorithm is specifically designed for reducing the computational complexity
of long-range interacting particle systems. We highlight the revealed
intrinsic statistical regularity of topological charges that is otherwise
unattainable by the continuum analysis of particle density. We also identify
distinct collective dynamics of the interacting particles under short- and
long-range repulsive forces. This work shows the robustness and
effectiveness of the concept of topological charge for characterizing the
convoluted particle dynamics, and demonstrates the promising potential of
the random batch method for exploring fundamental scientific questions
arising in a variety of long-range interacting particle systems in soft
matter physics and other relevant fields.
Intrinsic statistical regularity of topological charges revealed in
dynamical disk model
Ranzhi Sun and Zhenwei Yao
September 28, 2024
==========================================================================================
§ INTRODUCTION
The packing of interacting particles in confined geometries is a common theme
in a host of soft matter
systems <cit.>.
In particular, the confluence of theoretical and experimental investigations in
the past few decades on the static two-dimensional (2D) packing problem on both
flat <cit.>
and
curved <cit.>
spaces reveals the crucial role of topological defects in the organizations of
the particles. Disclinations as a kind of fundamental topological defect
emerge in the crystalline packings of particles on curved
spaces <cit.> or under
mechanical <cit.> and
thermal <cit.> stimuli. In a triangular
lattice, a p-fold disclination refers to a vertex of coordination number
p, and it carries topological charge
6-p <cit.>. According to the continuum
elasticity theory, in analogy to electric charges, disclinations of the same
sign repel and unlike signs attract <cit.>. These
particle-like excitations are actively involved in several important physical
processes, such as the screening of the substrate
curvature <cit.> and
the healing of the disrupted crystalline
order <cit.>.
In our previous work on the inhomogeneous static packings of particles in
mechanical equilibrium confined on a disk, the unique perspective of a
topological defect allows us to reveal the characteristically negative sign of
the total topological charge Q and the associated hyperbolic
geometry <cit.>. Recently, the static disk model was extended to the
dynamical regime <cit.>. It was found that in the statistical
sense, the hyperbolic geometry as reflected by the negative sign of the
time-averaged total topological charge ⟨ Q⟩ is preserved in the
convoluted dynamical evolution of the particle
configuration <cit.>.
However, in our previous work the dynamical disk model consisted of a relatively
small number of particles as limited by the long computational
time <cit.>. It is thus still not clear if the revealed
statistical regularity of topological charges is caused by the boundary
effect or due to the intrinsic effect of the long-range force. Elucidating
this issue yields insights into the intrinsic order in seemingly irregular and
temporally-varying packings of particles arising in the physical systems of
charged colloids <cit.>, complex (dusty)
plasmas <cit.>, and a variety of charged
entities at the nanoscale in electrolyte environments <cit.>.
Furthermore, the relatively small number of particles in the previously
studied disk model hinders one from performing a coarse-graining procedure to
reveal and analyze the underlying flow patterns created by the repulsive
forces of varying range.
To address these questions, it is crucial to extend the dynamic disk model to
the large-N regime, where N is the number of particles. The challenge for
simulating the dynamics of a long-range interacting system comes from the high
computational complexity: the computation time is on the order of O(N^2)
per time step, and the whole simulation process involves up to one million
simulation steps. To solve this problem, we employ the recently developed
algorithm called the random batch method (RBM), which is specifically designed for
reducing the computational complexity of long-range interacting particle
systems down to the order of
O(N) <cit.>. Numerical experiments
show that the RBM is capable of capturing both transient dynamical
structures <cit.> and crucial statistical
features <cit.>.
The goal of this work is to employ the powerful computational tool of the RBM
for exploring the statistical and dynamical physics of the disk model
consisting of a large number of repulsive particles (up to 150,000).
Specifically, we aim at identifying the intrinsic statistical regularity of
topological charges, and studying the flow patterns under the repulsive forces
of varying range. To this end, in our model the particles confined on the disk
interact by the screened Coulomb potential; the range of the force is
conveniently controlled by the screening length. In the initial state, both
the positions and the velocities of the particles are randomly distributed. The
motion of the particles under the interacting force is governed by the
deterministic Hamiltonian dynamics. The equations of motion are numerically
integrated by the standard Verlet method <cit.>.
The main results of this work are presented below. By analyzing the
trajectories of 150,000 particles, we show that under a long-range repulsive
force, the time-averaged total topological charge ⟨ Q⟩ is
intrinsically negative corresponding to the hyperbolic geometry. Systematic
simulations of the disk model at varying screening length show the regulation
of the defect structure by the range of the repulsive force. These results
suggest the robustness and effectiveness of the concept of topological charge
for characterizing the convoluted particle dynamics at varying interaction
range. Furthermore, we reveal the distinct dynamical organizations of the
particles under short- and long-range repulsive forces by analyzing the
coarse-grained velocity fields, and we demonstrate the capability of the RBM in
capturing featured dynamical structures such as vortices and radial flows. In
this work, the revealed intrinsic statistical regularity of topological
charges and the featured flow patterns advance our understanding on the
convoluted dynamical organizations of geometrically confined repulsive
particles.
§ MODEL AND METHOD
The model consists of N identical point particles of mass m confined on a
disk of radius r_0; the particles interact by the pairwise screened Coulomb
potential. The evolution of the particle configuration is governed by
the deterministic Hamiltonian dynamics (without introducing any thermal noise):
H = ∑_{i=1}^{N} p_i^2/(2m) + ∑_{i≠j} (β/r_ij) exp(-r_ij/λ),
where p⃗_i is the momentum of particle i, r_ij is the distance
between particles i and j, λ is the screening length, and
β=q_0^2/(4πϵ), where ϵ is the dielectric constant. The
rigid boundary condition is implemented by the reflection of particle
trajectory at the boundary. In this work,
the units of mass, length, and time are the particle mass m, the mean distance of nearest particles a, and τ_0 = √(m a^3/β), respectively. The units of velocity and force are thus a/τ_0 and β/a^2. Under the assumption that the particles are arranged in a
triangular lattice, the lattice spacing a is estimated as
a/r_0 = √(2π/(√3 N)).
In the initial state, the velocity v⃗_ini of each particle is random
in both magnitude and direction; v_ini∈ (0,1) and θ∈ (0,2π),
where θ is the angle of the velocity vector v⃗_ini with
respect to some reference direction. The initial positions of the particles
are randomly distributed by the procedure of random disk packing. The
technical details are presented in Appendix A. We employ the standard Verlet
method to numerically integrate the equations of motion under the specified
rigid boundary condition and the random initial
conditions <cit.>. Specifically, the position of particle
i at time t+2h is determined by its positions at times
t+h and t via
x⃗_i(t+2h) = 2 x⃗_i(t+h) - x⃗_i(t) + ẍ⃗_i(t+h) h^2 + O(h^4),
where the step size h=10^-3 in simulations.
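For concreteness, the following Python sketch illustrates the integration scheme described above for a small number of particles: the rigorous O(N^2) forces are obtained from the gradient of the screened Coulomb pair potential β exp(-r/λ)/r, and the positions are advanced with the Verlet update given above. The function and variable names (pair_forces, verlet_run, n_steps) are ours, the reduced units m = a = β = 1 are assumed, and the reflecting boundary of the disk is omitted here for brevity.

```python
import numpy as np

def pair_forces(x, lam, beta=1.0):
    """Rigorous O(N^2) forces from the screened Coulomb potential beta*exp(-r/lam)/r."""
    n = len(x)
    f = np.zeros_like(x)
    for i in range(n):
        d = x[i] - x                      # vectors pointing from every particle j to particle i
        r = np.linalg.norm(d, axis=1)
        r[i] = np.inf                     # exclude the self-interaction
        # |f_ij| = beta * exp(-r/lam) * (1/r^2 + 1/(lam*r)), repulsive along d
        mag = beta * np.exp(-r / lam) * (1.0 / r**2 + 1.0 / (lam * r))
        f[i] = ((mag / r)[:, None] * d).sum(axis=0)
    return f

def verlet_run(x0, v0, lam, h=1e-3, n_steps=1000, m=1.0):
    """Advance the positions with x(t+2h) = 2 x(t+h) - x(t) + a(t+h) h^2."""
    x_prev = x0.copy()
    x_curr = x0 + v0 * h + 0.5 * (pair_forces(x0, lam) / m) * h**2
    for _ in range(n_steps):
        acc = pair_forces(x_curr, lam) / m
        x_prev, x_curr = x_curr, 2.0 * x_curr - x_prev + acc * h**2
    return x_curr

rng = np.random.default_rng(0)
x0 = 5.0 * (rng.random((50, 2)) - 0.5)    # 50 particles scattered in a small region
v0 = rng.random((50, 2)) - 0.5            # random initial velocities
x_final = verlet_run(x0, v0, lam=10.0)
```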
The most time-consuming procedure in the Verlet integration scheme for the
long-range interacting particle system is the calculation of the forces. The
computation time is on the order of O(N^2) per time step. This imposes a
challenge to explore the large-N systems of interest.
To resolve this issue, we employ the recently developed algorithm named the
random batch method (RBM) by
mathematicians <cit.>.
This algorithm is specifically designed for reducing the computing time for
long-range interacting particle systems down to the order of O(N). In the
following, we introduce our method of calculating force using RBM. Consider a
system consisting of a large number of point particles with pairwise
interaction. First, the interacting force on particle i by particle
j, f⃗_ij, is decomposed into two parts
<cit.>:
f⃗_ij(r_ij) = f⃗_ij(r_ij) [1-H(r_ij - r_c) ]
+ f⃗_ij(r_ij) H(r_ij - r_c)
≡f⃗^(1)_ij + f⃗^(2)_ij
where the Heaviside step function
H(r_ij - r_c) = 1 for r_ij ≥ r_c, and 0 for r_ij < r_c.
r_ij is the distance between particle i and particle j. r_c is some
cutoff distance that is comparable with the mean distance of nearest particles
(see Fig. <ref>). Now, the total force on particle i can be written as
F⃗_i = ∑_{j≠i} f⃗^(1)_ij + ∑_{j≠i} f⃗^(2)_ij,
The first term in Eq. (<ref>) is the sum of the forces exerted by all
of the adjacent particles of particle i within the circular region of radius
r_c, which is calculated rigorously. The second term involves the forces
from all of the remaining particles, and it is approximated by the RBM to
promote computational efficiency.
The schematic plot for illustrating the RBM is presented in
Fig. <ref>. All of the N particles on the disk are labeled from 1
to N. For each instantaneous
particle configuration in the time evolution, the N
particles are randomly divided into a certain number of small batches by the
particle labels (not by the particle positions). Each batch contains p
particles. In Fig. <ref>, the particles marked by small triangles
belong to the same batch Ω_p; p=3 in this case. Only the particles
belonging to the same batch as particle i are counted in the calculation for
the second term in Eq. (<ref>). The summation of the forces from
these particles is denoted as F⃗_batch. Multiplying
F⃗_batch by a factor of (N-1)/(p-1) gives the approximate value for
the second term in Eq. (<ref>). To conclude, the total force on
particle i is approximated by <cit.>
F⃗_i = ∑_{r_ij < r_c} f⃗_ij + [(N-1)/(p-1)] ∑_{j∈Ω_p} f⃗_ij(x⃗_ij) H(|x⃗_ij| - r_c).
In this work, the value of p is taken to be 2 by following the
convention <cit.>. Note that due to the random
selection of the particles, the RBM brings in random forces on the particles,
which results in a monotonic increase of the kinetic energy (temperature).
For the sake of the conservation of total energy, we perform the procedure of
cooling in simulations (see Appendix B). Specifically, the velocity of each
particle is rescaled by a common factor once the amount of the increased energy
exceeds some threshold value, which is set to be about 5% of the total energy
in the initial state.
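A minimal Python sketch of this batch approximation, with batch size p = 2: the near-field term inside r_c is summed exactly, while the far-field term is restricted to the randomly chosen batch partners and reweighted by (N-1)/(p-1). The helper names (rbm_forces, batch_of) are ours; for clarity the sketch still scans all pair distances for each particle, whereas the O(N) scaling in practice relies on a neighbor (cell) list for the near field and on touching only the p-1 batch partners for the far field.

```python
import numpy as np

def rbm_forces(x, lam, r_c, beta=1.0, p=2, rng=None):
    """Random batch approximation of the total forces: exact near field (r < r_c)
    plus the reweighted contribution of the batch partners for the far field."""
    rng = rng or np.random.default_rng()
    n = len(x)
    f = np.zeros_like(x)
    labels = rng.permutation(n)                     # random partition of labels into batches of size p
    batch_of = np.empty(n, dtype=int)
    for b in range(0, n, p):
        batch_of[labels[b:b + p]] = b
    for i in range(n):
        d = x[i] - x
        r = np.linalg.norm(d, axis=1)
        r[i] = np.inf                               # exclude the self-interaction
        mag = beta * np.exp(-r / lam) * (1.0 / r**2 + 1.0 / (lam * r))
        pair = (mag / r)[:, None] * d               # all pairwise force vectors acting on particle i
        near = r < r_c                              # exact near-field sum
        far = (batch_of == batch_of[i]) & ~near     # batch partners beyond the cutoff
        f[i] = pair[near].sum(axis=0) + (n - 1) / (p - 1) * pair[far].sum(axis=0)
    return f
```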
Numerical experiments show that the RBM is capable of capturing both
transient dynamical structures <cit.> and crucial
statistical features <cit.>. The RBM works well in a series of
statistical and dynamical problems, such as the Dyson Brownian motion,
Thomson's problem, stochastic dynamics of wealth and opinion
dynamics <cit.>. This algorithm has wide
applications in quantum
physics <cit.>, plasma
physics <cit.>, hydromechanics <cit.>, machine
learning <cit.>, chemistry and
materials <cit.>. In this work, the RBM is used as a
powerful tool for exploring the physics of
long-range interacting systems containing a large number of particles (up to
150,000).
§ RESULTS AND DISCUSSION
In this section, we discuss the dynamics of the particles on
the disk under both short- and long-range repulsive forces. In Sec. 3 A, the particle
configurations are analyzed from the perspective of the underlying topological
defect structure. Statistical regularity in the distribution of
topological charge is revealed.
In Sec. 3 B, we analyze the characteristic coarse-grained velocity field, and
we reveal the distinct flow patterns created by the short- and long-range repulsive forces.
We also discuss the relaxation of particle speed in both short- and long-range interacting systems.
§.§ Statistical regularity in the distribution of topological charge
We first briefly introduce the concept of a topological defect in a triangular
lattice <cit.>. By the standard Delaunay triangulation, the
neighbors of each particle on the plane are uniquely determined, and the
topological charge q_i for any particle i is defined by the formula
q_i = 6 - z_i,
where z_i is the coordination number of particle
i <cit.>. A particle of coordination number z is called
a z-fold disclination. In analogy to electric charges, disclinations of the
same sign repel and unlike signs attract according to the continuum elasticity
theory <cit.>. Topological defects are important entities and useful concepts for understanding a variety of elastic and plastic behaviors
of crystals <cit.>.
In previous work, we analyzed the static equilibrium distribution of the
long-range repulsive particles on the disk from the unique perspective of
topological defects <cit.>. Specifically, we focused on the
total topological charge Q:
Q ≡ ∑_{r_i<r'} q_i,
where r_i is the distance from particle i, which carries topological charge q_i, to the center of the disk. The sum runs over all of the topological charges within the circular
domain of radius r', where r' is smaller than the radius of the disk by one
lattice spacing. It is found that due to the intrinsic inhomogeneity created
by the long-range electrostatic force, the sign of the total topological charge is
negative, implying the existence of a hyperbolic geometric structure
underlying the inhomogeneity phenomenon of particle
packings <cit.>.
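A minimal Python sketch of this defect analysis, assuming the disk is centered at the origin: the coordination numbers z_i are read off the Delaunay triangulation (here via scipy.spatial.Delaunay), the charges follow from q_i = 6 - z_i, and Q is accumulated over the particles inside the circle of radius r'. The function names are ours; the restriction to r' handles the particles near the boundary, whose Delaunay coordination numbers are not meaningful, and ⟨Q⟩ is then simply the average of Q over statistically independent snapshots.

```python
import numpy as np
from scipy.spatial import Delaunay

def topological_charges(points):
    """q_i = 6 - z_i, with z_i the coordination number from the Delaunay triangulation."""
    tri = Delaunay(points)
    indptr, _ = tri.vertex_neighbor_vertices
    z = np.diff(indptr)                  # number of Delaunay neighbors of each particle
    return 6 - z

def total_charge(points, r_prime):
    """Total topological charge Q of the particles inside the circle of radius r'."""
    q = topological_charges(points)
    inside = np.linalg.norm(points, axis=1) < r_prime
    return int(q[inside].sum())

def mean_total_charge(snapshots, r_prime):
    """Time-averaged <Q> over a list of statistically independent configurations."""
    return np.mean([total_charge(pts, r_prime) for pts in snapshots])
```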
The static disk model was recently extended to the dynamical regime, and the
temporally-varying particle configurations on the disk were analyzed in terms
of topological defects <cit.>. An important observation is that
the time-averaged total topological charge ⟨ Q ⟩ is still
negative in the dynamical regime; the value of ⟨ Q ⟩ is obtained
by averaging over 100 statistically independent instantaneous particle
configurations after the system reaches equilibrium. In other words, in the
statistical sense, the characteristically negative sign of the total
topological charge as found in the static disk model is preserved in the
complicated dynamical evolution of the particle configuration. Furthermore, it
is found that the sign of the time-averaged total topological charge ⟨
Q ⟩ switches from positive to negative with the increase in the range
of interaction.
However, the revealed statistical regularity in ⟨ Q ⟩ occurs in
systems consisting of a relatively small number of particles; the maximum
value of N in the work of Ref. <cit.> is 5000 as limited by the
computational time. The boundary effect may be involved. It is
important to clarify whether the statistical regularity in ⟨ Q ⟩ reflects the intrinsic effect of the long-range interaction or is caused by the boundary
effect.
To address this question, we shall investigate the regime of
λ/r_0 ≪ 1, where the boundary effect on the statistical distribution
of the particles could be reduced. In terms of the number of particles, by
making use of Eq. (<ref>),
λ/r_0 = (λ/a) √(2π/(√3 N)) ≪ 1.
Furthermore, we tune the value of λ/a to be much larger than
unity for exploring the interested regime of long-range interaction. To
conveniently discuss both conditions of λ/r_0 ≪ 1 and λ/a ≫
1, we rewrite Eq.(<ref>) as
N = (2π/√3) [(λ/a)/(λ/r_0)]^2 ≫ 1.
Equation (<ref>) shows that the number of particles shall be
sufficiently large to simultaneously fulfill the above-mentioned two
conditions. For a given value of λ/r_0, the number of particles
increases with λ/a quadratically. For the example of
λ/r_0=0.1, when λ/a = 10, N=36 275. As the value of
λ/a is increased to 20, N=145 103. By striking a balance of the
limited computational resources and the required sufficiently large number of
particles, in simulations we set the value of N as N=150 000, for which
the disk size is as large as 10 times the screening length even when the
screening length is up to 20 times the mean lattice spacing. For
a shorter screening length, the ratio r_0/λ is even larger.
We first test the RBM by applying it to a relatively small system consisting
of 5000 particles under varying screening length, which has been studied based
on the rigorous calculations of the interacting forces <cit.>. It
turns out that the ⟨ Q ⟩-λ curves based on both the
RBM and the rigorous calculations are almost identical as shown in
Fig. <ref>. On both ⟨ Q ⟩-λ curves, the sign
of the time-averaged total topological charge ⟨ Q ⟩ turns from
positive to negative when the value of λ is increased to about 0.3;
the magnitudes of the error bars are also similar. The invariance of the
⟨ Q ⟩-λ curve is remarkable, considering that in the
calculation of the force on each particle only a few randomly picked particles
instead of all of the other particles are counted. Furthermore, the
histograms of the total topological charge at typical values of λ as
obtained by the RBM are similar to those obtained by the rigorous calculations of the
interacting forces.
Here, we shall point out that due to the approximation of the force in the
RBM, the resulting microscopic particle trajectories inevitably deviate from
those obtained by the rigorous calculations of the forces. However, the
statistical property in question is insensitive to the discrepancy in the
particle trajectories. Specifically, the agreement of the relevant results
based on the RBM and the rigorous calculations shows the capability of the RBM
in calculating the characteristic statistics of the topological charges in the
temporally-varying particle configurations under both short- and long-range
repulsions. As such, the RBM serves as a proper tool for exploring the
underlying statistical regularity in the dynamical disk model. These results
also suggest the robustness and effectiveness of the quantity Q for
characterizing the convoluted dynamical evolutions of short- and long-range
interacting particle systems.
Now, we employ the RBM to explore the large-N regime. The
results are summarized in Fig. <ref> for N=150 000. We collect
statistically independent particle configurations in the dynamical evolution
of the system in equilibrium, and we present the histograms of the total
topological charge Q for λ/a=0.05 and 20 in
Figs. <ref> and <ref>(b). It clearly shows that the short-
and long-range interacting systems could be distinguished by the distribution
of Q.
Specifically, for the case of λ/a=0.05 in Fig. <ref>, the values of Q are uniformly positive in all of the instantaneous
states. In contrast, for the long-range interacting system of λ/a=20
in Fig. <ref>, the values of Q become negative. Here, the
boundary effect as measured by the ratio λ/r_0 could be ignored due to
the large value of N. In both cases of λ/a=0.05 and 20,
the value of r_0 is larger than λ by at least one order of magnitude
for N=150 000. Therefore, the characteristic distributions of the total
topological charge Q in Fig. <ref> reflect the intrinsic effect
of the physical interactions on the dynamical organizations of the particles.
We further inquire how the range of the repulsive force influences the
distributions of the total topological charge Q. To address this question,
we systematically investigate the dynamical evolution of the particles at
varying λ. The quantitative dependence of the time-averaged total
topological charge ⟨ Q⟩ on the screening length λ is
shown in Fig. <ref>. We see that the ⟨
Q⟩-curve decreases monotonically with λ. The turning point for
the value of ⟨ Q⟩ switching from positive to negative is located
at λ_c ≈ 0.3.
Scrutiny of the ⟨ Q⟩-curves at varying N shows that the value
of the time-averaged total topological charge ⟨ Q⟩ increases
with the increase of N. However, the value of λ_c, where the sign of
⟨ Q⟩ is changed, is insensitive to the variation of N. The
value of λ_c is very close to 0.3 as the number of particles is
increased from 5000 to 150 000. Comparison of Figs. <ref> and 3
shows that the relative fluctuation of ⟨ Q⟩ (as indicated by the
relative size of the error bars) decreases with the increase of N.
§.§ Characteristic coarse-grained velocity field and the relaxation of particle speed
The RBM allows one to explore the large-N regime, and to perform a
coarse-graining procedure for obtaining a smooth velocity field. In this
subsection, from the perspective of analyzing the coarse-grained velocity
field, we discuss the dynamical organizations of particles on the disk under
short- and long-range repulsive forces. Furthermore, we examine the
relaxation of particle speed in large-N systems of varying λ, by
which the validity of the RBM in capturing relaxation dynamics is also
tested.
To obtain a smooth coarse-grained velocity field, the circumscribed square of
the disk is evenly gridded into n× n square-shaped cells. Each cell is
associated with a spatio-temporally averaged velocity vector, which is
obtained by averaging over the velocities of the particles belonging to the specific
cell and the two layers of surrounding cells for multiple instantaneous
states of equal time interval δ t. For revealing the
characteristic behaviors of short- and long-range interacting systems, we
compare the coarse-grained velocity fields under identical conditions. The
values of the relevant parameters are listed here: n=80, δ t =
2τ_0, and t∈ [0.1τ_0, 60.1τ_0].
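A sketch of this coarse-graining procedure, with our own helper name coarse_grain: each snapshot deposits its particle velocities into the n × n cells of the circumscribed square, and the average assigned to a cell is then widened over the two layers of surrounding cells, as in the text. Cells that never receive a particle (e.g. those outside the disk) are left as zero vectors.

```python
import numpy as np

def coarse_grain(snapshots, r0, n=80):
    """Spatio-temporal average of particle velocities on an n x n grid.

    snapshots: list of (positions, velocities) pairs, each of shape (N, 2);
    the grid covers the circumscribed square [-r0, r0]^2 of the disk."""
    vsum = np.zeros((n, n, 2))
    count = np.zeros((n, n))
    for pos, vel in snapshots:
        ix = np.clip(((pos[:, 0] + r0) / (2 * r0) * n).astype(int), 0, n - 1)
        iy = np.clip(((pos[:, 1] + r0) / (2 * r0) * n).astype(int), 0, n - 1)
        np.add.at(vsum, (ix, iy), vel)
        np.add.at(count, (ix, iy), 1)
    # widen each cell's average over the two layers of surrounding cells
    field = np.zeros_like(vsum)
    for i in range(n):
        for j in range(n):
            i0, i1 = max(i - 2, 0), min(i + 3, n)
            j0, j1 = max(j - 2, 0), min(j + 3, n)
            c = count[i0:i1, j0:j1].sum()
            if c > 0:
                field[i, j] = vsum[i0:i1, j0:j1].sum(axis=(0, 1)) / c
    return field
```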
In Figs. <ref>(a), <ref>(e) and <ref>(g), we show the
coarse-grained velocity fields of distinct patterns at varying screening
length; the corresponding zoomed-in plots are also presented. The color bars
indicate the magnitude of kinetic energy as calculated according to the
coarse-grained velocity. For the case of λ=0.05 in
Figs. <ref>(a)-<ref>(d), small-scale dynamical structures are
developed under the short-range repulsive force. Specifically, we identify
vortices of winding number +1 and -1; the red loops and lines are
introduced for visual convenience. Figure <ref>(b) shows a deformed
+1 vortex surrounded by a clockwise velocity pattern. According to the
elasticity theory of vortex, vortices of the same sign repel and unlike signs
attract <cit.>. Here, in the dynamical regime, we
also observe a chain of connected +1 vortices with alternating clockwise
and counter-clockwise orientations as shown in Fig. <ref>(c); the
negative and positive signs represent clockwise and counterclockwise
directions, respectively. In Fig. <ref>(d), we show a vortex of winding
number -1. For both +1 and -1 vortices, the velocity field vanishes at
the central singular point in the continuum limit. As such, the regions of
light color in Fig. <ref>(a) correspond to the singular domains of
vortices. The entire velocity field is essentially composed of vortices of
winding number ± 1.
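Where the text classifies vortices by their winding number, the following sketch (with our own helper winding_number) accumulates the change of the velocity angle of the coarse-grained field from the previous sketch along a small counterclockwise square loop around a cell; a net change of +2π or -2π identifies a +1 or -1 vortex, respectively.

```python
import numpy as np

def winding_number(field, i, j, half=3):
    """Winding number of the coarse-grained velocity field around cell (i, j):
    net change of the velocity angle along a counterclockwise square loop of
    half-width `half` cells, in units of 2*pi.  The cell (i, j) must lie at
    least `half` cells away from the edge of the grid."""
    loop = [(i + half, j + k) for k in range(-half, half)]        # right edge, upward
    loop += [(i + k, j + half) for k in range(half, -half, -1)]   # top edge, leftward
    loop += [(i - half, j + k) for k in range(half, -half, -1)]   # left edge, downward
    loop += [(i + k, j - half) for k in range(-half, half)]       # bottom edge, rightward
    ang = np.array([np.arctan2(field[a, b, 1], field[a, b, 0]) for a, b in loop])
    dth = np.diff(np.append(ang, ang[0]))
    dth = (dth + np.pi) % (2.0 * np.pi) - np.pi                   # wrap angle jumps to (-pi, pi]
    return int(round(dth.sum() / (2.0 * np.pi)))
```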
In contrast, the coarse-grained velocity fields at an early stage in systems of larger screening length feature an outward radial flow near the
boundary, as shown in Figs. <ref>(e)-<ref>(h) for the cases of
λ=10 and 1000, respectively. The formation of the radial
flow pattern could be attributed to the long-range nature of the repulsive
force; the particles near the boundary are subject to outward radial force
from interior particles. The boundary radial flow in the long-range
repulsive systems in Figs. <ref>(e)-<ref>(h) implies that the
total winding number of the velocity field is +1. Note that this flow is
connected to the phenomenon of density inhomogeneity via the accumulation of
particles near the boundary in the static equilibrium packings of long-range
repulsive particles on a disk <cit.>. Closer examination of the
zoomed-in velocity fields in Figs. <ref>(f) and <ref>(h) shows
that the region of the radial flow tends to extend towards the center of the
disk with the increase of the screening length, indicating the enhanced
alignment of velocity vectors under larger screening length. The ensuing
velocity fields for both cases of λ=10 and 1000 become
highly irregular.
Comparison of the distinct velocity fields shaped by the short- and long-range
forces in Fig. <ref> demonstrates the significant impact of the range of
interaction on the dynamical pattern. Remarkably, the RBM is able to capture
the characteristic dynamical structures in the Hamiltonian system consisting
of over 100000 interacting particles. The validity of the RBM in
approximating both the deterministic Hamiltonian dynamics and the stochastic dynamics
has been rigorously discussed from a mathematical
viewpoint <cit.>. Here, we provide an important
physical example of a classical particle system that is of wide interest to the
communities of soft matter physics and statistical
physics <cit.>.
We finally briefly discuss the relaxation process of the distribution of
particle speed in both short- and long-range interacting systems by the RBM.
According to the canonical ensemble formalism, the equilibrium distributions
of particle position and momentum are statistically independent, if the
Hamiltonian can be written as a sum of two terms containing coordinates and
momenta, respectively. Thus, regardless of the range of
interaction, the distribution of the particle speed is expected to converge to the
Maxwell-Boltzmann distribution for the dynamical disk model consisting of a
large number of particles.
The results for the cases of λ/a=0.05 and 10 are presented
in Fig. <ref>. We see that in both cases, the
distribution curves indeed tend to converge to the 2D Maxwell-Boltzmann
distribution (solid red curves):
f(v) δv = A v exp(-α v^2) δv,
where A and α are fitting parameters. For the case of
λ/a=0.05 in Fig. <ref>(a), the final
distribution curve at t=149.04τ_0 (black) agrees well with the 2D
Maxwell-Boltzmann distribution. For the case of λ/a=10 in
Fig. <ref>(b), we notice a slight deviation of the
final distribution curve at t=146.72τ_0 (black) and its optimal fitting
curve (red). This deviation could be attributed to the RBM-induced heating
phenomenon and the procedure of cooling in simulations, which is implemented
for the sake of the conservation of total energy. Note that the heating effect
caused by the RBM is even more pronounced for the long-range interacting
system; more information is provided in Appendix B.
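The comparison with the 2D Maxwell-Boltzmann form given above can be reproduced with a least-squares fit of the speed histogram; a minimal sketch, with our own helper names and scipy.optimize.curve_fit, is given below. For a normalized distribution one would have A = 2α, but here the two parameters are fitted independently, as in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def mb2d(v, A, alpha):
    """2D Maxwell-Boltzmann speed distribution f(v) = A * v * exp(-alpha * v^2)."""
    return A * v * np.exp(-alpha * v**2)

def fit_speed_distribution(velocities, bins=60):
    """Histogram the particle speeds and fit the 2D Maxwell-Boltzmann form."""
    speeds = np.linalg.norm(velocities, axis=1)
    hist, edges = np.histogram(speeds, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    (A, alpha), _ = curve_fit(mb2d, centers, hist, p0=(1.0, 1.0))
    return A, alpha
```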
From Fig. <ref>, we also see the distinct kinetic
pathways of the relaxation process for the short- and long-range interacting
systems. For the system of λ/a=0.05 in Fig. <ref>(a), the peak of the distribution curve keeps moving leftward
to approach the location of the peak in the equilibrium curve. This could be
understood from the perspective of energy transfer. Specifically,
kinetic energy is converted into potential energy in the relaxation of the
short-range interacting system (see Appendix B). In contrast, for the system
of λ/a=10 in Fig. <ref>(b), we observe the
initial leftward and the subsequent rightward movement of the peak of the
distribution curve; the rightward movement after t ≈ 2 τ_0
corresponds to the conversion of potential energy to kinetic energy.
To conclude, the relaxation process, as reflected in the movement of the peak
of the speed distribution curve, is classified into two categories: the
“overdamped" and the “underdamped" relaxation dynamics for the short- and
long-range interacting systems, respectively. These observations imply that
the short- and long-range interacting systems admit distinct microscopic
dynamical modes for the relaxation of the speed distribution.
§ CONCLUSION
In summary, we investigate the statistical and dynamical physics of the
dynamical disk model consisting of a large number of repulsive particles. The
recently developed powerful computational tool of the random batch method, which
is specifically designed for reducing the computational complexity of
long-range interacting particle systems down to the order of
O(N) <cit.>,
allows us to explore the
large-N regime of interest. To seek the order underlying the
convoluted dynamical evolution of the large number of particles, we resort to
the concept of a topological defect, and we reveal the intrinsic statistical regularity of
topological charges that is otherwise unattainable by the continuum analysis
of particle density. We further show the distinct dynamical organizations of the
particles under short- and long-range repulsive forces. This work extends the
disk model from the
static <cit.>
to the dynamical regime. We highlight the crucial role of topological
defects in elucidating the intrinsic statistical order underlying the convoluted
dynamical organizations of interacting particles. This work also demonstrates the promising potential of the
random batch method for exploring fundamental scientific questions arising in
a variety of long-range interacting particle systems in soft matter physics
and other relevant
fields <cit.>.
§ ACKNOWLEDGEMENTS
This work was supported by the National Natural Science Foundation of China
(Grant No. BC4190050).
§ APPENDIX A: TECHNICAL DETAILS OF SIMULATIONS
In this appendix, we present more information about the technical details of
the boundary condition and the initial condition, and the treatment of abnormally fast
particles.
In the model, we employ the reflecting boundary to confine the point
particles in the disk. For a particle that is about to collide with the boundary
in a simulation step, we first identify the crossing point of the straight
particle trajectory and the circular boundary, and then we let the particle make
a mirror reflection with respect to the crossing point. The final speed of the
particle is the same as the initial incident speed prior to the reflection.
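A sketch of this reflecting boundary, with our own helper name reflect_step: the attempted straight step is intersected with the circle of radius r_0, and the part of the step lying outside is mirrored about the tangent at the crossing point, so that the speed is unchanged. The sketch assumes at most one crossing per time step, which is the relevant case for the small step size used here.

```python
import numpy as np

def reflect_step(x_old, x_new, v, r0):
    """Specular reflection of the step x_old -> x_new at the circular wall of radius r0."""
    if np.linalg.norm(x_new) <= r0:
        return x_new, v                       # the step stays inside the disk
    # solve |x_old + s*(x_new - x_old)| = r0 for the crossing parameter s in (0, 1]
    d = x_new - x_old
    a = d @ d
    b = 2.0 * (x_old @ d)
    c = x_old @ x_old - r0**2
    s = (-b + np.sqrt(b**2 - 4.0 * a * c)) / (2.0 * a)
    xc = x_old + s * d                        # crossing point on the boundary
    nhat = xc / np.linalg.norm(xc)            # outward unit normal at the crossing point
    left = x_new - xc                         # part of the step lying outside the disk
    x_ref = xc + left - 2.0 * (left @ nhat) * nhat   # mirror the leftover path back inside
    v_ref = v - 2.0 * (v @ nhat) * nhat       # reflect the velocity, keeping the speed
    return x_ref, v_ref
```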
The initial state of the system is prepared by specifying a random velocity
v⃗_ini and a random position r⃗_ini to each particle. The
magnitude and the direction of the velocity v⃗_ini are uniform random
variables. v_ini∈ (0,1) and θ∈ (0,2π), where θ is the
angle of the velocity vector v⃗_ini with respect to some reference
direction. r⃗_ini is determined by the standard procedure of random
disk packing <cit.>. The radius of each small disk in the packing is set to 0.3a, where a is the mean distance of nearest particles; the initial positions of the point particles in our model are given by the centers of these small disks. The random disk packing prevents the aggregation of particles, which would otherwise result in a large force on a particle.
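One simple way to realize such non-overlapping initial positions is the random sequential addition sketched below, which only illustrates the non-overlap constraint; the cited packing procedure may be more elaborate, and at N = 150 000 a cell list would be needed to keep the overlap test affordable. The function name and arguments are ours.

```python
import numpy as np

def random_disk_packing(n, r_big, r_small, seed=0, max_tries=10**7):
    """Random sequential addition of n non-overlapping disks of radius r_small
    inside the confining disk of radius r_big; the centers are returned as the
    initial particle positions.  O(n^2) overlap test, for illustration only."""
    rng = np.random.default_rng(seed)
    centers = []
    tries = 0
    while len(centers) < n and tries < max_tries:
        tries += 1
        rad = (r_big - r_small) * np.sqrt(rng.random())   # uniform over the allowed area
        ang = 2.0 * np.pi * rng.random()
        p = np.array([rad * np.cos(ang), rad * np.sin(ang)])
        if all(np.linalg.norm(p - c) >= 2.0 * r_small for c in centers):
            centers.append(p)
    return np.array(centers)
```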
In simulations of long-range interacting systems, it is found that the speed of
a small fraction of the particles (on the order of 10 in the system containing
150,000 particles) may become abnormally large; such particles cover a
long distance over several a (mean distance of nearest particles) in a
single simulation step. One may set the time step to be sufficiently fine to
solve this problem, but such an operation is highly time consuming. In
simulations, the speed of these particles is set to be zero. This operation
seems to have no effect on the statistical properties of interest by a comparison with
the rigorous simulations of smaller-N systems.
§ APPENDIX B: CONSERVATION OF TOTAL ENERGY BY COOLING PROCEDURE
The RBM allows one to explore the large-N regime. However, it has been
reported that the RBM may lead to the heating effect in molecular-dynamics
simulations <cit.>. In our simulations, it is also found that
the kinetic energy (temperature) increases monotonically in time. The heating
effect could be tracked down to the partial and random selection of the
particles in the calculation of the total force on a particle. Consequently,
the RBM brings in temporally-varying random forces on the particles, which
causes the monotonic increase of the kinetic energy (temperature). Here, we shall point out that the statistical properties of the topological charges remain robust in the presence of the thermal (random) motion of the particles caused by the RBM algorithm. A plausible explanation is that the magnitude of the thermal (random) motion must exceed some critical value (comparable with the lattice
spacing) to change the coordination number of a particle and thus to generate
topological charge.
We employ the following cooling procedure to ensure the conservation of the
total energy within some tolerance in the dynamical evolution of the system.
Specifically, the energy of the system is checked periodically, and if the
amount of the increased energy exceeds some threshold value (5% of the total
energy E^(0)_tot in the initial state), all of the velocity vectors are
rescaled by a common factor √((E^(0)_tot-E_p)/E_k) to ensure that
the total energy is corrected to its initial value. E_k and E_p are the
kinetic energy and the potential energy at the time when the system energy is
checked.
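A compact version of this cooling step, in reduced units with m = 1 and with our own function name: the velocities are rescaled by √((E_tot^(0) - E_p)/E_k) whenever the periodic energy check finds that the total energy has drifted above the 5% tolerance.

```python
import numpy as np

def cool_if_heated(v, e_pot, e_tot0, tol=0.05):
    """Rescale all velocities so that E_k + E_p returns to E_tot^(0) whenever the
    RBM heating has raised the total energy by more than tol * E_tot^(0)."""
    e_kin = 0.5 * np.sum(v**2)             # m = 1 in reduced units
    if e_kin + e_pot > (1.0 + tol) * e_tot0:
        v = v * np.sqrt((e_tot0 - e_pot) / e_kin)
    return v
```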
The temporally-varying energy curves in both short- and long-range interacting
systems are shown in Fig. <ref>. For the case of λ/a=0.05 in
Fig. <ref>(a), the total energy is well conserved during the simulation
without resorting to the cooling procedure. This could be attributed to the
rigorous calculations of the interacting forces between adjacent particles in the
RBM. For the case of λ/a=10 in Fig. <ref>(b), the cooling
procedure is triggered when the total energy increases to be above 5% of its initial
value, as indicated by the dashed vertical line from C to C' in the zoomed-in
inset. Note that at the time of the checkpoint, the total
energy of the system may have exceeded 5% of its initial value; for example, at
the checkpoint C, the value of the total energy is about 9% of its initial
value. In the data analysis, we select instantaneous states whose total energy
is approximately constant; the variation of the total energy is within 5% of
its initial value, as indicated by the solid black lines in
Fig. <ref>(b).
[Lai and Lin(1999)]Lai1999
authorY.-J. Lai and
authorL. I,
journalPhys. Rev. E volume60,
pages4743 (year1999).
[Åström
et al.(2000)Åström, Herrmann, and
Timonen]aastrom2000granular
authorJ. Åström,
authorH. Herrmann, and
authorJ. Timonen,
journalPhys. Rev. Lett. volume84,
pages638 (year2000).
[Donev et al.(2004)Donev, Stillinger,
Chaikin, and Torquato]donev2004unusually
authorA. Donev,
authorF. H. Stillinger,
authorP. Chaikin, and
authorS. Torquato,
journalPhys. Rev. Lett. volume92,
pages255506 (year2004).
[Weaire and Aste(2008)]weaire2008pursuit
authorD. Weaire and
authorT. Aste,
titleThe Pursuit of Perfect Packing
(publisherCRC, Boca Raton, FL, year2008).
[Mughal et al.(2011)Mughal, Chan, and
Weaire]mughal2011phyllotactic
authorA. Mughal,
authorH. K. Chan, and
authorD. Weaire,
journalPhys. Rev. Lett. volume106,
pages115704 (year2011).
[Manoharan(2015)]manoharan2015colloidal
authorV. N. Manoharan,
journalScience volume349,
pages1253751 (year2015).
[Yao(2019)]yao2019command
authorZ. Yao, journalPhys.
Rev. Lett. volume122, pages228002
(year2019).
[Mughal and Moore(2007)]Mughal2007
authorA. Mughal and
authorM. Moore,
journalPhys. Rev. E volume76,
pages011606 (year2007).
[Miguel et al.(2011)Miguel, Mughal, and
Zapperi]miguel2011laminar
authorM.-C. Miguel,
authorA. Mughal, and
authorS. Zapperi,
journalPhys. Rev. Lett. volume106,
pages245501 (year2011).
[Yao and Olvera de la Cruz(2013)]Yao2013a
authorZ. Yao and
authorM. Olvera de la Cruz,
journalPhys. Rev. Lett. volume111,
pages115503 (year2013).
[Cerkaski et al.(2015)Cerkaski,
Nazmitdinov, and Puente]cerkaski2015thomson
authorM. Cerkaski,
authorR. G. Nazmitdinov,
and authorA. Puente,
journalPhys. Rev. E volume91,
pages032312 (year2015).
[Soni et al.(2018)Soni, Gómez, and
Irvine]soni2018emergent
authorV. Soni,
authorL. R. Gómez,
and authorW. T.
Irvine, journalPhys. Rev. X
volume8, pages011039 (year2018).
[Silva et al.(2020)Silva, Menezes,
Cabral, and de Souza Silva]silva2020formation
authorF. C. Silva,
authorR. M. Menezes,
authorL. R. Cabral,
and authorC. C.
de Souza Silva, journalJ. Phys.: Condens. Matter
volume32, pages505401
(year2020).
[Meng and Grason(2021)]PhysRevE.104.034614
authorQ. Meng and
authorG. M. Grason,
journalPhys. Rev. E volume104,
pages034614 (year2021).
[Nelson(2002)]nelson2002defects
authorD. R. Nelson,
titleDefects and Geometry in Condensed Matter Physics
(publisherCambridge University Press, Cambridge,
year2002).
[Bausch et al.(2003)Bausch, Bowick,
Cacciuto, Dinsmore, Hsu, Nelson, Nikolaides, Travesset, and
Weitz]Bausch2003e
authorA. Bausch,
authorM. Bowick,
authorA. Cacciuto,
authorA. Dinsmore,
authorM. Hsu,
authorD. Nelson,
authorM. Nikolaides,
authorA. Travesset,
and authorD. Weitz,
journalScience volume299,
pages1716 (year2003).
[Bowick and Giomi(2009)]bowick2009two
authorM. J. Bowick and
authorL. Giomi,
journalAdv. Phys. volume58,
pages449 (year2009).
[Mehta et al.(2016)Mehta, Chen, Chen,
Kusumaatmaja, and Wales]mehta2016kinetic
authorD. Mehta,
authorJ. Chen,
authorD. Z. Chen,
authorH. Kusumaatmaja,
and authorD. J. Wales,
journalPhys. Rev. Lett. volume117,
pages028301 (year2016).
[Peach and Koehler(1950)]peach1950forces
authorM. Peach and
authorJ. Koehler,
journalPhys. Rev. volume80,
pages436 (year1950).
[Hirth and Lothe(1982)]hirth1982theory
authorJ. P. Hirth and
authorJ. Lothe,
titleTheory of Dislocations (publisherWiley, New York, year1982).
[Halperin and Nelson(1978)]halperin1978theory
authorB. Halperin and
authorD. R. Nelson,
journalPhys. Rev. Lett. volume41,
pages121 (year1978).
[Strandburg(1988)]strandburg1988two
authorK. J. Strandburg,
journalRev. Mod. Phys. volume60,
pages161 (year1988).
[Vitelli et al.(2006)Vitelli, Lucks, and
Nelson]vitelli2006crystallography
authorV. Vitelli,
authorJ. B. Lucks, and
authorD. R. Nelson,
journalProc. Natl. Acad. Sci. U.S.A.
volume103, pages12323
(year2006).
[Azadi and Grason(2014)]azadi2014emergent
authorA. Azadi and
authorG. M. Grason,
journalPhys. Rev. Lett. volume112,
pages225502 (year2014).
[Bowick et al.(2007)Bowick, Nelson, and
Shin]bowick2007interstitial
authorM. J. Bowick,
authorD. R. Nelson,
and authorH. Shin,
journalPhys. Chem. Chem. Phys. volume9,
pages6304 (year2007).
[Irvine et al.(2012)Irvine, Bowick, and
Chaikin]irvine2012fractionalization
authorW. T. Irvine,
authorM. J. Bowick,
and authorP. M.
Chaikin, journalNat. Mater.
volume11, pages948 (year2012).
[Yao(2020)]yao2020fraction
authorZ. Yao, journalSoft
Matter volume16, pages5633
(year2020).
[Yao(2021)]yao2021fast
authorZ. Yao, journalEur.
Phys. J. E volume44, pages1
(year2021).
[Thomas et al.(1994)Thomas, Morfill,
Demmel, Goree, Feuerbacher, and Möhlmann]thomas1994plasma
authorH. Thomas,
authorG. Morfill,
authorV. Demmel,
authorJ. Goree,
authorB. Feuerbacher,
and
authorD. Möhlmann,
journalPhys. Rev. Lett. volume73,
pages652 (year1994).
[Morfill and Ivlev(2009)]morfill2009complex
authorG. E. Morfill and
authorA. V. Ivlev,
journalRev. Mod. Phys. volume81,
pages1353 (year2009).
[Walker et al.(2011)Walker, Kowalczyk,
Olvera de la Cruz, and Grzybowski]Walker2011
authorD. A. Walker,
authorB. Kowalczyk,
authorM. Olvera de la Cruz,
and authorB. A.
Grzybowski, journalNanoscale
volume3, pages1316 (year2011).
[Jin and Li(2020)]jin2020random
authorS. Jin and
authorX. Li, journalComm.
Comput. Phys. volume28, pages1907
(year2020).
[Jin et al.(2022)Jin, Li, and
Sun]jin2022RBM_second_order
authorS. Jin,
authorL. Li, and
authorY. Sun,
journalMultiscale Model. Simul. volume20,
pages741 (year2022).
[Liang et al.(2022)Liang, Xu, and
Zhao]liang2022improved
authorJ. Liang,
authorZ. Xu, and
authorY. Zhao, journalJ.
Phys. Chem. A volume126, pages3583
(year2022).
[Qi and Liu(2023)]qi2023random
authorD. Qi and
authorJ.-G. Liu,
journalChaos volume33
(year2023).
[Frenkel and Smit(2002)]frenkel2002understanding
authorD. Frenkel and
authorB. Smit,
titleUnderstanding molecular simulation: From algorithms to
applications (publisherAcademic Press, Cambridge, Massachusetts; 2nd edition, year2002).
[Jin et al.(2020)Jin, Li, and
Liu]jinRandomBatchMethods2020
authorS. Jin,
authorL. Li, and
authorJ.-G. Liu,
journalJ. Comput. Phys. volume400,
pages108877 (year2020).
[Golse et al.(2021)Golse, Jin, and
Paul]golse2021random
authorF. Golse,
authorS. Jin, and
authorT. Paul, journalJ.
Comput. Math. volume39, pages897
(year2021), ISSN issn0254-9409.
[Wang et al.(2021)Wang, Chen, Zhou, and
Zhou]wang2021layer
authorT. Wang,
authorH. Chen,
authorA. Zhou, and
authorY. Zhou,
journalComm. Comput. Phys. volume30,
pages1474 (year2021).
[Carrillo et al.(2022)Carrillo, Tang
et al.]carrillo2022random
authorJ. A. Carrillo,
authorY. Tang, et al.,
journalComm. Comput. Phys. volume31,
pages997 (year2022).
[Carrillo et al.(2021)Carrillo, Jin, Li,
and Zhu]carrillo2021consensus
authorJ. A. Carrillo,
authorS. Jin,
authorL. Li, and
authorY. Zhu,
journalESAIM-Control OPtim. Calc. Var.
volume27, pagesS5 (year2021).
[Chaikin and Lubensky(1995)]Chaikin1995
authorP. Chaikin and
authorT. Lubensky,
titlePrinciples of Condensed Matter Physics
(publisherCambridge University Press, Cambridge, UK, year1995).
[Sethna(2006)]Sethna2006
authorJ. Sethna,
titleStatistical Mechanics: Entropy, Order Parameters, and
Complexity (publisherOxford University Press, Oxford, UK,
year2006).
[Menezes et al.(2019)Menezes, Sardella,
Cabral, and de Souza Silva]vortexSilva2019
authorR. M. Menezes,
authorE. Sardella,
authorL. R. Cabral,
and authorC. C.
de Souza Silva, journalJ. Phys.: Condens. Matter
volume31, pages175402
(year2019).
[Campa et al.(2014)Campa, Dauxois,
Fanelli, and Ruffo]Campa2014
authorA. Campa,
authorT. Dauxois,
authorD. Fanelli, and
authorS. Ruffo,
titlePhysics of Long-Range Interacting Systems
(publisherOxford University Press, Oxford, UK,
year2014).
[Lubachevsky and
Stillinger(1990)]lubachevsky1990geometric
authorB. D. Lubachevsky
and authorF. H.
Stillinger, journalJ. Stat. Phys.
volume60, pages561 (year1990).
| The packing of interacting particles in confined geometries is a common theme
in a host of soft matter
systems <cit.>.
In particular, the confluence of theoretical and experimental investigations in
the past few decades on the static two-dimensional (2D) packing problem on both
flat <cit.>
and
curved <cit.>
spaces reveals the crucial role of topological defects in the organizations of
the particles. Disclinations as a kind of fundamental topological defect
emerge in the crystalline packings of particles on curved
spaces <cit.> or under
mechanical <cit.> and
thermal <cit.> stimuli. In a triangular
lattice, a p-fold disclination refers to a vertex of coordination number
p, and it carries topological charge
6-p <cit.>. According to the continuum
elasticity theory, in analogy to electric charges, disclinations of the same
sign repel and unlike signs attract <cit.>. These
particle-like excitations are actively involved in several important physical
processes, such as the screening of the substrate
curvature <cit.> and
the healing of the disrupted crystalline
order <cit.>.
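To make the bookkeeping of topological charge concrete, a minimal numerical sketch (ours, not taken from the works cited above) is given below: it Delaunay-triangulates a particle configuration, reads off the coordination number p of each vertex, and sums the charges 6-p. How boundary vertices are treated is a convention; here they are simply excluded via a user-supplied mask.

import numpy as np
from scipy.spatial import Delaunay

def total_topological_charge(points, exclude=None):
    # Triangulate the configuration and sum (6 - p) over vertices,
    # where p is the coordination number (number of Delaunay neighbors).
    tri = Delaunay(points)
    indptr, _ = tri.vertex_neighbor_vertices
    charge = 0
    for k in range(len(points)):
        if exclude is not None and exclude[k]:
            continue  # boundary vertices handled by whatever convention one adopts
        p = indptr[k + 1] - indptr[k]
        charge += 6 - p
    return charge

# Example: random points in a unit disk, excluding a thin rim near the boundary.
rng = np.random.default_rng(0)
r, phi = np.sqrt(rng.uniform(0, 1, 2000)), rng.uniform(0, 2 * np.pi, 2000)
pts = np.column_stack([r * np.cos(phi), r * np.sin(phi)])
print(total_topological_charge(pts, exclude=(r > 0.95)))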
In our previous work on the inhomogeneous static packings of particles in
mechanical equilibrium confined on a disk, the unique perspective of a
topological defect allows us to reveal the characteristically negative sign of
the total topological charge Q and the associated hyperbolic
geometry <cit.>. Recently, the static disk model was extended to the
dynamical regime <cit.>. It was found that in the statistical
sense, the hyperbolic geometry as reflected by the negative sign of the
time-averaged total topological charge ⟨ Q⟩ is preserved in the
convoluted dynamical evolution of the particle
configuration <cit.>.
However, in our previous work the dynamical disk model consisted of a relatively
small number of particles as limited by the long computational
time <cit.>. It is thus still not clear if the revealed
statistical regularity of topological charges is caused by the boundary
effect or due to the intrinsic effect of the long-range force. Elucidating
this issue yields insights into the intrinsic order in seemingly irregular and
temporally-varying packings of particles arising in the physical systems of
charged colloids <cit.>, complex (dusty)
plasmas <cit.>, and a variety of charged
entities at the nanoscale in electrolyte environments <cit.>.
Furthermore, the relatively small number of particles in the previously
studied disk model hinders one from performing a coarse-graining procedure to
reveal and analyze the underlying flow patterns created by the repulsive
forces of varying range.
To address these questions, it is crucial to extend the dynamic disk model to
the large-N regime, where N is the number of particles. The challenge for
simulating the dynamics of a long-range interacting system comes from the high
computational complexity: the computation time is on the order of O(N^2)
per time step, and the whole simulation process involves up to one million
simulation steps. To solve this problem, we employ the recently developed
algorithm called the random batch method (RBM), which is specifically designed for
reducing the computational complexity of long-range interacting particle
systems down to the order of
O(N) <cit.>. Numerical experiments
show that the RBM is capable of capturing both transient dynamical
structures <cit.> and crucial statistical
features <cit.>.
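The essence of the RBM can be conveyed in a few lines: at every time step the particles are randomly shuffled into small batches, and pair forces are evaluated only within each batch, with a rescaling that keeps the estimate unbiased. The sketch below is our illustration of this idea, not the code used in this work; in particular, the (N-1)/(p-1) rescaling shown is the standard choice for an all-to-all sum and should be adapted to the specific RBM variant employed.

import numpy as np

def rbm_forces(x, pair_force, p=2, rng=None):
    # One random-batch evaluation of pairwise forces: cost O(N p) instead of O(N^2).
    # pair_force(dx) must return the force on particle i due to j, with dx = x[i] - x[j].
    rng = np.random.default_rng() if rng is None else rng
    n = len(x)
    order = rng.permutation(n)
    forces = np.zeros_like(x)
    scale = (n - 1) / (p - 1)  # makes the batch sum an unbiased estimate of the full sum
    for start in range(0, n - n % p, p):  # leftover particles (if n % p != 0) are skipped here
        batch = order[start:start + p]
        for a in range(p):
            for b in range(a + 1, p):
                i, j = batch[a], batch[b]
                f = scale * pair_force(x[i] - x[j])
                forces[i] += f
                forces[j] -= f  # Newton's third law
    return forces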
The goal of this work is to employ the powerful computational tool of the RBM
for exploring the statistical and dynamical physics of the disk model
consisting of a large number of repulsive particles (up to 150,000).
Specifically, we aim at identifying the intrinsic statistical regularity of
topological charges, and studying the flow patterns under the repulsive forces
of varying range. To this end, in our model the particles confined on the disk
interact by the screened Coulomb potential; the range of the force is
conveniently controlled by the screening length. In the initial state, both
the positions and the velocities of the particles are randomly distributed. The
motion of the particles under the interacting force is governed by the
deterministic Hamiltonian dynamics. The equations of motion are numerically
integrated by the standard Verlet method <cit.>.
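For completeness, the sketch below spells out one step of this dynamics in reduced units: a direct O(N^2) evaluation of the screened Coulomb (Yukawa) force and a velocity Verlet update. The coupling constant, the choice of units, and the hard-wall reflection used to keep particles inside the disk are our assumptions for illustration only; the production runs use the RBM force evaluation and the parameters described in the text.

import numpy as np

def yukawa_forces(x, kappa=1.0, q2=1.0):
    # Direct pairwise forces for the repulsive potential U(r) = q2 * exp(-kappa * r) / r.
    n, f = len(x), np.zeros_like(x)
    for i in range(n):
        d = x[i] - x
        r = np.linalg.norm(d, axis=1)
        r[i] = np.inf  # exclude self-interaction
        mag = q2 * np.exp(-kappa * r) * (1.0 / r**2 + kappa / r)  # -dU/dr
        f[i] = np.sum((mag / r)[:, None] * d, axis=0)
    return f

def verlet_step(x, v, dt, force=yukawa_forces, mass=1.0, radius=1.0):
    # One velocity Verlet step, followed by a hard-wall reflection at the disk edge
    # (one plausible way to impose the confinement; the text does not specify the mechanism).
    f0 = force(x)
    x_new = x + v * dt + 0.5 * (f0 / mass) * dt**2
    v_new = v + 0.5 * (f0 + force(x_new)) / mass * dt
    rad = np.linalg.norm(x_new, axis=1)
    out = rad > radius
    if np.any(out):
        nhat = x_new[out] / rad[out][:, None]
        x_new[out] = (2.0 * radius - rad[out])[:, None] * nhat
        v_new[out] -= 2.0 * np.sum(v_new[out] * nhat, axis=1, keepdims=True) * nhat
    return x_new, v_new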
The main results of this work are presented below. By analyzing the
trajectories of 150,000 particles, we show that under a long-range repulsive
force, the time-averaged total topological charge ⟨ Q⟩ is
intrinsically negative corresponding to the hyperbolic geometry. Systematic
simulations of the disk model at varying screening length show the regulation
of the defect structure by the range of the repulsive force. These results
suggest the robustness and effectiveness of the concept of topological charge
for characterizing the convoluted particle dynamics at varying interaction
range. Furthermore, we reveal the distinct dynamical organizations of the
particles under short- and long-range repulsive forces by analyzing the
coarse-grained velocity fields, and we demonstrate the capability of the RBM in
capturing featured dynamical structures such as vortices and radial flows. In
this work, the revealed intrinsic statistical regularity of topological
charges and the featured flow patterns advance our understanding on the
convoluted dynamical organizations of geometrically confined repulsive
particles. | null | null | null | null | In summary, we investigate the statistical and dynamical physics of the
dynamical disk model consisting of a large number of repulsive particles. The
recently developed powerful computational tool of the random batch method, which
is specifically designed for reducing the computational complexity of
long-range interacting particle systems down to the order of
O(N) <cit.>,
allows us to explore the
large-N regime of interest. To seek the order underlying the
convoluted dynamical evolution of the large number of particles, we resort to
the concept of a topological defect, and we reveal the intrinsic statistical regularity of
topological charges that is otherwise unattainable by the continuum analysis
of particle density. We further show the distinct dynamical organizations of the
particles under short- and long-range repulsive forces. This work extends the
disk model from the
static <cit.>
to the dynamical regime. We highlight the crucial role of topological
defects in elucidating the intrinsic statistical order underlying the convoluted
dynamical organizations of interacting particles. This work also demonstrates the promising potential of the
random batch method for exploring fundamental scientific questions arising in
a variety of long-range interacting particle systems in soft matter physics
and other relevant
fields <cit.>. |
http://arxiv.org/abs/2409.17233v1 | 20240925180002 | Extended hot dust emission around the earliest massive quiescent galaxy | [
"Zhiyuan Ji",
"Christina C. Williams",
"George H. Rieke",
"Jianwei Lyu",
"Stacey Alberts",
"Fengwu Sun",
"Jakob M. Helton",
"Marcia Rieke",
"Irene Shivaei",
"Francesco D'Eugenio",
"Sandro Tacchella",
"Brant Robertson",
"Yongda Zhu",
"Roberto Maiolino",
"Andrew J. Bunker",
"Yang Sun",
"Christopher N. A. Willmer"
] | astro-ph.GA | [
"astro-ph.GA"
] |
Extended hot dust emission around the earliest massive quiescent galaxy
[1]Zhiyuan [email protected]
2,1]Christina C. Williams
1]George H. Rieke
1]Jianwei Lyu
1]Stacey Alberts
3]Fengwu Sun
1]Jakob M. Helton
1]Marcia Rieke
4]Irene Shivaei
5,6]Francesco D'Eugenio
5,6]Sandro Tacchella
7]Brant Robertson
1]Yongda Zhu
5,6,8]Roberto Maiolino
9]Andrew J. Bunker
1]Yang Sun
1]Christopher N. A. Willmer
*[1]Department of Astronomy, University of Arizona, 933 N. Cherry Avenue, Tucson, 85721, Arizona, USA
[2]NSF’s National Optical-Infrared Astronomy Research Laboratory, 950 N. Cherry Avenue, Tucson, 85719, Arizona, USA
[3]Center for Astrophysics, Harvard & Smithsonian,
60 Garden Street, Cambridge, 02138,
Massachusetts, USA
[4]Centro de Astrobiología (CAB), CSIC-INTA,
Carretera de Ajalvir km 4, Torrejón de Ardoz, Madrid, 28850, Spain
[5]Kavli Institute for Cosmology, University of Cambridge,
Madingley Road, Cambridge, CB3 0HA, UK
[6]Cavendish Laboratory, University of Cambridge,
19 JJ Thomson Avenue, Cambridge, CB3 0HE, UK
[7]Department of Astronomy and Astrophysics, University of California, Santa Cruz,
1156 High Street, Santa Cruz, 95064, California, USA
[8]Department of Physics and Astronomy, University College London,
Gower Street, London, WC1E 6BT, UK
[9]Department of Physics, University of Oxford,
Denys Wilkinson Building, Keble Road, Oxford, OX1 3RH, UK
A major unsolved problem in galaxy evolution is the early appearance of massive quiescent galaxies that no longer actively form stars only ∼1 billion years after the Big Bang. Their high stellar masses and extremely compact structure<cit.> indicate that they formed through rapid bursts of star formation<cit.> between redshift z∼6-11 <cit.>. Theoretical models of galaxy evolution cannot explain their high number density, rapid growth and truncation of star formation at such early times <cit.>, which likely requires extreme feedback to destroy the cold interstellar medium (the fuel for star formation).
We report the discovery of a significant reservoir of hot dust in one of the most distant known examples at z=4.658, GS-9209 <cit.>. The dust was identified using JWST's Mid-Infrared Instrument (MIRI), whose unprecedented sensitivity and high spatial resolution, for the first time, firmly show that this dust is significantly more extended than the stars by ≳3 times. We find that the dust has preferentially been evacuated or diluted in the galaxy center. Our analysis finds that the extended hot dust emission is consistent with recent heating by a younger and more spatially extended generation of star formation. This reveals that the earliest quiescent galaxies did not form in a single rapid burst; instead, similar to galaxy growth at later times, the center formed first with star formation continuing in an extended envelope. The growth of this galaxy is truncating from the inside out, consistent with central gas depletion from early AGN feedback.
The galaxy presented here, GS-9209, is so far one of the earliest known massive quiescent galaxies in the Universe with a spectroscopically confirmed redshift of z=4.658 <cit.>. JWST's NIRCam observations <cit.> revealed that the rest-optical/NIR half-light radius of GS-9209 is only r_e∼200 pc, which corresponds to a stellar mass surface density of ∼10^11M_⊙/kpc^2 approaching the maximum allowed density set by stellar feedback <cit.>. The formation of such an extremely compact stellar morphology, and its connection to the very early truncation of star formation pose major challenges to models of massive galaxy formation in the early Universe.
In December 2022, we obtained 8-band imaging of GS-9209 at observed wavelengths of ∼5-30 μm with the Mid-Infrared Instrument <cit.> (MIRI) of the James Webb Space Telescope <cit.> (JWST) through the SMILES program <cit.>. With the MIRI observations, for the first time in a massive quiescent galaxy at this redshift, we are able to constrain emission red-ward of the stellar bump at rest-frame λ = 1.6 μm [Because the H^- ion has its minimal opacity at this wavelength, the emission of cool stars shows a maximum/bump in galaxies' SEDs.]. This allows us to place new constraints that were entirely missed by previous observations – on the presence of dust emission from GS-9209.
The MIRI images of GS-9209 are shown in Fig. <ref>, where the wavelength range probed by each band is indicated. At observed wavelengths of ∼6-10 μm, which correspond to rest-frame ∼1-2 μm, the MIRI images show a very compact morphology, suggesting a nucleated, high-density stellar distribution in GS-9209, fully consistent with studies using shorter-wavelength NIRCam observations <cit.>.
Remarkably, moving towards longer observed wavelengths of >10 μm, MIRI imaging starts to show more extended emission from GS-9209 than appears at shorter wavelengths. Using source injection simulations, we quantitatively compare GS-9209's light distributions at >10 μm to its starlight probed by F770W (see Methods). As Fig. <ref> shows, F1500W (rest-frame ∼2.7 μm) starts to depart significantly from the stellar profile, a tendency that becomes clearer in both the image and the radial profile at F1800W (rest-frame ∼3.2 μm). The confidence level that the light profiles are significantly different is p>99.99%. This difference in light profile is twofold. First, relative to the starlight, the F1500W and F1800W images show more extended emission at 0".4<r<0".6 from the center, with a confidence level of p=99.06% for F1500W and 99.76% for F1800W. Second, near the center (r∼0".15) of GS-9209, there appears a flux deficit in the F1800W image, with a confidence level of p=98.7%. As we show in Methods, a similar central flux deficit is also observed in the F2100W image (rest-frame ∼4 μm), despite the tentative (S/N ∼4) detection of GS-9209 in that band.
We further study the MIRI light profiles of GS-9209 through two-dimensional parametric light-profile fitting. To begin, we assume a single Sérsic profile <cit.> and simultaneously model the MIRI images from F560W to F1800W (see Methods). As Fig. <ref> shows, r_e increases towards longer-wavelength MIRI bands. The r_e (from the single Sérsic fitting) in F1800W is ≈450 pc, ∼2-3 times larger than that of GS-9209's starlight. Motivated by the analysis of the Spectral Energy Distribution (SED) detailed below, where we find that the observed MIRI fluxes at rest-frame >2.5 μm come from both the AGN torus and dust emission of the diffuse ISM, we also fit the F1500W and F1800W images with an alternative two-component model, namely a PSF plus a Sérsic profile. As expected, this two-component fitting returns a larger r_e of ≈2 kpc, almost 10 times that of GS-9209's starlight. In Methods, we further test our results by assuming other light profiles. In all cases the same conclusion is reached: the light distribution of GS-9209 at rest-frame >2.5 μm is significantly more extended than its starlight, in excellent agreement with the source injection simulations mentioned above. Depending on the assumed light profile, we estimate the size of GS-9209 at rest-frame >2.5 μm to be larger than that of its starlight by at least ∼2-3 times, and up to 10 times.
One key revelation from multiple decades of Hubble Space Telescope (HST) <cit.> and now JWST/NIRCam <cit.> observations was that the majority of massive quiescent galaxies at z>1.5 have extremely compact stellar morphologies. Moreover, earlier studies found that these galaxies also seem to have rather simple (HST) light profiles <cit.> that can be well described by a single morphological component. Yet, our MIRI observations immediately reveal that the structure of GS-9209 is much more complex than previously thought for typical high-redshift compact quiescent galaxies: apart from an extremely compact stellar core at the center, GS-9209, one of the earliest known massive quiescent galaxies, has more extended morphological components, and evidence for substructures (e.g. a near-center flux deficit), at rest-frame ≳2.5 μm.
To investigate the physical origin of the MIRI fluxes, we model the SED of GS-9209 (see Methods) using the combined photometry from the legacy HST/ACS imaging <cit.>, the NIRCam imaging from the JADES <cit.> and JEMS <cit.> programs, and our new MIRI imaging. Using the integrated photometry (i.e. total observed flux), we obtain quantitatively consistent measures of the stellar-population properties of GS-9209 with those from the SED fitting of <cit.> using NIRSpec spectrum (see Methods).
Crucially, the immense gain of JWST over previous mid-infrared telescopes – about two orders of magnitude in sensitivity and an order of magnitude in angular resolution compared to Spitzer <cit.> – now allows us to perform spatially resolved SED analysis to investigate the detailed internal structure of the earliest massive quiescent galaxies at mid-infrared wavelengths, a knowledge of quiescent galaxies that was totally uncharted beyond the local Universe before JWST.
We present the fiducial SED fitting results of GS-9209's inner region enclosed by an r=0".3 circular aperture in Fig. <ref>, and of GS-9209's outer region enclosed by an r=0".3-0".7 circular annulus aperture in Fig. <ref>.
GS-9209's outer region is younger and more dust-attenuated than its inner region. For the inner region, the best-fit model has little instantaneous star-formation activity with a specific star formation rate of log(sSFR/yr^-1) = -10.0_-0.4^+0.2, a mass-weighted stellar age of t_age = 500±100 Myr or a formation redshift of z_form = 6.8±0.5, and low dust attenuation with E(B-V) = 0.16±0.01 mag. Comparatively, the outer region has younger stellar populations with a larger log(sSFR/yr^-1) = -9.5_-0.2^+0.2 and smaller t_age = 160±60 Myr or z_form = 5.1±0.1, and higher dust attenuation with E(B-V) = 0.47±0.08 mag. In Methods, we test our results by altering the default SED assumptions (star-formation history, metallicity etc.). We show that our conclusions are insensitive to the assumptions made in the fiducial SED fitting.
The spatial difference in stellar populations immediately shows that the formation of GS-9209 did not occur in a single episode. Instead, its formation is in an inside-out manner, which is similar to star-forming galaxies, but with a much higher efficiency since its major star formation has already halted merely ≲ 1 Gyr after the Big Bang.
Starting from observed ∼10 μm, or rest-frame ∼2.5 μm, the SED of GS-9209 cannot be explained by stellar light alone. Dust emission is required to reproduce the observed MIRI fluxes, for both the inner and outer regions.
For the inner region, the SED analysis suggests that the emission at rest-frame ∼2.5 μm is dominated by the AGN torus, a conclusion that is independent of the assumed AGN templates (see Methods). We stress that the reprocessed emission of dust associated with the diffuse ISM is also included in the SED modeling. Yet, the fitting still finds that the mid-infrared SED of the inner region is consistent with being dominated by the AGN, suggesting the lack of a significant amount of dust associated with diffuse ISM at the center of GS-9209 (Fig. <ref>). It is also worth mentioning that SED fitting using only the shorter-wavelength (no MIRI) photometry was ambiguous about the AGN presence in GS-9209 <cit.>, although the presence of a faint one is demonstrated by the broad Hα line <cit.>. The detection in our full set of photometry, and the consistency of our SED-inferred AGN luminosity with that derived from NIRSpec spectroscopy (see Methods), show the power of MIRI in identifying and characterizing AGNs with photometry alone.
For the outer region of GS-9209, we find that the rest-frame >2.5 μm emission is observed on kpc scales, far larger than the typical (sub-)pc scale of the AGN-heated torus emission at these wavelengths <cit.>. This suggests that
the dust emission has to be associated with the diffuse ISM. Regardless of the assumed dust templates, however, if we adopt the energy-balance criterion, in which all starlight attenuated by the dust is re-emitted in the infrared, we find that the MIRI fluxes at rest-frame ∼2.5 μm are not well modeled (see Methods). In fact, earlier studies have questioned the use of the energy-balance assumption, especially when the UV and IR emission of galaxies is offset from one another <cit.>. Given the significant difference between GS-9209's stellar and dust morphologies revealed by our MIRI imaging, the ineffectiveness of the energy balance is arguably expected for GS-9209.
Without the assumption of energy balance, as Fig <ref> shows, different dust emission templates are all able to fit the HST-to-MIRI photometry reasonably well. The non-detection of GS-9209 from archival ALMA observations <cit.> provides additional, critical constraints on its dust properties. We derive an upper limit at observed 1.2 mm (or ∼210μm) of 80 μJy with forced photometry following <cit.>. This limit is inconsistent with typical dust models assumed for massive star-forming galaxies <cit.>. Instead, it is broadly consistent with the dust emission template of Haro 11, a starbursting dwarf galaxy in the local Universe whose IR SED – relative to typical star-forming galaxies – features a higher dust temperature (∼47 K) and an excess of near- to mid-IR continuum emission likely from additional even warmer dust <cit.>. GS-9209 and Haro 11 share two major similarities, namely a low metallicity and very compact morphology, which indeed have been shown to lead to a warmer/bluer IR SED similar to that of Haro 11 <cit.>. Future spectroscopy at mid-to-far infrared wavelengths is required to further quantify the dust properties in such systems.
Combining our morphological and SED analyses, we conclude that there is a significant dust reservoir in GS-9209. The hot dust emission has two origins. At the center, the emission is dominated by the AGN torus. In the outskirts of GS-9209, where the stellar population is younger than in the inner region, the extended rest-frame >2.5 μm emission is associated with hot dust in the diffuse ISM, most likely powered by rest-frame UV/optical starlight absorbed by the dust and re-emitted in the IR.
The discovery that the first massive quiescent galaxies like GS-9209 form and quench from the inside out now enables new empirical constraints on their formation pathways. The structure of GS-9209, namely a compact older stellar core surrounded by a younger stellar and dust envelope, is very similar to the hallmark prediction for the gas-rich processes associated with galaxy formation <cit.>. Physically, the formation of the extremely dense stellar core requires highly dissipative gas accretion <cit.>. The buildup of the core in turn deepens the gravitational potential well, making the later, continuous accretion of gas and satellite mergers settle into orbits surrounding the older central core. Interestingly, hydrodynamical simulations with adequate resolution for modeling the relevant gaseous astrophysics predict that continuous gas accretion (while star formation at the center is effectively halted) should form an extended ring structure in massive galaxies at a physical scale of r≳3 r_e^star at z<4 <cit.>, in quantitative agreement with the scale where the extended hot dust emission of GS-9209 is observed. Thus, the gas-rich processes found at z<4 may plausibly extend to higher redshifts for the formation of this earlier generation of massive galaxies <cit.>.
Near the center, the likely flux deficit at >2.5μm (Fig. <ref>, <ref>), and the lack of evidence of the presence of dust associated with diffuse ISM (Fig. <ref>) imply central gas depletion as a direct cause of the inside-out quenching of GS-9209. A natural consequence of highly dissipative gas accretion is also the faster, more efficient growth of supermassive black holes <cit.>. The impact of this is that AGN feedback may become energetic enough to impact the host galaxy on faster timescales, helping to heat up and even expel gas. GS-9209 indeed hosts an AGN, which indicates that the AGN feedback might be the cause of the vacated dust/gas near the center.
We finally discuss our MIRI discovery of the extended hot dust emission from GS-9209 in the cosmological context. Numerous studies have now noted that state-of-the-art cosmological simulations fail to quantitatively reproduce the observed number density of massive quiescent galaxies that form at z>3 <cit.>. A common attribute among these simulations is that the AGN feedback, in particular the black hole seeding time and the time at which kinetic quasar-mode feedback becomes influential, does not occur early enough in cosmic time<cit.>. This has been identified as a likely cause why quenching in simulations occurs too late. The discovery by JWST of abundant supermassive black holes earlier and more massive than expected <cit.> further corroborates this evidence that AGN feedback is more prevalent and impacts their host galaxies earlier than previously thought. If GS-9209 is a good representative of the general population of earliest massive quiescent galaxies, our MIRI observations provide strong evidence that the gas-rich dissipative processes and AGN feedback play a critical role in the formation and quenching of the earliest quiescent galaxies discovered by JWST. Simulations of sufficiently massive halos at z > 5 with the resolution to adequately model gas-rich galaxy growth do not currently exist, but this advance is needed to determine whether the detailed physical properties of galaxies like GS-9209, such as the multi-wavelength structure, are in tension with cosmological expectations.
With its revolutionary capabilities, particularly the unprecedented angular resolution, JWST's MIRI now opens a new window of studying high-redshift, freshly quenched galaxies through the spatial distribution of dust emission, promising powerful constraints on the physical mechanisms responsible for the cessation of star formation in the earliest massive quiescent galaxies.
§ METHODS
§ COSMOLOGY MODEL AND DEFINITIONS
Throughout this study, we adopt the ΛCDM cosmology with H_0 = 70 km s^-1 Mpc^-1, Ω_ m = 0.3 and Ω_Λ = 0.7. All magnitudes are presented in the AB system.
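For readers who want to reproduce the angular-to-physical conversions used throughout (e.g. relating aperture radii in arcsec to kiloparsecs at z=4.658), the adopted cosmology can be evaluated with a few lines of astropy; this is only a convenience check, not part of the analysis pipeline.

from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)   # the cosmology adopted above
z = 4.658                               # spectroscopic redshift of GS-9209

print(cosmo.age(z))                                              # age of the Universe at z
kpc_per_arcsec = cosmo.kpc_proper_per_arcmin(z).to(u.kpc / u.arcsec)
print(kpc_per_arcsec)                                            # proper transverse scale
print((0.3 * u.arcsec * kpc_per_arcsec).to(u.kpc))               # r = 0.3" aperture in kpc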
§ NIRCAM AND MIRI IMAGING DATA REDUCTION
GS-9209 appears in the medium-depth portion of the JADES program in GOODS-South <cit.>, where we have obtained JWST/NIRCam imaging in 8 bands, F090W, F115W, F150W, F200W, F277W, F356W, F410M and F444W. Additional 5 medium-band NIRCam images, i.e. F182M, F210M, F430M, F460M and F480M, were obtained by the JEMS program <cit.>. We also include public, shallower F182M, F210M and F444W imaging from the FRESCO survey <cit.> to further improve the depth of those images.
We reduce all NIRCam imaging observations using the pipeline developed by JADES, the details of which have been presented in <cit.>. In brief, we process the raw images using the JWST Calibration Pipeline with the following custom corrections. During Stage 1 of the JWST pipeline, we mask and correct for the “snowballs”[snowballhttps://jwst-docs.stsci.edu/jwst-near-infrared-camera/nircam-instrument-features-and-caveats/nircam-claws-and-wisps] cosmic-ray artefacts. During Stage 2 of the JWST pipeline, we remove the 1/f noise and subtract 2D background from the images, and also correct the “wisp” scattered light features. Afterwards, we tie the astrometry of individual exposures of a given visit to the World Coordinate System (WCS) of a reference catalog constructed from HST/WFC3 F160W mosaics in GOODS-South with astrometry tied to Gaia-EDR3 <cit.>. Before combining visit-level images to create the final mosaic, we manually inspect all individual exposures containing GS-9209. We remove two out of six F090W exposures from our analysis as the “snowballs" effect is seen in very close proximity (<2") of GS-9209 and significantly affects the surrounding sky.
GS-9209 also appears in the JWST/MIRI program, SMILES <cit.>, where we have obtained MIRI images in 8 bands, F560W, F770W, F1000W, F1280W, F1500W, F1800W, F2100W and F2550W. We reduce MIRI imaging observations using the nominal JWST Pipeline with the latest JWST Calibration Reference System. We make custom modifications for sky subtraction which has been described in detail in <cit.>. In brief, to better correct for the large spatial gradient of sky background in our data and remove substructures including tree-ring shaped features along the columns and rows of the MIRI detector, for each one of the cal files (i.e., the products of the calwebb_image2 in JWST Pipeline), we adopt an iterative process to (1) progressively mask sources, (2) median filter out large-scale sky gradients and striping features along detector columns and rows and (3) construct a super background by median combining and scaling all those filtered cal files. Finally, we subtract from each cal file the super background, after which we do an additional median subtraction using a 256×256 pixel^2 box to remove remaining background variations, which are mainly caused by cosmic ray showers. We tie the astrometry of MIRI mosaics to be the same as JADES.
The current imaging depth of MIRI/F2100W from SMILES results in a signal-to-noise ratio S/N ∼4 detection of GS-9209 (Fig. <ref>). In the reddest MIRI band, F2550W, GS-9209 is not detected (S/N <1), which is, however, expected based on the detection limit of SMILES. The results for these two MIRI bands are not included in the main text.
§ ANALYSIS OF MIRI LIGHT PROFILES
§.§ Source injection simulations
The main goal of this analysis is to compare observed light distributions of GS-9209 at >10μ m with its stellar light distribution. A common method is through parametric morphology modeling, e.g. Sérsic fitting, to compare the fitted parameters at different wavelengths. We stress, however, that the spatial distribution of hot dust emission in high-redshift quiescent galaxies is largely unknown at present. Thus, the mismatch between the assumed parametric light profile and the intrinsic distribution of hot dust emission can make such comparison fraught with systematic errors. To mitigate this issue, we decide to make the comparison through a purely empirical method – source injection simulations.
The idea is to inject artificial sources with the same light distribution of GS-9209's starlight (F770W) into empty regions of MIRI images at >10μ m, and then to measure surface brightness profiles of these injected sources with the same procedure used for GS-9209. If the light distributions of GS-9209 at >10μ m differ from its starlight distribution, then we expect the observed surface brightness profiles of GS-9209 at >10μ m to be different from the recovered profiles of the realizations (i.e. injected artificial sources).
For source injections, we first create artificial galaxy images mimicking the stellar light distribution of GS-9209. The typical light distribution of z>3 quiescent galaxies in the rest-frame NIR remains largely unknown, but can be complex <cit.>. Thus, instead of assuming a parametric, likely over-simplified profile for GS-9209's starlight distribution, we directly use its MIRI/F770W image to generate synthetic images at longer wavelengths. We choose F770W because (1) GS-9209 is detected in F770W at high S/N (=72, using an r=0.25" circular aperture), (2) the AGN contribution is negligible at F770W based on multiple different analyses of GS-9209's SED, and (3) at GS-9209's redshift of z=4.658, F770W probes rest-frame ∼1.0-1.7 μm – covering the 1.6 μm stellar bump – which is very sensitive to (and arguably the best available probe of) the spatial distribution of stars in GS-9209.
We PSF-match the F770W image to the MIRI bands at >10μ m. We rescale the PSF-matched F770W image of GS-9209 such that the flux within a r=0.2" circular aperture is matched to that observed at longer wavelengths. We then use the MIRI segmentation map to mask out all detected sources, and inject the rescaled synthetic images to the 1'×2' (∼ the field of view of MIRI imaging) region centered around GS-9209. We randomly inject these images 10^4 times, and each time we also resample the synthetic image with the Poisson noise calculated using the corresponding exposure time maps of our MIRI observations. Finally, we measure the surface brightness profiles of the injected artificial sources. The median and 1σ range of the surface brightness profiles of the 10^4 realizations are shown as black solid line and shaded region in Fig. <ref>. In addition to injecting artificial sources with the observed F770W light profile, we also run another set of simulations to inject entirely unresolved point sources. In Fig. <ref>, the median surface brightness profiles of the injected artificial point sources are shown as black dashed line.
Before moving forward, we stress that, in contrast to simply comparing the PSF-matched F770W image of GS-9209 to its images at longer wavelengths, source injection simulations take into account the realistic noise behaviors of the MIRI observations, allowing us to have a robust estimate on the significance of the difference in light profiles between the stellar light and hot dust emission of GS-9209.
We first test the null hypothesis that the observed light distributions of GS-9209 at >10 μm are as compact as its stellar light distribution. Out of 10^4 realizations, only in 94 and 24 cases is the surface brightness profile of the injected artificial sources as bright as or brighter than that observed in F1500W and F1800W within a 0".4<r<0".7 annulus aperture. We thus reject the null hypothesis with a confidence level of 99.06% and 99.76% for the F1500W and F1800W images, respectively. This shows that the observed MIRI light distributions of GS-9209 at rest-frame >2.5 μm (hot dust emission) are more extended than its stellar light. We note that, despite GS-9209 only being detected at moderate significance in MIRI/F2100W, source injection simulations were still performed for F2100W and the results are shown in the right panel of Fig. <ref>. Owing to the relatively low S/N, the observed F2100W light profile at 0".4<r<0".7 is within the 1σ range of the source injection simulations, although the observed profile lies above the median surface brightness profile measured from the simulations, i.e. in line with the results of F1500W and F1800W.
Relative to the stellar light distribution, there appears to be a flux deficit near the center (r∼0".15) of GS-9209 in its F1800W image (Fig. <ref> and <ref>). Such an angular scale is smaller than the resolution of MIRI/F1800W imaging (FWHM/2 ∼ 0".25), making it very difficult, if at all possible, to quantify precisely the physical scale over which the flux deficit is present. Nonetheless, with source injection simulations, we can at least test the null hypothesis that this deficit is a result of noise fluctuations. Out of 10^4 realizations, the surface brightness profiles of 130 injected artificial sources are as faint or fainter than that observed in F1800W within a 0".12<r<0".18 annulus aperture. We thus reject the null hypothesis with a p-value of 98.7%. We note that, at a similar angular scale (0".15<r<0".3), a flux deficit is also observed in the F2100W image (Fig. <ref>). We thus believe this deficit to be a real feature of GS-9209's hot dust emission.
Finally, among F1800W surface brightness profiles of the 10^4 injected artificial sources, none of them has both the near-center flux deficit at r∼0".15 and more extended emission at 0".4<r<0".7. Thus, the stellar and hot dust distributions of GS-9209 are different at a significance level of >99.99%. We stress again that we intend to make the source injection simulations as empirical as possible such that our conclusions have the least dependence, if at all, on the assumptions made. We conclude that the hot dust distribution of GS-9209 differs significantly from its starlight.
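The hypothesis tests above boil down to empirical one-sided tail fractions over the injected realizations; the confidence levels quoted in the text are one minus these fractions. A schematic version of the annulus measurement and the counting step is sketched below (aperture geometry, pixel units and variable names are ours; the actual analysis operates on the calibrated MIRI mosaics as described).

import numpy as np
from photutils.aperture import CircularAnnulus, aperture_photometry

def annulus_flux(image, center, r_in, r_out):
    # Flux summed inside a circular annulus; radii are in pixels here.
    aper = CircularAnnulus(center, r_in=r_in, r_out=r_out)
    return float(aperture_photometry(image, aper)["aperture_sum"][0])

def tail_fraction(observed, realizations, side="greater"):
    # Fraction of injected realizations at least as extreme as the observation.
    # side="greater" tests for excess extended emission; side="less" tests a flux deficit.
    realizations = np.asarray(realizations)
    if side == "greater":
        return np.mean(realizations >= observed)
    return np.mean(realizations <= observed)

# e.g. 94 of 10^4 realizations as bright or brighter gives a tail fraction of 0.0094,
# i.e. the null hypothesis is rejected at the 99.06% confidence level.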
§.§ Parametric light-profile fitting
For high-redshift quiescent galaxies, we reiterate that the spatial distribution of hot dust emission is unknown. Nonetheless, a meaningful estimate of the difference between the stellar and hot dust distributions is still very important, as this will enable quantitative comparisons between the MIRI observations and predictions from hydrodynamical simulations. In what follows, we thus use a few commonly assumed parametric light profiles to model the MIRI images, in an attempt to quantify the difference in size between the stellar and hot dust emission of GS-9209.
The analysis is done by performing PSF-convolved fitting of the 2D light distributions from F560W to F1800W. We use the software Galfitm <cit.> that is built upon Galfit3 <cit.> but also allows simultaneously modeling multi-band images. For PSF models of MIRI images at >10μ m, i.e. from F1000W to F1800W, there is a very limited number of MIR-bright point sources in the SMILES footprint, leading to noisy outskirts of the empirical PSF models. To mitigate this, instead of using the empirical PSFs, we build model PSFs at >10μ m following the strategy used for the NIRCam imaging of JADES <cit.>. In short, we first inject the Webbpsf <cit.> models into the individual stage-2 images, and then mosaic them following the same stage-3 data reduction of SMILES. The PSF models are then constructed from these PSF-mosaics. The empirical and model PSFs agree with each other very well at the center, but model PSFs have a much more stable behavior at the outskirts. For F560W and F770W, we do not use model PSFs, because the cross-shaped artifact (a.k.a. cruciform), an extended artifact caused by internal reflection in the F560W and F770W detectors <cit.>, is not well captured in current Webbpsf modeling. Instead, for F560W and F770W we use the empirical PSF models built upon observed point sources in SMILES following the method of <cit.>.
To begin, we first assume a single Sérsic profile for each one of the MIRI images. By default, because the S/N of the detections decrease with wavelength, we simultaneously model all MIRI images such that the fit uses the information of all the available data while allowing parameters to vary as a smooth function of wavelength. Such an approach has been demonstrated to provide more stable and accurate multi-wavelength morphological parameters than modeling images of different bands independently, particularly when some images have low-to-moderate S/N <cit.>. During the fitting, we fix the sky background to the 3σ-clipped median pixel value of a 5"×5" cutout after masking all detected sources using the segmentation map.
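For reference, the Sérsic model being fit has the form I(r) = I_e exp{-b_n [(r/r_e)^{1/n} - 1]}, with b_n fixed by requiring that r_e enclose half of the total light. A two-dimensional version is available in astropy and can be used to visualize what a given (r_e, n) implies before the PSF-convolved Galfitm fit; the parameter values below are placeholders, not fit results.

import numpy as np
from astropy.modeling.models import Sersic2D

y, x = np.mgrid[0:101, 0:101]                      # small illustrative pixel grid
model = Sersic2D(amplitude=1.0, r_eff=7.0, n=2.0,  # placeholder parameters
                 x_0=50, y_0=50, ellip=0.2, theta=0.5)
image = model(x, y)
# A PSF + Sersic decomposition corresponds to adding a point-source component
# before convolving the full model with the instrumental PSF, as Galfitm does internally.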
We fit the center, axis ratio and position angle as free parameters, but assume they do not vary with wavelength. We fit the Sérsic index n and allow it to change freely with wavelength. The main parameter of interest, the effective radius r_e, is assumed to vary with wavelength following a second order Chebyshev polynomial, following recent studies <cit.>. To derive the uncertainties of the fitted parameters, we use the error flux extension of the MIRI images to Monte Carlo resample the image pixel values, and then run Galfitm. We repeat the procedure 1000 times, and use the standard deviation as the parameter uncertainties. The fitting results are shown in Fig. <ref>. We obtain r_e= 217±10 pc in F560W and r_e= 261±17 pc in F770W, in good agreement with the size measured from the NIRCam observations within the uncertainty <cit.>. For F1800W, we obtain r_e= 430±27 pc.
We test the single Sérsic fitting results above by modeling each of the MIRI images independently, namely, apart from fixing the centroid, all other parameters (axis ratio, position angle, Sérsic index n and r_e) are allowed to vary freely with wavelength. We obtain r_e= 216±12, 225±17 and 461±49 pc in F560W, F770W and F1800W, respectively. We also test our results by (1) fitting the sky background as a free parameter and (2) using empirical PSFs (though with low S/N at the outskirts) at >10 μm. We do not see any substantial changes in our results. Therefore, our parametric light-profile fitting reaches the same conclusion as our source injection simulations: the light distribution of GS-9209 at rest-frame >3 μm is more extended than its stellar light. Assuming a single Sérsic profile, the size of the hot dust emission of GS-9209 is ≳3 times larger than that of its stellar light.
The quantitative difference in size between the stellar and hot dust emission depends on the parametric light profile assumed for the modeling. The presence of AGN in GS-9209, which is revealed by both broad Hα emission <cit.> and our SED fitting with MIRI, motivates the fit to the MIRI images using an alternative two-component model, a PSF component to account for the AGN, plus a Sérsic component to account for the host galaxy. During this modeling, each one of the MIRI images is fitted independently. From F560W to F1280W, the PSF+Sérsic fitting returns essentially the same results as the single Sérsic fitting: flux contribution from the PSF component is <10%. The lack of significant contribution from the PSF component suggests that a very compact stellar structure dominates GS-9209's emission only up to ∼2 μ m, a conclusion independently reached by our SED analysis below. For F1500W and F1800W, flux contributions from the PSF and from the Sérsic component become comparable. The flux ratio of the PSF to the Sérsic component is 0.46 and 0.65 for F1500W and F1800W, respectively. This suggests the observed MIRI fluxes at ∼3-5μ m are likely associated with two origins, i.e. AGN dust torus and the dust emission from diffuse ISM, a conclusion again independently reached by our SED analysis. We note, however, that with current F1500W and F1800W imaging, the reduced χ^2 between the single Sérsic fitting and the PSF+Sérsic fitting are statistically indistinguishable.
The PSF+Sérsic fitting returns r_e=2.4
±0.6 kpc in F1800W, with the best-fit Sérsic index of n∼0.2 which hits the minimal n commonly allowed in Sérsic modeling <cit.>. We therefore perform another PSF+Sérsic fitting by fixing n=1 (exponential disk), which returns r_e = 1.8±0.5 kpc in F1800W, i.e. ≈ 10 times of the stellar r_e. We note that, relative to the single Sérsic fitting, this significant increase in r_e derived from the PSF+Sérsic fitting is expected, because adding a PSF is equivalent to fitting the F1800W image of GS-9209 with some fraction of central (non-stellar) flux removed. We also note that, unlike the r_e from the single Sérsic fitting which is a characteristic size of total hot dust emission regardless of its origin, the r_e from the PSF+Sérsic fitting should be considered as the size of hot dust emission that is only associated with the diffuse ISM of GS-9209.
§ ANALYSIS OF SPECTRAL ENERGY DISTRIBUTION
We perform detailed analysis of the spectral energy distribution (SED) of GS-9209 via SED modeling, with an emphasis on constraining the origin of the observed MIRI fluxes. Data included in the modeling is the photometry of 25 filters from HST/ACS (F435W, F606W, F775W, F814W and F850LP), JWST/NIRCam (F090W, F115W, F150W, F182M, F200W, F210M, F277W, F356W, F410M, F430M, F444W, F460M and F480M) and JWST/MIRI (F560W, F770W, F1000W, F1280W, F1500W, F1800W and F2100W). All photometry is measured using the PSF-matched images where all images are PSF-homogenized to MIRI/F2100W, the longest wavelength band where GS-9209 is detected. During our SED fitting, an error floor is imposed on photometry: the uncertainty of flux is not allowed to be smaller than 5%, the typical uncertainty caused by imperfect flux calibration and PSF homogenization.
The SED fitting is done mainly with the software Prospector <cit.>. The basic setups of our Prospector fitting are detailed as follows. We fix the redshift to be z=4.658 <cit.>. We use the <cit.> stellar initial mass function. We adopt the FSPS stellar synthesis code <cit.> with the stellar isochrone libraries MIST <cit.> and the stellar spectral libraries MILES <cit.>. We use the <cit.> IGM transmission model. We include the model of <cit.> for nebular emission. By default, we treat differently the dust attenuation towards nebular emission and young (<10 Myr) stellar populations, and towards old (>10 Myr) stellar populations <cit.>. The dust attenuation of the former is modelled as a power law, while the latter is modelled following the parameterization of <cit.>, i.e., a modified <cit.> dust attenuation law with a Lorentzian-like profile to describe the 2175Å dust feature. Following <cit.>, we tie the strength of the 2175Å feature to the dust index of <cit.>.
For the fiducial star formation history (SFH), we use a nonparametric, piece-wise form composed of 9 lookback time bins with the continuity prior <cit.>. The first three lookback time bins are fixed to be 0-10, 10-30 and 30-100 Myr to capture recent episodes of star formation with relatively high time resolution. The last lookback time bin is 0.9t_H - t_H where t_H is the Hubble Time at z=4.658. The remaining five bins are evenly spaced in logarithmic space between 100 Myr and 0.9t_H. Such a choice of lookback time bins is very similar to those extensively used in recent studies of high-redshift massive galaxies <cit.>. Nonetheless, in a later Section, we still test our SED fitting results using different time bins of nonparametric SFH.
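For concreteness, the nine lookback-time bin edges described above can be written down explicitly; the short sketch below follows that description for the adopted cosmology (any rounding of the edges is ours).

import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

t_H = FlatLambdaCDM(H0=70, Om0=0.3).age(4.658).to(u.Myr).value  # Hubble time at z=4.658, in Myr

edges = np.concatenate([
    [0.0, 10.0, 30.0],                    # fixed edges giving the 0-10, 10-30 and 30-100 Myr bins
    np.geomspace(100.0, 0.9 * t_H, 6),    # five bins evenly spaced in log between 100 Myr and 0.9 t_H
    [t_H],                                # last bin: 0.9 t_H - t_H
])
bins = list(zip(edges[:-1], edges[1:]))   # the nine (start, stop) lookback-time bins
print(bins)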
With photometric data alone, it remains difficult to tightly constrain stellar and gas-phase metallicities. By default, we thus use strong priors based on the metallicity measures and their uncertainties from NIRSpec spectroscopy of <cit.>. Specifically, for stellar metallicity, we assume a Gaussian prior centered at 0.11Z_⊙ with a width of 0.1 dex. For gas-phase metallicity, we assume a Gaussian prior centered at 0.17Z_⊙ with a width of 1 dex. We shall test our SED fitting results using different metallicity assumptions in a later Section.
The MIRI imaging covers the spectral range of GS-9209 towards rest-frame ∼3-5 μm, where hot dust emission, if present, becomes increasingly important <cit.>. We thus also add models of dust emission to the SED modeling (see below for details).
§.§ SED modeling of the entire galaxy
We start by fitting the SED of GS-9209 using integrated photometry, i.e. total flux within a circular aperture of r=0.7" which is about the size of the galaxy's extent in the segmentation map. For dust emission, here we use models of <cit.> to model the emission from AGN dust torus, and of <cit.> to model the reprocessed emission of dust associated with diffuse ISM. These two models are the default of Prospector and have been widely used in the literature.
We do not include the results from this fitting of integrated photometry to the main text, as this analysis is mainly served as a consistency check with the previous inference of GS-9209's stellar-population properties done by <cit.> who performed the SED fitting mainly with the JWST/NIRSpec fixed-slit spectroscopy with medium resolution gratings of G235M and G395M.
The left panel of Fig. <ref> shows the SED fitting result of integrated photometry. The best-fit model shows a strong rest-frame 4000Å break and prominent absorption features associated with A-type stars, in line with the NIRSpec spectroscopy of <cit.> confirming the quiescent/post-starburst nature of GS-9209. More importantly, with MIRI photometry, it immediately becomes clear that the hot dust emission is required to reproduce the SED of GS-9209 at >2μ m. The origin of the hot dust emission will be elaborated in greater detail in following Sections through analysis of the inner and outer regions of GS-9209.
Using integrated photometry, we obtain a stellar mass of log_10(M_*/M_⊙)=10.62^+0.03_-0.02, in excellent agreement with that (log_10(M_*/M_⊙)=10.58±0.02) measured by <cit.> using NIRSpec spectroscopy.
We obtain a star formation rate of SFR = 2.7^+3.0_-2.4 M_⊙ / yr. The SFR from SED fitting can be sensitive to the assumed SFH <cit.>. If we use a parametric delayed-tau SFH, i.e. SFR(t)∝ t e^-t/τ, we would obtain SFR = 1.0^+2.6_-0.8 M_⊙ / yr. We note that our SED-inferred SFR, no matter which SFH is assumed, is larger than that inferred from the SED fitting of <cit.> which suggested that the SFR of GS-9209 is consistent with zero over the past 100 Myr. Interestingly, however, our SFR is actually in excellent agreement with what is inferred based on the narrow component of Hα emission (SFR_Hα,narrow = 1.9±0.1M_⊙/yr) in the NIRSpec spectrum of <cit.>. We mention that, even with the higher SFR from our measurement, the specific star formation rate (sSFR) of GS-9209 remains to be very low, i.e. log_10(sSFR/yr^-1)=-10.1^+0.5_-0.4 corresponding to ∼ 0.1/t_H which is smaller than the sSFR threshold of 0.2/t_H normally used for identifying quiescent galaxies at high redshift <cit.>.
Finally, in the right panel of Fig. <ref>, we compare the reconstructed SFH of our measurement with that of <cit.>. Overall, the two SFHs are similar: The major stellar mass assembly of GS-9209 started ∼500 Myr ago, i.e. z∼7, with a peak star formation rate SFR_peak∼ 200M_⊙/yr. Noticeably, however, the stellar population of GS-9209 inferred from our SED fitting is younger than that from <cit.> by ∼ 100 Myr (mass-weighted stellar age). We note that NIRSpec's fixed slit (S200A1) only has a width of 0".2, while in this work we find that GS-9209's outskirts are younger than its center. This can explain the younger stellar population found by our SED fitting of integrated photometry (r=0".7 aperture).
§.§ SED modeling of the inner region
Here we describe our SED analysis of the inner region of GS-9209. The inner region is enclosed by an r=0".3 (similar to the FWHM of MIRI/F2100W image) circular aperture centered at the centroid of F770W, i.e. the stellar continuum of GS-9209.
For dust emission, by default we use the same models as for the SED fitting of the integrated photometry (Section <ref>). The results have been presented in Fig. <ref> and discussed in detail in the main text. In what follows, we alter our SED modeling significantly to check whether our conclusion about the origin of the MIRI fluxes is sensitive to the default SED assumptions.
First, we replace both the default <cit.> dust model and the <cit.> AGN torus model with the semi-empirical dust emission models of <cit.>. The <cit.> templates are calibrated against various observations both for UV-to-IR AGN emission and for galaxy dust emission associated diffuse ISM <cit.>. Second, because of the highly uncertain dust extinction of AGN, instead of assuming the same dust attenuation law for both, following <cit.>, we use two different dust laws, namely, the dust attenuation law of <cit.> for the host galaxy and the dust extinction curve of SMC <cit.> for the AGN. Finally, instead of assuming a nonparametric SFH, we use a parametric delayed-tau SFH.
Fig. <ref> shows the fitting results of the alternative SED modeling. To begin, unlike the <cit.> model which only includes the contribution from AGN torus, the semi-empirical templates of <cit.> actually also include the UV-to-optical AGN emission. Yet, the fitting still suggests that the rest-UV to NIR emission of GS-9209 is dominated by starlight. At ≳2μ m, similar to what we found in the fiducial modeling, the observed MIRI fluxes cannot be reproduced by the alternative model with stellar emission alone, i.e. the need for dust emission. We remind that the dust emission of diffuse ISM is included to the alternative SED modeling. Nonetheless, the fitting still suggests that GS-9209's SED at ≳3μ m is dominated by the dust emission of AGN torus, in line with the fiducial modeling. It is also worth mentioning that based on our SED fitting we estimate the AGN luminosity of GS-9209 at rest-frame 5100Å to be L_5100=10^10.17L_⊙ which is in great agreement with that independently estimated by <cit.> who obtained L_5100∼10^10L_⊙ by SED fitting using the NIRSpec spectrum or L_5100∼10^10.2L_⊙ by converting the observed broad Hα flux using the relation of <cit.>.
Finally, another line of evidence that the inner MIRI fluxes of GS-9209 at ≳3μ m is dominated by AGN comes from additional SED fitting where we switch off the AGN component. Without the AGN component, the best-fit SED has the total (i.e. summation of all bands) χ^2_tot=56.3, which, as expected, is worse compared to χ^2_tot=37.4 of the best-fit model with AGN switched on. More importantly, with the assumption of energy balance, i.e. the energy absorbed by dust in the rest-frame UV is re-emitted in the IR, such a fitting forces to use the dust emission associated with diffuse ISM (i.e. star formation) to explain the observed MIRI fluxes. As a result, this fitting returns SFR =50±5M_⊙/yr, which is significantly higher than both that inferred from the narrow Hα emission <cit.> (SFR_Hα, narrow∼2M_⊙/yr), and the strict upper limit set by archival ALMA non-detections of GS-9209 <cit.> (SFR_ALMA<41M_⊙/yr). These together strongly argue against the origin of the observed MIRI fluxes to be dominated by the dust emission associated with diffuse ISM.
To summarize, the assumptions made in the above alternative SED modelings intentionally differ greatly from the fiducial ones. Yet, they reach conclusions fully in line with the fiducial SED fitting. We therefore conclude that our results regarding the origin of the MIRI fluxes of GS-9209's inner region – AGN dominated – are not sensitive to the SED assumptions.
§.§ SED modeling of the outer region
Here we describe our SED analysis of the outer region of GS-9209. The outer region is enclosed by a 0".3<r<0".7 circular annulus aperture. Due to PSF broadening, we subtract the flux contribution from the inner part to the outer aperture by assuming the inner component is a point source, which is a good approximation because the surface brightness profile of GS-9209's inner part is very similar to the PSF (Fig. <ref>). As the flux contribution from the inner region to the outer aperture has been removed, the AGN component is not included in the SED fitting of the outer region.
To begin, we run SED fitting with the assumption of energy balance. By default, we use the <cit.> model. As Fig. <ref> shows, the observed MIRI fluxes at >10μ m, or >2.5μ m, cannot be fitted well. This issue cannot be mitigated by replacing the <cit.> model with other (semi-)empirical dust models extensively used in the literature, e.g. <cit.>. Further, we test the finding by using a different stellar synthesis code BC03 <cit.> and SED fitting software Bagpipes <cit.>. These changes do not help to mitigate the issue, either. As detailed in the main text, however, this issue can be largely mitigated, if not fully resolved, by removing the energy balance assumption.
§.§ Tests on other SED assumptions
We now investigate the impact of other SED assumptions on the key conclusions of our SED analysis, namely that the outer region of GS-9209 is younger and more dust attenuated than its inner region. The results are plotted in Fig. <ref>. Specifically, we test the assumptions of:
* SFH. We test our results by using two other different forms of SFH, namely (1) a parametric delayed-tau SFH and (2) a nonparametric continuity SFH with lookback time bins different from the fiducial ones (here the nine lookback time bins are assumed to be 0-30 and 30-100 Myr for the first two bins; 0.85 t_H - t_H for the last bin; and evenly spaced in the logarithmic lookback time between 100 Myr and 0.85 t_H for the remaining six bins). As Fig. <ref> (blue and orange circles) shows, we do not see any substantial changes in our conclusions by altering the assumed SFH.
* metallicities. Instead of setting the stellar and gas-phase metallicities of GS-9209 to be free parameters, we assume that the inner and outer regions have the same metallicities and we fix them to be either (1) the best-fit values measured with NIRSpec spectroscopy <cit.> or (2) the solar values. We note that, according to its NIRSpec spectrum, the stellar metallicity of GS-9209 should be substantially lower than the solar value <cit.> and there is no evidence that the metallicities of the inner and outer regions are the same. As Fig. <ref> (green and red circles) shows, even with these arguably “bad” metallicity assumptions, we still find that the outer region of GS-9209 is younger and more dust attenuated than its inner region, showing that our conclusions are not sensitive to the assumed metallicities.
§.§.§ Acknowledgments
ZJ, GHR, FS, JMH, MR, YZ and CNAW are supported by JWST/NIRCam contract to the University of Arizona NAS5-02015.
The research of CCW is supported by NOIRLab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation.
GHR, JL and SA acknowledge support from the JWST Mid-Infrared Instrument (MIRI) Science Team Lead, grant 80NSSC18K0555, from NASA Goddard Space Flight Center to the University of Arizona.
FD and RM acknowledge support by the Science and Technology Facilities Council (STFC), ERC Advanced Grant 695671 “QUENCH", and by the UKRI Frontier Research grant RISEandFALL. RM also acknowledges funding from a research professorship from the Royal Society.
ST acknowledges support by the Royal Society Research Grant G125142.
BER acknowledges support from the NIRCam Science Team contract to the University of Arizona, NAS5-02015, and JWST Program 3215.
AJB acknowledges funding from the “FirstGalaxies" Advanced Grant from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (Grant agreement No. 789056).
The authors acknowledge the FRESCO team led by Dr. Oesch for developing their observing program with a zero-exclusive-access period.
This work made use of the lux supercomputer at UC Santa Cruz which is funded by NSF MRI grant AST 1828315, as well as the High Performance Computing (HPC) resources at the University of Arizona which is
funded by the Office of Research Discovery and Innovation (ORDI), Chief Information Officer (CIO), and University Information Technology Services (UITS).
§.§.§ Author contributions
ZJ, CCW, GHR, JL and SA contributed to the initial discovery. All authors contributed to the interpretation of results.
ZJ, JL and JMH contributed to the analysis of Spectral Energy Distribution. ZJ, CCW and FS contributed to the morphological analysis.
GHR, JL, SA, IS, YZ contributed to the design and data reduction of the MIRI imaging of the SMILES program. MR, FS, FD, ST, BR, RM, AJB and CNAW contributed to the design and data reduction of the NIRCam imaging of the JADES program. CCW and ST contributed to the design and data reduction of the NIRCam imaging of the JEMS program.
| null | null | null | null | null | null |
http://arxiv.org/abs/2409.17845v1 | 20240926135126 | DAΦNE -2023/24 Activity report | [
"C. Milardi",
"D. Alesini",
"M. Behtouei",
"S. Bilanishvili",
"S. Bini",
"M. Boscolo",
"B. Buonomo",
"S. Cantarella",
"A. Ciarma",
"A. De Santis",
"E. Di Pasquale",
"C. Di Giulio",
"G. Di Pirro",
"L. Foggetta",
"G. Franzini",
"A. Gallo",
"R. Gargana",
"S. Incremona",
"A. Liedl",
"A. Michelotti",
"L. Piersanti",
"D. Quartullo",
"R. Ricci",
"U. Rotundo",
"S. Spampinati",
"A. Stecchi",
"A. Stella",
"A. Vannozzi",
"M. Zobov"
] | physics.acc-ph | [
"physics.acc-ph",
"nucl-ex"
] |
DAΦNE - 2023/24 Activity report[Published in 2023 LNF Activity report: <http://www.lnf.infn.it/rapatt/2023/dafne.pdf>]
C. Milardi[Scientific head. Mail: [email protected]], D. Alesini, M. Behtouei, S. Bilanishvili, S. Bini, M. Boscolo, B. Buonomo,
S. Cantarella, A. Ciarma, A. De Santis[Author: [email protected]], E. Di Pasquale, C. Di Giulio, G. Di Pirro,
L. Foggetta, G. Franzini, A. Gallo, R. Gargana, S. Incremona, A. Liedl, A. Michelotti,
L. Piersanti, D. Quartullo, R. Ricci, U. Rotundo, S. Spampinati, A. Stecchi,
A. Stella, A. Vannozzi, M. Zobov.
§ INTRODUCTION
The DAΦNE accelerator complex <cit.> is a double-ring lepton collider operating at a center-of-mass energy of 1.02 GeV (Φ-resonance) with a full-energy injection system. The infrastructure consists of two independent rings (Main Rings - MRs), each 97 meters long, with intersection points at the Interaction Region (IR) and Ring Crossing Region (RCR), where beams cross at a horizontal angle of 50 mrad. DAΦNE also provides synchrotron light lines <cit.> and a Beam Test Facility <cit.>. Despite being constructed in the 1990s, DAΦNE remains a unique machine for studying low-energy kaons with momenta below 140 MeV/c. It pioneered the Crab-Waist collision scheme <cit.>, successfully boosting luminosity by a factor of three and becoming the benchmark approach for modern and future lepton colliders<cit.>.
Since 2021, DAΦNE has been providing data to the SIDDHARTA experiment <cit.>, enabling high-precision kaonic helium measurements, first with the preliminary SIDDHARTINO <cit.> setup and later with the final SIDDHARTA-2 configuration.
§ LAST YEAR RUN
The DAΦNE operations during the last year <cit.> have been devoted to delivering a statistically significant data sample to perform the first-ever measurement of kaonic deuterium X-ray transitions to the fundamental level <cit.>.
Operations for the SIDDHARTA-2 detector using a deuterium gas target started officially in the second half of May 2023, and have been organized in several runs.
Initially, the efforts on the DAΦNE side were aimed at optimizing the injection and collision setup, while the SIDDHARTA-2 group concentrated on testing the new experimental apparatus and tuning the setup to maximize the signal-to-noise ratio. Thereafter, operations continued prioritizing data delivery, with short periods dedicated to machine studies. After completing operations with the deuterium target, a calibration run with hydrogen gas followed, intended for further detector characterization and background studies.
§ COLLIDER TUNING
During the final operational period, the maximum injection efficiency for both beams reached 80%, with transport efficiency along the transfer lines close to 100%. These optimal efficiencies played a significant role in enabling the storage of high beam currents and fine-tuning the collision process. These improvements were critical in supporting data delivery with stable beam conditions.
§.§ Linear and Non-Linear Optics
The optics used in the MRs had been optimized in previous runs <cit.> and underwent continuous refinement. The Crab-Waist sextupole strengths were gradually increased to approximately 77% of their nominal values. This adjustment led to a significant improvement in instantaneous luminosity and enhanced control over background noise.
A detailed optimization of the chromatic sextupoles and octupoles, conducted iteratively during data collection, further improved beam lifetime and injection efficiency and enhanced the signal-to-noise ratio. As a result, the background noise affecting detector measurements was greatly reduced, contributing to cleaner data and more accurate results.
§.§ Beam Dynamics
Since fall 2023, several anomalous beam-dynamics trends, particularly affecting the electron beam, were observed. The vertical tune was lowered from its nominal value to ensure beam stability, while a reduced beam lifetime and strong vertical instabilities appeared at relatively low current levels. Sudden beam losses also occurred for electron beam intensities above 1.5 A. Collision operations, despite the stabilizing beam-beam effect, were impacted by the flip-flop effect. The electron beam also experienced vertical blow-up, leading to a sharp drop in luminosity and an increase in background noise. These symptoms were attributed to worsening vacuum conditions (as shown in Fig. <ref>-left) leading to ion-trapping-induced instabilities. To mitigate these issues, the number of bunches in collision was reduced from 110 to 95. This temporary adjustment allowed for continued data collection while investigations were underway. A vacuum leak inspection later revealed a small leak (on the order of 10^-8 mbar) in the first short arc of MRe. Once the leak was repaired, the electron beam dynamics stabilized, and collisions with 110 bunches were restored.
Throughout the collider operations, e-cloud effects posed significant challenges for positron beam dynamics. These effects were mitigated through a combination of strategies, including solenoid windings around the beam pipes, transverse feedback systems, tuning positron beam parameters, reducing RF cavity voltage to lengthen bunches, and adding Landau damping using octupole magnets. Given the importance of e-cloud instabilities, a campaign of simulations and measurements had been launched. These studies aimed not only to address the immediate challenges at DAΦNE but also to provide valuable insights for future collider projects, such as FCC-ee <cit.> helping to benchmark existing numerical codes <cit.>.
The density of the electron cloud induced by the positron beam in non-colliding configurations was measured at currents up to 800 mA. The horizontal tune shifts along the bunch train were used to calculate the electron cloud density, which was found to be as high as 10^14 m^-3, one of the highest recorded in any circular collider (see Fig. <ref>-right). This high density posed significant challenges to beam stability.
Grow-damp measurements were performed to study the multi-bunch horizontal instability in the positron beam due to e-cloud effects <cit.>. By deliberately turning off the horizontal feedback system for a specific time interval, the instability was induced, and the bunch-position signals were analyzed. The dominant instability mode, m = -1, was observed with a growth rate of 22 ms^-1. This mode is due to e-cloud created in wigglers and bending magnets and it is consistent with earlier observations from a 2008 analysis <cit.>.
During the final phase of operations, DAΦNE achieved significant improvements in beam dynamics, luminosity, and background noise control through a combination of optical adjustments and the mitigation of e-cloud and ion trapping instabilities. The collider reached stable beam currents of up to 1.0 A for positrons and 1.65 A for electrons during collisions. Additionally, increasing the vertical chromaticity to +1.5 proved effective in mitigating e-cloud effects on the positron beam.
§ COLLIDER PERFORMANCES
§.§ Luminosity
Luminosity at DAΦNE was monitored using two main devices: Crystal CALorimeters (CCAL) <cit.> and Gamma monitors. CCAL detected Bhabha scattering events at small angles and, due to its high event rate, was used as a real-time tool for luminosity optimization and was extremely effective also to monitor beam induced background rate and spatial distribution. However, the CCAL was not fully calibrated, and the Gamma monitor suffered from beam losses, preventing them from providing reliable absolute luminosity measurements.
Absolute luminosity was instead measured by the SIDDHARTA-2 detector, which detects charged kaon flux <cit.>. In addition, real-time background levels in the experimental apparatus were monitored by counters measuring Kaon over Minimum Ionizing Particle rate (Kaon/MIP) and Kaon over Silicon Drift Detector rate (Kaon/SDD). The Kaon/SDD rate was also used as a key quality parameter (L_HQ), indicating whether data could be used for physics analysis.
The highest instantaneous luminosity recorded was 2.4×10^32 cm^-2 s^-1, achieved with an electron beam current of 1.14 A and a positron beam current of 0.89 A, both stored in 110 bunches. The average efficiency of the collider, defined as the percentage of time it delivered luminosity above 10^32 cm^-2 s^-1 after beam refilling, was around 75%, as shown in Fig. <ref>-left.
Despite downtime due to maintenance, technical faults, and institutional duties, the overall trend for daily delivered luminosity remained positive. The maximum daily luminosity recorded reached approximately 9.5 pb^-1, as measured by the kaon monitor and reported in Fig. <ref>-right.
Operations over the last year were split into three runs, each of similar duration and uptime. The total delivered luminosity exhibits a substantial increase, as demonstrated in Fig. <ref>. In Run 3, luminosity delivery more than doubled compared to Run 1, marking a significant improvement in the collider’s performance.
§.§ Background
DAΦNE achieved significant progress in background reduction during its final operations, particularly in addressing two types of background: injection and coasting background.
Injection Background: Injection-related background was reduced through the optimization of injection efficiency and precise steering of the stored beam orbit. This approach successfully reduced the background for the electron beam but was less effective for the positron beam. To address this, the vertical size of the electron beam was artificially increased during injection using a calibrated skew quadrupole bump, thereby reducing the beam-beam interaction on the positron beam and preventing rapid lifetime drops and sudden background surges.
Coasting Background: Coasting background was minimized by reducing Kaon/MIP and Kaon/SDD rates through a comprehensive optimization of the collider’s non-linear optics. The tuning of sextupole and octupole magnets in both rings increased energy acceptance and dynamic aperture, leading to a nearly twofold improvement in Kaon/MIP rates and a 1.45-fold improvement in Kaon/SDD rates.
The improvements in background reduction resulted in a significantly enhanced signal-to-noise ratio (SNR) for the SIDDHARTA-2 experiment compared to the 2009 Crab-Waist test run <cit.>. The SNR was three times higher than in the 2009 run, thanks to the optimization of the collider configuration and upgrades to detector components, such as kaon, trigger, and Silicon Drift Detector (SDD) systems. Preliminary analysis indicated that these gains were largely due to collider optimization and the new design of the Permanent Magnet Defocusing Quadrupole installed in the low-beta section of the interaction region.
§ CONCLUSION
In its final operational phase, DAΦNE demonstrated considerable improvements in luminosity delivery and background reduction. The peak luminosity of 2.4×10^32 cm^-2 s^-1 and the consistent trend toward higher daily luminosity reflect the success of targeted optimizations. The collider operated with an efficiency of around 75%, delivering reliable performance despite routine maintenance and downtime. Moreover, background reduction strategies—particularly for injection and coasting phases—led to enhanced data quality, as evidenced by the improved signal-to-noise ratio. The DAΦNE lepton collider has delivered a data sample of the order of 1.24 fb^-1 to the SIDDHARTA-2 detector operating with a deuterium gas target, well beyond the experiment's request.
These achievements represent the culmination of extensive tuning efforts, and the lessons learned from these developments, particularly in the areas of Crab-Waist sextupole tuning and instability mitigation, have provided valuable insights for future high-luminosity colliders such as FCC-ee.
§ ACKNOWLEDGEMENTS
Many thanks to the Staff of the Accelerator and Technical Divisions of the LNF. Special acknowledgments to the operation group taking care of the collider operations 24 hours a day, their commitment largely contributed to achieve the present DAΦNE performances.
99
dafne
G. Vignola et al.,
“Status report on DAΦNE”,
Frascati Phys. Ser. vol. 4, pp. 19-30, Oct. 1996.
dafneLight
A. Balerna,
"DAΦNE-Light DXR1 Soft X-Ray Synchrotron Radiation
Beamline: Characteristics and XAFS Applications",
in Condens. Matter, 4, no. 1:7.
<doi:10.3390/condmat4010007>
BTF
L. Foggetta et al.,
"The Extended Operative Range of the LNF LINAC and BTF Facilities",
in Proc. of IPAC21, Campinas, Brazil, 2021,
paper THPAB113, p. 3987.
<doi:10.18429/JACoW-IPAC2021-THPAB113>
crab1
M. Zobov et al.,
"Test of crab-waist collisions at DAΦNE Φ- factory",
Phys. Rev. Lett. vol. 104, p. 174801, Apr. 2010.
<doi:10.1103/PhysRevLett.104.174801>
CWSiddharta
C. Milardi et al.,
"Experience with Φ upgrade including crab waist",
in Proc. of PAC09, Vancouver, Canada, 2009, paper MO4RAI01, p. 80.
fcwc1
Y. Funakoshi et al.,
“The SuperKEKB Has Broken the World Record of the Luminosity”,
in Proc. IPAC’22, Bangkok, Thailand, Jun. 2022, pp. 1-5.
<doi:10.18429/JACoW-IPAC2022-MOPLXGD1>
fcwc2
A. Abada et al., “FCC-ee: The Lepton Collider: Future Circular Collider conceptual design report. Volume 2”,
Eur. Phys. Jour. ST, vol. 228, p. 261, Jun. 2019.
<doi:10.1140/epjst/e2019-900045-4>
fcwc3
CEPC Study Group,
“CEPC Technical Design Report – Accelerator (v2)”,
in IHEP-CEPC-DR-2023-01, IHEP-AC-2023-01,12, 2023.
<doi:10.48550/arXiv.2312.14363>
fcwc4
A.E. Bondar et al.,
“Project of a Super Charm factory at the Budker Institute of Nuclear Physics in Novosibirsk”,
Phys. Atom. Nucl., vol. 76, p. 1072, Sep. 2013.
<doi:10.1134/S1063778813090032>
fcwc5
Luo, Q. et al.,
“Progress of Conceptual Study for the Accelerators of a 2-7GeV Super Tau Charm Facility at China”,
in Proc. IPAC’19, Melbourne, Australia, May 2019, pp. 643-645.
<doi:10.18429/JACoW-IPAC2019-MOPRB031>
fcwc6
A. Bogomyagkov et al.,
“Plan for development of circular colliders with Crab Waist at BINP”,
JINST, JINST 19 (2024) 02, P02017.
<doi:10.1088/1748-0221/19/02/P02017>
runSiddharta21
C. Milardi et al.,
“DAΦNE Commissioning for SIDDHARTA-2 Experiment”,
in Proc. IPAC’21, Campinas, Brazil, May 2021, pp. 1322-1325.
<doi:10.18429/JACoW-IPAC2021-TUPAB001>
helium4perf
D. Sirghi et al.,
"New measurements of kaonic helium-4 L-series X-rays yields in gas with the SIDDHARTINO setup",
Nucl. Phys. A, vol. 1029, p. 122567, 2023.
<doi:10.1016/j.nuclphysa.2022.122567>
helium4meas
D. Sirghi et al.,
"A new kaonic helium measurement in gas by SIDDHARTINO at the DAΦNE collider",
J. Phys. G, vol. 49, no. 5, p. 055106, Apr. 2022.
<doi:10.1088/1361-6471/ac5dac>
Milardi:ipac2024-wepr17
C. Milardi et al.,
“DAFNE operation strategy for the observation of the kaonic deuterium”,
in Proc. IPAC'24, Nashville, TN, May 2024, pp. 2504-2507.
<doi:10.18429/JACoW-IPAC2024-WEPR17>
deuteriumMeas
M. Tüchler et al.,
"A charged particle veto detector for kaonic deuterium measurements at DAΦNE",
J. Phys.: Conf. Ser., vol. 1138, p. 012012, May 2018.
<doi:10.1088/1742-6596/1138/1/012012>
DAFNE_IPAC23
C. Milardi et al.,
"DAΦNE Run for the SIDDHARTA-2 Experiment",
in Proc. of IPAC23, Venice, Italy, May 2023,
paper MOPL085, pp. 756-759.
<doi:10.18429/JACoW-IPAC2023-MOPL085>
ecloud
S. Ozdemir et al.,
"Electron cloud build-up studies for DAΦNE collider and FCCee damping ring",
in Proc. of IPAC2024,
Nashville, TN, USA, 2024, paper WEPR008.
quartullo:ibic24
D. Quartullo et al.,
"Bunch-by-bunch feedback system used as a diagnostic tool for multi-bunch beams in the DAΦNE collider",
in Proc. IBIC'24, Beijing, PRC, Sep 2024, in preparation.
drago
A. Drago, M. Zobov and D. Teytelman,
“Recent observations on a horizontal instability in the DAFNE positron ring”,
in Proc. PAC05, Knoxville, United States, May 2005, pp. 1841–1843.
ccallumikloe
A. De Santis et al.,
“DAΦNE Luminosity Monitor”,
in Proc. IBIC’18, Shanghai, China, Sep. 2018, pp. 81–84.
<doi:10.18429/JACoW-IBIC2018-MOPB06>
lumisid
M. Skurzok et al.,
"Characterization of the SIDDHARTA-2 luminosity monitor",
Journ. of Instr., vol. 15, p. P10010, Oct. 2020.
<doi:10.1088/1748-0221/15/10/P10010>
bckComp_09_24
M. Iliescu, and SIDDHARTA-2 Team,
Private Communication, May 2023.
| The DAΦNE accelerator complex <cit.> is a double-ring lepton collider operating at a center-of-mass energy of 1.02 GeV (Φ-resonance) with a full-energy injection system. The infrastructure consists of two independent rings (Main Rings - MRs), each 97 meters long, with intersection points at the Interaction Region (IR) and Ring Crossing Region (RCR), where beams cross at a horizontal angle of 50 mrad. DAΦNE also provides synchrotron light lines <cit.> and a Beam Test Facility <cit.>. Despite being constructed in the 1990s, DAΦNE remains a unique machine for studying low-energy kaons with momenta below 140 MeV/c. It pioneered the Crab-Waist collision scheme <cit.>, successfully boosting luminosity by a factor of three and becoming the benchmark approach for modern and future lepton colliders<cit.>.
Since 2021, DAΦNE has been providing data to the SIDDHARTA experiment <cit.>, enabling high-precision kaonic helium measurements, first with the preliminary SIDDHARTINO <cit.> setup and later with the final SIDDHARTA-2 configuration. | null | null | null | null | In its final operational phase, DAΦNE demonstrated considerable improvements in luminosity delivery and background reduction. The peak luminosity of 2.4×10^32 cm^-2 s^-1 and the consistent trend toward higher daily luminosity reflect the success of targeted optimizations. The collider operated with an efficiency of around 75%, delivering reliable performance despite routine maintenance and downtime. Moreover, background reduction strategies—particularly for injection and coasting phases—led to enhanced data quality, as evidenced by the improved signal-to-noise ratio. The DAΦNE lepton collider has delivered to the SIDDHARTA-2 detector using a deuterium gas target a data sample of the order of 1.24 fb^-1, well beyond the experiment request.
These achievements represent the culmination of extensive tuning efforts and the lessons learned from these developments, particularly in the areas of Crab-Waist sextupole tuning and instability mitigation, have provided valuable insights for future high-luminosity colliders such as FCC-ee. |
http://arxiv.org/abs/2409.17553v1 | 20240926055116 | What Roles can Spatial Modulation and Space Shift Keying Play in LEO Satellite-Assisted Communication? | [
"Chaorong Zhang",
"Qingying Wu",
"Yuyan Liu",
"Benjamin K. Ng",
"Chan-Tong Lam"
] | cs.IT | [
"cs.IT",
"eess.SP",
"math.IT"
] |
What Roles can Spatial Modulation and Space Shift Keying Play in LEO Satellite-Assisted Communication?
This work was supported by the Science and Technology Development Fund, Macau, SAR, under Grant 0044/2022/A1.
Chaorong Zhang21,
Qingying Wu22,
Yuyan Liu23,
Benjamin K. Ng24*,
and Chan-Tong Lam25
2Faculty of Applied Sciences, Macao Polytechnic University, Macao SAR, China
1p2314785.edu.mo,
[email protected],
[email protected],
[email protected],
[email protected]
*Corresponding author: Benjamin K. Ng
September 28, 2024
§ ABSTRACT
In recent years, the rapid evolution of satellite communications has played a pivotal role in addressing the ever-increasing demand for global connectivity; among these systems, Low Earth Orbit (LEO) satellites attract considerable attention due to their low latency and high data-throughput capabilities.
Based on this, we explore spatial modulation (SM) and space shift keying (SSK) designs as pivotal techniques to enhance spectral efficiency (SE) and bit-error rate (BER) performance in the LEO satellite-assisted multiple-input multiple-output (MIMO) systems.
Various performance analyses of these designs are presented in this paper, revealing insightful findings and conclusions through analytical methods and Monte Carlo simulations with perfect and imperfect channel state information (CSI) estimation.
The results provide a comprehensive analysis of the merits and trade-offs associated with the investigated schemes, particularly in terms of BER, computational complexity, and SE.
This analysis underscores the potential of both schemes as viable candidates for future 6G LEO satellite-assisted wireless communication systems.
Low Earth Orbit (LEO), satellite communication, spatial modulation (SM), space shift keying (SSK)
§ INTRODUCTION
The advent of beyond 5th generation (B5G) wireless communications and the subsequent progression towards 6th generation (6G) wireless systems usher in an era where wireless communication systems are expected to provide higher data rates, lower latency, and improved connectivity.
One of the pivotal technologies in achieving these advancements is the utilization of satellites in communication systems, particularly those facilitated by Low Earth Orbit (LEO) satellites [1-3].
LEO satellite-assisted wireless systems attract significant attention due to their potential to offer global coverage, reduced propagation delays, and enhanced signal quality compared to traditional Geostationary Earth Orbit (GEO) systems [4-6].
This also makes LEO satellites particularly appealing for enabling advanced wireless communication systems that serve a wide range of applications, including broadband internet access, disaster recovery, and Internet of Things (IoT) connectivity [7-8].
Nevertheless, LEO satellite-assisted wireless communications still require improvements in throughput and robustness to fully meet the demands of future 6G communications.
In recent years, there has been a surge of interest in the application of advanced signal processing and communication techniques to enhance the data rate and throughput in traditional multiple-input multiple-output (MIMO) wireless communications.
Among these, the spatial modulation (SM) scheme [9] and its variants, e.g., the space shift keying (SSK) scheme [10], emerge as promising strategies to improve the spectral efficiency (SE) and energy efficiency of such systems.
The SM scheme is a novel multiple-antenna technique that exploits the spatial dimension by dynamically activating transmit antennas to convey additional information, along with traditional constellation symbols modulated by M-ary phase shift keying (PSK)/quadrature amplitude modulation (QAM), to the receiver, thus providing higher SE [11].
Besides, as inter-antenna interference and inter-channel interference at the transmitter are common challenges in MIMO wireless systems, the SM scheme effectively addresses these issues by activating only one transmit antenna in each time slot [12].
However, the complexity of signal processing at the receiver and signal detection becomes higher compared to that of traditional MIMO schemes.
To mitigate the higher complexity in the SM scheme, the SSK scheme offers a simplified alternative by eliminating the need for constellation symbols processing at the transmitter and receiver, while still maintaining satisfactory bit-error rate (BER) performance [13].
However, while the SSK scheme shares some of the SM scheme's benefits, such as reduced interference, it struggles to achieve the higher SE that SM offers with the same number of transmit antennas.
Although several novel variants of the SM and SSK schemes have since been proposed [14-18], they often involve trade-offs among interference, complexity, and error performance, requiring further investigation to fully understand their potential benefits in various application scenarios.
Consequently, considering the aforementioned advantages, the SM and SSK schemes are designed into LEO satellite-assisted wireless systems to enhance spectral efficiency (SE) and ensure reliable BER performance by reducing interference.
The main contributions of our works are given as follows:
1) In order to further improve the SE of traditional LEO satellite-assisted MIMO wireless system, the LEO satellite-assisted SM (LEO-SM) and SSK (LEO-SSK) schemes are designed in this paper.
This paper is the first to explore and discuss the performance of these schemes under imperfect channel state information (CSI), as a notable contribution to the field of 6G LEO satellite-assisted wireless communications.
2) The analytical performance of SE and detection complexity is briefly presented in this paper, where some interesting and insightful findings are also given.
3) Monte Carlo simulations are applied in our work, and the simulation results show the superiority of the proposed schemes in BER performance and SE. Besides, by comparing the simulation results of the LEO-SM and LEO-SSK schemes, we also obtain some interesting insights, which further confirm the necessity of a trade-off selection between these two schemes.
The rest of the paper is organized as follows:
In Sec. 2, the system model of the LEO-SM and LEO-SSK schemes is given, with details of the channel models, signal detection, and imperfect CSI estimation.
The performance analyses of detection complexity and SE are presented in Sec. 3 and Sec. 4, respectively.
The theoretical and simulation results of both proposed schemes are shown in Sec. 5.
Finally, the conclusion is given in Sec. 6.
§ SYSTEM MODEL
As shown in Fig. 1, we examine a wireless communication system facilitated by a LEO satellite, where a single satellite is equipped with N_t transmit antennas, and a ground outdoor terminal is equipped with N_r receive antennas.
In this configuration, the LEO satellite transmits signals to the ground terminal, which serves as the receiver [19].
The ground terminal, located within the designated cell, can be viewed either as the user or the base station associated with the corresponding LEO satellite.
To further enhance the SE of LEO satellite-assisted wireless networks, we design the SM scheme, where the bit sequence is mapped into M-ary PSK/QAM constellation symbols and the activated antenna is conveyed separately.
Additionally, an SSK scheme is considered as an alternative, offering a trade-off that reduces complexity at both the transmitter and the receiver.
A detailed illustration of the processing involved in both schemes is provided in the following sub-subsection.
§.§ Encoding Process
In the traditional SM scheme, each transmit antenna represents a different bit sequence, and only one transmit antenna is selected for activation.
Through this activated transmit antenna, an M-ary PSK/QAM symbol is transmitted to the receiver, so that the receiver can recover two transmitted bit sequences, one from the antenna index and one from the constellation symbol.
For instance, according to Fig. 2, where the detailed process and mapping tables of SM and SSK encoding are given, the transmitted bit sequence 0010 is divided into two sequences r_1={ 00 } and r_2={ 10 }, which are mapped onto the quadrature PSK/QAM constellation symbol and the first transmit antenna, respectively.
After this, r_1 and r_2 can be represented by a complex value of constellation symbol as s and an antenna-selection vector as 𝐯=[ 1, 0, 0, 0 ], where 0 represents the inactivated antennas, 1 represents the activated one, and the places of each element in 𝐯 correspond to the positions and labels of each transmit antenna at transmitter in Fig. 1.
Multiplying s and 𝐯, the transmitted vector can be obtained, given as 𝐱_1=[ s, 0, 0, 0 ].
Without loss of generality, when M-ary constellation symbols are transmitted and N_t transmit antennas are equipped, 𝐯 and 𝐱_1 can be denoted as the length-N_t vectors 𝐯=[ 0,1,…,0 ] and 𝐱_1=[ 0,s,…,0 ].
Similar to the encoding process of SM scheme, the antenna-selection vector in SSK scheme can be equally expressed as 𝐯=[ 0,1,..,0 ].
However, the constellation symbols are not involved in the SSK scheme, thus giving the transmitted vector as 𝐱_2=[ 0,1,..,0 ].
Fig. 2 also shows a specific example of the SSK encoding process, where only r_1={ 00 } is conveyed and mapped into the antenna-selection vector 𝐯=[ 1, 0, 0, 0 ].
It is worth noting that pure power is emitted from the activated transmit antenna, without any constellation modulation.
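To make the two encoders concrete, a minimal sketch is given below (Python with numpy); the Gray-mapped QPSK alphabet and the natural-binary bit-to-antenna mapping are illustrative assumptions and need not coincide with the specific mapping table of Fig. 2.

```python
import numpy as np

def qpsk(bits):
    """Map 2 bits to a unit-energy QPSK symbol (Gray mapping; illustrative choice)."""
    table = {(0, 0): 1 + 1j, (0, 1): -1 + 1j, (1, 1): -1 - 1j, (1, 0): 1 - 1j}
    return table[tuple(bits)] / np.sqrt(2)

def sm_encode(bits, n_t=4, M=4):
    """SM: log2(M) bits select the symbol s, log2(N_t) bits select the antenna."""
    b2 = int(np.log2(M))
    s = qpsk(bits[:b2])
    ant = int("".join(map(str, bits[b2:])), 2)   # natural-binary antenna index
    x = np.zeros(n_t, dtype=complex)             # x = v * s, v = antenna-selection vector
    x[ant] = s
    return x

def ssk_encode(bits, n_t=4):
    """SSK: all bits select the antenna; the activated antenna emits pure power."""
    ant = int("".join(map(str, bits)), 2)
    x = np.zeros(n_t, dtype=complex)
    x[ant] = 1.0
    return x

print(sm_encode([0, 0, 1, 0]))  # '00' -> QPSK symbol, '10' -> antenna index 2 here
print(ssk_encode([0, 0]))       # '00' -> antenna 0, i.e. v = [1, 0, 0, 0]
```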
§.§ Channel Model
In this subsection, we detail the channel model for the forward terminal link in LEO satellite communication, divided into path loss and small-scale fading components [20-21].
Firstly, the path loss, L, including large-scale effects, is expressed as:
L = L_b + L_g + L_s,
where L_b, L_g, and L_s represent the basic path loss, attenuation due to atmospheric gases, and attenuation from ionospheric or troposphere scintillation, respectively.
The basic path loss in dB is modeled as:
L_b = FSPL(d, f_c) + SF + CL(θ_E, f_c),
where FSPL is the free-space path loss, SF denotes shadow fading, Gaussian distributed as SF∼𝒩(0, σ_SF^2), and CL is the clutter loss.
The free-space path loss is calculated as:
FSPL(d, f_c) = 32.45 + 20 log_10(f_c) + 20 log_10(d),
with d being the slant distance in meters, f_c being the carrier frequency in GHz, and θ_E being the elevation angle from the user equipment (UE) to the satellite. The slant distance d is determined by the satellite altitude h_0 and θ_E as:
d = √(R_E^2 sin^2(θ_E) + h_0^2 + 2h_0R_E) - R_Esin(θ_E),
where R_E, the Earth's radius, is approximately 6,371 km.
The attenuation due to atmospheric gases, dependent on frequency, elevation angle, altitude, and water vapor density, is given by:
L_g = A_zenith(f_c)/sin(θ_E),
where A_zenith represents the zenith attenuation for frequencies between 1 and 1000 GHz.
Scintillation loss L_s arises from ionospheric or tropospheric effects, relevant for frequencies below 6 GHz (ionospheric) and above 6 GHz (tropospheric).
Rain and cloud attenuation are also considered for frequencies above 6 GHz, but are negligible under the clear-sky assumption used here.
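A compact sketch of Eqs. (1)-(5) under the clear-sky, LoS assumptions is given below (Python with numpy); the default attenuation and shadow-fading values follow the simulation settings quoted in Sec. 5, and the ITU table lookups for A_zenith and scintillation are replaced by fixed inputs.

```python
import numpy as np

R_E = 6371e3  # Earth radius [m]

def slant_distance(h0_m, elev_rad):
    """Slant range d of Eq. (4)."""
    return np.sqrt(R_E**2 * np.sin(elev_rad)**2 + h0_m**2 + 2 * h0_m * R_E) \
           - R_E * np.sin(elev_rad)

def fspl_db(d_m, fc_ghz):
    """Free-space path loss of Eq. (3): d in metres, f_c in GHz."""
    return 32.45 + 20 * np.log10(fc_ghz) + 20 * np.log10(d_m)

def total_path_loss_db(h0_m, elev_deg, fc_ghz, sigma_sf_db=1.0, a_zenith_db=0.22,
                       l_scint_db=0.13, cl_db=0.0, rng=np.random.default_rng(0)):
    """Total loss L = L_b + L_g + L_s of Eqs. (1)-(2) and (5)."""
    elev = np.deg2rad(elev_deg)
    d = slant_distance(h0_m, elev)
    sf = rng.normal(0.0, sigma_sf_db)        # shadow fading ~ N(0, sigma_SF^2)
    l_b = fspl_db(d, fc_ghz) + sf + cl_db    # basic path loss, Eq. (2)
    l_g = a_zenith_db / np.sin(elev)         # gaseous attenuation, Eq. (5)
    return l_b + l_g + l_scint_db

print(total_path_loss_db(h0_m=780e3, elev_deg=60, fc_ghz=28))  # d ~ 884.85 km case
```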
Secondly, the small-scale channel is modeled as a shadowed Rician fading channel with both line-of-sight (LoS) and non-LoS (NLoS) components.
We define 𝐇∈ℂ ^N_r× N_t as the channel matrix between the LEO satellite and ground terminal.
The shadowed Rician fading channel coefficient between i-th transmit antenna and l-th receive antenna is:
h_i,l = √(K/(K+1))|h_i,l^LoS| + √(1/(K+1))|h_i,l^NLoS|,
where K is the Rician factor and 𝐇={ h_i,l} _i=1,l=1^N_t,N_r.
The LoS and NLoS components, h_i,l^LoS and h_i,l^NLoS, follow:
|h_i,l^LoS| ∼Nakagami(m,Ω), ∠ h_i,l^LoS∼Unif[0, 2π),
|h_i,l^NLoS| ∼Rayleigh(σ_R), ∠ h_i,l^NLoS∼Unif[0, 2π),
where m and Ω are the shape and spread parameters for the Nakagami distribution, and σ_R relates to the average magnitude of the NLoS component.
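The small-scale coefficients of Eqs. (6)-(8) can be drawn as sketched below (Python with numpy); a Nakagami(m, Ω) magnitude is generated as the square root of a Gamma(m, Ω/m) variate, and the example parameters follow the simulation settings quoted in Sec. 5 (K=1, m=0.8, Ω=1, σ_R=1).

```python
import numpy as np

def shadowed_rician(n_r, n_t, K=1.0, m=0.8, omega=1.0, sigma_r=1.0,
                    rng=np.random.default_rng(0)):
    """Draw an (n_r x n_t) channel matrix H following Eqs. (6)-(8)."""
    mag_los = np.sqrt(rng.gamma(shape=m, scale=omega / m, size=(n_r, n_t)))  # Nakagami(m, omega)
    mag_nlos = rng.rayleigh(scale=sigma_r, size=(n_r, n_t))                  # Rayleigh(sigma_R)
    ph_los = rng.uniform(0, 2 * np.pi, size=(n_r, n_t))
    ph_nlos = rng.uniform(0, 2 * np.pi, size=(n_r, n_t))
    h_los = mag_los * np.exp(1j * ph_los)
    h_nlos = mag_nlos * np.exp(1j * ph_nlos)
    return np.sqrt(K / (K + 1)) * h_los + np.sqrt(1 / (K + 1)) * h_nlos

print(shadowed_rician(n_r=2, n_t=4).shape)
```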
Thirdly, the Doppler shift f_d in the downlink can be given as:
f_d=(v/c)( R_E/(R_E+h_0) )cos(α) f_c,
where c is the speed of light and v is the relative velocity between the satellite and ground station, α is the satellite elevation angle.
Meanwhile, by defining D as the distance between the satellite and the ground terminal, we can obtain the time delay τ as τ =D/c.
By considering the Doppler shift, the channel response between i-th transmit antenna and l-th receive antenna at t-th time slot can be obtained as:
h_i,l( t ) =h_i,le^-j2π f_cτ _tδ( τ -τ _t ),
where τ _t represents the delay at t-th time slot.
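A small helper for the Doppler shift of Eq. (9) and the delayed response of Eq. (10) is sketched below (Python with numpy); the relative speed used in the example is an illustrative LEO value rather than the setting quoted in Sec. 5.

```python
import numpy as np

C = 299_792_458.0   # speed of light [m/s]
R_E = 6371e3        # Earth radius [m]

def doppler_shift(v_ms, h0_m, alpha_rad, fc_hz):
    """Eq. (9): f_d = (v/c) * (R_E / (R_E + h_0)) * cos(alpha) * f_c."""
    return (v_ms / C) * (R_E / (R_E + h0_m)) * np.cos(alpha_rad) * fc_hz

def delayed_response(h, fc_hz, tau_t):
    """Eq. (10): phase rotation of h at delay tau_t (the Dirac factor is implicit)."""
    return h * np.exp(-1j * 2 * np.pi * fc_hz * tau_t)

tau = 700e3 / C  # propagation delay for D = 700 km
fd = doppler_shift(v_ms=7.5e3, h0_m=780e3, alpha_rad=np.deg2rad(30), fc_hz=28e9)
print(f"tau = {tau:.2e} s, f_d = {fd / 1e3:.0f} kHz")
```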
§.§ Transmission and Detection
After obtaining the system configurations and channel models, the expression of received signals in the LEO-SM and LEO-SSK schemes can be given respectively as:
y_LEO-SM=√(L)𝐇𝐯s+n
and
y_LEO-SSK=√(E_sL)𝐇𝐯+n,
where n∈ℂ ^N_r× 1 represents the white Gaussian noise vector with n={ n_l } _l=1^N_r, n_l is white Gaussian noise with zero mean and variance N_0, and E_s is the symbol's energy.
Therefore, the instantaneous SNR at the l-th receive antenna in the LEO-SM and LEO-SSK schemes can be obtained as:
γ_l^LEO-SM=‖√(L)h_i,l^𝐯s+n_l‖_2/N_0
and
γ_l^LEO-SSK=‖√(E_sL)h_i,l^𝐯+n_l‖_2/N_0,
where we define 𝐇𝐯={ h_i,l^𝐯} _i=1,l=1^N_t,N_r.
Besides, considering the accuracy of signal detection, the maximum likelihood (ML) detector is applied in both the SM and SSK schemes, which can be respectively expressed as follows:
{𝐯̂,ŝ} = arg min_𝐯,s‖y_SM-√(L)𝐇𝐯s‖_2,
and
{𝐯̂} = arg min_𝐯‖y_SSK-√(E_sL)𝐇𝐯‖_2.
where 𝐯̂ and ŝ represent the estimated 𝐯 and s.
After the processing of the ML detector, we can easily demodulate and decode the detected results and obtain the transmitted bit sequences.
Comparing Eqs. (14) and (15), it is obvious that the complexity of the ML detector in the LEO-SSK scheme is lower than that in the LEO-SM scheme; more details are given in Sec. 3.
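A minimal sketch of the exhaustive searches in Eqs. (14)-(15) is given below (Python with numpy); the channel draw, noise level and QPSK alphabet in the toy usage are illustrative only and do not reproduce the shadowed-Rician model above.

```python
import numpy as np

QPSK = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)

def ml_detect_sm(y, H, sqrt_L, alphabet=QPSK):
    """Eq. (14): search over (antenna, symbol) minimising ||y - sqrt(L) H v s||_2."""
    best, best_metric = None, np.inf
    for ant in range(H.shape[1]):
        for k, s in enumerate(alphabet):
            metric = np.linalg.norm(y - sqrt_L * H[:, ant] * s)
            if metric < best_metric:
                best, best_metric = (ant, k), metric
    return best                      # estimated (antenna index, symbol index)

def ml_detect_ssk(y, H, sqrt_EsL):
    """Eq. (15): search over the antenna index only."""
    metrics = np.linalg.norm(y[:, None] - sqrt_EsL * H, axis=0)
    return int(np.argmin(metrics))   # estimated antenna index

# toy usage: SM sends QPSK[2] on antenna 2, SSK activates antenna 1
rng = np.random.default_rng(1)
H = (rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4))) / np.sqrt(2)
noise = lambda: 0.05 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))
y_sm, y_ssk = H[:, 2] * QPSK[2] + noise(), H[:, 1] + noise()
print(ml_detect_sm(y_sm, H, sqrt_L=1.0), ml_detect_ssk(y_ssk, H, sqrt_EsL=1.0))
```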
§.§ Imperfect CSI Estimation
In a practical satellite scenario, due to the complex wireless environment, the CSI estimation at the receiver can hardly be perfect.
Considering the imperfect CSI estimation at receiver and referring to the imperfect CSI model from [22-23], the coefficient of h_i,l^𝐯 can be expressed as the sum of ḣ_i,l^𝐯∼𝒞𝒩 (0,1-δ _e1^2) and ḧ_i,l^𝐯∼𝒞𝒩 (0,δ _e1^2).
Let δ _e2^2 denote the inaccuracy of the CSI estimation, with δ̈_e2^2=1-δ _e2^2.
Thus, we can obtain the received signals at the l-th receive antenna for the LEO-SM and LEO-SSK schemes with imperfect CSI estimation as
ÿ_l^SM=√(L)( δ _e2^2ḧ_i,l^𝐯+δ̈_e2^2ḣ_i,l^𝐯) s+n_l
and
ÿ_l^SSK=√(E_sL)( δ _e2^2ḧ_i,l^𝐯+δ̈_e2^2ḣ_i,l^𝐯) +n_l.
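The effective coefficients entering Eqs. (16)-(17) can be drawn as sketched below (Python with numpy); the value of δ_e1^2 is an illustrative assumption, since the text does not fix it.

```python
import numpy as np

def effective_csi_channel(n_r, n_t, delta_e1_sq=0.5, delta_e2_sq=0.2,
                          rng=np.random.default_rng(0)):
    """delta_e2^2 * h_ddot + (1 - delta_e2^2) * h_dot, with h_dot ~ CN(0, 1 - delta_e1^2)
    (estimated part) and h_ddot ~ CN(0, delta_e1^2) (estimation-error part)."""
    def cn(var, size):  # circularly-symmetric complex Gaussian samples
        return np.sqrt(var / 2) * (rng.standard_normal(size) + 1j * rng.standard_normal(size))
    h_dot = cn(1.0 - delta_e1_sq, (n_r, n_t))
    h_ddot = cn(delta_e1_sq, (n_r, n_t))
    return delta_e2_sq * h_ddot + (1.0 - delta_e2_sq) * h_dot

print(effective_csi_channel(2, 4).shape)
```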
§ DETECTION COMPLEXITY
The complexity of the ML detector in both the LEO-SM and LEO-SSK schemes is discussed in this section.
We calculate the complexity based on Eq. (14) for the LEO-SM scheme and Eq. (15) for the LEO-SSK scheme, counting both complex multiplications (CMs) and complex additions (CAs).
For the LEO-SM scheme, the operations required for CM and CA are detailed as follows:
1) The multiplication 𝐇𝐯s involves N_r× N_t CMs and (N_r× (N_t- 1)) CAs.
2) The multiplication of the resulting vector 𝐇𝐯 by the complex symbol s and the real value √(L) requires N_r CMs and 0 CAs.
3) The subtraction operation of y_SM-√(L)𝐇𝐯s involves N_r CAs.
4) Calculating the squared norm ‖·‖_2 requires N_r CMs and (N_r-1) CAs, to square the modulus of each element and add them up, respectively.
5) In total, for each candidate set {𝐯̂,ŝ}, the complexity is N_r( N_t+2 ) CMs and N_r( N_t+1 ) -1 CAs.
6) Finally, the total number of candidate sets {𝐯̂,ŝ} to loop over is N_t× M.
Therefore, the total operations of CM and CA in ML detector for the LEO-SM scheme can be obtained as:
C_LEO-SM=[ N_r( 2N_t+3 ) -1 ] N_tM.
Similarly, by following the above steps, the total numbers of CMs and CAs can be respectively obtained as N_r( N_t+1 ) and N_r( N_t+1 ) -1 for the term ‖y_SSK-√(E_sL)𝐇𝐯‖_2 of the LEO-SSK scheme, without the complex-scalar multiplication of the term √(E_sL)𝐇𝐯.
Thus, the total operations of the ML detector applied in the LEO-SSK scheme can be obtained as:
C_LEO-SSK=[ N_r(2N_t+2)-1 ] N_t.
According to Eqs. (18) and (19), we can easily find that the detection complexity increases with N_t in both proposed schemes and additionally with M in the LEO-SM scheme, which also indicates that the LEO-SM scheme has higher detection complexity than the LEO-SSK scheme under the same N_t.
Besides, we report the running times of Monte Carlo simulations to provide further comparisons and more insightful conclusions on complexity, which are presented and discussed in Sec. 5.
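The closed-form counts of Eqs. (18)-(19) are straightforward to evaluate; the small helper below (plain Python) reproduces, for instance, the Fig. 4 settings (LEO-SM with N_t=16, M=2 and LEO-SSK with N_t=32, both with N_r=2).

```python
def ml_ops_leo_sm(n_t, n_r, M):
    """Eq. (18): total complex operations (CMs + CAs) of the LEO-SM ML detector."""
    return (n_r * (2 * n_t + 3) - 1) * n_t * M

def ml_ops_leo_ssk(n_t, n_r):
    """Eq. (19): total complex operations (CMs + CAs) of the LEO-SSK ML detector."""
    return (n_r * (2 * n_t + 2) - 1) * n_t

# Fig. 4 settings (both 5 bpcu): LEO-SM (N_t=16, M=2) vs LEO-SSK (N_t=32), N_r=2
print(ml_ops_leo_sm(16, 2, 2), ml_ops_leo_ssk(32, 2))
```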
§ SPECTRAL EFFICIENCY
Assuming B_1 bits are transmitted in the LEO-SM scheme, B_1 is divided into two parts, i.e., b_1=log _2N_t and b_2=log _2M, which are mapped into the activated transmit antenna and onto the M-ary PSK/QAM constellation symbols, respectively.
Specifically, B_1 can be expressed as follows:
B_1 = b_1+b_2
= log _2N_t + log _2M.
Here, for clarity, the SE performance is analyzed in bits per channel use (bpcu) across the LEO-SM and LEO-SSK schemes.
Consequently, the SE of the LEO-SM scheme is given as B_1.
Similar to b_1 of the LEO-SM scheme, the SE of the LEO-SSK scheme is given as B_2 = log _2N_t.
However, under the same conditions for N_t, the SE of the LEO-SSK scheme is lower than that of the LEO-SM scheme because the LEO-SM scheme allows for the additional transmission of constellation symbols.
Besides, the SE of B_1 is reduced to b_2 when only constellation symbols are transmitted, as in the traditional LEO satellite-assisted M-ary PSK/QAM wireless transmission.
This reduction highlights the SE improvement offered by the LEO-SM scheme.
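These bookkeeping relations are transcribed directly below (plain Python) for reference; the example values are illustrative.

```python
from math import log2

def se_leo_sm(n_t, M):
    """B_1 = log2(N_t) + log2(M) bpcu for the LEO-SM scheme."""
    return log2(n_t) + log2(M)

def se_leo_ssk(n_t):
    """B_2 = log2(N_t) bpcu for the LEO-SSK scheme."""
    return log2(n_t)

# Same N_t = 16: LEO-SM with QPSK carries 6 bpcu, LEO-SSK carries 4 bpcu,
# while a plain M-ary scheme with M = 4 carries only b_2 = 2 bpcu.
print(se_leo_sm(16, 4), se_leo_ssk(16), log2(4))
```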
§ THEORETICAL AND SIMULATION RESULTS
The theoretical and simulation results of the proposed LEO-SM and LEO-SSK schemes are presented and discussed in this section, along with additional insights and conclusions.
Monte Carlo simulations are employed, with all experiments run over 1× 10^6 channel realizations.
Similar to traditional schemes, E_b/N_0 is considered as the SNR, where E_b=| x_k |^2=1 represents the symbol's energy.
The major simulation parameters are listed as follows [20-21]:
f_c = 28 GHz,
h_0 = 780 km,
θ_E = 60^∘, i.e.,
d = 884.85 km,
σ_SF = 1,
A_zenith = 0.22,
L_s = 0.13 dB (worst case),
σ_R = 1,
κ = 1,
m = 0.8,
and
Ω = 1.
For the parameters in Doppler shift, we let
v= 100 km,
α = 30^∘,
and D=700 km.
We assume a LoS channel between the LEO satellite and the terminal, and hence CL= 0.
Besides, the ML detector is employed in all simulation results of BER performance.
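For reference, the quoted settings can be collected into a single configuration, as sketched below (plain Python); the field names are ours, and only the unambiguous values from the list above are included.

```python
SIM_PARAMS = {
    "fc_ghz": 28.0,        # carrier frequency
    "h0_km": 780.0,        # satellite altitude
    "elev_deg": 60.0,      # elevation angle (slant range d ~ 884.85 km)
    "sigma_sf_db": 1.0,    # shadow-fading standard deviation
    "a_zenith_db": 0.22,   # zenith gaseous attenuation
    "l_scint_db": 0.13,    # scintillation loss (worst case)
    "cl_db": 0.0,          # clutter loss under the LoS assumption
    "K": 1.0, "m": 0.8, "omega": 1.0, "sigma_r": 1.0,   # shadowed-Rician parameters
    "alpha_deg": 30.0, "D_km": 700.0,                    # Doppler / delay geometry
    "n_realizations": 10**6,                             # Monte Carlo channel realizations
}
```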
The traditional LEO satellite-assisted wireless communication scheme without any additional spatial techniques is considered as the benchmark for comparison in this section; it only transmits M-ary PSK/QAM constellation symbols and is called the Trad. LEO scheme for convenience.
For fair comparison, the SE is equal to 4 bpcu for results in Fig. 3 (a) and 3 bpcu in Fig. 3 (b), with N_r=2.
Other different conditions of N_t, M, and δ_e2^2 are given in Fig. 3, which are not elaborated in detail.
Fig. 3 (a) illustrates the BER performance of the LEO-SM, LEO-SSK, and Trad. LEO schemes with δ_e2^2=0, while Fig. 3 (b) presents the BER performance for various non-zero values of δ_e2^2.
As observed in Fig. 3 (a), the BER performance of the LEO-SM scheme outperforms that of the LEO-SSK and Trad. LEO schemes.
This also indicates that the LEO-SM scheme has better BER performance than that of the LEO-SSK scheme under the high-speed mobile wireless scenario.
Moreover, the traditional LEO scheme has better BER performance than the LEO-SSK scheme, which demonstrates that conveying information only through antenna selection cannot provide satisfactory BER performance in LEO satellite-assisted communications.
In Fig. 3 (b), the LEO-SM scheme shows superior BER performance compared to the LEO-SSK and Trad. LEO schemes under conditions of imperfect CSI estimation, specifically when δ_e2^2 takes on values of 0.2 and 0.5, which indicates that the LEO-SM scheme has the better robustness with high δ_e2^2.
Besides, by comparing the results of the LEO-SM and LEO-SSK schemes across various values of δ_e2^2, it is observed that the SNR at which the BER curves reach the error floor increases as δ_e2^2 diminishes.
For a given δ_e2^2, the LEO-SM scheme enters the error floor at a higher SNR value compared to the LEO-SSK scheme.
This observation further demonstrates the superior robustness of the LEO-SM scheme under imperfect CSI estimation conditions.
Fig. 4 presents a comparative analysis of the LEO-SM and LEO-SSK schemes under the same values of δ_e2^2=0.2 but with varying values of N_r and m, as depicted in the left and right figures, respectively.
Specifically, the LEO-SM scheme is characterized by N_t=16, M=2, and N_r=2, while the LEO-SSK scheme has N_t=32, N_r=2.
As shown in Fig. 4, the BER performance of both schemes improves with an increase in N_r and m, and the SNR value at which the error floor occurs also increases with N_r and m.
Fig. 5 considers four sets of conditions, combined as 𝐐=[ { 8,8 } ,{ 16,16 } ,{ 32,32 } ,{ 64,64 }], with 𝐐={ Q( q ) } _q=1^4 and Q( q ) ={ N_t,M }.
Since the complexity of the system is directly reflected by the running time of the simulations, the running times of the LEO-SM, LEO-SSK, and Trad. LEO schemes are given in Fig. 5 (a) for the first to third sets in 𝐐, i.e., Q( 1 ) ={ 8,8 }, Q( 2 ) ={ 16,16 }, and Q( 3 ) ={ 32,32 }.
In the left figure, it is evident that the complexity of the schemes increases with the rise in N_t and M.
Particularly in the LEO-SM scheme, which processes constellation symbols and antenna selection simultaneously, the complexity is much larger than that of the others, especially when N_t and M take large values.
Conversely, the complexity of the LEO-SSK scheme is only slightly higher than that of the Trad. LEO scheme, which stems from implementation details and makes far less difference in practical applications.
Also, the SE results of these three schemes are presented in Fig. 5 (b), where all sets in 𝐐 are considered.
Clearly, the LEO-SM scheme demonstrates a significantly higher SE than both the LEO-SSK and traditional LEO schemes, and the gap between the SE of the LEO-SM scheme and those of the others widens as N_t and M increase, thereby highlighting the superior data rate and throughput of the LEO-SM scheme.
Overall, as trade-off options for LEO satellite-assisted wireless communication, the LEO-SM and LEO-SSK schemes need to be selected based on practical circumstances and requirements.
Regardless, all results and analyses substantiate that both proposed schemes possess ample potential for application in future 6G wireless networks.
§ CONCLUSION
This study introduces and evaluates the performance of the LEO-SM and LEO-SSK schemes within the context of 6G wireless communications.
The applications of these advanced signal processing techniques are demonstrated to offer significant improvements in SE, robustness, and BER performance in the LEO satellite-assisted wireless systems, subject to both perfect and imperfect CSI estimation.
As trade-off options, the LEO-SM and LEO-SSK schemes hold significant potential for enhancing future 6G wireless networks, particularly in scenarios requiring high data throughput and reliable connectivity, while the balance between complexity and performance requires careful consideration in practical deployment.
99
ref1 H. Al-Hraishawi, H. Chougrani, S. Kisseleff, E. Lagunas and S. Chatzinotas, "A survey on nongeostationary satellite systems: The communication perspective," IEEE Commun. Surveys Tuts., vol. 25, no. 1, pp. 101-132, Firstquarter 2023.
ref2 M. M. Azari et al., "Evolution of non-terrestrial networks from 5G to 6G: A survey," IEEE Commun. Surveys Tuts., vol. 24, no. 4, pp. 2633-2672, Fourthquarter 2022.
ref3 J. Heo, S. Sung, H. Lee, I. Hwang and D. Hong, "MIMO satellite communication systems: A survey from the PHY layer perspective," IEEE Commun. Surveys Tuts., vol. 25, no. 3, pp. 1543-1570, thirdquarter 2023.
ref4 S. Mahboob and L. Liu, "Revolutionizing future connectivity: A contemporary survey on AI-empowered satellite-based non-terrestrial networks in 6G," IEEE Commun. Surveys Tuts., vol. 26, no. 2, pp. 1279-1321, Secondquarter 2024.
ref5 S. Ammar, C. Pong Lau and B. Shihada, "An in-depth survey on virtualization technologies in 6G integrated terrestrial and non-terrestrial networks," IEEE Open J. Comm. Soc., vol. 5, pp. 3690-3734, 2024.
ref6 F. S. Prol et al., "Position, navigation, and timing (PNT) through Low Earth Orbit (LEO) satellites: A survey on current status, challenges, and opportunities," IEEE Access, vol. 10, pp. 83971-84002, 2022.
ref7 C. Guo, X. Chen, J. Yu and Z. Xu, "Design of joint device and data detection for massive grant-free random access in LEO satellite internet of things," IEEE Internet Things J., vol. 10, no. 8, pp. 7090-7099, 15 Apr., 2023.
ref8 Y. Qian, L. Ma and X. Liang, "Symmetry chirp spread spectrum modulation used in LEO satellite Internet of Things," IEEE Commun. Lett., vol. 22, no. 11, pp. 2230-2233, Nov. 2018.
ref9 R. Y. Mesleh, H. Haas, S. Sinanovic, C. W. Ahn and S. Yun, "Spatial modulation," IEEE Trans. Veh. Technol., vol. 57, no. 4, pp. 2228-2241, Jul. 2008.
ref10 J. Jeganathan, A. Ghrayeb, L. Szczecinski and A. Ceron, "Space shift keying modulation for MIMO channels," IEEE Trans. Wireless Commun., vol. 8, no. 7, pp. 3692-3703, Jul. 2009.
ref11 T. Mao, Q. Wang, Z. Wang and S. Chen, ‘‘Novel index modulation techniques: A survey,’’ IEEE Commun. Surveys Tuts., vol. 21, no. 1, pp. 315-348, Firstquarter 2019.
ref12 S. B. Patel, D. V. Chauhan, and J. D. Patel, ‘‘Spatial modulation: Challenges and potential solutions,’’ in Proc. Int. Conf. Smart Gener. Comput., Commun. Netw. (SMART GENCON), Oct. 2021, pp. 1–8.
ref13 S. Fang et al., "Layered space shift keying modulation over MIMO channels," IEEE Trans. Veh. Technol., vol. 66, no. 1, pp. 159-174, Jan. 2017.
ref14 E. Goudeli, C. Psomas and I. Krikidis, "Spatial-modulation-based techniques for backscatter communication systems," IEEE Internet Things J., vol. 7, no. 10, pp. 10623-10634, Oct. 2020.
ref15 I. A. Hemadeh, P. Xiao, Y. Kabiri, L. Xiao, V. Fusco and R. Tafazolli, "Polarization modulation design for reduced RF chain wireless," IEEE Trans. Commun., vol. 68, no. 6, pp. 3890-3907, Jun. 2020.
ref16 T. V. Luong, Y. Ko and J. Choi, "Repeated MCIK-OFDM with enhanced transmit diversity under CSI uncertainty," IEEE Trans. Wireless Commun., vol. 17, no. 6, pp. 4079-4088, Jun. 2018.
ref17 E. Basar, M. Wen, R. Mesleh, M. Di Renzo, Y. Xiao and H. Haas, "Index modulation techniques for next-generation wireless networks," IEEE Access, vol. 5, pp. 16693-16746, 2017.
ref18 F. Huang, Y. Zhu and Z. Zhou, "Irregular Euclidean distance constellation design for quadrature index modulation," IEEE Commun. Lett., vol. 27, no. 11, pp. 2928-2932, Nov. 2023.
ref19 A. S. Bora, K. T. Phan, and Y. Hong, ‘‘Spatially correlated MIMO-OTFS for LEO satellite communication systems,’’ in Proc. IEEE Int. Conf. Commun. Workshops (ICC Workshops), May 2022, pp. 1–6.
ref20 X. Zhai, L. Xiao, T. Ding, J. Zhou, P. Xiao and T. Jiang, "Golden angle modulation aided differential OFDM-IM for LEO satellite communications," IEEE Commun. Lett., vol. 28, no. 7, pp. 1604-1608, Jul. 2024.
ref21 J. S. Yeom, Y. Lee and B. C. Jung, "MMSE-based MIMO receiver for cooperative downlink NOMA in LEO satellite networks," in Proc. 13th Int. Conf. Inf. Commun. Technol. Converg. (ICTC), 2023, pp. 1650-1652.
ref22 P. Yang, L. Yang and S. Wang, "Performance analysis for RIS-aided wireless systems with imperfect CSI," IEEE Wireless Commun. Lett., vol. 11, no. 3, pp. 588-592, Mar. 2022.
ref23 S. Saini and A. Chockalingam, "Performance of RIS-aided media-based modulation with imperfect CSI and phase tuning errors," in Proc. IEEE 31st Annu. Int. Symp. Pers., Indoor Mobile Radio Commun, 2023, pp. 1-6.
ref24 M.K. Simon and M.S. Alouini, Digital Communication Over Fading Channels. Hoboken, NJ , USA: Wiley , 2005.
| The advent of beyond 5th generation (B5G) wireless communications and the subsequent progression towards 6th (6G) wireless systems usher in an era where wireless communication systems are expected to provide higher data rates, lower latency, and improved connectivity.
One of the pivotal technologies in achieving these advancements is the utilization of satellite in communication systems, particularly those facilitated by Low Earth Orbit (LEO) satellites [1-3].
LEO satellite-assisted wireless systems attract significant attention due to their potential to offer global coverage, reduced propagation delays, and enhanced signal quality compared to traditional Geostationary Earth Orbit (GEO) systems [4-6].
This also makes LEO satellites particularly appealing for enabling advanced wireless communication systems that serve a wide range of applications, including broadband internet access, disaster recovery, and Internet of Things (IoT) connectivity [7-8].
Nevertheless, LEO satellite-assisted wireless communications still require improvements in throughput and robustness to fully meet the demands of future 6G communications.
In recent years, there is a surge of interest in the application of advanced signal processing and communication techniques to enhance the data rate and throughput in traditional multiple-input multiple-output (MIMO) wireless communications.
Among these, the spatial modulation (SM) scheme [9] and its variants, e.g., the space shift keying (SSK) scheme [10], emerge as promising strategies to improve the spectral efficiency (SE) and energy efficiency of such systems.
The SM scheme is a novel multiple-antenna technique that exploits the spatial dimension by dynamically activating transmit antennas to convey additional information, along with traditional constellation symbols modulated by M-ary phase shift keying (PSK)/quadrature amplitude modulation (QAM), to the receiver, thus providing higher SE [11].
Besides, as inter-antenna interference and inter-channel interference at the transmitter are common challenges in MIMO wireless systems, the SM scheme effectively addresses these issues by activating only one transmit antenna in each time slot [12].
However, the complexity of signal processing at the receiver and signal detection becomes higher compared to that of traditional MIMO schemes.
To mitigate the higher complexity in the SM scheme, the SSK scheme offers a simplified alternative by eliminating the need for constellation symbols processing at the transmitter and receiver, while still maintaining satisfactory bit-error rate (BER) performance [13].
Similarly, while the SSK scheme shares some benefits of the SM scheme's benefits, such as reduced interference, it struggles to achieve the higher SE that SM offers with the same number of transmit antennas.
Although some novel variants of the SM and SSK schemes are proposed later [14-18], they often involve trade-offs among interference, complexity, and error performance, requiring further investigation to fully understand their potential benefits in various application scenarios.
Consequently, considering the aforementioned advantages, the SM and SSK schemes are designed into LEO satellite-assisted wireless systems to enhance spectral efficiency (SE) and ensure reliable BER performance by reducing interference.
The main contributions of our works are given as follows:
1) In order to further improve the SE of traditional LEO satellite-assisted MIMO wireless system, the LEO satellite-assisted SM (LEO-SM) and SSK (LEO-SSK) schemes are designed in this paper.
This paper is the first to explore and discuss the performance of these schemes under imperfect channel state information (CSI), as a notable contribution to the field of 6G LEO satellite-assisted wireless communications.
2) The analytical performance of SE and detection complexity is briefly presented in this paper, where some interesting and insightful findings are also given.
3) Monte Carlo simulations are applied in our works, where the simulation results show the superiority in the BER performance and SE in the proposed schemes. Besides, by comparing the simulation results between the LEO-SM and LEO-SSK schemes, we also obtain some interesting insights, which further confirms the necessity of the trade-off selection between these two schemes.
The rest of the paper is organized as follows:
In Sec. 2, the system model of the LEO-SM and LEO-SSK schemes with details of channel models, signals detection, and imperfect CSI estimation is given.
The performance analysis of complexity and SE are presented in Sec. 3 and Sec. 4, respectively.
The theoretical and simulation results of both proposed schemes are shown in Sec. 4.
Eventually, the conclusion is given in Section Sec. 5. | null | null | null | null | This study introduces and evaluates the performance of the LEO-SM and LEO-SSK schemes within the context of 6G wireless communications.
The applications of these advanced signal processing techniques are demonstrated to offer significant improvements in SE, robustness, and BER performance in the LEO satellite-assisted wireless systems, subject to both perfect and imperfect CSI estimation.
As trade-off options, the LEO-SM and LEO-SSK schemes hold significant potential for enhancing future 6G wireless networks, particularly in scenarios requiring high data throughput and reliable connectivity, while the balance between complexity and performance requires careful consideration in practical deployment.
99
ref1 H. Al-Hraishawi, H. Chougrani, S. Kisseleff, E. Lagunas and S. Chatzinotas, "A survey on nongeostationary satellite systems: The communication perspective," IEEE Commun. Surveys Tuts., vol. 25, no. 1, pp. 101-132, Firstquarter 2023.
ref2 M. M. Azari et al., "Evolution of non-terrestrial networks from 5G to 6G: A survey," IEEE Commun. Surveys Tuts., vol. 24, no. 4, pp. 2633-2672, Fourthquarter 2022.
ref3 J. Heo, S. Sung, H. Lee, I. Hwang and D. Hong, "MIMO satellite communication systems: A survey from the PHY layer perspective," IEEE Commun. Surveys Tuts., vol. 25, no. 3, pp. 1543-1570, thirdquarter 2023.
ref4 S. Mahboob and L. Liu, "Revolutionizing future connectivity: A contemporary survey on AI-empowered satellite-based non-terrestrial networks in 6G," IEEE Commun. Surveys Tuts., vol. 26, no. 2, pp. 1279-1321, Secondquarter 2024.
ref5 S. Ammar, C. Pong Lau and B. Shihada, "An in-depth survey on virtualization technologies in 6G integrated terrestrial and non-terrestrial networks," IEEE Open J. Comm. Soc., vol. 5, pp. 3690-3734, 2024.
ref6 F. S. Prol et al., "Position, navigation, and timing (PNT) through Low Earth Orbit (LEO) satellites: A survey on current status, challenges, and opportunities," IEEE Access, vol. 10, pp. 83971-84002, 2022.
ref7 C. Guo, X. Chen, J. Yu and Z. Xu, "Design of joint device and data detection for massive grant-free random access in LEO satellite internet of things," IEEE Internet Things J., vol. 10, no. 8, pp. 7090-7099, 15 Apr., 2023.
ref8 Y. Qian, L. Ma and X. Liang, "Symmetry chirp spread spectrum modulation used in LEO satellite Internet of Things," IEEE Commun. Lett., vol. 22, no. 11, pp. 2230-2233, Nov. 2018.
ref9 R. Y. Mesleh, H. Haas, S. Sinanovic, C. W. Ahn and S. Yun, "Spatial modulation," IEEE Trans. Veh. Technol., vol. 57, no. 4, pp. 2228-2241, Jul. 2008.
ref10 J. Jeganathan, A. Ghrayeb, L. Szczecinski and A. Ceron, "Space shift keying modulation for MIMO channels," IEEE Trans. Wireless Commun., vol. 8, no. 7, pp. 3692-3703, Jul. 2009.
ref11 T. Mao, Q. Wang, Z. Wang and S. Chen, ‘‘Novel index modulation techniques: A survey,’’ IEEE Commun. Surveys Tuts., vol. 21, no. 1, pp. 315-348, Firstquarter 2019.
ref12 S. B. Patel, D. V. Chauhan, and J. D. Patel, ‘‘Spatial modulation: Challenges and potential solutions,’’ in Proc. Int. Conf. Smart Gener. Comput., Commun. Netw. (SMART GENCON), Oct. 2021, pp. 1–8.
ref13 S. Fang et al., "Layered space shift keying modulation over MIMO channels," IEEE Trans. Veh. Technol., vol. 66, no. 1, pp. 159-174, Jan. 2017.
ref14 E. Goudeli, C. Psomas and I. Krikidis, "Spatial-modulation-based techniques for backscatter communication systems," IEEE Internet Things J., vol. 7, no. 10, pp. 10623-10634, Oct. 2020.
ref15 I. A. Hemadeh, P. Xiao, Y. Kabiri, L. Xiao, V. Fusco and R. Tafazolli, "Polarization modulation design for reduced RF chain wireless," IEEE Trans. Commun., vol. 68, no. 6, pp. 3890-3907, Jun. 2020.
ref16 T. V. Luong, Y. Ko and J. Choi, "Repeated MCIK-OFDM with enhanced transmit diversity under CSI uncertainty," IEEE Trans. Wireless Commun., vol. 17, no. 6, pp. 4079-4088, Jun. 2018.
ref17 E. Basar, M. Wen, R. Mesleh, M. Di Renzo, Y. Xiao and H. Haas, "Index modulation techniques for next-generation wireless networks," IEEE Access, vol. 5, pp. 16693-16746, 2017.
ref18 F. Huang, Y. Zhu and Z. Zhou, "Irregular Euclidean distance constellation design for quadrature index modulation," IEEE Commun. Lett., vol. 27, no. 11, pp. 2928-2932, Nov. 2023.
ref19 A. S. Bora, K. T. Phan, and Y. Hong, ‘‘Spatially correlated MIMO-OTFS for LEO satellite communication systems,’’ in Proc. IEEE Int. Conf. Commun. Workshops (ICC Workshops), May 2022, pp. 1–6.
ref20 X. Zhai, L. Xiao, T. Ding, J. Zhou, P. Xiao and T. Jiang, "Golden angle modulation aided differential OFDM-IM for LEO satellite communications," IEEE Commun. Lett., vol. 28, no. 7, pp. 1604-1608, Jul. 2024.
ref21 J. S. Yeom, Y. Lee and B. C. Jung, "MMSE-based MIMO receiver for cooperative downlink NOMA in LEO satellite networks," in Proc. 13th Int. Conf. Inf. Commun. Technol. Converg. (ICTC), 2023, pp. 1650-1652.
ref22 P. Yang, L. Yang and S. Wang, "Performance analysis for RIS-aided wireless systems with imperfect CSI," IEEE Wireless Commun. Lett., vol. 11, no. 3, pp. 588-592, Mar. 2022.
ref23 S. Saini and A. Chockalingam, "Performance of RIS-aided media-based modulation with imperfect CSI and phase tuning errors," in Proc. IEEE 31st Annu. Int. Symp. Pers., Indoor Mobile Radio Commun, 2023, pp. 1-6.
ref24 M.K. Simon and M.S. Alouini, Digital Communication Over Fading Channels. Hoboken, NJ , USA: Wiley , 2005. |
http://arxiv.org/abs/2409.17102v1 | 20240925171510 | The p-triviality of stunted projective spaces | [
"Sudeep Podder",
"Gobinda Sau"
] | math.AT | [
"math.AT",
"57R20, 55R22"
] |
| null | null | null | null | null | null |
http://arxiv.org/abs/2409.17103v1 | 20240925171535 | Functional Integral Construction of Topological Quantum Field Theory | [
"Zhengwei Liu"
] | math-ph | [
"math-ph",
"math.FA",
"math.GN",
"math.MP",
"math.QA",
"quant-ph",
"81T25, 57R56, 81T08, 46N50, 18N65"
] |
§ ABSTRACT
We introduce regular stratified piecewise linear manifolds to describe lattices and investigate the lattice model approach to topological quantum field theory in all dimensions. We introduce the unitary n+1 alterfold TQFT and construct it from a linear functional on an n-dimensional lattice model on an n-sphere satisfying three conditions: reflection positivity, homeomorphic invariance and complete finiteness. A unitary spherical n-category is mathematically defined and emerges as the local quantum symmetry of the lattice model. The alterfold construction unifies various constructions of n+1 TQFT from n-dimensional lattice models and n-categories.
In particular, we construct a non-invertible unitary 3+1 alterfold TQFT from a linear functional and derive its local quantum symmetry as a unitary spherical 3-category of Ising type with explicit 20j-symbols, so that the scalar invariant of 2-knots in piecewise linear 4-manifolds could be computed explicitly.
Functional Integral Construction of Topological Quantum Field Theory
=====================================================================
§ INTRODUCTION
In this paper we propose a new program to address a long-standing open problem: how to construct a meaningful unitary topological quantum field theory (TQFT) in arbitrary dimension. We believe that the ideas here are relevant both to the development of category theory as well as to the mathematical foundations of physics. We will return to this theme in future works concentrating both on the theory and on examples.
Here we construct a unitary n+1 alterfold TQFT from a functional integral on n-dimensional lattice models, for arbitrary n, in Theorem <ref>. The n+1 alterfold TQFT has alternating A/B-colored n+1 manifolds with the lattice model on the n-manifold between them. Our example reduces to Atiyah's TQFT <cit.> when the boundary is a B-color n-manifold. This present work builds on our recent work on alterfolds <cit.>, and on the other works we now explain.
Witten constructed a 2+1 TQFT using Chern-Simons theory and obtained an invariant of links in 3-manifolds as a path integral <cit.>, generalizing the Jones polynomial originated from subfactor theory <cit.>, and other link invariants from the representation theory of Drinfeld-Jimbo quantum groups <cit.>.
Feynman's path integral is a powerful method in physics, but the measure of the path space is only mathematically defined for a few cases <cit.>.
Atiyah provided a mathematical axiomatization of TQFT to study its topological invariants <cit.> which avoids the path integral.
The topological invariant of Witten's 2+1 TQFT can be rigorously defined using the link invariants from quantum groups and the surgery theory, known as the Witten-Reshetikhin-Turaev TQFT <cit.>.
The Turaev-Viro-Barrett-Westbury 2+1 TQFT from a spherical fusion category <cit.> is a state sum construction over a triangulation, which is a combinatorial analogue of path integral.
Witten also constructed a 3+1 TQFT <cit.> which captures Donaldson's invariant of smooth 4-manifolds <cit.>. By this invariant, Donaldson constructed exotic four-dimensional spaces together with the seminal work of Freedman <cit.>.
The Turaev-Viro state sum construction has been generalized to construct 3+1 TQFT using a braided fusion category in <cit.> or a fusion 2-category in <cit.>.
In higher dimensions, Dijkgraaf-Witten constructed an n-dimensional TQFT from the group cohomology of a finite group <cit.>.
Lurie introduced a fruitful theory of (∞,n) category in <cit.> to study non-invertible higher symmetries and to answer the cobordism hypothesis of Baez and Dolan <cit.>. It is widely believed that the state sum construction of TQFT from spherical categories will work in any dimension e.g. <cit.>, but there is no agreement on the mathematical definition of a (unitary) spherical n-category; see a recent discussion in <cit.>.
The 2+1 TQFT is exceptionally successful due to the fruitful examples of quantum symmetries coming from the representation theory of quantum groups, subfactors, vertex operator algebras, conformal field theory, etc.
It is highly expected, but remains challenging, to generalize those frameworks to higher dimensions, which should provide examples of non-invertible higher quantum symmetries, such spherical n-categories, from their higher representation theory.
In this paper, we provide a functional integral approach to construct an n+1 TQFT using a linear functional Z on an n-dimensional lattice model for any dimension n.
This functional integral point of view has been well established in constructive quantum field theory (QFT) <cit.>; we bring that method to the study of TQFT.
Mathematically, we introduce labelled regular stratified piece-wise linear manifolds to formulate the lattices of a general shape and their configuration space.
We study their basic properties in <ref> and prove the transversal theorem between stratified manifolds and triangulations in Theorem <ref>. Based on it, we can compute the partition function in n+1 TQFT as a state sum according to a transversal triangulation.
In an n-dimensional lattice model, the configuration space of a lattice is a Hilbert space, which is the tensor product of Hilbert spaces of local spins, regarded as labels of stratified manifolds.
We construct an n+1 TQFT from a linear functional Z on the configuration spaces of lattices on the n-sphere S^n in Theorem <ref>, such that the (n+1)-manifolds of the topological quantum field theory have A/B colors and the n-dimensional hyper surfaces (or domain walls) between them are decorated by n-dimensional lattice models. We call such TQFT an n+1 alterfold TQFT (Def. <ref>), generalizing the 2+1 alterfold TQFT in <cit.>.
To achieve the construction of the TQFT, we assume three conditions of the linear functional Z in <ref>,
* (RP) reflection positivity;
* (HI) homeomorphic invariance;
* (CF) complete finiteness.
Condition (RP) means that the inner product induced from Z is positive semi-definite between configuration spaces on half n-discs of S^n for any fixed common boundary on the equator S^n-1.
The functional integral Z satisfying Reflection Positivity (RP) is a mathematical formulation of the measure on the path space, by the Wick rotation from statistical physics to quantum field theory (QFT) <cit.>.
To formulate the theory over a general field 𝕂, we need to replace the condition (RP) by strong semisimplicity.
Condition (HI) means that the linear functional Z is invariant under homeomorphisms on S^n.
Condition (HI) ensures the QFT to be topological, namely the partition function of the TQFT, still denoted by Z, is homeomorphic invariant.
Condition (CF) means that the inner product induced from Z has finite rank between configuration spaces of lattices on D^k× S^n-k for any fixed boundary on D^k-1× S^n-k. Condition (CF) ensures the partition function of the TQFT to be a finite state sum.
From the physics point of view, we know everything, if we know the linear functional Z.
We implement this idea mathematically as the Null Principle that if we cannot distinguish two vectors from Z, then we consider them to be the same mathematically.
The null principle encodes numerous algebraic relations from the kernel of Z.
We highlight our identification of vectors on different lattices with the same boundary using the null principle of the configuration spaces, which is different from the renormalization group methods in condensed matter physics.
In <ref>, we introduce hyper disc (or D^n) algebras to study local algebraic relations, generalizing the notion of planar algebras of Jones <cit.> for n=2. They capture the action of the operad of regular stratified manifolds on the configurations on local n-discs.
In <ref>, we study the higher representation theory of the D^n algebra, which forms an n-category. It suggests a mathematical definition of a unitary/spherical n-category, together with the operad action and the linear functional Z with the condition (RP)/(HI).
In <ref>, we derive the skein relations for the n+1 alterfold TQFT in Def. <ref>, using the null principle. The relations can be considered algebraically as a resolution of identity and topologically as removing a D^k×ε D^n-k+1 handle with a bistellar move on its boundary.
We prove the consistency of the relations and construct an n+1 alterfold TQFT in Theorem <ref> and Theorem <ref>.
In <ref>, we give some examples to understand better the concepts in the alterfold TQFT.
If we shrink the B-color manifolds and evaluate the partition function according to the triangulation, then we obtain a higher analogue of the Turaev-Viro TQFT of the spherical n-category.
If we shrink the A-color manifolds and evaluate the partition function according to surgery move, then we obtain a higher analogue of the Reshetikhin-Turaev TQFT of the higher Drinfeld center of the spherical n-category. The later one suggests a fruitful higher braid statistics of membranes, as discussed in <ref>.
In <ref>, we give a concrete example to illustrate our functional integral construction of TQFT. We construct a linear functional Z for a lattice model on S^3. We prove the three conditions (RP), (HI), (CF) and therefore we obtain a unitary 3-category of Ising type and a non-invertible unitary 3+1 alterfold TQFT. We compute all its simplicial morphisms and the 20j-symbols in Table <ref>. Using the skein relations and the 20j-symbols, the scalar invariant of 2-knots in 4-manifolds could be computed explicitly.
The 20j-symbols satisfies the (3,3), (2,4) and (1,5) Pachner moves <cit.>, as a one-dimensional higher analogue of the pentagon equations. In this example, there are already more than 50,000 equations, which seems too many to solve directly in general. We conjecture that the construction of the 3+1 TQFT of Ising type works in higher dimensions as well.
This example is indeed related to the 3D Ising model and the 3D toric code <cit.>, which we will discuss in near future.
The connection between TQFT and topological orders has been extensively studied from the view of since <cit.> and from the view of particle excitations since <cit.>, and a classification of topological orders by unitary multi-fusion n-categories is proposed in <cit.>. This is a classification of topological orders by quantum symmetries.
From the view of condensed matter physics, another natural approach to study topological orders is characterizing the ground states.
An important question proposed by Wen is which families of wave functions on lattice models are topological orders.
The Riesz representation theorem induces a bijection between the linear functional Z and a vector state on the configuration space for every lattice.
One can consider our linear functional Z on a configuration space as a vector state up to a normalization.
Therefore by Theorem <ref>, we give an answer to Wen's question that a linear functional Z with the three conditions (RP) (HI) and (CF) is a topological order.
In topological orders, we usually take this vector state as the ground state of a local Hamiltonian of neighbourhood interactions. The conditions (HI) and (RP) can be derived from the corresponding conditions of the Hamiltonian. In this paper, we focus on the construction of the alterfold TQFT without referring to the Hamiltonian. In a coming paper <cit.>, we study lattice models with a local Hamiltonian systematically and prove conditions (HI) and (RP) for the Hamiltonian at any temperature.
Condition (CF) corresponds to the finiteness of the entanglement rank of the ground state for lattices on S^k× S^n-k separated by S^k-1× S^n-k.
The area law and projected entangled pair states (PEPS) have been conjectured to be the entanglement properties characterizing ground states of Hamiltonian with local interactions, see e.g., <cit.>.
Our functional integral approach will provide positive evidence of this conjecture.
§ LABELLED REGULAR STRATIFIED MANIFOLDS
§.§ Piecewise Linear Manifolds
In this section, we briefly recall the definition of piecewise linear (PL) manifolds and some basic results. We refer readers to textbooks <cit.> for the general theory of PL manifolds.
A linear n-simplex Δ^n=[e_0,e_1,⋯,e_n] is the convex hull of n+1 points {e_i}_0 ≤ i ≤ n in the Euclidean space ℝ^n, such that the vectors {e_i-e_0}_1 ≤ i ≤ n are linearly independent. Its orientation is the sign of the determinant of the matrix {e_i-e_0}_1 ≤ i ≤ n.
The boundary of an oriented linear n-simplex is
∂[e_0,e_1,⋯,e_n]=Σ_i=0^n (-1)^i [e_0,e_1,⋯, ê_i, ⋯, e_n],
where the ± sign indicates the induced orientation of the (n-1)-simplex on the boundary. Its sub (n-1-k)-simplex is called a k-face, 0≤ k≤ n-1.
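For instance, ∂[e_0,e_1,e_2]=[e_1,e_2]-[e_0,e_2]+[e_0,e_1], the three edges of the triangle with their induced orientations.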
For convex sets A and B in ℝ^n, their convex sum is defined as
A +_c B:={λ a+(1-λ) b : a∈ A, b ∈ B, 0 ≤λ≤ 1}.
When we consider Δ^n as the one-point suspension of e_0, i.e., {e_0}+_c[e_1,e_2,⋯, e_n], we have:
∂ ({e_0}+_c[e_1,e_2,⋯, e_n])=[e_1,e_2,⋯, e_n]-({e_0} +_c ∂ [e_1,e_2,⋯, e_n]).
When we consider Δ^n as the one-point suspension of e_n, i.e., [e_0,e_1,⋯, e_n-1] +_c {e_n}, we have:
∂ ([e_0,e_1,⋯, e_n-1] +_c {e_n})=(∂ [e_0,e_1,⋯, e_n-1] +_c {e_n}) + (-1)^n [e_0,e_1,⋯, e_n-1].
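As a quick consistency check, for n=2 the first formula gives ∂({e_0}+_c[e_1,e_2])=[e_1,e_2]-({e_0}+_c([e_2]-[e_1]))=[e_1,e_2]-[e_0,e_2]+[e_0,e_1], which agrees with ∂[e_0,e_1,e_2] above.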
A polytope is the convex hull of finitely many points in ℝ^n. It is k-dimensional, if it is a k-dimensional topological manifold.
Equivalently, a polytope is the bounded intersection of finitely many half-spaces in ℝ^n.
A transformation on ℝ^n is called piecewise linear, if it is affine on every linear simplex of a triangulation.
A topological n-manifold M with boundary is called PL, if it is equipped with an open covering (U_i)_i∈ I and homeomorphisms ϕ_i: U_i →ℝ^n-1× [0,∞) onto their images, such that the transition map ϕ_j ϕ_i^-1 is PL on ϕ_i^-1(U_i∩ M_j), for any i,j ∈ I.
If all ϕ_j ϕ_i^-1 are oriented, then M is oriented.
A point p is on the boundary ∂(M), iff ϕ_i(p)∈ℝ^n-1×{0}, for some i∈ I.
A point p is in the interior M, iff ϕ_i(p)∈ℝ^n-1× (0,∞), for some i∈ I.
The pair (U_i,ϕ_i) is called a chart of M. Two charts are compatible, if the transition map is PL. For convenience, we assume the collection {(U_i,ϕ_i) : i∈ I} to be a maximal atlas. That means any chart (U,ϕ) compatible with all (U_i,ϕ_i) is in the atlas. The existence of a maximal atlas is guaranteed by Zorn's lemma.
A homeomorphism ϕ: M → N of PL manifolds is a topological homeomorphism which is PL on ϕ_i^-1(U_i∩ϕ^-1V_j), for any chart (U_i,ϕ_i) of M and (V_i,ϕ_j) of N.
In this case, PL manifolds M and N are called homeomorphic, denoted as M∼ N.
For a vector x=(x_1,x_2,…,x_n) ∈ℝ^n, its infinity norm is
‖x‖=max_1≤ i ≤ n |x_i|.
With the infinity norm, we denote the closed unit n-disc as D^n=[-1,1]^n, the open unit n-disc as D^n=(-1,1)^n, the boundary unit sphere as S^n-1=∂ D^n+1, the half n-discs as D^n_+=D^n-1× [0,1] and D^n_-=D^n-1× [-1,0]. We take the convention that S^-1=∅.
For those with radius λ>0, we denote them as λ D^n, λD^n, λ S^n-1, λ D^n_± respectively.
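For example, with the infinity norm the unit disc D^2=[-1,1]^2 is a square and S^1=∂ D^2 consists of the four edges {± 1}× [-1,1] and [-1,1]×{± 1}; this combinatorial picture of S^1 is used below.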
An n-dimensional polytope P is PL homeomorphic to D^n and its boundary ∂ P is called a combinatorial (n-1)-sphere, which is homeomorphic to S^n-1.
When the origin O_n is in the interior of the polytope P, we construct the homeomorphism P∼ D^n as
ϕ(rx)= ‖x‖^-1 rx, ∀ x ∈∂ P, r ∈ [0,1].
Suppose Δ^n=[e_0,e_1,⋯,e_n] is a linear n-simplex and p is a point in the interior of Δ^k=[e_0,e_1,⋯,e_k],
The p-subdivision of Δ^n is the union of n-simplices
⋃_i=0^k [e_0,e_1,⋯, ê_i, ⋯, e_k] +_c {p} +_c [e_k+1,e_k+2,⋯, e_n].
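For instance, if p is an interior point of the edge [e_0,e_1] of the triangle Δ^2=[e_0,e_1,e_2] (so k=1, n=2), the p-subdivision is the union of the two triangles [e_1]+_c{p}+_c[e_2] and [e_0]+_c{p}+_c[e_2], i.e., Δ^2 is cut along the segment from p to e_2.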
Suppose M is a PL n-manifold and (U,ϕ) is a chart of M. For a linear simplex Δ^n ⊂ϕ(U), we call ϕ^-1(Δ^n) a simplex of M.
A triangulation of an oriented PL n-manifold M is a union of simplices M=∪_i σ_i, such that their interiors are disjoint and each 0-face of a simplex is either on the boundary of M, or shared by two simplices with opposite orientations.
In a triangulation of an oriented PL n-manifold M,
the link of a k-simplex σ is the union of (n-k-1)-simplices τ_i, for which σ∪τ_i is in an n-simplex and σ∩τ_i=∅.
The triangulation is called combinatorial, if the link of every k-simplex is a
combinatorial (n-k-1)-sphere.
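For example, the boundary of a 3-simplex is a triangulation of S^2 by four triangles. The link of a vertex v is the union of the three edges opposite to v in the three triangles containing v; it is the boundary of the fourth triangle, a combinatorial 1-sphere, so this triangulation is combinatorial.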
The following result is well-known and fundamental to study PL manifolds by triangulations, see Chapter 3 in <cit.>.
A compact PL n-manifold admits a combinatorial triangulation.
Moreover, two different triangulations have a common subdivision.
Two triangulations can be changed from one to another by a sequences of Pachner moves <cit.>.
Suppose M and N are PL manifolds.
If ϕ_t: M → N, t∈ [0,1] are homeomorphisms onto the image, such that the map [0,1] × M → N is PL, then ϕ_0 and ϕ_1 are called isotopic, denoted by ϕ_0∼ϕ_1;
Moreover, ϕ_0(M) and ϕ_1(M) are called isotopic in N.
For a PL manifold M, its mapping class group MCG(M) is the quotient of the group of homeomorphisms on M by the subgroup of homeomorphisms isotopic to the identity. When M is oriented, the homeomorphisms in MCG(M) are required to be oriented. When M has a boundary, the homeomorphisms are required to be the identity on the boundary.
It is well-known that the mapping class group of D^n and S^n are trivial. For readers' convenience, we give a proof here.
The mapping class group of the PL manifold D^n is trivial.
Suppose ϕ is a homeomorphism of D^n which is the identity on the boundary S^n-1. Then it is isotopic to identity by Alexander's trick,
ϕ_t(x) :=
{ t ϕ(x/t), 0 ≤ ‖x‖ ≤ t,
x, t ≤ ‖x‖ ≤ 1.
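As a quick check, the two branches agree on ‖x‖=t: since ϕ is the identity on S^n-1, ϕ(x/t)=x/t and t ϕ(x/t)=x. Moreover ϕ_1=ϕ and ϕ_0 is the identity, so ϕ is isotopic to the identity.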
Note that S^1 contains four intervals; a clockwise translation of length ε along S^1 is a homeomorphism on S^1.
The 180^∘ rotation ρ on S^1 is such a translation, by two of the four intervals, i.e., by half of the total length.
The mapping class group of the PL manifold S^n is trivial.
Suppose ϕ is a homeomorphism on S^n.
When n=0, S^0 has two points with opposite orientations. The only orientation preserving homeomorphism is the identity.
When n≥ 1, we take a point p∈ S^n, such that ϕ is linear from a neighbourhood of p to a neighbourhood of ϕ(p). We rotate ϕ(p) to p by PL isotopy. So we may assume that ϕ preserves p and it is linear in a neighbourhood of p.
Let U_ε be the closed ε-neighbourhood of p.
There are ε> ε'>0, such that ϕ^-1(U_ε') is in the interior of U_ε.
Now we show that there is a homeomorphism ϕ' which is the identity outside U_ε and such that ϕϕ' is the identity on U_ε'.
When n=1, it is obvious.
When n=2, we construct the homeomorphism ϕ' illustrated below.
(Here is the example when ϕ' is the 90^∘ rotation on |U_ε|∼ D^2.)
[Figure: the corners a, b, c, d of the inner square U_ε' (left) are rotated by 90^∘ to ϕ'(a), ϕ'(b), ϕ'(c), ϕ'(d) (right, on ϕ^-1(U_ε')), and the annulus between the inner square and U_ε is twisted accordingly; ϕ' is the identity on ∂ U_ε.]
The general case for n≥ 2 reduces to the case n=2 as GL(n,ℝ) is generated by GL(2,ℝ) for pairs of coordinates.
By Prop. <ref>, both ϕ' and ϕϕ' are isotopic to the identity. So ϕ is isotopic to the identity.
§.§ Stratified PL Manifolds
In this section, we introduce regular stratified PL n-manifolds. The notion shares ideas with the study of stratified smooth manifolds. We refer the readers to the textbook <cit.> for the general theory of stratified smooth manifolds. Stratified smooth manifolds have been used to study defects in TQFT; see <cit.> and further references therein.
Suppose M^n is a closed PL manifold. A stratified PL n-manifold ℳ with support M^n is a stratification {M^k⊇ M^k-1, 0≤ k ≤ n }, M^-1=∅, such that M^k∖ M^k-1 is an open PL k-manifold with closure in M^k; and any point p has a neighbourhood U, whose intersection with M^k∖ M^k-1 has finite connected components, for all k.
When M^n has a boundary, a stratified PL n-manifold ℳ with support M^n is a stratification {M^k⊇ M^k-1, 0≤ k ≤ n }, M^-1=∅, such that (M^k∖ M^k-1)∩M^n is an open PL k-manifold;
and the boundary of ∂(ℳ) is a closed stratified PL (n-1)-manifold with a stratification {∂(ℳ)^k⊇∂(ℳ)^k-1, 1≤ k ≤ n-1 }, where ∂(ℳ)^k-1 is the transversal intersection of ∂(M^n) and M^k.
Sometimes M^k-1 are considered as defects of M or domain walls between connected components of M^k∖ M^k-1 in topological quantum field theory.
When ℳ is a stratified manifold with support S^n-1, we denote Λℳ as its one-point suspension with the origin O_n. Then Λℳ has support D^n, boundary ∂(Λℳ)= ℳ, and interior Λℳ:= Λℳ∖∂ℳ.
It has a stratification (Λℳ)^k+1:= Λ M^k∪ M^k+1, 0 ≤ k ≤ n-1, and (Λℳ)^0:={O_n}∪ M^0.
Moreover (Λℳ)^k+1∖ (Λℳ)^k is homeomorphic to ((M^k∖ M^k-1)× (0,1)) ∪ (M^k+1∖ M^k).
Suppose ℳ is a stratified manifold.
A point p in (ℳ∖∂ℳ) ∩ (M^k∖ M^k-1) is called regular, if there is a closed neighbourhood U of p in M^n, a stratified PL manifold 𝒮 with support S^n-k-1, and a homeomorphism ϕ: ℳ|_U →Λ𝒮× D^k;
such that
* ϕ(M^k ∩ U)=O_n-k× D^k;
* ϕ(p)=O_n;
* U ∩ (M^j∖ M^j-1) has finite connected components {V_j,i : i ∈ I_j}, for all j;
* and for each connected component V_j,i, ϕ(V_j,i) is in a j-dimensional subspace.
A point p in ∂ℳ∩ (M^k∖ M^k-1) is called regular, if there is a closed neighbourhood U of p in M^n, a stratified PL manifold 𝒮 with support S^n-k-2, and a homeomorphism ϕ: ℳ|_U →Λ𝒮× D^k+1_+, such that
* ϕ(M^k ∩ U)=O_n-k-1× D^k+1_+;
* ϕ(p)=O_n;
* U ∩ (M^j∖ M^j-1) has finite connected components {V_j,i : i ∈ I_j}, for all j;
* and for each connected component V_j,i, ϕ(V_j,i) is in a j-dimensional subspace.
We call the triple (U,ϕ,𝒮) a regular chart of p, 𝒮 the link boundary of p, and U the normal microbundle of the k-simplex M^k ∩ U near p.
A boundary point p is regular, iff p has a neighbourhood U and a regular chart (∂ℳ∩ U,ϕ,𝒮) in ∂ℳ, such that U ∼ (∂ℳ∩ U)× [0,1].
The link boundary of a regular point is well-defined up to homeomorphism.
If a point p has regular charts (U_i,ϕ_i,𝒮_i), i=1,2, then
𝒮_1∼𝒮_2.
For p in (ℳ∖∂ℳ) ∩ (M^k∖ M^k-1), the transition map ϕ_2ϕ_1^-1 is PL on V=ϕ_1(U_1∩ U_2).
Take a triangulation of the closure V, such that the transition map is linear on every simplex. By subdivisions, we may assume that V∩ O_n-k× D^k contains a k-simplex E. Take all n-simplices {σ_j}_j∈ J containing E. Let L be the link of E. Take the projection from π: D^n → D^n-k×{0}. Then L∼π(L). By the trick in Equ.<ref>, π(L) ∼ S^n-k-1. So ϕ_1^-1(L) ∼𝒮_1.
In ϕ_2(U_2), {ϕ_2ϕ_1^-1(σ_j)}_j∈ J are the simplices containing ϕ_2ϕ_1^-1(E), ϕ_2ϕ_1^-1(E) ⊆ D^n-k× 0 and it has the link ϕ_2ϕ_1^-1(L). Similarly ϕ_1^-1(L)∼𝒮_2. So 𝒮_2 ∼𝒮_1.
For p in (∂ℳ) ∩ (M^k∖ M^k-1), we consider it as a regular point in ∂ℳ. So 𝒮_2 ∼𝒮_1 as well.
Fig.<ref> shows M^2 of a stratified PL manifold ℳ. The horizontal line is M^1, which separates M^2 into two components, and M^0=∅. The points p,q ∈ M^1 are not regular.
We describe the local shape of lattice models by link boundaries.
The local shapes should be good enough that all points are regular in all dimensions.
We want to describe such lattices by good stratified manifolds, based on regularity conditions.
Now let us define regular stratified PL n-manifolds inductively on n.
A regular stratified PL 0-manifold is a 0-manifold.
A stratified PL n-manifold ℳ is called regular, if every point p is regular and it has a regular chart (U,ϕ, 𝒮) for a regular PL manifold 𝒮.
A homeomorphism ϕ: ℳ→𝒩 of stratified PL manifolds is an homeomorphism of PL manifolds M^n → N^n, and the restriction on every M^k∖ M^k-1 is a homeomorphism.
In this case, we say ℳ and 𝒩 are homeomorphic, denoted by ℳ∼𝒩.
Let us prove that homeomorphisms of PL manifolds can be extended to homeomorphisms of regular stratified PL manifolds. Based on it, we generalize good properties of PL manifolds, such as Theorem <ref>, to regular stratified PL manifolds.
Suppose ℳ is a regular stratified PL n-manifold and ϕ: M^n → N^n is a homeomorphism of PL n-manifolds. Then ϕ: ℳ∼𝒩 is a homeomorphism of regular stratified PL manifolds, where 𝒩 has a stratification {ϕ(M^k) ⊇ϕ(M^k-1), 0≤ k ≤ n }.
For any point p in M^k∖ M^k-1, it has a regular chart (U,ϕ',𝒮) for a regular 𝒮 with support S^n-k-1, such that ϕ'(ℳ|_U)=Λ𝒮× D^k and ϕ'(M_k ∩ U)=O × D^k.
Then the point ϕ(p) has a regular chart (ϕ(U),ϕϕ',𝒮).
In particular, ϕ(M^k) ∖ϕ(M^k-1) is an open PL k-manifold with closure in ϕ(M^k), and its intersection with ϕ(U) has finitely many connected components.
So N^n is regular with a stratification {ϕ(M^k) ⊇ϕ(M^k-1), 0≤ k ≤ n }. Moreover, ϕ is a homeomorphism of regular stratified PL manifolds.
Suppose ϕ_t: M → N, t∈{0,1} are isotopic homeomorphisms onto the image, and ℳ is a stratified manifold with support M.
We say their images ϕ_0(ℳ) and ϕ_1(ℳ) are isotopic in N.
Suppose ℳ is a regular stratified PL n-manifold, and N^j is sub PL j-manifold of the PL manifold M^n.
For a point p in N^j ∩ (M^k ∖ M^k-1), we say ℳ and N^j are transversal at p, if
p has a regular chart (U,ϕ,𝒮), such that
ϕ(U∩ N^j)= D^j × O_n-j or D^j_+× O_n-j.
We call p a transversal point.
We say ℳ is transversal to N^j, if they are transversal at all points of intersection.
In this case, 𝒩:={ N^j+k-n=N^j ∩ M^k : j+k≥ n} is a regular stratified PL j-manifold with regular charts (U∩ N^j, ϕ|_U∩ N^j, 𝒮), called the sub j-manifold of ℳ.
Note that if ℳ and N^j are transversal at p, then j+k≥ n.
Suppose ℳ is a regular stratified PL n-manifold. A triangulation of M^n is called transversal to ℳ, if every simplex is transversal to ℳ.
We state a special case of the transversal theorem for PL manifolds of Williamson in Theorem 3.3.1 in <cit.>. We try to follow his symbols in that theorem.
Suppose B is a j-simplex in a PL n-manifold T.
Suppose S is a closed PL k-manifold in T with boundary Q, and Q has a neighbourhood X, such that X is transversal to B, then in any given neighborhood of S, there are isotopy H_t fixing Q, 0≤ t≤ 1, such that H_0 is identity and H_1(S) is transversal to B.
Suppose Δ is a combinatorial triangulation of a compact PL n-manifold M^n.
Suppose ℳ is a regular stratified PL n-manifold with support M^n, which is transversal to Δ on the boundary ∂ M^n. Then there is an ambient isotopy ϕ_t fixing the boundary, such that
ϕ_1(ℳ) is transversal to Δ.
Let us prove the statement by induction on n.
The statement is trivial for n=0.
Suppose the statement is true for (n-1)-manifolds.
Take an n-simplex σ of the triangulation of M^n. Denote the support of σ by S and the boundary by B. Take a neighbourhood U of σ.
We first ambient isotope M^0 away from S.
By induction on k=1,⋯, n-1, we prove that we can ambient isotope ℳ in U, such that ℳ and S are transversal at all points in M^k.
Suppose ℳ and S are transversal at all points in S∩ M^k-1.
Then the intersection of M^k-1 and the n-k-skeletons of σ is empty.
By the compactness of S∩ M^k-1, it has a neighbourhood of transversal points.
We ambient isotope M^k∖ M^k-1 away from the n-k-1-skeletons fixing a neighbourhood of M^k-1. By the regularity, for every connected component C^k of M^k∖ M^k-1, its closure is a PL sub k-manifold and its boundary is in M^k-1, which has a neighbourhood transversal to S. By Lemma <ref>, we ambient isotope every C^k fixing a neighbourhood of M^k-1, so that C^k is transversal to S. Moreover ambient isotope of different C^k's are independent of each other. Then the intersection of M^k∖ M^k-1 and S are transversal. It completes the induction on k and then ℳ is transversal to S.
By the compactness of S, ℳ|_S has a neighbourhood homeomorphic to ℳ|_S× [-1,1].
Note that S has a triangulation ∂σ. By induction on n, we can ambient isotope ℳ|_S, so that ℳ|_S is transversal to ∂σ.
Then we extend the ambient isotope to the neighbourhood.
We end up with ℳ transversal to the n-simplex σ.
Similarly we ambient isotope ℳ for other n-simplices without changing the neighbourhood of the previous simplices. Eventually, ℳ is transversal to the triangulation.
A regular stratified PL n-manifold 𝒩 is called a sub manifold of a regular stratified PL n-manifold ℳ, if N^n is a sub PL manifold of M^n and N^k=N^n∩ M^k, for any 0≤ k ≤ j.
We mainly use the following two operations of PL n-manifolds: (1) disjoint union (2) gluing.
We show that both operations extend to regular stratified PL n-manifolds.
Obviously the disjoint union of regular PL n-manifolds is a regular PL n-manifold.
We need to deal with the charts at the corner when we glue the boundary.
There is a PL homeomorphism ϕ from [-1,1]× [0,1] to [0,1]× [0,1] which preserves [0,1]×{0} and maps [-1,0] ×{0} to {0}× [0,1].
[Figure: the homeomorphism is constructed in three PL steps, sliding the segment AB of the bottom edge onto the left vertical edge {0}× [0,1] while keeping the segment BC on the bottom edge [0,1]×{0}.]
Suppose ℳ is a regular stratified PL n-manifold. Suppose 𝒫 and 𝒬 are regular stratified closed PL sub (n-1)-manifolds of ∂ℳ, 𝒫∩𝒬=∅, ∂𝒫∩∂𝒬 is a sub (n-2)-manifold ∂ℳ, and ϕ: 𝒫→𝒬 is an orientation reversing homeomorphism.
Then we obtain a regular stratified PL n-manifold ℳ/ϕ by gluing p∈𝒫 with ϕ(p)∈𝒬.
When p is in the interior of 𝒫, take a regular chart (U,ϕ_p,𝒮) of p in ∂ℳ, then (ϕ(U), ϕ_pϕ^-1, 𝒮) is a regular chart of q=ϕ(p) in ∂ℳ. Their neighbourhoods in ℳ are normal bundles of U and ϕ(U). The chart of the union of the two neighbourhoods is the union of the charts on the normal bundles.
When p is on the boundary of 𝒫, but not in ∂𝒫∩∂𝒬, we apply the homeomorphism in Lemma <ref> to the charts of p and q along ∂ (P∪ Q), and then glue the charts.
When p is an inner point of ∂𝒫∩∂𝒬, we apply the inverse homeomorphism in Lemma <ref> to the charts of p and q along ∂ P∩∂ Q, and then glue the charts.
When p is on the boundary of ∂𝒫∩∂𝒬, we apply the homeomorphism in Lemma <ref> to the charts of p and q along ∂ (P∪ Q), and then
we apply the inverse homeomorphism in Lemma <ref> to the charts along ∂ P∩∂ Q
and then glue the charts.
§.§ Labelled Stratified Manifolds
In the rest of the paper, the manifold in the top dimension n is always assumed to be oriented, compact, PL. All stratified manifolds and charts are regular. We will ignore the terminology oriented, compact, PL and regular for simplicity, if there is no confusion.
For a lattice on an oriented compact n-manifold M^n,
we describe its lattice shape as a stratified n-manifold ℳ with support M^n. The local lattice shape is described by the charts (U,ϕ,𝒮). Usually there are only finitely many local lattice shapes, so there are finitely many choices of 𝒮.
We consider a set as the positions to assign vectors, such as spins.
For points in , if they have the same local shape Λ𝒮, |𝒮|=S^n-1, they shall be assigned with vectors in the same vector space V_𝒮.
If the local shape of a position point is Λ𝒮^n-k-1×D^k, |𝒮^n-k-1|=S^n-k-1, an assignment is a replacement of
the closed stratified manifold 𝒮× D^k by a spin vector.
A configuration of the lattice is an assignment of vectors to points in .
Now let us formulate their mathematical definitions.
Suppose LS_k is a set of stratified PL manifolds with support S^n-k-1, 0≤ k≤ n, and LS_∙=∪_k=0^n LS_k.
We say a PL manifold ℳ has local shape LS_∙, if for any 0≤ k ≤ n and p∈ M^k∖ M^k-1, p has a chart (U,ϕ,𝒮), 𝒮∈ LS_k.
We call LS_∙ the local shape set.
We fix the local lattice shape first and we only consider stratified manifolds of a given type LS_∙. For example, when n=2, k=0, the local shape of a point p∈ M^0 is S^1 with a stratification of m points. Then p is an m-valent point. Usually m is 2,3,4,5,6 for 2D lattices.
For a local shape set LS_∙, a label space L is a set of vector spaces L={L_𝒮 : 𝒮∈ LS_∙} over the field 𝕂.
For a label space L, an L-labelled stratified manifold ℳ consists of
* a set of points , called position points;
* a regular chart (U_p, ϕ_p, 𝒮_p), for every p∈;
* and a vector ⊗_p∈ v_p ∈⊗_p∈ L_𝒮_p.
such that M^0⊆⊆ℳ; ϕ_p is orientation preserving; ∂(ℳ) and U_p, p∈, are pairwise disjoint.
We consider the L-labelled stratified manifold ℳ as
a replacement of U_p in ℳ by the label ϕ_p^-1(v_p), for all p∈.
Diagrammatically, we draw ϕ^-1(v_p) at p of the stratified manifold.
For a stratified manifold ℳ, we define its configuration space ℳ(L) to be the vector space spanned by L-labelled stratified manifolds ℳ.
For a manifold M with boundary S and a stratified manifold 𝒮 with support S,
We define the condensation space on M with boundary 𝒮 as
M_𝒮(L):={ℓ∈ℳ(L): |ℳ|=M, ∂ℳ=𝒮}.
When ∂ M=∅, we write M(L) for short.
The two operations disjoint union and gluing extend linearly to labelled stratified manifold.
By Prop. <ref>, a homeomorphism of manifolds ϕ: M→ N,
induces a homeomorphism of stratified manifolds ϕ: M(L)→ N(L).
For an orientation preserving homeomorphism of manifolds ϕ: M→ N,
and an L-labelled stratified manifold ℓ∈ M(L), with positions points , charts {(U_p, ϕ_p, 𝒮_p): p∈}, and vectors {v_p ∈ L_𝒮_p : p∈},
we define a labelled stratified manifold ϕ(ℓ)∈ N(L) with
* a set of position points ϕ();
* every ϕ(p)∈ϕ() has a chart (ϕ(U_p), ϕ_pϕ^-1, 𝒮_p), and a vector v_p in L_𝒮_p.
§ HYPER-SPHERE FUNCTIONS
§.§ Hyper-Sphere Functions
For a local shape set LS_∙ and label spaces L={L_𝒮 : 𝒮∈ LS_∙} over a field 𝕂,
we call a linear functional Z: S^n(L) →𝕂 homeomorphic invariant (HI), if
Z(ℓ)=Z(ϕ(ℓ)),
for any vector ℓ∈ S^n(L) and any orientation preserving homeomorphism ϕ on S^n.
In this case, we call Z an S^n functional for short.
Recall that the mapping class group of S^n is trivial, so the isotopic invariance of Z on S^n(L) is equivalent to the homeomorphic invariance.
We can extend the linear functional Z to M(L), |M|∼ S^n by homeomorphisms.
To simplify the terminology, we first focus on the stratified manifolds with support S^n.
We will extend the linear functional Z to M(L) for a general n-manifold M when we construct n+1 alterfold TQFT.
Recall that D^n+1=[-1,1]^n+1= D^n × [-1,1] and S^n=∂ D^n+1.
The equator of S^n is S^n-1×{0}, which separates S^n into two n-discs
D^n_+ =D^n×{1}∪ S^n-1× [0,1];
D^n_- =D^n×{-1}∪ S^n-1× [-1,0].
And ∂ D^n_±=± (-1)^n (S^n-1×{0}), where the sign ± (-1)^n indicates the orientation of the boundary.
As the stratified manifold contains more data, we would like to keep the stratified manifold as the first component in the product 𝒟^n-k× D^k. Then the new coordinate will be the last, and we obtain a global factor (-1)^n.
If we decompose D^n+1=[-1,1]^n+1= [-1,1] × [-1,1]^n
into two part ∂ D^n_± according to the first coordinate. Then ∂ D^n_±=± ({0}× S^n-1).
In this case, the new coordinate will be the first. Then the stratified manifold will appear as the last component in the product D^k ×𝒟^n-k.
Now let us study L-labelled stratified manifolds with boundary support S^n-1×{0}.
Suppose 𝒮 is an oriented stratified manifold with support S^n-1×{0}.
We define
V_𝒮,± to be the vector space spanned by L-labelled stratified manifold with support D^n_± and boundary ±(-1)^n𝒮.
The S^n functional Z defines a bi-linear form on V_𝒮,-× V_𝒮,+,
Z(ℓ_- ×ℓ_+):=Z(ℓ_-∪ℓ_+),
where ℓ_±∈ V_𝒮,± are L-labelled stratified manifolds.
We call the rank of the bilinear form the entanglement rank of Z over 𝒮, denoted by r_Z(𝒮).
We say Z has finite entanglement rank, if r_Z(𝒮)<∞ for any 𝒮.
A vector v ∈ V_𝒮,± is called a null vector, if
Z(v ∪ w)=0, ∀ w ∈ V_𝒮,∓.
The subspace of all null vectors are denoted by K_𝒮,±.
Their quotient spaces are denoted by Ṽ_𝒮,±:=V_𝒮,±/K_𝒮,±.
Note that the dimensions of Ṽ_𝒮,+ and Ṽ_𝒮,- are both equal to the rank of the bilinear form. When Z has finite entanglement rank, Ṽ_𝒮,± are finite dimensional.
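Concretely, if {ℓ^-_i} and {ℓ^+_j} are labelled stratified manifolds spanning V_𝒮,- and V_𝒮,+, then r_Z(𝒮) is the rank of the matrix M_ij=Z(ℓ^-_i∪ℓ^+_j), and the null spaces K_𝒮,- and K_𝒮,+ are its left and right kernels.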
Suppose ϕ is a homeomorphism on D^n_±, which is the identity on the boundary, then it induces the identity action on Ṽ_𝒮,±.
For any v ∈ V_𝒮,± and w ∈ V_𝒮,∓.
Z(v∪ w)=Z(ϕ(v)∪ w).
So ϕ(v)=v in Ṽ_𝒮,±.
A homeomorphism ϕ on S^k-1 extends to a homeomorphism Λϕ on D^k by the one-point suspension. It extends to a homeomorphism I_D^n-k×Λϕ on 𝒟^n-k× D^k. The corresponding action on Ṽ_∂(𝒟^n-k× D^k),± is the identity map.
It follows from Prop. <ref> and <ref>.
We define ρ to be the 180^∘ rotation along S^1 of the last two coordinates.
It induces a linear map from V_𝒮_± to V_ρ(𝒮)_∓.
More precisely, for an L-labelled stratified manifold ℓ with support D^n_±, position points , charts {(U_p, ϕ_p, 𝒮_p): p∈}, and vectors {v_p ∈ L_𝒮_p : p∈},
we define its rotation ρ(ℓ) as an L-labelled stratified manifold with support D^n_∓,
* position points ρ();
* a regular chart (ρ(U_p), ϕ_p ρ, 𝒮_p), for every ρ(p)∈ρ();
* and a vector ⊗_p∈ v_p.
The rotation ρ is well-defined from Ṽ_𝒮,± to Ṽ_𝒮,∓.
If v∈Ṽ_𝒮,± is a null vector, then for any w∈Ṽ_𝒮,±,
Z(ρ(v)∪ w)=Z(v∪ρ(w))=0.
So ρ(v) is a null vector.
So the rotation ρ is well-defined on the quotient Ṽ_𝒮,±.
Suppose M is an n-manifold with boundary S and 𝒮 is a stratified manifold with support S.
Recall that M_𝒮(L) is the vector space spanned by L-labelled stratified manifold with support M and boundary 𝒮.
For an S^n functional Z, we define the kernel K_Z to be the vector space spanned by stratified manifolds labelled with a null vector in a local n-disc.
We define quotient space M̃_𝒮(L):=M_𝒮(L)/K_Z.
By Prop. <ref>, the vector in M̃_𝒮(L) is well-defined up to isotopy in a local n-disc.
The two operations disjoint union and gluing preserve K_Z, so they are well-defined on the quotient spaces {M̃_𝒮(L)}, called the tensor and contraction respectively.
§ HYPER DISC ALGEBRAS AND SPHERE ALGEBRAS
Given an S^n functional Z on L-labelled stratified manifolds, the two operations disjoint union and gluing induce tensor product and contraction on the quotient spaces {M̃_𝒮(L)}. They provide fruitful algebraic structures according to the rich boundary conditions and the algebraic relations captured by the kernel K_Z.
In particular, we can glue two n-discs into one n-disc up to isotopy. We first study the n-category emerging from the local n-disc by idempotent completion.
Note that Z is homeomorphic invariant on S^n, so for any homeomorphism ϕ on ℝ^n+1 and ℓ∈ S^n(L), the value
Z(ϕ(ℓ)):=Z(ℓ)
is independent of ϕ.
Furthermore, if ϕ preserves 𝒮, |𝒮|=S^n-1×{0},
by Prop. <ref>, we can identify ϕ(v) with v in Ṽ_𝒮,±.
Let G be the group generated by affine transformations on ℝ for every coordinate of ℝ^n+1.
If an element g in G preserves S^n, then it is the identity map on S^n.
So there is no ambiguity to identify a vector v ∈Ṽ_𝒮,± with g(v).
We will write g(v) as v to simplify the notation.
We say we label v_p at p of a stratified manifold, if the labelled stratified manifold ℓ has a position point p, a regular chart (U_p,ϕ_p,𝒮_p) and a vector v_p∈ L_𝒮_p, such that ϕ_p=g|_U_p, for some g∈ G,
Diagrammatically, we draw the label v_p at p of the stratified manifold.
§.§ Hyper Disc Algebras
We introduce an n-dimensional hyper disc algebra, or a D^n-algebra, to study the algebraic properties of
V_𝒮_± for all 𝒮=S^n-1. It generalizes the notion of planar algebras introduced by Jones <cit.>.
The algebraic operations come from combinations of the two elementary operations: tensor product and contraction.
They are extremely rich due to the complexity of the stratified manifolds and topological isotopy.
We remove the interiors of m disjoint n-discs in an n-disc, all homeomorphic to the standard D^n, and denote the result by T^n(m). It is unique up to homeomorphism.
For example, D_0=[-1,1]^n, D_i= [-1/2,1/2]^n-1× [(-m-2+2i)/(m+1), (-m+2i)/(m+1)], i=1,2,⋯, m, and T^n(m)=D_0∖⋃_i=1^m D_i.
We consider ∂ D_0 as the output boundary and ∂ D_i as the i^th input boundary of T^n(m). The output boundary has the same orientation as the bulk T^n(m) and the input boundary has orientation opposite to the bulk.
We define an n-tangle as a stratified 𝒯 in ℝ^n, |𝒯|∼ T^n(m), such that the 0-manifold T_0 is empty in the interior.
From the view of TQFT, we only consider tangles up to isotopy in the interior.
If the output disc of 𝒯 is identical to the i^th input disc of 𝒯', then we obtain a tangle 𝒯'∘_i𝒯 by gluing the boundary.
For a local shape LS_∙, we consider a stratified manifold 𝒮 with support S^n-1 as an object and an n-tangle as a morphism from the tensor of objects to one object. Under such compositions, all n-tangles form an operad of stratified n-manifolds, denoted by (LS_∙), or just for short.
The tangle of the manifold T^n(m) (without stratification) is symmetric w.r.t. the inputs. The tangle of the stratified manifold 𝒯 is not symmetric.
Note |𝒯|∼ T^n(1) is a morphism from one object to one object. We call it an annular n-tangle. These objects and morphisms form a category, called the annular category of stratified n-manifolds.
For any stratified 𝒮, we denote its homeomorphic
equivalence class as [𝒮]:={𝒮': 𝒮'∼𝒮}.
We denote hom[𝒮] to be the groupoid of homeomorphisms between elements in [𝒮], and hom(𝒮) to be the group of homeomorphisms on 𝒮.
A D^n-algebra V is a covariant representation π of the operad (LS_∙) of n-tangles. That means
* for any object in , namely a stratified 𝒮, |𝒮|∼ S^n-1, there is a vector space V_𝒮;
* for any homeomorphism ϕ: 𝒮→𝒮', there is an invertible linear map π(ϕ) from V_𝒮 to V_𝒮',
* for any morphism in , namely an n-tangle 𝒯, there is a multilinear map π(𝒯): ⊗_i=1^m V_𝒮_i→ V_𝒮_0;
such that
* π is a representation of the groupoid hom[𝒮];
* for any n-tangle 𝒯 and 𝒯', s.t. 𝒯_0=𝒯'_i,
π(𝒯'∘_i𝒯)=π(𝒯')∘_iπ(𝒯);
* The action of 𝒯 is compatible with any orientation preserving homeomorphism ϕ on ℝ^n:
π(ϕ(𝒯))(⊗_i=1^mπ(ϕ|_𝒮_i))=
π(ϕ|_𝒮_0)π(𝒯).
We will ignore the symbol π if there is no confusion, and simply write the linear map π(𝒯) as 𝒯,
𝒯: ⊗_i=1^m V_𝒮_i→ V_𝒮_0.
The compatibility between 𝒯 and ϕ in condition (3) is equivalent to that
for any input vectors v_i ∈ V_𝒮_i,
ϕ(𝒯(⊗_i=1^m v_i))=ϕ(𝒯)(⊗_i=1^mϕ(v_i)).
For |𝒮|= S^n-1, we represent a vector α in V_𝒮 as a stratified manifold Λ𝒮 which is the one-point suspension of 𝒮 with the origin, and the origin is labelled with α.
We can choose a representative 𝒮 in [𝒮] and only consider the action of its mapping class group MCG(𝒮).
For a D^n-algebra V and an n-tangle 𝒯, |𝒯|=T^n(m),
we partially fill the input vectors v_i ∈ V_𝒮_i, for i ∈ I ⊂{1,2,⋯, m}, we call the result a labelled tangle which is a multilinear map on the rest inputs:
𝒯(⊗_i∈ I v_i): ⊗_i∉ I V_𝒮_i→ V_𝒮_0.
If all inputs are filled by vectors, then it becomes a vector in V, and we call it a fully labelled tangle.
For an n-tangle 𝒯 with m inputs, m≥ 2, and the i^th input boundary 𝒮_i has no stratification, we can fill the PL manifold D_i, D_i∼ D^n, and we obtain an n-tangle with m-1 inputs, denoted by 𝒯(D_i).
For a vector v ∈ V(S^n-1), we can fill the i^th input of 𝒯 by v_i, denoted by 𝒯(v_i).
For a D^n-algebra V, a vector v ∈ V(∂=S^n-1) is called the unit, if
𝒯(v_i)=𝒯(D_i),
as multilinear maps for any 𝒯 and i.
In this case, we call the D^n-algebra V unital.
The unit is unique.
If both v and w are units, then v=T^n(2)(w,v)=w.
A subspace W of a D^n-algebra V is a set of vector spaces {W(∂=𝒮) ⊆ V(∂=𝒮) : |𝒮|=S^n-1}.
We call W a sub D^n-algebra of V, if it is invariant under the action of n-tangles. We call W an ideal of V, if it is invariant under the action of V-labelled tangles.
The subspace/subalgebra/ideal W is considered to be trivial, if W=0 or V.
For a subspace W of a D^n algebra V, we denote A(W) to be the sub algebra generated by W and I(W) to be the ideal generated by W.
The quotient V/W for an ideal W of a D^n algebra V is a D^n algebra.
It follows from the definitions.
A D^n-algebra V is called decomposible, if V=V_1⊕ V_2 for two sub D^n-algebras V_1 and V_2.
A D^n-algebra V is called reducible, if it has a non-trivial ideal.
For two D^n-algebras with representations (or functors) π_V and π_W, a homomorphism Φ: V→ W of D^n-algebras is a natural transformation between the functors π_V and π_W.
A D^n algebra is called finite dimensional, if every vector space V(∂=𝒮), |𝒮|=S^n-1, is finite dimensional.
For a given local shape set LS_∙, all L-labelled stratified manifolds S^n(L) form a D^n algebra, called the free D^n algebra.
§.§ Hyper Sphere Algebras
For a D^n algebra V and an S^n functional on S^n(V), we call the pair (V,Z) a hyper sphere algebra or an S^n algebra. An S^n algebra (V,Z) is called non-degenerate, if K_Z=0.
For an S^n functional on S^n(L), the kernel K_Z is an ideal of the free D^n algebra.
The quotient spaces Ṽ:={Ṽ_𝒮,- : |𝒮|=S^n-1} form a D^n algebra, and (Ṽ,Z) is a non-degenerate S^n-algebra.
If an input vector is a stratified manifold labelled by a null vector, then the output is also labelled by a null vector. So K_Z is an ideal.
So the quotient Ṽ is a D^n-algebra.
The S^n functional Z is zero on K_Z, so it is well-defined on S^n(Ṽ).
Moreover, a null vector in V_𝒮 is in K_Z. So (Ṽ,Z) is a non-degenerate
S^n-algebra.
Recall that Ṽ_𝒮,± are spanned by L-labelled stratified manifolds modulo the kernel K_Z.
We can update the label space L by the D^n algebra Ṽ={Ṽ_𝒮,-: |𝒮|=S^n-1}.
For a D^n algebra V, a V-labelled stratified manifold ℳ consists of
* a set of points , called position points;
* a regular chart (U_p, ϕ_p, 𝒮_p), for every p∈;
* and a vector ⊗_p∈ v_p ∈⊗_p∈ V_𝒮_p.
such that M^0⊆⊆ℳ; ϕ_p is orientation preserving; ∂(ℳ) and U_p, p∈, are pairwise disjoint.
For the non-degenerate S^n algebra (Ṽ,Z), Ṽ-labelled stratified manifold ℳ with boundary 𝒮 and support D^n_± is a vector in Ṽ_𝒮,±.
We can regard ℳ as an L-labelled stratified manifold, by replacing every v_p ∈ V_𝒮_p as an L-labelled stratified manifold.
By Prop. <ref>, ℳ is a null vector, if a label v_p is a null vector. So a Ṽ-labelled stratified manifold ℳ with boundary 𝒮 and support D^n_± is a vector in Ṽ_𝒮,±.
§.§ Reflection Positivity
Suppose * is an order-two automorphism of the field 𝕂. For vector spaces V and W, a map T:V→ W is *-linear if
T(ax+by)=a^*T(x)+b^*T(y), ∀ a,b∈𝕂, x,y∈ V.
We define a reflection θ on an S^n algebra (V,Z) as order-two *-linear maps θ: V_𝒮,±→ V_𝒮,∓ for every boundary stratified manifold 𝒮 with support S^n-1, such that
θ𝒯(⊗_i∈ I v_i)=𝒯(⊗_i∈ Iθ(v_i) ),
for any fully labelled tangle 𝒯(⊗_i∈ I v_i).
For an S^n functional Z, the reflection θ on the S^n algebra (V,Z) (or (Ṽ,Z)) is determined by the action of the label space L.
We may add the 180^∘ rotation ρ of θ(L) to the label space L for convenience.
We call an S^n functional Z Hermitian w.r.t. a reflection on the S^n-algebra (V,Z), if for any v_±∈ V_𝒮,±,
Z(v_-∪ v_+)^* =Z(θ(v_+)∪θ v_-).
If an S^n functional Z Hermitian, then the reflection is well-defined on the quotient spaces Ṽ_𝒮,±.
For any null vector v_±∈ V_𝒮,±, by Equ. <ref>, we have θ(v_±) is a null vector. So the reflection θ is well-defined on the quotient spaces Ṽ_𝒮,±.
A Hermitian S^n functional Z is called reflection positive (RP) over a number field 𝕂, if
Z(v∪θ (v))≥ 0, ∀ v ∈ V_𝒮,±.
Usually the involution * on ℂ is the complex conjugate. When Z is reflection positive, the quotient spaces Ṽ_𝒮,± are (pre) Hilbert spaces, which are called the physical Hilbert space in quantum field theory.
In particular, v is a null vector iff
Z(v ∪θ(v))=0, by Cauchy-Schwarz inequality.
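Equivalently, when Z is reflection positive, ⟨ v,w⟩:=Z(v∪θ(w)) is a positive semi-definite Hermitian form on V_𝒮,± (Hermitian by Equ. <ref> and θ^2=1), and the quotient Ṽ_𝒮,±=V_𝒮,±/K_𝒮,± carries the induced inner product; these are the physical (pre-)Hilbert spaces mentioned above.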
In Def. <ref>, we update the label space from L to Ṽ_𝒮,- of the D^n algebra Ṽ for orientation preserving homeomorphisms in regular charts.
When Z is Hermitian, we can further extend the label space to Ṽ_𝒮,±. A stratified manifold may have a label ϕ^-1(v) for orientation reversing homeomorphism ϕ in a regular chart (U, ϕ, 𝒮) and a vector v∈Ṽ_𝒮,+.
For an S^n algebra (Ṽ,Z), a Ṽ-labelled stratified manifold ℳ consists of
* a set of points , called position points;
* a regular chart (U_p, ϕ_p, 𝒮_p), for every p∈;
* and a vector ⊗_p∈ v_p ∈⊗_p∈Ṽ_𝒮_p,±.
such that M^0⊆⊆ℳ; the ± sign of Ṽ_𝒮_p,± depends on the orientation of ϕ_p; ∂(ℳ) and U_p, p∈, are pairwise disjoint.
We generalize Prop. <ref> for orientation reversing homeomorphisms as follows.
Suppose ϕ is an orientation reversing homeomorphism from D^n_± to D^n_∓, which is the identity on the boundary, then ϕ=θ as an action on Ṽ_𝒮,±.
By Prop. <ref>, both ϕθ and θ^2 are the identity on Ṽ_𝒮,±. So ϕ=θ.
§ IDEMPOTENT COMPLETION AND SPHERICAL N-CATEGORY
In this paper, we study the higher representation theory of the D^n-algebra and construct an n-category. We consider such an n-category arising from an S^n functional as a spherical n-category. If the S^n functional is reflection positive, then we consider it as a unitary n-category.
From the S^n functional Z, we obtain a non-degenerate S^n-algebra (Ṽ,Z).
We call it finite dimensional, if Ṽ_𝒮,- is finite dimensional for any 𝒮.
In this paper, we focus on finite dimensional S^n algebras.
A finite dimensional semisimple algebra over a field 𝕂 is a direct sum of matrix algebras, ⊕_i M_n_i(𝕂).
If it is commutative, then n_i=1.
It is well-known that a finite dimensional C^* algebra is semisimple.
We show that for any |𝒟^n-1|=D^n-1, Ṽ_∂(𝒟^n-1× D^1) forms an associative algebra under the composition along the last coordinate.
If Z is reflection positive, then the algebra is a C^*-algebra.
Over a general field 𝕂, we assume that the algebra is semisimple.
We consider a (minimal) idempotent of the algebra as an (indecomposible) representation.
We will study the idempotent completion of these semisimple algebras and study the n-category structure of these idempotents as a higher representation category.
§.§ Idempotent Completion
In the following, we assume that D^n and D^n_- have the same orientation. (We ignore the global sign (-1)^n+1 of the orientation for simplicity.)
Suppose V_𝒮 is the vector space spanned by L-labelled stratified manifold with support D^n and the same boundary orientation as V_𝒮,-.
For a vector x∈ V_𝒮,
we construct the vector x_- in V_𝒮,- as
x_- :=x×{-1}∪𝒮× [-1,0].
We study the algebraic structure of Ṽ_𝒮,- by that of the D^n algebra V_𝒮. We rescale two D^n and then compose them to one D^n along the last coordinate, and consider their composition as a multiplication. We first study its idempotent completion which encodes an n-category.
For an L-labelled stratified manifold 𝒮^n with support S^n, it is a scalar according to Z.
For an L-labelled stratified manifold 𝒟^n with support D^n, it is a vector.
For a stratified manifold 𝒮^n-1 with support S^n-1, it corresponds to a vector space V_𝒮^n-1.
We will explain the categorical meaning of stratified manifolds with support D^k and S^k-1 for all k.
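To summarize the pattern (made precise below): a closed labelled stratified manifold with support S^n is evaluated by Z to a scalar; a labelled stratified manifold with support D^n is a vector; a stratified manifold with support S^n-1 gives the vector space of such vectors; and idempotents on 𝒟^k× D^n-k will serve as the k-morphisms of the emerging n-category.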
Note that when n=1, the vector space Ṽ_S^0,- forms an associative algebra with a trace from Z. For x,y ∈ V_S^0, we label x at [-1,0] and label y at [0,1].
More precisely, the position point -1/2 has a vector x with a regular chart (U,ϕ,𝒮), U=[-1,0], ϕ:U→ D^1, 𝒮=S^0.
We consider the result as their multiplication, denoted by xy, which is a vector in V_S^0.
Moreover, V_S^0 is an associative algebra.
The associativity follows from the following homeomorphism ϕ on [-1,1]:
[Figure: the homeomorphism ϕ on [-1,1] rescales the subintervals carrying the labels x, y, z, matching the two bracketings of the product xyz.]
Note that D^1 is the identity I of V_S^0.
We define the trace of x as
Tr(x):=Z(x_- ∪ρ(D^1_-)).
By isotopy, we have
Tr(xy)=Z(x_- ∪ρ(y_-))=Tr(yx).
[Figure: the disc labelled by x below y equals the disc labelled by x and ρ(y), which equals the disc labelled by y below x; this shows Tr(xy)=Z(x_-∪ρ(y_-))=Tr(yx).]
In the rest, we draw the last second coordinate as the vertical one, and consider it as the multiplication direction.
If x_- is a null vector, then Tr(x)=0 and for any a,a',y ∈ V_S^0,
Z((axa')_-∪ρ(y_-))=Z(x_- ∪ρ(a_-)ρ(y_-)ρ(a'_-))=0.
So (axa')_- is a null vector. So the multiplication and the trace are well-defined on Ṽ_S^0,-.
[Figure: the labels a and a' are isotoped around S^1 from the x-side to the ρ(y)-side, which gives the identity above.]
Now let us study the algebra for a general n.
For a stratified manifold 𝒟^n-1 with support D^n-1, we construct
𝒮=∂(𝒟^n-1× D^1)=∂𝒟^n-1× D^1∪𝒟^n-1× S^0.
Then |𝒮|=S^n-1. The vector space V_𝒮,- forms an algebra in the following sense.
For two vectors x, y in V_𝒮, we label x at D^n-1× [-1,0] and y at D^n-1× [0,1] after scaling their last coordinates by 1/2. We define the result as their multiplication, denoted by xy. Note that the boundary of xy is 𝒮, so xy is a vector in V_𝒮.
We denote ρ to be the 180^∘ rotation of the last second coordinate.
Then
ρ(xy)=ρ(y)ρ(x), ∀ x,y ∈Ṽ_𝒮,∓.
For any x∈ V_𝒮 above, we define its trace as
Tr(x):=Z(x_-∪ρ((𝒟^n-1× D^1)_+)).
Then for any x,y ∈ V_𝒮, we have
Tr(xy)=Z(x_-∪ρ(y_-))=Tr(yx).
It follows from isotopy around S^1, similar to the case n=1.
[Figure: the same isotopy around S^1 in 𝒟^n-1× D^1× [-1,1]: the box with x below y equals the box with x and ρ(y), which equals the box with y below x; hence Tr(xy)=Z(x_-∪ρ(y_-))=Tr(yx).]
We keep the 3D pictorial convention for the n+1 coordinates of D^n+1.
The first n-1 coordinates are horizontal.
The last second coordinate is the vertical multiplication.
For any |𝒟^n-1|=D^n-1, the above multiplication and trace are well-defined on Ṽ_𝒮,-, 𝒮=∂(𝒟^n-1× D^1).
Moreover, Ṽ_𝒮,- forms an associative algebra, denoted by A(𝒟^n-1× D^1). Its identity is 𝒟^n-1× D^1.
Similar to the case n=1, if x is a null vector in Ṽ_𝒮,-, then Tr(x_-)=0 and (axb)_- is a null vector, by the isotopy invariance of Z. So the trace and the multiplication are well-defined on Ṽ_𝒮,-.
The associativity is similar to the case n=1.
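In particular, for n=1 and 𝒟^0 a single point, 𝒮=∂(𝒟^0× D^1) has support S^0, and A(𝒟^0× D^1) recovers the associative algebra Ṽ_S^0,- above, with identity D^1 and trace Tr(x)=Z(x_-∪ρ(D^1_-)).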
We call the non-degenerate S^n algebra (Ṽ,Z) or the S^n functional Z to be semisimple, if Ṽ_𝒮,- is semisimple for any |𝒟^n-1|=D^n-1.
For 2 ≤ k ≤ n, suppose 𝒟^n-k is a stratified manifold with support D^n-k. We construct
𝒮=∂(𝒟^n-k× D^k)=∂𝒟^n-k× D^k∪𝒟^n-k× S^k-1.
Then Ṽ_𝒮,- is a commutative associate algebra, denoted by A(𝒟^n-k× D^k), with unit 𝒟^n-k× D^k.
By Corollary <ref>, it does not matter which direction in the second component D^k is for the multiplication, as its boundary ∂ D^k has no stratification.
We consider 𝒟^n-k× D^k-1 as 𝒟^n-1, then by Prop. <ref>, Ṽ_𝒮,- is an associate algebra.
The commutativity for k≥ 2 follows from switching two smaller k-discs in D^k by isotopy in Fig. <ref>.
This commutative multiplication is usually considered as a commutative Frobenius algebra in TQFT.
In general, if we fix the first component 𝒟^n-k and change the second component D^k to all possible 𝒟^k,
then the boundary is
𝒮=∂(𝒟^n-k×𝒟^k)=∂𝒟^n-k×𝒟^k∪𝒟^n-k×∂𝒟^k.
These vector spaces form a D^k algebra.
For 0 ≤ k ≤ n-1 and |𝒟^k|=D^k,
we call a minimal idempotent α of A(𝒟^k× D^n-k)
an indecomposable k-morphism of type 𝒟^k.
Suppose ϕ is an homeomorphism on D^k.
Then for any k-morphism α of type 𝒟^k, ϕ(α) is a k-morphism of type ϕ(𝒟^k).
We call α and ϕ(α) homeomorphic equivalent.
If ϕ fixes the boundary, then we call α and ϕ(α) interior homeomorphic equivalent.
In principle, we are only interested in k-morphisms up to (interior) homeomorphic equivalence, as the S^n functional is homeomorphic invariant.
For a k-morphism α of type 𝒟^k, we define its quantum dimension as Tr(α).
If A(𝒟^k× D^n-k) is semisimple, then for any indecomposable k-morphism α of type 𝒟^k, we have Tr(α)≠ 0.
Assume that Tr(α)=0. By the semisimplicity,
for any x ∈ A(𝒟^k× D^n-k), α xα=cα for some c∈𝕂. So
Tr(xα)=Tr(α xα)=Tr(cα)=0,
By Prop. <ref>, α is a null vector, which is a contradiction. So Tr(α)≠ 0.
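As an illustration of this statement (a special case recorded only for orientation): if A(𝒟^k× D^n-k)≅⊕_i M_n_i(𝕂) with trace Tr(x)=∑_i d_i tr(x_i) for scalars d_i∈𝕂, then a minimal idempotent α is a rank-one projection in a single summand M_n_i(𝕂), so Tr(α)=d_i, and non-degeneracy of the trace on that summand forces d_i≠ 0.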
Suppose θ is a reflection. Then ρθ=θρ on V_𝒮_±.
Under the action of ρθ, the chart (U_p, ϕ_p, 𝒮_p) and vector v_p of a labelled stratified manifold becomes (ρθ(U_p), ϕ_pθρ, ρθ𝒮_p) and θ(v_p).
Under the action of θρ, they become (θρ(U_p), ϕ_pρθ, θρ𝒮_p) and θ(v_p).
The identity ρθ=θρ is a basic geometric fact that the vertical reflection is the composition of the 180^∘ rotation and the horizontal reflection.
Suppose θ is a reflection. For any x ∈Ṽ_𝒮_-, we define its adjoint as
x^*:=ρθ(x).
Then Tr(x^*)=Tr(x)^*. So the adjoint is well-defined from Ṽ_𝒮,- to Ṽ_𝒮^*,-, where 𝒮^* is the vertical reflection of 𝒮.
Then A(𝒟^n-k× D^k) is a *-algebra with the adjoint *.
If further Z is reflection positive and A(𝒟^n-k× D^k) is finite dimensional, then it is a C^* algebra.
If further Z is reflection positive, then
Tr(xx^*)=Z(x_- ∪ρ(x^*_-))=Z(x_- ∪θ (x_-))≥ 0.
So Tr(xx^*)=0 iff x_-=0. So Tr is positive definite on A(𝒟^n-k× D^k).
If further A(𝒟^n-k× D^k) is finite dimensional, then it is a finite dimensional C^* algebra. In particular, A(𝒟^n-k× D^k) is semisimple.
§.§ Bimodules and Morita equivalence
Suppose α_± are k-morphisms of type 𝒟_± respectively, and they have the same boundary 𝒮. Suppose 𝒟 has support D^k+1 with boundary 𝒮^k=(𝒟_- ×{-1}∪𝒟_+ ×{1}∪𝒮× D^1). Suppose β is an idempotent of A(𝒟× D^n-k-1).
We call β an α_--α_+ bimodule, if P(α_-) β Q(α_+)=β,
where P(α_-) is 𝒟× D^n-k-1 labelled by α_- at 𝒟_-× [-1,-1+ε] × D^n-k-1 and Q(α_+) is 𝒟× D^n-k-1 labelled by α_+ at 𝒟_+× [1-ε, 1] × D^n-k-1.
When β is a minimal idempotent, we call it an indecomposable α_--α_+ bimodule, and a (k+1)-morphism from α_- to α_+.
By isotopy, the three idempotents P(α_-), β, Q(α_+) commute.
The equality P(α_-) β Q(α_+)=β is equivalent to that β is a sub idempotent of P(α_-) and Q(α_+).
Suppose α is a k-morphism of type 𝒟^k. We can consider it as an idempotent in A((𝒟^k× D^1)× D^n-k-1). It is an α-α bimodule, called the trivial bimodule.
Applying the 180^∘ rotation to the k+1 and k+2 coordinates, we obtain a α_+-α_- bimodule, called the dual bimodule of β, denoted by β.
Suppose α_i are k-morphisms, i=0,1,2, and β_i are α_i-α_i+1 bimodules, i=0,1.
We label β_0 at D^k+1_-× D^n-k-1 and β_1 at D^k+1_+× D^n-k-1, and the result is an α_0-α_2 bimodule, called the fusion of β_0 and β_1 at α_1, denoted by β_0⊗_α_1β_1.
The fusion could be considered as a composition of (k+1)-morphisms at the boundary α_1.
The fusion is well defined modulo interior homeomorphic equivalence.
Two indecomposable k-morphisms are called Morita equivalent in the D^n algebra, if they have a bimodule in the D^n algebra.
The Morita equivalence is an equivalence relation.
The Morita equivalence has the three properties, Reflexivity, Symmetry and Transitivity as discussed above. So it is an equivalence relation.
The global dimension of an indecomposable (n-1)-morphism is defined to be 1 (assuming semisimplicity).
For an indecomposable k-morphism α of type 𝒟^k, take 𝒮^k= (𝒟^k_-∪ D^k_+).
The global dimension of α is defined inductively for k=n-1,n-2, ⋯,0 as
μ(α)=∑_βTr(β)^2/μ(β),
summing over indecomposable α-α bimodules β, where β is a representative of its annular equivalence class.
To ensure it is well-defined, we need to assume μ(β)≠ 0 inductively.
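For orientation, an illustrative special case (assuming the identification, for n=2, of the idempotent completion with a fusion category, as in the Examples section below): an indecomposable 1-morphism β has μ(β)=1 by the convention above, so for a 0-morphism α the recursion reads
μ(α)=∑_βTr(β)^2,
summing over representatives β of the annular equivalence classes of α-α bimodules; when Tr(β) is the quantum dimension d_β of a simple object, this is the usual global dimension ∑_β d_β^2 of a fusion category.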
For a semisimple S^n functional Z, we call it strong semisimple, if the global dimension of any k-morphism is nonzero, for any 0≤ k≤ n-1.
The non-zero global dimension condition is necessary when we construct n+1 TQFT later. If the condition only holds partially, then we can only construct a TQFT partially.
Etingof, Nikshych and Ostrik proved that the global dimension of a fusion category is positive over ℂ in Theorem 2.3 in <cit.>.
That means the global dimension of the 0-morphism is non-zero. It will be interesting to see whether the strong semisimple condition can be derived from weaker conditions in higher dimensions.
We call Tr(β)^2/μ(β) the intrinsic dimension of the indecomposable k-morphism β.
Neither the quantum dimension Tr(β) nor the global dimension μ(β) is invariant under Morita equivalence.
We will prove that the intrinsic dimension Tr(β)^2/μ(β) is invariant under Morita equivalence in Theorem <ref>.
§.§ Annular Equivalence
Suppose |𝒮|=S^k-1, |𝒟_i|=D^k and ∂(𝒟_i)=𝒮, for i=0,1.
We define V(𝒟_1,𝒟_0) as a vector space spanned by L-labelled stratified manifolds with support D^k× (D^n-k∖1/2D^n-k) and boundary
𝒮× (D^n-k∖1/2 D^n-k) ∪𝒟_0 × S^n-k-1∪𝒟_1 ×1/2 S^n-k-1.
Here the scalar 1/2 in front of D^k and S^n-k-1 means scaling their radius by 1/2, and
D^n-k∖1/2D^n-k is removing the interior of 1/2 D^n-k from D^n-k.
Let us define the multiplication
V(𝒟_1,𝒟_0) × V_∂(𝒟_1× D^n-k)→ V_∂(𝒟_0× D^n-k).
For a vector T ∈ V(𝒟_1,𝒟_0) and a vector x ∈ V_∂(𝒟_1× D^n-k),
we scale x by 1/2 and then put it inside T, and denoted the result as their multiplication Tx. Then Tx ∈ V_∂(𝒟_0× D^n-k).
[Figure: the vector x, scaled by 1/2, is placed inside the annular region carrying T to form the multiplication Tx.]
If x is a null vector, then Tx is a null vector. So the multiplication is well defined for
V(𝒟_1,𝒟_0) ×Ṽ_∂(𝒟_1× D^n-k),-→Ṽ_∂(𝒟_0× D^n-k),-.
If T contains a null vector in its local region, then Tx=0.
For any interior homeomorphism ϕ on D^k× (D^n-k∖1/2D^n-k), Tx=ϕ(T)x in Ṽ_∂(𝒟_0× D^n-k),-.
We define A(𝒟_1,𝒟_0) as V(𝒟_1,𝒟_0) modulo local null vectors.
Then the multiplication is well defined for
A(𝒟_1,𝒟_0) ×Ṽ_∂(𝒟_1× D^n-k),-→Ṽ_∂(𝒟_0× D^n-k),-.
Suppose |𝒮|=S^k-1, |𝒟_i|=D^k and ∂(𝒟_i)=𝒮, for i=0,1,2.
For T_1∈ A(𝒟_1,𝒟_0) and T_2 ∈ A(𝒟_2,𝒟_1), we define their multiplication T_2T_1 by composing them along the radius of D^n-k.
Then the multiplication is associative and
A(𝒟_∙,𝒟_∙) forms an algebroid, which we call the S^n-k-1 annular algebroid with boundary 𝒮^k-1.
For indecomposable k-morphisms α_i of type 𝒟_i, i=0,1, with common boundary 𝒮^k-1, we call them annular equivalent, if there are T_1∈ A(𝒟_1,𝒟_0) and T_2 ∈ A(𝒟_0,𝒟_1), such that
T_2α_0=α_1, T_1α_1=α_0.
For 0≤ k ≤ n-1,
two indecomposable k-morphisms α_0 and α_1 are annular equivalent iff they are Morita equivalent.
If T(α_1)=α_0 for an annular action T, then the normal microbundle of T at D^k× [-1,-1/2]×ε D^n-k-1 is a α_0-α_1 bimodule. So they are Morita equivalent.
Conversely, suppose β is a α_0-α_1 bimodule of type 𝒟^k+1.
Suppose ϕ is a homeomorphism from 𝒟^k× (D^n-k∖1/2D^n-k) to the normal microbundle U of 𝒟^k× S^n-k-1. We label U by β at the microbundle of 𝒟^k×ε D^1 × O_n-k-1, and denote the result by T. Take T_β=ϕ^-1(T),
then
T_β(α_1) =Tr(β)/Tr(α_0)α_0.
as illustrated in Fig. <ref>.
By Prop. <ref>, Tr(β)≠ 0, so ϕ^-1(T) is non-zero. So α_0 and α_1 are annular equivalent.
The annular equivalence is an equivalence relation.
It follows from Prop. <ref> and <ref>.
Suppose ϕ:𝒟_1 →𝒟_0 is a homeomorphism fixing the boundary. Then 𝒟_1 and 𝒟_0 are isotopic by Alexander's trick, through ϕ_t, t∈ [0,1], in Equ. <ref>.
We construct T ∈ A(𝒟_1,𝒟_0), such that
T|_D^k × t S^n-k, t∈ [1/2,1] is given by ϕ_2t-1(𝒟_1) × S^n-k. Then
T(xy)=T(x)T(y). So any k-morphism α of type 𝒟_1 is annular equivalent to the k-morphism T(α) of type ϕ(𝒟_1).
Thus the equivalence class is well defined up to isotopy ϕ:𝒟_1 →𝒟_0.
Recall from Def. <ref>,
for a stratified manifold 𝒮, |𝒮|=S^k-1, D^k× S^n-k_𝒮^k-1× S^n-k(L) is the vector space of L-labelled stratified manifold with support D^k× S^n-k and boundary 𝒮^k-1× S^n-k modulo null vectors.
We define 𝒟^k(β)× S^n-k to be the stratified manifold 𝒟^k × S^n-k labelled by a k-morphism β at 𝒟^k× D^n-k×{-1}.
Suppose A(𝒟^k× D^n-k) is semisimple for any 𝒟^k with support D^k and boundary 𝒮.
Then the vector space D^k× S^n-k_𝒮^k-1× S^n-k(L) is spanned by 𝒟^k(β)× S^n-k, for indecomposable k-morphisms β. Moreover,
if k-morphism β_0 and β_1 are annular equivalent, then
𝒟^k(β_0)× S^n-k=λ𝒟^k(β_1)× S^n-k,
in D^k× S^n-k_𝒮^k-1× S^n-k(L), for some λ∈𝕂.
Suppose ℓ is an L-labelled stratified manifold in D^k× S^n-k_𝒮^k-1× S^n-k(L). By isotopy, we may assume that ℓ intersect with D^k× O_n-k×{-1} transversely and the intersection is a stratified manifold 𝒟^k.
By isotopy, we assume that ℓ|_D^k× D^n-k×{-1} is 𝒟^k× O_n-k×{-1}. We decompose the identity 𝒟^k× O_n-k of A(𝒟^k× D^n-k) as a sum of minimal idempotents, i.e., k-morphisms β.
By the semisimplicity, a vector in D^k× S^n-k_𝒮^k-1× S^n-k(L) with a label β at D^k× D^n-k×{-1} is a multiple of 𝒟^k(β)× S^n-k.
So the vector space D^k× S^n-k_𝒮^k-1× S^n-k(L) is spanned by these 𝒟^k(β)× S^n-k.
If k-morphism β_0 and β_1 are annular equivalent, by isotopy, we can change 𝒟^k(β_0)× S^n-k as a vector labelled by β_1 at D^k× D^n-k×{-1}. By the semisimplicity, the vector is a multiple of 𝒟^k(β_1)× S^n-k.
For the S^n functional Z, we call Z n-finite, if for any 𝒮, |𝒮|=S^n-1, the vector space V_𝒮,- is finite dimensional.
For 0≤ k≤ n-1, we call Z k-finite, if for any 𝒮, |𝒮|=S^k-1, there are finitely many annular equivalent classes of indecomposible k-morphisms with boundary 𝒮.
We call Z complete finite, if it is k-finite for all 0≤ k ≤ n.
For any |𝒮|=S^k-1, it is more conceptual to consider an equivalence class of k-morphisms as a vector with support D^k × S^n-k and boundary 𝒮^k-1× S^n-k.
§.§ Simplicial morphisms
In this section, we will label indecomposible k-morphisms on the k-simplices of an n-simplex, as a preparation for computing the partition function of (n+1)-manifolds as a state sum based on a triangulation. To simplify the state sum formula, we expect to choose the representatives of k-morphisms as simple as possible.
The methods also work for polytopes, which could be used for CW-complex decomposition of manifolds. This will be clear after we construct the TQFT and derive the skein theory for general k-morphisms.
Just like in algebraic topology, usually it is more convenient to prove theoretical results using simplicial decomposition, as there are less shapes to discuss.
It is more convenient to compute the invariant using CW-complex decomposition in practice, as it has less cells in the CW-complex decomposition than in the triangulation of a manifold.
Now let us show how to label the simplices by indecomposible k-morphisms.
For k=0, a 0-morphism is a minimal idempotent of the algebra in A(D^0× D^n). It has the boundary 𝒮=∂(D^0× D^n)=S^n-1.
For 1≤ k ≤ n-1, suppose 𝒟^k is a stratified manifold with support D^k, and α is a k-morphism of type 𝒟^k.
Suppose Δ^k is a linear k-simplex and σ: Δ^k → D^k is a homeomorphism, such that the skeletons of σ(Δ^k) intersect with 𝒟^k transversely.
We consider σ as a Δ^k decomposition of ∂𝒟^k.
For every 0-face F of Δ^k, we consider ℱ:=𝒟^k|_σ(F) as a 0-face of 𝒟^k.
Take a homeomorphism ϕ_F: F → D^k-1.
It extends to a homeomorphism ϕ'_F: U(F) → D^k-1× D^n-k+1, where U(F) is a normal microbundle of F in 𝒟^k× D^n-k.
Suppose β is a (k-1)-morphism of type ϕ_F(ℱ).
We denote P_β as the stratified manifold 𝒟^k× D^n-k labelled by (ϕ'_F)^-1(β).
Then P_β is an idempotent of A(𝒟^k× D^n-k).
And the identity decomposes as 𝒟^k× D^n-k=∑_β P_β, summing over all (k-1)-morphisms β of type ϕ_F(ℱ).
By isotopy, P_β commutes with α. So there is a unique (k-1)-morphism β such that P_β contains the minimal idempotent α.
For a k-morphism α with support 𝒟^k, a Δ^k decomposition σ and a 0-face F, we call the above unique P_β containing α the boundary (k-1)-morphism of α at σ(F), denoted by α_F.
We say α_F is homeomorphic equivalent to β.
For a k-morphism α with a simplicial decomposition σ, we define its simplicial boundary as
∂_σ(α)= ⊕_Fα_σ(F),
summing over all 0-faces F.
Suppose C_k, 0 ≤ k ≤ n, is a set of k-morphisms.
We call C_0 to be complete if any 0-morphism is annular equivalent to an element in C_0.
For 1≤ k ≤ n-1, we call C_k to be C_k-1-complete, if for any k-morphism α, there is a homeomorphism ϕ on D^k, such that (ϕ× I_D^n-k)(α) is annular equivalent to an element in C_k, whenever α has a simplicial decomposition with boundary (k-1)-morphisms homeomorphic equivalent to (k-1)-morphisms in C_k-1.
We call C_n to be C_n-1-complete, if for any n-morphism α, i.e. a vector in Ṽ_𝒮,-, there is a homeomorphism ϕ on D^n, such that ϕ(α) is a linear sum of vectors in C_n, whenever the boundary faces of |𝒮|=S^n-1 are labelled by homeomorphic images of (n-1)-morphisms in C_n-1.
Suppose C_k is a set of k-morphisms, 0 ≤ k ≤ n. If C_k is C_k-1-complete, for all k, then we call C_∙=∪_k=0^n C_k a simplicial representative set.
If C_∙ has no simplicial representative subset, then we call it minimal.
For a minimal simplicial representative set, k-morphisms in C_k, 0 ≤ k ≤ n-1, are pairwise annular inequivalent and homeomorphic inequivalent, and vectors in C_n are linearly independent.
We call the S^n functional Z simplicial finite, if it has a simplicial representative set C_∙=∪_k=0^n C_k and every C_k is a finite set.
Does the simplicial finiteness of Z imply the complete finiteness?
§.§ Spherical n-category
As defined in Def. <ref>,
we can compose two k-morphisms into one k-morphism by gluing the two k-discs into one k-disc along the boundary. We consider a k+1-morphism as a morphism or a bimodule between two k-morphisms.
In this way, all k-morphisms, 0≤ k ≤ n, form an n-category.
The higher pivotal structure of the n-category, such as the saddle surface of a 3-category in Fig. <ref>, is encoded by the regular stratified PL manifold.
We call the n-category arisen from a non-degenerate, complete finite and strong semisimple S^n functional Z a spherical n-category. If Z is Hermitian or reflection positive, then the spherical n-category is semisimple, Hermitian or unitary respectively.
We consider the spherical n-category as a generalization of the spherical multi-fusion category with non-zero global dimension for n=2.
The complete finite and strong semisimple conditions are required to construct the TQFT in the next section.
One may release the two conditions to study more general spherical n-categories.
One can regard an n-category, as the label space L, together with the S^n functional Z as a spherical n-category. When Z is reflection positive, it can be considered as a unitary n-category. The higher pivotal structures are captured by regular stratified manifolds.
A monoidal n-category 𝒞 has k-morphisms with compositions in the direction of the k^th coordinate, and k+1-morphism between k-morphisms in the direction of the (k+1)^th coordinate. Every k-morphism has the unit (k+1)-morphism.
We consider 𝒞 as the label space L.
§ ALTERFOLD TQFT
In this section, we assume that the S^n functional Z is complete finite and reflection positive. We will introduce the alterfold TQFT and construct a unitary alterfold (n+1)-TQFT from Z. For a general field 𝕂, we replace reflection positivity by strong semisimplicity.
If Z is complete finite and reflection positive, then Z is strong semisimple.
When Z is complete finite and reflection positive, the trace of the algebra A(𝒟^k× D^n-k) is positive definite. So the algebra is a finite dimensional C^*-algebra, which is semisimple.
By Equ. <ref>, the global dimension of every k-morphism is positive. So Z is strong semisimple.
§.§ Quantum Invariant
Suppose B is a compact, oriented (n+1)-manifold and ℳ is an L-labelled stratified n-manifold with support ∂ B, we call the pair ℬ=(B,ℳ) a bulk vector.
We call B the support of ℬ, and ℳ the space boundary of ℬ.
We prove the following main theorem.
When the S^n functional Z is complete finite and strongly semisimple, Z can be extended to a partition function Z on ℬ, which is homeomorphic invariant.
If Z is reflection positive, then its extension is reflection positive.
Fix a non-zero ζ∈𝕂.
Suppose ℬ=(B,ℳ) is a bulk vector and ϕ: ℬ→ D^n+1 is a homeomorphism,
we define
Z(ℬ):=ζ Z(ϕ(ℳ)).
As Z is homeomorphic invariant on S^n, so Z(ϕ(B)) is independent of the choice of ϕ.
When ℬ is a union of ℬ_i, |B_i|∼ D^n+1,
we define
Z(ℬ)=∏_i Z(ℬ_i).
§.§ Surgery Moves
For D^n+1 and 0<ε'<ε<1,
we remove D^k-1×ε D^n-k+1 and
D^k-1× [ε,1] ×ε' D^n-k,
then the result D_+ is homeomorphic to D^n+1.
The shape of D_+ on the k^th and (k+1)^th coordinates is
[Figure: in the k^th and (k+1)^th coordinates, D_+ is the square [-1,1]^2 with the central square and a slot reaching the right edge removed.]
For D^n+1,
we remove D^k-1×ε D^n-k+1 and
D^k-1× [-1,-ε] ×ε' D^n-k,
then the result D_- is homeomorphic to D^n+1.
The shape of D_- on the k^th and (k+1)^th coordinates is
[Figure: in the k^th and (k+1)^th coordinates, D_- is the square [-1,1]^2 with the central square and a slot reaching the left edge removed.]
When we remove D^k-1×ε D^n-k+1 from D^n+1, we change
the S^k-1×ε D^n+1-k part of the boundary ∂ D^n+1 to
D^k ×ε S^n-k.
For a linear sum L-labelled stratified manifold 𝒮 on ∂ D^n+1, such that the part S^k-1×ε D^n+1-k is 𝒮×ε D^n+1-k,
we will introduce a local 𝒮^k-1-relation which changes S^k-1×ε D^n+1-k to a linear sum L-labelled stratified manifold with support on D^k ×ε S^n-k.
Topologically, we may consider the move as removing a k-handle with support D^k ×ε D^n-k+1. Algebraically, we consider S^k-1×ε D^n+1-k as the identity of a D^n+1-k algebra and the relation as a decomposition of the identity into minimal idempotents. The minimal idempotent is given by the annular equivalence class of k-morphisms with boundary 𝒮^k-1.
After removing D^k ×ε D^n+1-k from D^n+1, the orientation of its boundary is opposite to the orientation of the boundary 𝒟^k × S^n-k of D^k ×ε D^n+1-k.
For a k-morphism α_k of type 𝒟^k,
we define 𝒟^k(α_k)×ε S^n-k by
replacing the 𝒟^k ×ε D^n-k part of 𝒟^k × S^n-k by α_k.
Suppose Z is complete finite and 𝒮^k-1 is a stratified manifold 𝒮^k-1 with support S^k-1. We define an 𝒮^k-1-relation for the bulk vector (D^n+1, 𝒮^k-1× D^n-k+1).
When k=0,
Φ(D^n+1,∅) = ∑_α_0Tr(α_0)/μ(α_0) (D^n+1∖ε D^n+1, ℳ(α_0)),
summing over representatives of 0-morphisms α_0, and
ℳ(α_0) is ε S^n labelled by 𝒟^0(α_0) ×ε S^n.
When 1≤ k ≤ n-1,
Φ(D^n+1,𝒮^k-1× D^n+1-k) = ∑_α_kTr(α_k)/μ(α_k) (D^n+1∖ D^k×ε D^n-k+1, ℳ(α_k)),
summing over representatives of k-morphisms α_k with link boundary 𝒮^k-1, and
ℳ(α_k) is 𝒮^k-1× (D^n+1-k∖ε D^n+1-k) ∪𝒟^k×ε S^n-k labelled by 𝒟^k(α_k) ×ε S^n-k.
(Here α_k is the type of 𝒟^k.)
When k=n,
Φ(D^n+1,𝒮^n-1× D^1) = ∑_α_n (D^n+1∖ D^n×ε D^1, ℳ(α_n)),
summing over α_n in a basis of V_𝒮^n-1, with the dual vector α_n' in the dual basis, and
ℳ(α_n) is 𝒮^n-1× (D^1∖ε D^1) ∪Λ𝒮^n-1×ε S^0 labelled by α_n ×{ε} and α_n' ×{-ε},
When k=n+1, suppose 𝒮^n is an L-labelled stratified manifold,
Φ(D^n+1,𝒮^n) =ζ Z(𝒮^n) (∅,∅).
We will prove that the 𝒮^k-1-relation is independent of the choice of the representatives of the annular equivalence class in Theorem <ref>.
The relations come from the null principle, which we will explain in Theorem <ref> and Table <ref>.
Suppose 𝒮^n is an L-labelled stratified with a local part 𝒮^k-1×ε D^n+1-k and a local part 𝒮^k×ε D^n-k.
We define the (k-1,k) moves as follows.
We first apply the 𝒮^k-1-relation to the local part 𝒮^k-1×ε D^n+1-k.
Then we obtain stratified manifolds 𝒮^k_± as
𝒮^k_+ =𝒟^k(α_k) ×{ε}∪𝒮^k|_x_k ≥ε
𝒮^k_- =ρ_(k,k+1)(𝒟^k(α_k) ×{ε}) ∪𝒮^k|_x_k ≤ -ε,
where ρ_(k,k+1) is the 180^∘ rotation of the k^th and (k+1)^th coordinates.
Then we apply the 𝒮^k_±-relation. The support of the result L-labelled stratified manifolds are homeomorphic to S^n as illustrated in Fig. <ref>. We denoted their Z value by Z_± respectively.
We prove the following key result about adjacent moves.
When Z is complete finite and strong semisimple, the S^n functional Z is invariant under the (k-1,k)-move, i.e.,
Z(𝒮^n)=Z_+=Z_-.
Consequently, Z_± are independent of the choices of the representatives of in the 𝒮^k-1-relation.
This is proved by induction on k.
When k=n, the 𝒮^n-1-relation cut D^n+1 into two pieces and the 𝒮^n-relation evaluate one and then multiply the value of the other.
This follows from non-degeneracy of the inner product and the fact that
⟨β_-,β_+ ⟩= ∑_α_n⟨β_-, α_n ⟩⟨α'_n ,β_+ ⟩, ∀ β_±∈Ṽ_D^n,𝒮^n-1,±.
summing over α_n in a basis of Ṽ_D^n,𝒮^n-1,- with the dual vector α_n' in the dual basis.
As Z is invariant under isotopy on S^n, by Theorem <ref> we may assume that the transversal intersection of 𝒮^n and S^k-1× O_n+1-k is 𝒮^k-1, and that the transversal intersection of 𝒮^n and S^k× O_n-k is 𝒮^k.
Applying the 𝒮^k-1-relation, we
obtain a diagram denoted by 𝒮_1. Its support is
S^n∖ (S^k-1×ε D^n-k+1) ∪ D^k ×ε S^n-k,
which is homeomorphic to S^k× S^n-k.
Applying the 𝒮^k_±-relation to 𝒮_1, we obtain 𝒮_±, whose support are homeomorphic to S^n, and their values are Z_± respectively.
Applying the negative (k,k+1)-move to 𝒮_±, we obtain the same diagram with value Z', as illustrated in Fig. <ref>.
By induction, we have that Z_+=Z'=Z_-.
If we apply isotopy to the
S^n∖ S^k-1×ε D^n-k+1 part of 𝒮_1,
and then apply the 𝒮^k_±-relation, the result Z_± do not change, as the local isotopy will not affect at least one of the 𝒮^k_± relation.
So the value Z_± is invariant under local isotopy.
Note that the S^n∖ S^k-1×ε D^n-k+1 part of 𝒮_1 has boundary 𝒮^k-1×ε S^n-k.
Let ϕ: D^k× S^n-k→ S^n∖ S^k-1×ε D^n-k+1 be a homeomorphism.
By Prop.<ref>, it is enough to check Z=Z_±, when the S^n∖ S^k-1×ε D^n-k+1 part is ϕ (𝒟^k(β)× S^n-k).
Then in the 𝒮^k-1-relation, only the term with β contributes to a non-zero scalar.
The identity Z=Z_± follows from the definition of μ(α_k).
Both Z_± are independent of the choices of the representatives in the 𝒮^k-1-relation, as they equal to Z(𝒮^n).
The 𝒮^k-1-relation eliminates a normal microbundle 𝒟^k×ε D^n-k-1.
For a bulk vector ℬ=(B,ℳ) and a triangulation of B, we may eliminate the microbundle of its k-simplices for k=0,1,⋯,n+1, and eventually get a scalar. A main result of this section is proving that the scalar is invariant under homeomorphisms on n+1 manifolds and the 𝒮^k-1-relation. So all the 𝒮^k-1-relations are consistent.
Suppose Z is complete finite.
Suppose ℬ=(B,ℳ) is a bulk vector.
Take a combinatorial triangulation Δ of B. Take an ambient isotopy of ℳ, so that it is transversal to the triangulation. For every k-simplex Δ_k,i, we fix a homeomorphism ϕ_k,i: Δ_k,i→ D^k.
For every k-simplex Δ_k,i not on the boundary of B, take 𝒮^k-1=ϕ_k,i(∂Δ_k,i∩ℳ).
We change its normal microbundle by the 𝒮^k-1-relation in Def. <ref>, for k=0,1,⋯, n+1.
Eventually we obtain a scalar denoted by Z(ℬ), called the partition function of ℬ, as a state sum.
As Z is homeomorphic invariant on S^n, the partition function is independent of the choice of the homeomorphisms ϕ_k,i.
Now let us prove that the partition function Z(ℬ) is independent of the choice of the triangulation, the ambient isotopy and the choice of the k-morphisms.
We first prove it is invariant under subdivisions.
The Z value of a labelled linear n-simplex is invariant under the subdivision, for any 1≤ k ≤ n, any point p in the interior of its k-sub simplex and any choice of k-morphisms.
Suppose p is an interior point of [e_0,e_1,⋯,e_k].
The p-subdivision of the linear n-simplex [e_0,e_1,⋯,e_n] is a sequence of adjacent moves.
More precisely, take E={p}+[e_k+1,⋯,e_n].
We apply the adjacent move to (E, {e_k}+E);
For j=0,1,⋯,k-1 and every j sub simplex J of [e_0,⋯,e_k-1],
we apply the adjacent moves to (J+E, J+{e_k}+E).
By Theorem <ref> and the homeomorphic invariance of Z on S^n, the Z value of a labelled linear n-simplex is invariant under the subdivision and independent of the choice of k-morphisms.
If Z is complete finite and strong semisimple, then the partition function Z(ℬ) is independent of the choice of the triangulation Δ and the choice of the k-morphisms. Therefore, it is invariant under homeomorphisms of n+1 manifolds.
We first fix the combinatorial triangulation on the boundary. By Theorem <ref>, we ambient isotope the stratified manifold ℳ on the boundary ∂ B, so that it is transversal to the triangulation.
Take a combinatorial triangulation Δ of the bulk with given boundary triangulation. Take a choice of k-morphisms in the 𝒮^k-1-relation.
Firstly, we prove that the partition function Z is independent of the choice of the k-morphisms. For any k-simplex Δ_k,i, its link is a combinatorial sphere. By Lemma <ref>, we can change the choice of the k-morphism label of Δ_k,i without changing the value Z.
This will only change morphisms labelled at the star of Δ_k,i, i.e., the (k+k')-simplices containing Δ_k,i. We can iterate it and change the choice of k-morphisms for k=0,1,⋯,n to any other choice without changing the value Z.
Secondly, we prove that Z is independent of the triangulation in the bulk.
By Theorem <ref>, the value Z is invariant under subdivision.
By Theorem <ref>, two bulk triangulations of B have the same subdivision.
So Z is independent of the bulk triangulation.
Thirdly, we prove that Z is invariant under ambient isotopy near any point p on ℳ.
The star st(p) of p in B consists of n+1 simplices of Δ containing p.
As the triangulation is combinatorial, |∂ st(p)|∼ S^n.
By Theorem <ref>, the state sum over simplices inside st(p) equals to the value of the stratified manifold with support ∂ st(p), which is invariant under ambient isotopy near p. So the partition function Z is invariant under ambient isotopy of p, for any p.
Next, we prove that Z is independent of the triangulation on the boundary. For two combinatorial triangulations on the boundary, they share a subdivision.
By Theorem <ref>, we ambient isotope ℳ, so that it is transversal to the common subdivision.
By Theorem <ref>, the value Z is invariant under subdivision. By Theorem <ref>, Z is independent of the triangulation on the boundary.
So Z is independent of the choice of the triangulation Δ.
For any homeomorphism ϕ of the n+1 manifolds, we have
Z(ℬ,Δ,C_∙)=Z(ϕ(ℬ),ϕ(Δ),C_∙), because Z(Δ_n+1,j)=Z(ϕ(Δ_n+1,j)) for any (n+1)-simplex of Δ.
Therefore, Z is homeomorphic invariant.
Suppose ℬ=(B,ℳ) is a bulk vector and ℳ is labelled by a null vector, then
Z(ℬ, Δ, C_∙)=0.
By Theorem <ref>,
we assume that the bulk vector is on the face of an n+1 simplex Δ_n+1,j of Δ. Then the factor Z(Δ_n+1,j) is always zero in the state sum of Z(ℬ, Δ, C_∙). So Z(ℬ, Δ, C_∙)=0.
The partition function Z is invariant under the 𝒮^k-1-relations in Equ. <ref>,<ref>,<ref> and <ref>.
We prove the statement by induction for k=n+1,n,⋯,0.
By the Def. <ref> and Theorem <ref>, Z is invariant under the 𝒮^n-relation. Assume that Z is invariant under any 𝒮^k-relation.
Suppose ℬ=(B,ℳ) is a bulk vector and ϕ: U → D^n+1 is a homeomorphism in a regular chart, so that ϕ(ℳ|_U)=𝒮^k-1× D^n+1-k.
Applying the 𝒮^k-1-relation, we obtain a bulk vector ℬ'=(B',ℳ') which is identical to B outside U, and ℳ|_U is changed to
ϕ^-1(∑_α_kTr(α_k)/μ(α_k) (D^n+1∖ D^k×ε D^n-k+1, ℳ(α_k))).
Now let us prove Z(ℬ)=Z(ℬ').
We choose a combinatorial triangulation on B∖U and extend it to a triangulation Δ on B and a triangulation Δ' on B'.
We eliminate B∖U according to the common triangulation on B∖U by the relations. Then we obtain an L-labelled stratified manifold 𝒩 with support ϕ^-1(D^k× S^n-k).
By Corollary <ref> and Prop. <ref>, we replace ϕ(𝒩) as a linear sum of 𝒟^k(β)× S^n-k.
It is enough to check the identity
Z(D^k× D^n-k+1, ∂𝒟^k × D^n-k+1∪𝒟^k(β)× S^n-k)
= ∑_α_kTr(α_k)/μ(α_k)
Z(D^k× (D^n-k+1∖ε D^n-k+1), ∂𝒟^k × (D^n-k+1∖ε D^n-k+1) ∪ℳ(α_k) ∪𝒟^k(β)× S^n-k).
The left hand side is ζ Tr(β).
Applying the 𝒮^k-relation to the right side,
it is non-zero only when α_k and β have a bimodule. In this case, they are annular equivalent by Prop. <ref>. By Prop. <ref>, we assume that α_k=β.
Then
Z(D^k× (D^n-k+1∖ε D^n-k+1), ∂𝒟^k × (D^n-k+1∖ε D^n-k+1) ∪ℳ(β) ∪𝒟^k(β)× S^n-k)
= ∑_γTr(γ)/μ(γ)× Tr(γ)=μ(β),
summing over representatives of indecomposable β-β bimodules γ.
So the right side becomes
Tr(β)/μ(β)μ(β), which is equal to the left side Tr(β). So Z(ℬ)=Z(ℬ').
We complete the induction.
Now we are free to use 𝒮^k-1-relations to evaluated the partition function Z. These relations allow us to evaluate Z not only by triangulations, but also by surgery theory. In practice, we would like to choose the k-morphisms in 𝒮^k-1-relation as simple as possible, so that there are fewer terms in the state sum. Moreover, we may fix the choice of the k-morphisms as elements in a minimal simplicial representative set C_∙ up to annular equivalence and homeomorphic equivalence.
Suppose Z is complete finite. We fix a minimal simplicial representative set C_∙. Suppose ℬ=(B,ℳ) is a bulk vector. Take a combinatorial triangulation Δ of B, which is transversal to ℳ on the boundary ∂ B. For every k-simplex Δ_k,i, we fix a homeomorphism ϕ_k,i: Δ_k,i→ D^k.
For every k-simplex Δ_k,i on the boundary of B, we consider the normal microbundle of Δ_k,i∩ℳ as a label of the identity of type ϕ_k,i(Δ_k,i∩ℳ), decompose it as a sum of k-morphisms α_k, and then change them to k-morphisms homeomorphic equivalent to elements in C_k by annular actions.
Then for every k-simplex Δ_k,i not on the boundary of B, k=0,1,⋯, n+1, take 𝒮^k-1=ϕ_k,i(∂Δ_k,i∩ℳ). We change its normal microbundle by the 𝒮^k-1-relation in Def. <ref> and choose the k-morphisms homeomorphic equivalent to the ones in C_k for 0≤ k ≤ n. Finally we obtain a scalar denoted by Z(ℬ,Δ,C_∙), called the partition function of ℬ, as a state sum.
Suppose B is a closed n+1 manifold without boundary and Δ is a triangulation with k-simplices Δ_k,i, 1≤ i ≤ n_k.
A C_∙-color triangulation α_∙ is an assignment, to every k-simplex Δ_k,i of the triangulation, of a label ϕ_k,i^-1(α_k,i) for α_k,i∈ C_k and a homeomorphism ϕ_k,i : Δ_k,i→ D^k,
such that the link boundary of ϕ_k,i^-1(α_k,i) is given by ϕ_k-1,i'^-1(α_k-1,i') for all faces. (Otherwise the value is zero.)
Then for every Δ_n+1,j, its faces are labelled by vectors in C_k or their dual according to the orientation, which form an L labelled stratified manifold with support ∂Δ_n+1,j. We denote its value by F(Δ_n+1,j)(α_∙).
Then the partition function Z(ℬ)=Z(ℬ,Δ,C_∙) is the state sum over C_∙-color triangulations according to the decomposition of the 𝒮^k-1-relation in Def. <ref>. It has the following form:
Z(ℬ,Δ,C_∙) =∑_α_∙∏_k=0^n∏_i=1^n_kTr(α_k,i)/μ(α_k,i)∏_j=1^n_n+1F(Δ_n+1,j)(α_∙),
where Tr(α_n,i)=μ(α_n,i)=1.
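To make the state sum concrete, here is a schematic sketch in Python (an illustration only; the names simplices, labels, Tr, mu, F and admissible are placeholder inputs to be supplied by a concrete S^n functional, not notation from the text). It enumerates the C_∙-colourings of a finite triangulation and accumulates the weights above.

from itertools import product

def state_sum(simplices, labels, Tr, mu, F, admissible):
    # simplices[k]: list of k-simplices of the triangulation, k = 0, ..., n+1
    # labels[k]: the finite representative set C_k, k = 0, ..., n
    # Tr, mu: quantum and global dimension of a label (both taken to be 1 for k = n)
    # F: evaluation of the coloured boundary sphere of an (n+1)-simplex
    # admissible: compatibility of the labels on faces and links
    n_plus_1 = len(simplices) - 1
    slots = [(k, s) for k in range(n_plus_1) for s in simplices[k]]
    Z = 0
    for choice in product(*(labels[k] for k, _ in slots)):
        colouring = dict(zip((s for _, s in slots), choice))
        if not admissible(colouring):
            continue  # incompatible boundary labels contribute zero
        weight = 1
        for (k, s), a in zip(slots, choice):
            weight *= Tr(a) / mu(a)
        for t in simplices[n_plus_1]:
            weight *= F(t, colouring)
        Z += weight
    return Z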
§.§ TQFT with space-time boundary
Now let us extend the partition function in Theorem <ref> to a TQFT with space-time boundary.
We consider the previous (B,ℳ) as a bulk vector without time boundary.
We study bulk vectors with time boundary, similar to Section <ref>.
Suppose F is an oriented compact n-manifold and 𝒮 is a stratified (n-1)-manifold with support ∂ F, we call the pair ℱ:=(F,𝒮) a time boundary.
A bulk vector with time boundary ℱ=(F,𝒮) is a pair ℬ:=(B,ℳ), such that
* B is an oriented compact n+1 manifold;
* ℳ is an oriented compact L-labelled stratified n-manifold;
* |ℳ| ∪ F=∂ B;
* |ℳ| ∩ F= ∂ |ℳ|=-∂ F;
* ℳ and F intersect transversely.
Suppose ℱ=(F,𝒮) is a time boundary.
We define V_ℱ,± to be the vector space spanned by bulk vectors with time boundary (-1)^n+1±ℱ.
Suppose bulk vectors ℬ_±=(B_±,ℳ_±) have a common boundary ℱ with opposite orientations.
Then we can glue the boundary and obtain a bulk vector ℬ=ℬ_+ ∪ℬ_- without boundary.
The partition function Z defines a bi-linear form on V_ℱ,-× V_ℱ,+,
Z(ℬ_- ×ℬ_+):=Z(ℬ_-∪ℬ_+), ∀ ℬ_±∈ V_ℱ,±.
A vector v ∈ V_ℱ,± is called a null vector, if
Z(v ∪ w)=0, ∀ w ∈ V_ℱ,∓.
The subspace of all null vectors are denoted by K_ℱ,±.
Their quotient spaces are denoted by Ṽ_ℱ,±:=V_ℱ,±/K_ℱ,±.
Suppose ℬ=(B,ℳ) is a bulk vector with time boundary ℱ.
By Corollary <ref>, if ℳ is labelled by a null vector then ℬ is a null vector.
So for vectors in Ṽ_ℱ,± we can apply all relations in the kernel K_Z to the local n-disc of ℳ.
By Theorem <ref>, any homeomorphism of (n+1)-manifolds fixing the boundary ℱ induces the identity map on Ṽ_ℱ,±.
When the S^n functional Z is complete finite and strong semisimple,
the vector space Ṽ_ℱ,± is finite dimensional.
If Z is reflection positive, then for any ζ>0, Z induces a positive definite inner product on Ṽ_ℱ,±, so the vector space is a Hilbert space.
Suppose the time boundary is ℱ=(F,𝒮).
Take a triangulation Δ of F transversal to 𝒮.
When Z is reflection positive, both Tr(α) and μ(α) are positive for any k-morphism α.
We replace the normal microbundle of 0-simplices of Δ|_𝒮 by a linear sum of 0-morphisms in C_0. For k=1,⋯,n, for every k-simplex σ of Δ|_𝒮, we decompose its normal microbundle as a linear sum of k-morphisms α whose link boundaries are labelled by (k-1)-morphisms in C_k-1.
By the completeness of C_k, there is a k-morphism α' in C_k annular equivalent to α. Take an α-α' bimodule β.
By Equ. <ref>, we obtain annular action T_β(α)=Tr(β)/Tr(α')α'.
Define T_± to be the restriction of T_β(α) on D^n-1× D^1_±. Then T_-=(T_+)^* and (T_-)^*T_-=Tr(β)/Tr(α')α'. So √(Tr(α')/Tr(β)) T_- is an embedding map. Applying this embedding map, we can substitute the label α of the normal microbundle of k-simplex σ to the label α'. Iterate the substitution for all k-simplices for k=1,2,⋯, n-1. It is enough to prove the finiteness and positivity when all k-simplex of σ|_𝒮 are labelled by k-morphisms in C_k.
Now we deal with the simplices not on the boundary of F. The idea is applying the “square root” of the 𝒮^k-1-relation to every k-simplex for k=0,1,⋯ n.
By Equ. <ref>,
(D^n+1,∅) = ∑_α_0Tr(α_0)/μ(α_0) (D^n+1∖ε D^n+1, ℳ(α_0)).
We define the bulk vector ι(α_0,±) as the restriction of
(D^n+1∖ε D^n+1, ℳ(α_0)) on D^n+1_±.
Take
ι_±= ∑_α_0√(Tr(α_0)/μ(α_0))ι(α_0,±).
Then ι_+=ι_-^* and the multiplication ι_+ι_-=(D^n+1,∅) is the identity on the boundary (D^n,∅).
Suppose σ_0 is a 0-simplex of Δ in F, which is not on the boundary of F. Suppose (U,ϕ,∅) is a regular chart of σ_0.
Applying the embedding map ι_- to the normal microbundle U, we can substitute the normal microbundle U by ϕ^-1(D^n∖ε D^n), with a label α_0 on ∂ε D^n, summing over all α_0 ∈ C_0.
The vector space Ṽ_ℱ,- is embedded in a direct sum of vector spaces with time boundary (F', 𝒮'), where F' is obtained by removing the normal microbundles ϕ^-1(ε D^n) of these 0-simplices from F, and 𝒮' is obtained by adding ϕ^-1(∂ε D^n) of these 0-simplices to 𝒮. Moreover, every ϕ^-1(∂ε D^n) is labelled by a 0-morphism in C_0 up to homeomorphic equivalence.
For k=1,2,⋯, n, by the 𝒮^k-1-relations in Equ. <ref> and <ref>,
(D^n+1,𝒮^k-1× D^n+1-k) = ∑_α_kTr(α_k)/μ(α_k) (D^n+1∖ D^k×ε D^n-k+1, ℳ(α_k)).
We define the bulk vector ι(α_k,±) as the restriction of
(D^n+1∖ D^k×ε D^n-k+1, ℳ(α_k)) on D^n+1_±.
Take
ι_𝒮^k-1,±= ∑_α_k√(Tr(α_k)/μ(α_k))ι(α_k,±).
Then ι_𝒮^k-1,+=ι_𝒮^k-1,-^* and the multiplication ι_𝒮^k-1,+ι_𝒮^k-1,-=(D^n+1,𝒮^k-1× D^n+1-k) is the identity on the boundary (D^n,𝒮^k-1× D^n-k).
For k=1,2,⋯, n,
suppose σ_k is a k-simplex of Δ in F, which is not on the boundary of F, and the (k-1)-simplices of ∂σ_k are labelled by (k-1)-morphisms in C_k-1 up to homeomorphic equivalence.
Suppose ϕ_k is a homeomorphism from the normal microbundle U_k of the k-simplex in F to D^k× D^n-k, such that ϕ_k(U_k|_∂σ_k)=𝒮^k-1× O_n-k.
Applying the embedding map ι_𝒮^k-1,- to the normal microbundle U_k, we can substitute U_k by ϕ_k^-1(D^n ∖ D^k-1×ε D^n-k), and substitute ϕ_k^-1(𝒮^k-1×ε D^n-k) by ϕ_k^-1(𝒟^k×ε S^n-k-1), labelled by 𝒟^k(α_k) ×ε S^n-k-1.
Note that each embedding map ι_𝒮^k-1,- eliminates a normal microbundle of the k-simplex in F. After applying the embedding maps to all simplices of Δ, F becomes the empty set.
Therefore, the vector space Ṽ_ℱ,- is embedded in a direct sum of vector spaces of the tensor product of the 1-dimensional vector spaces spanned by vectors in C_n. Every C_k is a finite set and the triangulation has finitely many simplices, so the vector space Ṽ_ℱ,- is finite dimensional.
Every 1-dimensional vector space is a Hilbert space, so a direct sum of their tensor products is still a Hilbert space. Moreover, Ṽ_ℱ,- is embedded as a sub Hilbert space, so it is a Hilbert space.
For a general field 𝕂, we cannot decompose the scalar as the product of its square roots, as the square root may not be in the field. Instead, we decompose the scalar as the product of 1 and itself. The rest part of the proof of the finite dimensional condition is similar.
Atiyah's TQFT <cit.> is a symmetric monoidal functor from the cobordism category to the category of vector spaces.
Every closed n-manifold F is assigned a finite dimensional vector space V_F on the field 𝕂, and V_∅≅𝕂.
In particular, the map from closed n+1-manifolds to 𝕂 is called the partition function, which is homeomorphic invariant.
Every n+1-cobordism is assigned to a linear transformation on the vector spaces.
The disjoint union and the gluing map correspond to the tensor and contraction respectively. A general operation is a composition of the two elements operations.
The TQFT is called unitary, if the partition function is reflection positive, namely, it induces a positive definite inner product on the vector space.
Now we consider the cobordism as an n+1-manifold with time boundary.
Let us introduce the TQFT with space-time boundary, so that the cobordism is generalized to an (n+1)-manifold with space-time boundary, and the space boundary is a stratified n-manifolds.
We keep in mind that we are working on the category of PL manifolds and regular stratified manifolds of a given local shape.
A space-time cobordism with time boundary ℱ=(F,𝒮) is a pair ℬ:=(B,ℳ), such that
* B is an oriented compact n+1 manifold;
* ℳ is an oriented compact stratified n-manifold;
* |ℳ| ∪ F=∂ B;
* |ℳ| ∩ F= ∂ |ℳ|=-∂ F;
* ℳ and F intersect transversely.
Given a local shape set LS_∙, an (n+1)-TQFT with space-time boundary is a symmetric monoidal functor from the category of space-time cobordisms to the category of vector spaces over a field 𝕂.
More precisely, every time boundary ℱ=(F,𝒮) is assigned a finite dimensional vector space V_ℱ, and V_∅≅𝕂.
Every space-time n+1-cobordism is assigned to a linear transformation on the vector spaces. The disjoint union and the gluing map correspond to the tensor and contraction respectively.
The map Z:V_∅→𝕂 is called the partition function.
The TQFT is called unitary, if Z is reflection positive.
When the space boundary is the empty set, we return to Atiyah's TQFT <cit.>.
When the S^n functional Z is complete finite and strong semisimple, we obtain an n+1 TQFT with space-time boundary. If Z is reflection positive and ζ>0, then the TQFT is unitary.
By Theorem <ref> and <ref>,
for every time boundary ℱ=(F,𝒮), the vector space Ṽ_ℱ,- is finite dimensional.
A space-time cobordism acts on a vector by gluing the boundary. It sends a null vector to a null vector. So it is well-defined on Ṽ_ℱ,-.
Thus we obtain an n+1 TQFT with space-time boundary.
If Z is reflection positive, then by Theorem <ref>, the TQFT is unitary.
For a time boundary ℱ=(F,𝒮), we call ℬ=(B,ℳ) ∈Ṽ_ℱ,- a boundary vector on ℱ, if B=F×[-ε,0] and |ℳ|=F×{-ε}∪∂ F × [-ε,0].
Similarly, we define boundary vectors in Ṽ_ℱ,+.
We identify F with F×{0}. Up to isotopy, we assume that ℳ|_∂ F × [-ε,0]=𝒮× [-ε,0].
Suppose ℓ is an L-labelled stratified manifold in the condensation space F_𝒮(L) in <ref>. We construct a boundary vector ℓ_- in Ṽ_ℱ,- with a label ℓ on the space boundary F×{-ε}.
To compare with the notions in topological orders, one may regard the vector space Ṽ_ℱ,- as the space of ground states of a Hamiltonian. The boundary vector ℓ_- is the projection of ℓ onto the space of ground states.
For any time boundary ℱ=(F,𝒮), the vector space Ṽ_ℱ,± is spanned by boundary vectors.
As the vector is invariant under homeomorphisms fixing the boundary ℱ, we assume that B=F× [-ε,0] ∪ B_0, B_0 is a closed sub (n+1)-manifold of B and B_0∩ F× [-ε,0]=F×{-ε}.
Take a triangulation Δ of the B_0.
Apply the 𝒮^k-1-relation to every k-simplex of Δ summing over k-morphisms in C_k, for 0≤ k≤ n+1. Then the bulk vector ℬ=(B,ℳ) becomes a boundary vector ℬ'=(F× [-ε,0],ℳ'), such that |ℳ'|=F×{-ε}∪∂ F × [-ε,0].
By Theorem <ref>, ℬ=ℬ' in Ṽ_ℱ,-.
The proof for Ṽ_ℱ,+ is similar.
By Prop. <ref>,
the vector space D^k× S^n-k_𝒮^k-1× S^n-k(L) is spanned by the boundary vectors 𝒟^k(β)× S^n-k, for k-morphisms β. Moreover, if k-morphism β_0 and β_1 are annular equivalent, then 𝒟^k(β_0)× S^n-k=λ𝒟^k(β_1)× S^n-k, in D^k× S^n-k_𝒮^k-1× S^n-k(L), for some λ∈𝕂.
Now let us upgrade 𝒟^k(β)× S^n-k to a bulk vector.
Take F=D^k× S^n-k and 𝒮=𝒮^k-1× S^n-k.
We denote 𝒟^k(β)_-× S^n-k to be the bulk vector in Ṽ_ℱ,- with time boundary
ℱ=(F,𝒮), which has support D^k× [-1,0] × S^n-k and D^k×{-1}× S^n-k is the labelled stratified manifold 𝒟^k(β)× S^n-k.
We denote 𝒟^k(β)_+× S^n-k to be the bulk vector in Ṽ_ℱ,+ with time boundary
ℱ=(F,𝒮), which has support D^k× [0,1] × S^n-k and D^k×{1}× S^n-k is the labelled stratified manifold ρ_k,k+1(𝒟^k(β)× S^n-k).
Take the time boundary ℱ=(F,𝒮), F=D^k× S^n-k, 𝒮=𝒮^k-1× S^n-k. Take a set of representatives {β_i: i∈ I} of k-morphisms with link boundary 𝒮 of annular equivalence classes.
Then the boundary vectors {𝒟^k_±(β_i)× S^n-k:i∈ I} form a basis of Ṽ_ℱ,± respectively.
Moreover, their inner product is
Z((𝒟^k_-(β_i)× S^n-k) ∪ (𝒟^k_+(β_j)× S^n-k)) =δ_i,jμ(β_i).
By Theorem <ref> and Prop. <ref>, the bulk vectors {𝒟^k_±(β_i)× S^n-k:i∈ I} form a basis of Ṽ_ℱ,± respectively.
Moreover, their inner product is δ_i,jμ(β_i) as computed in Equ. <ref>.
Take the time boundary ℱ=(F,𝒮), F=D^k× S^n-k, 𝒮=𝒮^k-1× S^n-k. The vector space Ṽ_ℱ,- forms a commutative semisimple algebra with the identity (D^k× D^n-k+1, 𝒮^k-1× D^n-k+1).
The 𝒮^k-1-relation in Equ. <ref>, <ref>, <ref> is a resolution of the identity, for 0≤ k≤ n. (The multiplication direction is the last coordinate.)
The multiplication of Ṽ_ℱ,- is commutative as shown in Fig. <ref>.
By isotopy, (D^n+1, 𝒮^k-1× D^n-k+1) is the identity.
It has a basis {𝒟^k_±(β_i)× S^n-k:i∈ I}.
The multiplication of two vectors for two different representatives β_i and β_j is zero, because there are no bimodules between them.
So 𝒮^k-1-relation is a decomposition of the identity as a sum of minimal idempotents 0≤ k≤ n.
The intrinsic dimension Tr(α)^2/μ(α) is invariant under annular equivalence.
By Theorem <ref>,
Tr(α_k)/μ(α_k) (D^n+1∖ D^k×ε D^n-k+1, ℳ(α_k)) is a minimal idempotent in the commutative semisimple algebra Ṽ_ℱ,-. It is independent of the choice of the representative α_k in the annular equivalence class. Its trace is the intrinsic dimension Tr(α)^2/μ(α).
Suppose the S^n functional Z is complete finite and strong semisimple. Then it has a unique extension to the (n+1)-TQFT with space-time boundary, such that Z(𝒟^n+1,𝒮)=ζ Z(𝒮) and Z is multiplicative.
As Z is multiplicative, the partition function of bulk vectors with support D^n+1× S^0 and empty time boundary is the product of partition function of the two components.
By induction on k=n,n-1,⋯,0,-1, suppose the partition function on vectors with support D^k+1× S^n-k and empty time boundary are determined.
Then for any time boundary ℱ=(F,𝒮), F=D^k× S^n-k, 𝒮=𝒮^k-1× S^n-k, the inner product between boundary vectors in Ṽ_ℱ,- and Ṽ_ℱ,+ are determined.
The union of 𝒮^k-1× D^n-k+1 in V_ℱ,- and a boundary vector in V_ℱ,+ has support homeomorphic to D^n+1 and space boundary homeomorphic to S^n.
So their inner product is determined.
So the expression of 𝒮^k-1× D^n-k+1 as a linear sum of boundary vectors in V_ℱ,- is unique, which is the 𝒮^k-1-relation in Def.<ref>.
By Theorem <ref>, the partition function is determined by 𝒮^k-1-relations. So it is determined by the S^n functional Z.
So the TQFT is also determined by Z.
We illustrate the induction process in Table. <ref>.
We summarize the 𝒮^k-1-relations in Table. <ref>, which shows how to derive all relations from the S^n functional Z.
In general, for any time boundary ℱ=(F,𝒮), F=D^k× S^n-k, 𝒮=𝒮^k-1× S^n-k, the 𝒮^k-1-relation could also be considered as a bistellar move of the space boundary, which changes the space boundary from D^k×∂ D^n-k+1 to ∂ D^k × D^n-k+1 and change the bulk from D^n+1 to ∅.
For any vector v with support D^n+1 time boundary ℱ, such as the identity 𝒮^k-1× D^n-k+1 in Ṽ_ℱ,-,
its union with a boundary vector in Ṽ_ℱ,+ has support homeomorphic to D^n+1 and space boundary homeomorphic to S^n. So we can regard the v as a linear functional on Ṽ_ℱ,+ according to the value of Z on S^n.
When the inner product between Ṽ_ℱ,- and Ṽ_ℱ,+ is non-degenerate, we can regard Ṽ_ℱ,- as the dual space of Ṽ_ℱ,+.
Therefore we can express v as a boundary vector b in Ṽ_ℱ,- induced from the inner product with boundary vectors b_+ in the dual space Ṽ_ℱ,+. We call this method of producing the skein relations the null principle.
⟨Φ(v),b_+ ⟩ = ⟨ b, b_+⟩ ∀ b_+ ∈Ṽ_ℱ,+.
We have the freedom to choose ζ≠ 0 in the definition of the partition function. To ensure reflection positivity, we need ζ>0.
The choice of ζ changes the partition function Z(B,ℳ) by a global factor ζ^(-1)^n+1E(B), where E(B) is the Euler number of B.
By the null principle in Table <ref>, the 𝒮^k-1-relation has a global factor ζ^(-1)^n+1-k.
Thus for the triangulation Δ of the n+1 manifold B,
the global factor is ∏_Δ_k,iζ^(-1)^n+1-k=ζ^(-1)^n+1E(B).
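As a quick check of the exponent (an illustration for n=1): for a closed surface S with n_k k-simplices, the local factors multiply to
∏_kζ^(-1)^2-kn_k=ζ^n_0-n_1+n_2=ζ^E(S),
so rescaling ζ rescales Z(S) by ζ^E(S), consistent with the global factor ζ^(-1)^n+1E(B) above.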
We assumed the complete finite and strong semisimple conditions of Z in our main theorems.
The complete finite condition is to ensure that the state sum of the partition function is a finite sum.
The strong semisimple is necessary to construct the (n+1)-TQFT, when Z is semisimple.
Otherwise if a k-morphism β with link boundary 𝒮^k-1 has zero global dimension, then as shown in Theorem <ref>, 𝒟^k_+(β)× S^n-k is a null vector in Ṽ_ℱ,+. However, its inner product with 𝒮^k-1× D^n-k+1 is the quantum dimension Tr(β), which is non-zero. It is a contradiction.
§.§ Alterfold TQFT
In this section, we introduce the n+1 alterfold TQFT which extend the notions in n+1 TQFT with space-time boundary.
Suppose M^n+1 is an oriented, compact (n+1)-manifold and M^n is an oriented, closed sub n-manifold, which separates M^n+1 as two closed sub (n+1)-manifolds A^n+1 and B^n+1, so that A^n+1∩ B^n+1=M^n, M^n and ∂ B^n+1 have the same orientation.
We call the triple (M^n+1,B^n+1,M^n) an (n+1)-alterfold.
Its boundary is an n-alterfold (,,)=(∂ M^n+1, ∂ B^n+1∩∂ M^n+1, ∂ M^n∩∂ M^n+1).
We consider A^n+1 and B^n+1 as A/B-colored (n+1) manifolds respectively. The A-color is called the trivial color.
The B-color is called the bulk color.
For (n+1)-alterfold (M^n+1,B^n+1,M^n), if ℳ is an L-labelled stratified manifold with support M^n, we call (M^n+1,B^n+1,ℳ^n) an L-labelled alterfold.
We can update the label space L by the vector spaces in the S^n-algebra (Ṽ,Z) and define the Ṽ-labelled alterfolds.
We extend the partition function Z on the bulk vectors (B^n+1,ℳ^n) without time boundary to the partition function on L-labelled alterfolds without boundary as
Z(M^n+1,B^n+1,ℳ^n):=Z(B^n+1,ℳ^n).
Note that A^n+1=M^n+1∖ B^n+1. The extension of Z is irrelevant to A^n+1. In particular the value of a A-colored manifold M^n+1 without boundary is constant 1.
Z(M^n+1,∅,∅)=Z(∅,∅)=1.
We can compute the partition function Z using surgery theory in topology based on the free change of A-color manifolds and the 𝒮^k-1-relations in Def. <ref>. We refer the readers to <cit.> for the discussions on surgery theory in 2+1 alterfold TQFT.
Suppose (E,F,S) is an n-alterfold ∂ E=∅. Suppose 𝒮 is a stratified (n-1)-manifold with support S. We call the triple ℰ:=(E,F,𝒮) an alterfold boundary.
An alterfold cobordism with an alterfold boundary ℰ=(E, F,𝒮) is a triple ℳ^n+1:=(M^n+1,B^n+1,ℳ^n), such that (M^n+1,B^n+1,|ℳ^n|) is an alterfold with boundary (E, F,|𝒮|) and (B^n+1,ℳ^n) is a space-time cobordism with time boundary (F,𝒮).
Given a local shape set LS_∙, an alterfold (n+1)-TQFT is a symmetric monoidal functor from the category of alterfold cobordisms to the category of vector spaces over a field 𝕂.
More precisely, every alterfold boundary ℰ=(E,F,𝒮) is assigned a finite dimensional vector space V_ℰ, and V_(E,∅,∅)≅𝕂.
Every alterfold n+1-cobordism is assigned to a linear transformation on the vector spaces. The disjoint union and the gluing map correspond to the tensor and contraction respectively.
The map Z:V_∅→𝕂 is called the partition function.
The alterfold TQFT is called unitary, if Z is reflection positive.
When E=F, equivalently 𝒮=∅, we return to Atiyah's TQFT <cit.>.
Suppose Z is a linear functional on labelled stratified manifolds with support S^n over the field ℂ, satisfying the three conditions
* (RP) reflection positivity; (Def. <ref>)
* (HI) homeomorphic invariance; (Def. <ref>)
* (CF) complete finiteness. (Def. <ref>)
Then we obtain an n+1 unitary alterfold TQFT for any ζ>0 in Def. <ref>.
For a general field 𝕂, we replace RP by strong semisimplicity, and then we obtain an n+1 alterfold TQFT.
Recall that the homeomorphic invariance of Z(ℬ)=Z(B^n+1,ℳ^n) is proved in Theorem <ref>.
By Def. <ref>,
Z(M^n+1,B^n+1,ℳ^n):=Z(B^n+1,ℳ^n),
the partition function Z of the alterfold TQFT is irrelevant to the A-color part.
So it is also homeomorphic invariant.
Moreover, for any alterfold boundary ℰ=(E,F,𝒮), ℱ=(F,𝒮) is a time boundary of the space-time TQFT.
As the inner product induced by Z is irrelevant to the A-color part, so the vector space V_ℰ is isomorphic to V_ℱ.
By Theorem <ref>, the vector space is finite dimensional.
So we obtain an n+1 alterfold TQFT.
Furthermore if Z is reflection positive and ζ>0, by Theorem <ref>, the vector space is a finite dimensional Hilbert space.
So we obtain a unitary n+1 alterfold TQFT.
This extends the construction of the space-time TQFT in Theorem <ref>.
A time boundary (F,𝒮) in the space-time TQFT could be extended to different alterfold boundary (E,F,𝒮) from different A-color manifolds E∖ F.
They may correspond to different interpretations in different situations.
One can study the idempotent completion of the local D^n+1-algebra of the n+1 alterfold TQFT and obtain an n+1-category with two 0-morphisms corresponding to the two colors A and B. The 0-morphisms of the n-category from S^n-algebra (Ṽ,Z) becomes 1-morphisms from A to B. The k-morphisms of the n-category from (Ṽ,Z) becomes (k+1)-morphisms.
§ EXAMPLES
TQFT is an extremely fruitful theory which has been studied from various perspectives in mathematics and physics. We did not attempt to review the extensive literature.
In this section, let us discuss some concepts in the lower dimensional alterfold TQFT, so that one can have a better understanding of the higher analogue.
Recall that we obtain an n+1 alterfold TQFT from an S^n functional Z is complete finite and strong semisimple or reflection positive,
When n=0, a D^0 algebra is a finite dimensional vector space V.
An S^0 functional is a (non-degenerate) inner product. If Z is reflection positive, then V is a Hilbert space.
The invariant of an oriented compact 1-manifold is dim(V)^#S^1.
The vector space of the 0+1 TQFT with the time boundary consisting of m points is the m^th tensor power of V. The union for all m is the Fock space of V.
When n=1, a semisimple D^1 algebra is an associative algebra ⊕_i M_n_i(𝕂).
An S^1 functional is a trace. If Z is reflection positive, then the algebra is a C^*-algebra. Let d_j be the trace of the minimal idempotent p_j of M_n_j(𝕂). Then we have the following 𝒮^k-1-relations, 0≤ k≤ 2, in the 1+1 alterfold TQFT:
[Diagrams of the three relations, with A/B-coloured regions and labels p_j and ρ(p_j): the first evaluates to d_j times the A-coloured diagram, the second to d_j^-1 times the cap-cup diagram labelled by p_j, and the third expresses the B-coloured disc as ∑_j d_j times the diagram with a small region labelled p_j.]
Applying the 𝒮^k-1-relations to the normal microbundle of a k-simplex of a triangulation of an oriented surface S, we obtain the invariant
Z(S)=∑_j d_j^n_2-n_1+n_0=∑_j d_j^E(S),
where n_k is the number of k-simplices and E(S) is the Euler number of S.
The complete finiteness condition ensures that the invariant is a finite sum.
The strong semisimple condition ensures that d_j≠ 0.
We can release the complete finiteness condition if the sum converges.
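Two instances of this formula (recorded only for orientation): for the sphere, E(S^2)=2 and Z(S^2)=∑_j d_j^2; for the torus, E(T^2)=0 and Z(T^2)=∑_j 1 is the number of matrix summands of the algebra.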
In particular, if we take the semisimple associative algebra to be the group algebra ℒG for a finite group G with the trace from the regular representation L^2(G), then the trace of the group element g∈ G is δ_g,1|G|, diagrammatically,
[Diagram: the g-labelled disc inside the A-coloured region evaluates to δ_g,1|G| times the A-coloured diagram.]
Moreover, the group elements form an orthonormal basis, and we have that
[Diagram: two parallel vertical lines separating A-B-A regions equal ∑_g∈ G times the cap-cup diagram labelled by g and g^-1.]
The minimal projection p_j of ℒG corresponds to an irreducible representation V_j of dimension d_j. By the Peter-Weyl theorem, the resolution of the identity of ℒG has d_j minimal projections equivalent to p_j. So
[Diagram: the B-coloured square equals ∑_j d_j times the square with an inner square labelled p_j, which equals the square with an unlabelled inner square.]
We can evaluate the partition function Z(S) by these three relations in terms of group elements, and then derive that
Z(S)=#(π_1(S), G) |G|^E(S)-1,
where π_1(S) is the fundamental group of S.
Then we obtain Mednykh's formula <cit.>:
#(π_1(S), G) |G|^E(S)-1 = ∑_j (dim V_j)^E(S),
summing over irreducible representations V_j of G.
We refer the readers to <cit.> for further discussions of Mednykh's formula and related results on 1+1 TQFT.
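As a quick numerical sanity check of this formula (an illustrative Python script, not part of the text), take G=S_3 and the torus, where E(T^2)=0 and #(π_1(T^2),G) counts the commuting pairs in G× G:

from itertools import permutations, product
from fractions import Fraction

G = list(permutations(range(3)))              # the symmetric group S_3

def mul(a, b):                                # composition of permutations
    return tuple(a[b[i]] for i in range(3))

commuting_pairs = sum(1 for a, b in product(G, G) if mul(a, b) == mul(b, a))

irrep_dims = [1, 1, 2]                        # dimensions of the irreducible representations of S_3
euler = 0                                     # Euler number of the torus

lhs = commuting_pairs * Fraction(len(G)) ** (euler - 1)
rhs = sum(d ** euler for d in irrep_dims)
print(lhs, rhs)                               # both sides equal 3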
When n=2, there have been extensive studies on the 2+1 TQFT, since the fundamental work of the Witten-Reshetikhin-Turaev TQFT and Turaev-Viro TQFT in <cit.>.
The D^2 algebras were studied as planar algebras by Jones <cit.>. The S^2 algebras were studied as spherical planar algebras. If Z is reflection positive and complete finite, then one obtains subfactor planar algebras of finite depth.
(A subfactor planar algebra with infinite depth has an S^2 functional which is 2-finite, but not 1-finite.
Its idempotent completion is a spherical 2-category 𝒞 or a multi-fusion category.
The invariant of a B-colored 3-manifold equals to its Turaev-Viro invariant <cit.>.
The 2+1 alterfold TQFT has been studied in <cit.>.
In this 2+1 alterfold TQFT, the braided tensor category of 2-morphisms with link boundary S^1 (without stratification) is the Drinfeld center <cit.> of 𝒞. It is considered as the category of point excitations in physics.
Moreover, both the Turaev-Viro TQFT of 𝒞 and the Reshetikhin-Turaev TQFT <cit.> of the Drinfeld center of 𝒞 can be embedded into the 2+1 alterfold TQFT by the blow up procedures.
The skeleton of the triangulation in the Turaev-Viro TQFT blows up to B-color normal microbundles. (The 2D boundary surface has contractible B-color 2-discs.)
The link in the Reshetikhin-Turaev TQFT of blows up to A-color normal microbundles. (The 2D boundary surface have contractible A-color 2-discs.)
We can reverse the blow up procedures by shrinking the A/B color bulk in the 2+1 alterfold TQFT.
Their tensor product has the shape of two closed strings which can merge into one closed string by an isometry.
Kitaev's toric code is a square lattice model on the torus <cit.>.
It can be generalized to lattices of a general shape on a surface.
Every edge has a qubit, a vector in the two-dimensional Hilbert space. The configuration space of a lattice is the tensor product of the qubit spaces for all edges. (We omit the label at the vertex or consider the label as the scalar 1 of the one dimensional Hilbert space ℂ.)
Every vertex v has a vertex operator A_v given by the tensor product of Pauli Z on the nearest qubits. Every plaquette p has a plaquette operator B_p given by the tensor product of Pauli X on the nearest qubits. The local vertex operators and plaquette operators commute.
The Hamiltonian H is
H=-∑_vA_v-∑_pB_p,
summing over all vertices and plaquettes.
The ground state of H is unique for any lattice on S^2.
The corresponding vector Ω is the sum of all loops on the lattice, where a loop has labels |0⟩ or |1⟩ on the edges, such that the union of all edges labelled by |1⟩ has no boundary.
As the configuration space is a Hilbert space, we consider the vector Ω as a linear functional Z on the configuration space.
The linear functional Z takes value 1 on a loop and value 0 for other labels.
It satisfies the three conditions (RP), (HI) and (CF). We obtain a spherical fusion category Rep(ℤ_2) and the 2+1 alterfold TQFT from Z by Theorem <ref>, where Rep(ℤ_2) is the representation category of the group ℤ_2 with two invertible objects 1 and g.
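The following numerical sketch (my own illustration, not part of the paper) builds this model on a small 2×2 periodic lattice with the convention above (Pauli Z on vertices, Pauli X on plaquettes), checks that the local operators commute, and exhibits the four-fold degenerate ground space on the torus; the lattice size and the edge indexing are assumptions made for the example.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

N_EDGES = 8  # 2x2 periodic square lattice: 4 horizontal and 4 vertical edges

def op_on(edges, pauli):
    """Tensor product acting with `pauli` on the listed edge qubits and identity elsewhere."""
    return reduce(np.kron, [pauli if e in edges else I2 for e in range(N_EDGES)])

# Edge indexing (an assumption of this sketch): h(i,j) / v(i,j) are the horizontal / vertical
# edges leaving site (i,j), with periodic boundary conditions.
def h(i, j): return 2 * (2 * (j % 2) + (i % 2))
def v(i, j): return h(i, j) + 1

A = [op_on({h(i, j), h(i - 1, j), v(i, j), v(i, j - 1)}, Z)   # vertex operators: Pauli Z
     for i in range(2) for j in range(2)]
B = [op_on({h(i, j), h(i, j + 1), v(i, j), v(i + 1, j)}, X)   # plaquette operators: Pauli X
     for i in range(2) for j in range(2)]

# The local vertex and plaquette operators commute.
assert all(np.allclose(a @ b, b @ a) for a in A for b in B)

H = -sum(A) - sum(B)
print(np.linalg.eigvalsh(H)[:5])  # ground energy -8; four-fold degenerate ground space on the torus
```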
We can refine the lattice by blowing up a vertex to a face and changing the position point of every edge to a vertex, see an example in Fig. <ref> for the refinement of a square lattice on the torus. (The refined lattice is considered as a quantized graph to construct quantum error correcting codes in Page 5 in <cit.> based on the Quon language <cit.>.)
Then the vertex operator A_v becomes a plaquette operator on the refined lattice.
So the group of homeomorphisms of a lattice is enlarged and an A_v operator can be identified with a B_p operator under a homeomorphism.
We consider the normal microbundle of a vertex in the refined lattice as a parameterized disc with four points on the boundary circle.
We consider the basis of the qubit label space as the two diagrams of non-intersecting strings with four boundary points.
Then a fully labelled refined lattice is the underline surface with a stratification of non-intersecting closed strings.
We define its Z value to be √(2)^m, where m is the number of closed strings. It coincides with the linear functional Z on the original lattice model.
By Theorem <ref>, we obtain the Ising category from Z, beyond the Rep(ℤ_2). (We will apply this idea to construct the S^3 functional in Def. <ref> and derive a unitary spherical 3-category of Ising type. )
Moreover, we can recover the configuration space of a lattice, i.e., a stratified 2-manifold ℳ^2, from the vector space of the 2+1 alterfold TQFT with alterfold boundary ℰ=(E,F,𝒮), where E=M^2; F is an ε-neighbourhood of M^1; and 𝒮 is the boundary of F without stratification. Each connected component of the A-color region of ℰ is a plaquette P, and we recover the local plaquette operator as P× [0,1], labelled by the g-colored ∂ P×{1/2}, acting on the boundary plaquette P.
The ground state space of a lattice ℳ^2 is isomorphic to the vector space of B-color boundary M^2 without stratification in the 2+1 alterfold TQFT, which is the vector space with boundary M^2 in Atiyah's TQFT. The ground state space is independent of the choice of the lattice up to isomorphism.
If we define the S^2 functional Z on m non-intersecting circles in S^2 as δ^m, then the Jones index Theorem <cit.> implies that
Z is reflection positive iff δ=2cos(π/(2+ℓ)), ℓ∈ℕ_+, or δ≥ 2. In addition, Z is complete finite iff δ=2cos(π/(2+ℓ)). We obtain the unitary quotient of the representation category of quantum SU(2) at level ℓ. The Ising category is the case ℓ=2.
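For illustration (a small computation of mine, not the paper's), the admissible values δ=2cos(π/(2+ℓ)) below the threshold 2 are easy to list; the case ℓ=2 gives δ=√2, the Ising value.

```python
import math

# Admissible values delta = 2*cos(pi/(2+l)) below the threshold delta = 2.
for l in range(1, 6):
    print(l, round(2 * math.cos(math.pi / (2 + l)), 4))
# l=1: 1.0,  l=2: 1.4142 (= sqrt(2), the Ising value),  l=3: 1.618...,  l=4: 1.7321, ...
```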
In general, for the Levin-Wen model <cit.> from a spherical multi-fusion category 𝒞, we can define the S^2 functional Z of a string-net on S^n as its evaluation in the spherical category 𝒞. From the functional Z of string nets, we can recover the spherical category 𝒞 and construct a 2+1 alterfold TQFT by Theorem <ref>. Moreover, the plaquette operator, the Hamiltonian and the ground states etc of the Levin-Wen model can be recovered from the 2+1 alterfold TQFT, see Remark 4.25 in <cit.>.
In particular, the reflection positivity condition of the Hamiltonian of the Levin-Wen model is proved in Theorem 3.2 <cit.>, generalizing the geometric proof of RP in Section 7 in <cit.>. We will study the Hamiltonian of the lattice model, the ground states and relevant properties, such as reflection positivity, from the functional integral point of view in any dimension in the coming paper <cit.>.
When n=3, Witten constructed a 3+1 TQFT <cit.> which captures Donaldson's invariant of smooth 4-manifolds <cit.>. The Turaev-Viro state sum construction has been generalized to construct 3+1 TQFT using a braided fusion category in <cit.>. Douglas and Reutter proposed a commonly accepted definition of fusion 2-categories and constructed a 3+1 TQFT as a state sum in <cit.>, see further discussions and references therein.
Similar to the 2+1 theory, for a spherical fusion 2-category 𝒞, one can define an S^3 functional by evaluating the string-net diagrams on S^3 in 𝒞. Then one recovers the category 𝒞 and the 3+1 TQFT from Theorem <ref>.
The Dijkgraaf-Witten TQFT <cit.> is an n dimensional TQFT coming from a cocycle in the group cohomology H^n(G,𝕂^×) of a finite group G. One can define the S^n-1 function of an n-simplex, whose 1-simplices are labelled by group elements, as the value of the cocycle. Then one can recover the TQFT from Theorem <ref>.
§ HIGHER BRAID STATISTICS
In dimension 2+1, the braid statistics of the point excitations of a two-dimensional lattice model, such as the Levin-Wen model <cit.>, is captured by the Drinfeld center of the spherical 2-category.
In dimension 3+1 or higher, the category of point excitations is usually studied as a symmetric fusion category. A symmetric fusion category is the representation category of a group or a super group, as proved by Deligne in <cit.>. The particles are classified as bosons or fermions according to the braid statistics.
In that sense, the lower dimensional topology is more interesting due to the emergence of anyons with general braid statistics.
The braid statistics seems less interesting in higher dimensions.
However, the topology in dimension 4 is much more complicated. This appears to be a paradox.
We give a conceptual explanation to resolve the paradox.
The point excitation of the n-dimensional lattice model is an n-morphism with the B-color link boundary S^n-1 in the n+1 alterfold TQFT. The type of the n-morphism is an alterfold (D^n,B^n,M^n-1), rather than a point. We use the A-color part A^n to denote its type for short.
When n=3 and A^3 is a solid torus, there are infinitely many ways to fuse two point excitations of such type, because the two solid tori can be braided in D^3, such as the solid Hopf link. The worldsheet is even more complicated, as one solid torus can move along the other.
This leads to extremely rich higher braid statistics of membranes.
Based on this observation, we propose the following definition of the local center to study the higher braid statistics in the future.
For a spherical n-category 𝒞 (from a complete finite and strong semisimple S^n functional Z), we define its local center as the category of point excitations of the n-dimensional lattice model.
Its objects are n-morphisms with B-color link boundary S^n-1 and its morphisms are (n+1)-morphisms in the n+1 alterfold TQFT. For two objects of type A^n_1 and A^n_2, we obtain a fusion for any non-intersecting embedding of A^n_1 and A^n_2 in D^n.
§ A 3+1 ALTERFOLD TQFT OF ISING TYPE
In this section, we illustrate our theory on the functional integral construction of TQFT in a concrete example. We construct an S^3 functional Z, which is reflection positive and complete finite. We obtain a unitary spherical 3-category of Ising type and a non-invertible 3+1 unitary alterfold TQFT from Theorem <ref>.
Moreover, we compute the 20j-symbols and verify the (3,3), (2,4) and (1,5) Pachner moves <cit.>. It is a one-dimensional higher analogue of the 6j symbols and pentagon equations. The number 20 is given by the number of 1-simplices and 2-simplices in the 4-simplex Δ^4.
The 20j-symbols could be used to compute a scalar invariant of 2-knots in PL 4-manifolds explicitly.
Recall that the Ising 2-category can be derived from an S^2 functional Z which takes value √(2)^m for m non-intersecting closed circles in S^2.
It seems natural to generalize it to an S^3 functional Z on non-intersecting surfaces in S^3, which is √2 when the surface is a sphere.
After checking certain compatibility conditions for the fusion rule and quantum dimensions, we obtain the value of a surface with Euler number e to be 2^{1-e/4}. This S^3 functional Z satisfies reflection positivity, but it has infinitely many 1-morphisms up to equivalence.
To achieve the complete finiteness condition, we allow the surfaces to intersect.
Finally we obtain the following formula for the S^3 functional Z.
We define the S^3 functional Z on stratified 3-manifold (S^3, ∪_i S_i) where {S_i}_i∈ I are PL surfaces in S^3 which may intersect transversely and be unoriented. It has the stratification:
* M^3 is S^3;
* M^2 is the union of PL surfaces;
* M^1 is the union of intersection curves of every two surfaces;
* M^0 is the union of intersection points of every three surfaces.
The intersection of every four surfaces is empty.
Let e_i be the Euler number of S_i. We define
Z(S^3, ∪_i S_i)=∏_{i∈ I} 2^{1-e_i/4}.
The Euler number is homeomorphic invariant, so Z is homeomorphic invariant.
Moreover, the Euler number does not change under reflection, so Z is Hermitian.
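As a sanity check of the definition (an illustration of mine, not from the paper), the functional is trivial to evaluate once the Euler numbers of the components are known:

```python
def Z_value(euler_numbers):
    """Z(S^3, union of surfaces) = prod_i 2^(1 - e_i/4), with e_i the Euler number of S_i."""
    value = 1.0
    for e in euler_numbers:
        value *= 2 ** (1 - e / 4)
    return value

# sphere (e=2), torus (e=0), genus-2 surface (e=-2), two disjoint spheres
for surfaces in ([2], [0], [-2], [2, 2]):
    print(surfaces, Z_value(surfaces))   # sqrt(2), 2, 2*sqrt(2), 2
```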
Now let us prove the reflection positivity and complete finiteness; construct its simplicial k-morphisms; and compute its 20j-symbols.
For any point p∈ M^k∖ M^k-1, it has a regular chart as shown in Fig. <ref>, for k=3,2,1,0. Each of the stratified manifolds on the boundary S^2 corresponds to the unique local shape in LS_k as in Def. <ref>-<ref>:
When we cut S^3 into two parts D^3_± by the equator S^2, the restriction of M^2 to the boundary S^2 consists of PL curves C={C_j : 1≤ j ≤ m}, |C_j|∼ S^1.
The vector space Ṽ(S^2,C) is spanned by stratified manifolds ℳ with support D^3 and boundary C, whose M^2 consists of surfaces with boundary C.
We have the following relations in K_Z. Any ambient isotopy of the surfaces will not change Z and we can split surfaces away from each other. In particular, we can move one plane across another plane, or across a line or a point which is intersection of planes, as shown in Fig. <ref>. The last equation is a one-dimensional higher analogue of the Yang-Baxter equation.
Moreover, for each connected surface, we can change its shape by multiplying the scalar 2^-e/4 according to the change of the Euler number e.
Consider surfaces {S_i}_{i ∈ I} in D^3 with boundary C={C_j}_{j∈ J}. Let |C| be the number of closed curves.
We define its connected type as T={T_i: i∈ I}, where T_i={j ∈ J : C_j ⊆∂ S_i}, which is a partition of the set J.
The S^3 functional Z is 3-finite, namely the vector space Ṽ(S^2,C) is finite dimensional.
For each connected surface in D^3, we can change its shape by multiplying the scalar 2^-e/4 according to the change of the Euler number e.
Thus surfaces with the same connected type are scalar multiples of each other.
So the dimension of the vector space Ṽ(S^2,C) is bounded by the number of connected types, which is the number of partitions of J. So it is finite dimensional.
When |C|=0,1,2,3, the number of partitions is 1,1,2,5. An isotopy of the boundary C induces an isometry of the vector spaces. We only need to consider the boundary C with non-intersecting curves.
When C has no curve, Ṽ(S^2,C) is one-dimensional. Consequently, there is only one 0-morphism.
Any closed surface with Euler number e reduces to a scalar 2^{1-e/4}.
When |C|=1, Ṽ(S^2,C) is one-dimensional.
Suppose the curve is the boundary of a surface S. We can change S to the disc by multiplying the scalar 2^-e/4 according to the change of the Euler number e.
When |C|=2, Ṽ(S^2,C) is two-dimensional.
There are two connected types depending on whether the two curves are connected by a surface or not. They form a basis.
The following result is the key to prove the complete finiteness of Z.
When |C|=3, Ṽ(S^2,C) is four-dimensional.
It has a relation in Fig <ref>.
We have five vectors according to connected types:
(123),(12,3),(13,2),(23,1),(1,2,3)
corresponding to the five terms above. By direct computation, their inner product matrix has rank 4, and we obtain a relation from K_Z, as shown in <ref>:
-√(2)(123)+(12,3)+(13,2)+(23,1)-(1,2,3)=0
Let us construct the indecomposable 2-morphisms of type 𝒟^2 with link boundary S^1 without stratification.
Up to isotopy, we only consider the case that 𝒟^2 contains non-intersecting contractible closed curves which are the boundary of small 2-discs.
When there is no small 2-disc, we obtain an indecomposable 2-morphism t_0 as D^2× D^1 without stratification.
When there is one small 2-disc D, we obtain the identity illustrated as the second figure in Fig.<ref>, which has a stratification ∂ D× D^1; and an indecomposable 2-morphism t_0' equivalent to t_0 illustrated as the third figure in Fig.<ref>. Their difference t_r is an indecomposable 2-morphism inequivalent to t_0, illustrated as the first figure, a red tube, in Fig.<ref>.
We have the following relation to fuse the red tube with a plane:
It follows from the definition of the red tube in Fig. <ref> and Prop. <ref>.
The S^3 functional Z in Def. <ref> is reflection positive.
The dimension of the vector space Ṽ(S^2,C) is 2^{|C|-1}.
An isotopy on the boundary induces an isometry, so we assume that the curves in C do not intersect.
If C has zero or one curve, then Ṽ(S^2,C) is one dimensional by Prop. <ref>,<ref>. Reflection positivity holds on these vectors spaces.
Now we prove the statement by induction on the number of circles in C.
Take a disc D of S^2∖ C, s.t. ∂ D is a circle in C, we decompose D̅× [0,1] as a sum of orthogonal projections t_0' and t_r.
Therefore Ṽ(S^2,C) decomposes as a direct sum of two vector spaces.
The vector space labelled by t_0' is isomorphic to the vector space Ṽ(S^2, C∖{∂ D}), because t_0' is annular equivalent to t_0.
The vector space labelled by t_r is isomorphic to the vector space Ṽ(S^2, C∖{∂ D}) by Prop.<ref>.
Therefore Z is reflection positive and the dimension of the vector space Ṽ(S^2,C) is 2^{|C|-1}.
The type of a k-morphism is the transversal intersection of a k-simplex with
stratified 3-manifold ℳ.
When k=1, it is D^1 stratified by finite points.
When k=2, it is D^2 stratified by curves, which may intersect.
To prove complete finiteness, we need to reduce to finitely many types using the skein relations on their types induced from the annular equivalence.
The S^3 functional Z in Def. <ref> is 2-finite.
The type of 2-morphisms is D^2 stratified by curves. By isotopy, we may assume that the curves do not intersect.
A contractible curve decomposes as a sum of t_0' and t_g. Moreover, the curve labelled by t_0' is equivalent to empty set t_0. If the curve labelled by t_g is adjacent to another curve, then we can merge t_g into the other curve by Prop.<ref>.
So all closed curves can be eliminated unless the type has only one closed curve labelled by t_g.
For a fixed link boundary, the non-closed curves pair the boundary points, so there are only finitely many types of 2-morphisms. So Z is 2-finite.
We will explicitly construct the simplical k-morphisms in the following subsection and compute the 20j symbols. In particular, we prove that Z is 1-finite in Theorem <ref>.
The S^3 functional Z in Def. <ref> is reflection positive and complete finite. So we obtain a 3+1 alterfold TQFT with reflection positivity.
It follows from Theorems <ref>, <ref> and <ref>.
If we only consider non-intersecting surfaces, then Theorem <ref> fails. Instead, we obtain infinitely many indecomposable 1-morphisms indexed by the natural numbers ℕ. The reflection positivity, 3-finite and 2-finite conditions still hold.
This example is of independent interest. In this paper, we focus on examples which are complete finite, so that we can construct the n+1 alterfold TQFT.
§.§ Simplicial 1-morphisms
In this subsection, we construct the indecomposable 1-morphisms up to annular equivalence.
There are three indecomposable 1-morphisms up to annular equivalence, denoted as E_1={1,τ, g}.
Now let us study 1-morphisms of type 𝒟^1_m, which has 0-dim stratification of m points.
When m=0, the algebra A(𝒟^1× D^2) is one-dimensional by Prop. <ref>.
The identity D^1× D^2 is an indecomposable 1-morphism, denoted by 1.
When m=1, the algebra A(𝒟^1_1× D^2) is one-dimensional by Prop. <ref>.
The identity D^1_1× D^2 is an indecomposable 1-morphism, denoted by τ. Its quantum dimension is √(2) as shown in Fig. <ref>.
When m=2, the algebra A(𝒟^1_2× D^2) is two-dimensional by Prop. <ref>. The identity 𝒟^1_2× D^2 decomposes as a sum of two minimal idempotents, namely two indecomposable 1-morphisms.
There are two basis vectors according to the connected types (12) and (1,2).
As shown in Fig.<ref>, (1,2) is a multiple of a minimal idempotent, corresponding to a 1-morphism 1', which is annular equivalent to 1, because it has no intersection with the 1-simplex in the middle of the tunnel.
The orthogonal complement of 1' is a 1-morphism, denoted by g, illustrated in Fig.<ref> as a red line between double planes.
If a 2-disc is connected by a red line from g, then it is zero.
Consequently the 1-morphism g is inequivalent to 1.
Gluing the vertical half tube to g in Fig.<ref> is zero on the right side. The left side is a 2-disc connected by a red line from g.
The only possible bimodule between g and 1 is the vertical half tube, but gluing it with g is zero.
When m=3, the algebra A(𝒟^1_3× D^2) is four-dimensional by Prop. <ref>.
It has five idempotents according to the connected types
P(1,2,3), P(12,3), P(13,2), P(23,1), P(123) with quantum dimension 2√(2), √(2), √(2), √(2), √(2)/2.
We obtain four minimal idempotents P(12,3)-P(123), P(13,2)-P(123), P(23,1)-P(123), P(123) with the same quantum dimension √(2)/2. They are all annular equivalent to τ.
For example, a 1-simplex can go through the tunnel of the first two planes of P(12,3)-P(123) and intersect with the third plane as illustrated in Fig. <ref>, so it is annular equivalent to τ.
It is worth mentioning that P(1,2,3)-P(12,3)-P(13,2)-P(23,1)+2P(123), illustrated in Fig.<ref>, is a projection with quantum dimension zero. It becomes zero in the quotient by the kernel K_Z. It is the key to achieving the 1-finite condition of Z.
As discussed above, if m≥ 3, the indecomposable 1-morphisms of type 𝒟^1_m are annular equivalent to indecomposable 1-morphisms of type 𝒟^1_{m-2}. So there are only three indecomposable 1-morphisms up to equivalence.
We obtain the following fusion rule of Ising type for the composition of indecomposable 1-morphisms:
τ⊗τ ∼1⊕ g;
τ⊗ g ∼ g⊗τ∼τ;
g ⊗ g ∼1.
The first two have been shown before. The vertical half g-color tube is a bimodule between g ⊗ g and 1, which induces a sub-idempotent of g ⊗ g with quantum dimension one. So it equals g ⊗ g. So g ⊗ g ∼1.
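As an elementary consistency check (my own, not part of the paper's argument), one can encode these Ising-type fusion rules as multiplicity tensors and verify associativity of the fusion ring together with the quantum dimensions (1, 1, √2) of 1, g, τ:

```python
import numpy as np

objs = ["1", "g", "tau"]
idx = {x: i for i, x in enumerate(objs)}

# Fusion multiplicities N[a, b, c] = multiplicity of c in a (x) b, encoding the rules above.
N = np.zeros((3, 3, 3), dtype=int)
def set_fusion(a, b, summands):
    for c in summands:
        N[idx[a], idx[b], idx[c]] = 1

set_fusion("1", "1", ["1"]);     set_fusion("1", "g", ["g"]);      set_fusion("1", "tau", ["tau"])
set_fusion("g", "1", ["g"]);     set_fusion("g", "g", ["1"]);      set_fusion("g", "tau", ["tau"])
set_fusion("tau", "1", ["tau"]); set_fusion("tau", "g", ["tau"]);  set_fusion("tau", "tau", ["1", "g"])

# Associativity of the fusion ring: sum_e N[a,b,e] N[e,c,d] = sum_e N[b,c,e] N[a,e,d].
assert np.array_equal(np.einsum('abe,ecd->abcd', N, N), np.einsum('bce,aed->abcd', N, N))

# The quantum dimensions (1, 1, sqrt(2)) satisfy sum_c N[a,b,c] d_c = d_a d_b.
d = np.array([1.0, 1.0, np.sqrt(2)])
assert all(np.allclose(N[idx[a]] @ d, d[idx[a]] * d) for a in objs)
```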
§.§ Simplicial 2-morphisms
In this subsection, we construct the indecomposable simplicial 2-morphisms up to annular equivalence.
We construct a minimal representative set of indecomposable simplicial 2-morphisms.
The set
E_2={a_+,a_-,b_τ,b_g0,b_g1,c_0,c_1}
is an E_1-complete minimal representative set of indecomposable simplicial 2-morphisms.
Now we consider simplicial 2-morphisms whose boundary are labelled by 1-morphisms in E_1 as in Fig. <ref>.
Since the only transversal intersections of the 2-simplex Δ^2 and surfaces are curves, the boundary has an even number of points. So a_τ, e_τ and e_τ are illegal.
As g is inequivalent to 1, the two points of g cannot be connected by a curve in Δ^2.
According to the fusion rule in Prop. <ref> or Fig. <ref>, any 2-morphism with boundary e_g has zero quantum dimension.
By the discussions in the proof of Theorem <ref>, Δ^2 has no closed curve when the boundary has points; and Δ^2 has at most one closed curve when the boundary has no points. We list in Fig.<ref> all the possible curves in Δ^2 with given boundary labels in E_1; these give the candidate 2-morphisms.
There are seven permissible simplicial 2-morphisms E_2={a_+,a_-,b_τ,b_g0,b_g1,c_0,c_1}. The 2-morphisms e_i, 1≤ i ≤ 4, have boundary e_g and they have zero quantum dimension, so they are in K_Z.
So the E_2 is E_1-complete.
Now we show that the seven are pairwise inequivalent, so E_2 is a minimal representative set. We only need to compare the ones with the same boundary.
By the definition of r in Fig. <ref>, a_+ and a_- are inequivalent.
By Lemma <ref>, b_g0 and b_g1 are inequivalent; c_0 and c_1 are inequivalent.
§.§ 3-morphisms
In this subsection, we construct the minimal representative set of indecomposable simplicial 3-morphisms, which form a basis of the vector space with boundary labels in E_2.
We construct an E_2-complete minimal representative set of 3-morphisms E_3 in Fig.<ref>.
We first label four 2-simplices on the boundary of Δ^3 by elements in E_2, so that they are compatible on 1-simplices. For a fixed boundary, we construct 3-morphisms as surfaces with the given boundary.
By Lemma <ref>, no connected surface can contain the boundary of a red line.
The surface with one boundary curve is a disc. The surface with two boundary curves is a tube.
Modulo isotopy, we list all possible diagrams in Fig.<ref>, and E_3 is E_2-complete.
It is a direct computation to check that 3-morphisms with the same boundary are linearly independent in the vector space. So E_3 is minimal.
§.§ 20j-Symbols
The Z values of 4-simplices with compatible boundary morphisms in E_3 are listed in Table <ref>; we call them the 20j-symbols according to the 20 labels on 1-simplices and 2-simplices.
Take a 4-simplex Δ^4 whose five vertices are numbered from 1 to 5.
We label the five 3-simplices of ∂Δ^4 by morphisms in E_3, so that they have the same label on 2-simplices. We list the compatible labels in Tables <ref> and <ref>.
Then the boundary of the 4-simplex is a linear sum of S^3 containing surfaces. The shape of the surfaces are listed in the column with head σ^2.
We denote by S_τ a τ-colored sphere, by T_τ- a red-colored torus, and by T_τ-^2 a τ-colored genus-2 torus. Denote by S_g a red-line connected double sphere; T_g is obtained by digging an inner red tube intersecting an inner face a_- on S_g, and T^2_g by digging two red tubes.
Their Z values can be computed directly by Def. <ref> and are shown in the column with head F̃.
Note that the 3-morphisms in E_3 are unnormalized vectors. The corresponding unnormalized 20j-symbol is F̃. If we normalize the 3-morphisms to be unit vectors, then the normalized 20j-symbol is F.
§.§ Pachner Moves
A well-known result of Pachner states that two triangulations of a PL-manifold are related by a finite sequence of Pachner moves <cit.>. The boundary of the (n+2)-simplex Δ^n+2 has (n+1)-simplices Δ_n+1,i, 1≤ i≤ n+3. The (k,n+3-k) Pachner move changes k (n+1)-simplices to the rest (n+3-k) (n+1)-simplices.
When n=2, it is well-known that the value F of Δ_3,i labelled by morphisms of a spherical 2-category on the four 0-faces is called the quantum 6j-symbol or the F-symbol. They satisfy the pentagon equation corresponding to the (2,3) Pachner move. The number 6 comes from the number of 1-simplices in Δ^3, and the spin j was originally considered as the spin-j irreducible representation of SU(2). For a general n, the value F of Δ_n+1,i labelled by n-morphisms of a spherical n-category on the n+2 0-faces is called the F-symbol.
By Theorem <ref>, the F-symbols satisfy the (k,n+3-k) Pachner moves for all k. In particular, the 20j-symbols satisfy the higher pentagon equations corresponding to the (3,3), (2,4) and (1,5) Pachner moves. To double-check the theory and the numerical computation, we have verified all Pachner moves of the 20j-symbols on a computer. There are 2044 (1,5) equations, 30464 (2,4) equations and 50709 (3,3) equations.
We list the first two identities of Pachner moves of our 20j-symbols as examples.
§.§ Conjecture
The methods in this section seem to work in all dimensions, based on a careful study of the cobordism theory <cit.>. We conjecture the following.
We define the S^n functional Z on a stratified n-manifold (S^n, ∪_i S_i), where {S_i}_i∈ I are PL hypersurfaces in S^n which may intersect transversely and be unoriented. It has the stratification:
* M^n is S^n;
* M^n-1 is the union of PL (n-1) hyper surfaces;
* M^n-k is the union of intersection (n-k)-manifolds of every k hyper surfaces, 2≤ k≤ n.
The intersection of every (n+1) hyper surfaces is empty.
Let e_i be the Euler number of S_i. We define
Z(S^n, ∪_i S_i)=∏_{i∈ I} 2^{1-e_i/4}.
We conjecture that the S^n functional Z is complete positive and complete finite.
The author would like to thank many people for helpful discussions on this topic, especially to
Arthur Jaffe, Liang Kong, Maxim Konsevitch, Zhenghan Wang, Xiao-Gang Wen, Edward Witten, and colleagues Song Cheng, Jianfeng Lin, Fan Lu, Shuang Ming, Nicolai Reshetikhin, Yuze Ruan, Ningfeng Wang, Yilong Wang, Jinsong Wu, Shing-Tung Yau, Zishuo Zhao and Hao Zheng at Tsinghua University and BIMSA for fruitful discussions and comments; thank Yuze Ruan and Zishuo Zhao for pointing out gaps in early notes; thank Ningfeng Wang for the help on computing the 20j-symbols. The author would like to thank Guoliang Yu for the hospitality during the visit at Texas A&M University.
The author is supported by BMSTC and ACZSP (Grant No. Z221100002722017), Beijing Natural Science Foundation Key Program (Grant No. Z220002), and by NKPs (Grant no. 2020YFA0713000).
| In this paper we propose a new program to address a long-standing open problem: how to construct a meaningful unitary topological quantum field theory (TQFT) in arbitrary dimension. We believe that the ideas here are relevant both to the development of category theory as well as to the mathematical foundations of physics. We will return to this theme in future works concentrating both on the theory and on examples.
Here we construct a unitary n+1 alterfold TQFT from a functional integral on n-dimensional lattice models, for arbitrary n, in Theorem <ref>. The n+1 alterfold TQFT has alternating A/B-colored n+1 manifolds with the lattice model on the n-manifold between them. Our example reduces to Atiyah's TQFT <cit.> when the boundary is a B-color n-manifold. The present work builds on our recent work on alterfolds <cit.>, and on the other works we now explain.
Witten constructed a 2+1 TQFT using Chern-Simons theory and obtained an invariant of links in 3-manifolds as a path integral <cit.>, generalizing the Jones polynomial originated from subfactor theory <cit.>, and other link invariants from the representation theory of Drinfeld-Jimbo quantum groups <cit.>.
Feynman's path integral is a powerful method in physics, but the measure of the path space is only mathematically defined for a few cases <cit.>.
Atiyah provided a mathematical axiomatization of TQFT to study its topological invariants <cit.> which avoids the path integral.
The topological invariant of Witten's 2+1 TQFT can be rigorously defined using the link invariants from quantum groups and the surgery theory, known as the Witten-Reshetikhin-Turaev TQFT <cit.>.
The Turaev-Viro-Barrett-Westbury 2+1 TQFT from a spherical fusion category <cit.> is a state sum construction over a triangulation, which is a combinatorial analogue of path integral.
Witten also constructed a 3+1 TQFT <cit.> which captures Donaldson's invariant of smooth 4-manifolds <cit.>. By this invariant, Donaldson constructed exotic four-dimensional spaces together with the seminal work of Freedman <cit.>.
The Turaev-Viro state sum construction has been generalized to construct 3+1 TQFT using a braided fusion category in <cit.> or a fusion 2-category in <cit.>.
In higher dimensions, Dijkgraaf-Witten constructed an n-dimensional TQFT from the group cohomology of a finite group <cit.>.
Lurie introduced a fruitful theory of (∞,n) category in <cit.> to study non-invertible higher symmetries and to answer the cobordism hypothesis of Baez and Dolan <cit.>. It is widely believed that the state sum construction of TQFT from spherical categories will work in any dimension e.g. <cit.>, but there is no agreement on the mathematical definition of a (unitary) spherical n-category; see a recent discussion in <cit.>.
The 2+1 TQFT is exceptionally successful due to the fruitful examples of quantum symmetries coming from the representation theory of quantum groups, subfactors, vertex operator algebras, conformal field theory, etc.
It is highly expected, but remains challenging, to generalize those frameworks to higher dimensions, which should provide examples of non-invertible higher quantum symmetries, such spherical n-categories, from their higher representation theory.
In this paper, we provide a functional integral approach to construct an n+1 TQFT using a linear functional Z on an n-dimensional lattice model for any dimension n.
This functional integral point of view has been well established in constructive quantum field theory (QFT) <cit.>; we bring that method to the study of TQFT.
Mathematically, we introduce labelled regular stratified piece-wise linear manifolds to formulate the lattices of a general shape and their configuration space.
We study their basic properties in <ref> and prove the transversal theorem between stratified manifolds and triangulations in Theorem <ref>. Based on it, we can compute the partition function in n+1 TQFT as a state sum according to a transversal triangulation.
In an n-dimensional lattice model, the configuration space of a lattice is a Hilbert space, which is the tensor product of Hilbert spaces of local spins, regarded as labels of stratified manifolds.
We construct an n+1 TQFT from a linear functional Z on the configuration spaces of lattices on the n-sphere S^n in Theorem <ref>, such that the (n+1)-manifolds of the topological quantum field theory have A/B colors and the n-dimensional hyper surfaces (or domain walls) between them are decorated by n-dimensional lattice models. We call such TQFT an n+1 alterfold TQFT (Def. <ref>), generalizing the 2+1 alterfold TQFT in <cit.>.
To achieve the construction of the TQFT, we assume three conditions of the linear functional Z in <ref>,
* (RP) reflection positivity;
* (HI) homeomorphic invariance;
* (CF) complete finiteness.
Condition (RP) means that the inner product induced from Z is positive semi-definite between configuration spaces on half n-discs of S^n for any fixed common boundary on the equator S^n-1.
The functional integral Z satisfying Reflection Positivity (RP) is a mathematical formulation of the measure on the path space, by the Wick rotation from statistical physics to quantum field theory (QFT) <cit.>.
To formulate the theory over a general field 𝕂, we need to replace the condition (RP) by strong semisimplicity.
Condition (HI) means that the linear functional Z is invariant under homeomorphisms on S^n.
Condition (HI) ensures the QFT to be topological, namely the partition function of the TQFT, still denoted by Z, is homeomorphic invariant.
Condition (CF) means that the inner product induced from Z has finite rank between configuration spaces of lattices on D^k× S^n-k for any fixed boundary on D^k-1× S^n-k. Condition (CF) ensures the partition function of the TQFT to be a finite state sum.
From the physics point of view, we know everything, if we know the linear functional Z.
We implement this idea mathematically as the Null Principle that if we cannot distinguish two vectors from Z, then we consider them to be the same mathematically.
The null principle encodes numerous algebraic relations from the kernel of Z.
We highlight our identification of vectors on different lattices with the same boundary using the null principle of the configuration spaces, which is different from the renormalization group methods in condensed matter physics.
In <ref>, we introduce hyper discs (or D^n) algebras to study local algebraic relations, generalizing the notion of planar algebras of Jones <cit.> for n=2. It captures the action of the operad of regular stratified manifolds on the configurations on local n-discs.
In <ref>, we study the higher representation theory of the D^n algebra which form an n-category. It suggests a mathematical definition of a unitary/spherical n-category, together with the operad action and the linear functional Z with the condition (RP)/(HI).
In <ref>, we derive the skein relations for the n+1 alterfold TQFT in Def. <ref>, using the null principle. The relations can be considered algebraically as a resolution of identity and topologically as removing a D^k×ε D^n-k+1 handle with a bistellar move on its boundary.
We prove the consistency of the relations and construct an n+1 alterfold TQFT in Theorem <ref> and Theorem <ref>.
In <ref>, we give some examples to understand better the concepts in the alterfold TQFT.
If we shrink the B-color manifolds and evaluate the partition function according to the triangulation, then we obtain a higher analogue of the Turaev-Viro TQFT of the spherical n-category.
If we shrink the A-color manifolds and evaluate the partition function according to surgery moves, then we obtain a higher analogue of the Reshetikhin-Turaev TQFT of the higher Drinfeld center of the spherical n-category. The latter suggests a fruitful higher braid statistics of membranes, as discussed in <ref>.
In <ref>, we give a concrete example to illustrate our functional integral construction of TQFT. We construct a linear function Z for a lattice model on S^3. We prove the three conditions (RP), (HI), (CF) and therefore we obtain a unitary 3-category of Ising type and a non-invertible unitary 3+1 alterfold TQFT. We compute all its simplicial morphisms and the 20j-symbols in Table <ref>. Using the skein relations and the 20j-symbols, the scalar invariant 2-knots in 4-manifolds could be computed explicitly.
The 20j-symbols satisfy the (3,3), (2,4) and (1,5) Pachner moves <cit.>, as a one-dimension-higher analogue of the pentagon equations. In this example, there are already more than 50,000 equations, which seems too many to solve directly in general. We conjecture that the construction of the 3+1 TQFT of Ising type works in higher dimensions as well.
This example is indeed related to the 3D Ising model and the 3D toric code <cit.>, which we will discuss in near future.
The connection between TQFT and topological orders has been extensively studied from the view of since <cit.> and from the view of particle excitations since <cit.>, and a classification of topological orders by unitary multi-fusion n-categories is proposed in <cit.>. This is a classification of topological orders by quantum symmetries.
From the view of condensed matter physics, another natural approach to study topological orders is characterizing the ground states.
An important question proposed by Wen is which families of wave functions on lattice models are topological orders.
The Riesz representation theorem induces a bijection between the linear functional Z and a vector state on the configuration space for every lattice.
One can consider our linear functional Z on a configuration space as a vector state up to a normalization.
Therefore by Theorem <ref>, we give an answer to Wen's question that a linear functional Z with the three conditions (RP) (HI) and (CF) is a topological order.
In topological orders, we usually take this vector state as the ground state of a local Hamiltonian of neighbourhood interactions. The conditions (HI) and (RP) can be derived from the corresponding conditions of the Hamiltonian. In this paper, we focus on the construction of the alterfold TQFT without referring to the Hamiltonian. In a coming paper <cit.>, we study lattice models with a local Hamiltonian systematically and prove conditions (HI) and (RP) for the Hamiltonian at any temperature.
Condition (CF) corresponds to the finiteness of the entanglement rank of the ground state for lattices on S^k× S^n-k separated by S^k-1× S^n-k.
The area law and projected entangled pair states (PEPS) have been conjectured to be the entanglement properties characterizing ground states of Hamiltonian with local interactions, see e.g., <cit.>.
Our functional integral approach will provide positive evidence of this conjecture. | null | null | null | null | null |
http://arxiv.org/abs/2409.18048v1 | 20240926164957 | Next-Gen Software Engineering: AI-Assisted Big Models | [
"Ina K. Schieferdecker"
] | cs.SE | [
"cs.SE",
"cs.ET",
"D.2.0; K.6.3"
] |
Next-Gen Software Engineering: AI-Assisted Big Models
Ina K. Schieferdecker (ORCID 0000-0001-6298-2327)
TU Berlin, Einsteinufer 25, 10587 Berlin, Germany
[email protected]
September 28, 2024
§ ABSTRACT
The effectiveness of model-driven software engineering (MDSE) has been demonstrated in the context of complex software; however, it has not been widely adopted due to the requisite efforts associated with model development and maintenance, as well as the specific modelling competencies required for MDSE. Concurrently, artificial intelligence (AI) methods, particularly machine learning (ML) methods, have demonstrated considerable abilities when applied to the huge code bases accessible on open-source coding platforms. The so-called big code provides the basis for significant advances in empirical software engineering, as well as in the automation of coding processes and improvements in software quality with the use of AI. The objective of this paper is to facilitate a synthesis between these two significant domains of software engineering (SE), namely models and AI in SE. The paper provides an overview of the current status of AI-assisted software engineering. In light of the aforementioned considerations, a vision of AI-assisted Big Models in SE is put forth, with the aim of capitalising on the advantages inherent to both approaches in the context of software development. Finally, the new paradigm of pair modelling in MDSE is proposed.
§ INTRODUCTION
Software engineering (SE) is a field of informatics/computer science that addresses the development and analysis of systematic approaches for the design, development, verification & validation and maintenance of software and of software-based systems[In this paper, the term "software" will be used as a general reference to the collective term for computer programs and related SE artefacts.]. It establishes principles and identifies optimal practices for the production and operation of software, with the aim of ensuring the reliability, security, scalability, maintainability and alignment of software with user, business and societal requirements. The objective of software engineering is the production of high-quality software that is cost-effective, delivered in a timely manner and can be readily adapted as requirements evolve.
The term software engineering was coined in 1967 with the very first software crisis <cit.>, but still software engineering practices and the resulting software quality have not kept pace with the quality levels required in critical domains or application contexts. Recently, another dramatic story was added to the software horror show: The root cause analysis for the millions of outages caused by the CrowdStrike Falcon sensor <cit.> found that the number of fields in an IPC template type was not validated, a runtime array bounds check was missing, the content validator contained a logic error, template type testing was too limited, and template instances were not tested within the content interpreter. This was unprofessional software development that ignored the state of the art in SE.
However, software outages are not the only indication that SE still has no full answers to the challenges posed by the increasing complexity of software-based systems and the diversity of requirements placed on them: <cit.> analysed that [t]here exists a statistically significant medium sized difference between open and closed source projects: the former have a DD [(defect density)] that is 4 defects per KLoC lower than the latter. Java projects exhibit a significantly lower DD than C projects, 4.1 defects per KLoC on average: In general the Size appears to be negatively correlated to DD: the larger the project the lower the DD. In particular, large projects are 10 times less defective than medium ones.. According to <cit.>, the defect density (DD) of software projects within the industrial sector is estimated to fall within the range of 1 to 25. This suggests that a software program comprising one million lines of code (MLoC) may contain between 1,000 and 25,000 defects. Given the extensive research conducted on the subject of software size in terms of lines of code, the derivation of estimates regarding the approximate number of defects in a software is a relatively straightforward process. To illustrate, <cit.> compares typical software such as operating systems, browsers, and office suites: The Windows 10 operating system has approximately 80 MLoC, Ubuntu 50 MLoC, MacOS X 84 MLoC, Android 12 MLoC, iOS 12 MLoC, the browser Google Chrome 6.7 MLoC, Mozilla Firefox 21 MLoC, or the office suites Microsoft Office 2013 45 MLoC, Apache OpenOffice 19 MLoC, and LibreOffice 10 MLoC. Therefore, the challenges associated with the development of accurate and suitable software remain significant.
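For illustration, the back-of-the-envelope arithmetic implied by these figures can be spelled out as follows (a sketch of mine; the defect-density range and MLoC figures are the ones quoted above):

```python
# Back-of-the-envelope defect estimates: 1-25 defects per KLoC (range quoted above)
# applied to the MLoC figures cited in the text.
sizes_mloc = {"Windows 10": 80, "Ubuntu": 50, "MacOS X": 84, "Android": 12,
              "Google Chrome": 6.7, "LibreOffice": 10}
for name, mloc in sizes_mloc.items():
    kloc = mloc * 1000
    print(f"{name}: {1 * kloc:,.0f} to {25 * kloc:,.0f} estimated defects")
```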
<cit.> argued that the primary challenge in software development lies in the specification, design, and testing of complex systems, rather than in the labour for representation of the system or testing its fidelity. It emphasized the distinction between essential difficulties (inherent complexity) and accidental difficulties (extraneous challenges), stating that past advances, like high-level programming languages, have only reduced the latter. The paper suggested that addressing accidental difficulties would not lead to major improvements in software development, as essential difficulties remain fundamental. <cit.> critiqued the notion of "accidental difficulties", arguing that these so-called accidents are often the result of negligence or poor practices, not mere happenstance. This paper advocated for a disciplined, science-based, and model-driven approach to software development, similar to traditional engineering disciplines. Building on this, <cit.> highlighted the growing societal dependence on autonomous and intelligent software systems, suggesting that new approaches are needed to ensure not only traditional software quality facets (safety, security, etc.), but also to address socio-technical and socio-political implications, particularly in human-machine collaboration.
In model-driven development (MDD <cit.>), also known as model-driven software engineering <cit.>, models are original artefacts that are engineered with the intention of facilitating the top-down construction of complex software. Furthermore, models are utilised during runtime to enable the monitoring, verification & validation of software operations, as well as the optimisation of its performance <cit.>.
As has been the case with the comprehensive incorporation of modelling in software engineering, artificial intelligence (AI) methodologies have been employed in software engineering activities from the outset, see for example <cit.>. In the context of significant advancements in AI-driven software engineering in 2021, Gartner's hype cycle for emerging technologies <cit.> projected that AI-enhanced software engineering would reach the peak of inflated expectations within a five-to-ten-year timeframe, subsequently entering a phase of productivity stabilization. Since 2023, it is estimated that AI-augmented software engineering will reach the productivity plateau already in two to five years, while in previous iterations of the hype cycle, AI-augmented development and AI-assisted design were identified as emerging technologies on the rise. It was interesting to observe the evolving business expectations and the convergence of trends for AI-supported design and development into a unified technological trend for AI-supported engineering. Moreover, it is encouraging to observe that software engineering is receiving particular attention and prioritisation, which is indicative of the growing importance of this field.
This paper addresses the role of models in software engineering in Section <ref>, the evolution of big code on coding platforms in Section <ref>, and the utilisation of AI for coding and software engineering in Section <ref>. It presents a novel taxonomy of AI for software engineering (AI4SE) to facilitate a deeper understanding of this evolving field. Section <ref> examines the practice of contributing software models to coding platforms, which is increasingly being done with software code. In line with the increasing prevalence of large-scale models, it examines recent developments in AI-enhanced modelling and model utilisation within the domain of software engineering. It also presents an updated version of the AI4SE taxonomy, designated as AI4BM, to outline the current status and future prospects of AI applications for Big Models. An outlook concludes the paper.
§ MODELS IN SOFTWARE ENGINEERING
According to <cit.> (see Figure <ref>), the software lifecycle is comprised of distinct phases and activities that can be described as knowledge areas, including requirements, architecture, design, construction, testing and maintenance. There are also knowledge areas that deal with the fundamentals of computer science, mathematics and engineering, as well as cross-cutting activities pertaining to software quality, security, configuration management and engineering management. Furthermore, there are cross-cutting knowledge areas that encompass the processes, operations, professional practice and economics of software engineering, along with a distinct area for software engineering models and methods. This knowledge area encompasses the processes and methodologies associated with modelling, the various types of models, including those pertaining to information, behaviour and structure, and the analysis of models, see also <cit.>.
Models in SE are the key to dealing with the complexity of software. Models provide the essential abstractions to capture requirements, support design decisions, or to offer comprehensive overviews of software structures and behaviours. They are essential tools in the engineering process for constructing and maintaining software. They are also vital for configuration, monitoring and other runtime support during software operation and management.
The traditional understanding of model-driven software engineering (MDSE) evolved over time:
* In addition to explicit model development and top-down model use, models are also extracted from software executions with the objective of avoiding the costs associated with model development on the one hand and leveraging the benefits of MDSE on the other hand <cit.>.
* The combination of model development of model artefacts with model extraction from code artefacts has resulted in top-down/bottom-up MDSE approaches <cit.> benefiting from both directions of MDSE, as shown in <cit.> or <cit.>.
* Furthermore, as the concepts underlying programming became increasingly abstract, low-level information and structure models have also become integral to conventional programming. This is exemplified by the use of the data structures often implicitly defined in JSON <cit.>, XML schemata, or SQL table specifications, as illustrated by the sketch below.
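A minimal sketch (my own, with an invented SensorRecord example) of how an information model that is only implicit in a JSON document can be made explicit as a typed structure:

```python
import json
from dataclasses import dataclass

# An "implicit" information model: the structure exists only by convention in the JSON document.
raw = '{"id": 42, "name": "sensor-a", "readings": [1.2, 3.4]}'
record = json.loads(raw)              # a plain dict; the schema is nowhere stated explicitly

# One possible explicit counterpart of the same model, e.g. as a typed data class.
@dataclass
class SensorRecord:
    id: int
    name: str
    readings: list[float]

print(SensorRecord(**record))
```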
These practical outcomes of model-driven, model-based and/or model-like software developments established a foundation for the utilisation of AI in software engineering, as discussed in Section <ref>. Prior to this, however, a more detailed view of the current state of practice of big code on open source platforms will be undertaken in Section <ref>.
§ BIG CODE IN SOFTWARE ENGINEERING
The advent of software coding platforms can be traced back to the 1990s, although it was not until the early 2000s that they truly gained prominence, with the launch of Github in April 2008 representing a significant milestone in this regard. At the present time, GitHub is the most widely utilised software coding platform. As stated by <cit.>, GitHub hosts over 100 million projects and 40 million users. It became not only a significant platform for software engineering collaboration, but also a prominent reference for open-source software mining <cit.>. The study also demonstrated that a considerable number of GitHub repositories are not directly related to software development. This is because GitHub is not solely utilised for coding collaborations; it is also employed for collaborations on websites, editing books or other publications, and is even used as a storage platform. A subsequent manual analysis <cit.> revealed that over a third of the repositories on GitHub were not software developments.
Nevertheless, the software engineering community can further enhance its comprehension of software engineering processes, methods and tools by gaining insights via the mining of coding platforms. An overview of the GitHub platform as a whole or of a number of repositories can be obtained by utilising tools such as Kibble <cit.>. Of all lines of code in the coding projects on GitHub, approximately 70% are actual code, while approximately 20% consist of information structures in formats such as JSON or XML (see Figure <ref>).
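As a toy illustration of such mining (a sketch of mine, far simpler than platform-wide tools such as Kibble), counting lines per file extension in a single locally cloned repository already separates code files from structured-data files:

```python
import os
from collections import Counter

def lines_by_extension(repo_path):
    """Count lines of text per file extension in a locally cloned repository."""
    counts = Counter()
    for root, _, files in os.walk(repo_path):
        for name in files:
            ext = os.path.splitext(name)[1] or "<none>"
            try:
                with open(os.path.join(root, name), errors="ignore") as f:
                    counts[ext] += sum(1 for _ in f)
            except OSError:
                pass
    return counts

# e.g. lines_by_extension("path/to/cloned/repo").most_common(10)
# separates code files (.py, .java, ...) from structured data (.json, .xml, ...)
```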
In this context, the concept of "Big Code", analogous to the notion of "Big Data", was introduced in reference to the vast quantities of software code that have accumulated over time <cit.> and that can be analysed and reused to gain deeper insights into software engineering in general, as well as using the code to train ML models.
§ AI IN SOFTWARE ENGINEERING
The use cases for artificial intelligence (AI) in software engineering (SE) are numerous and cover a wide spectrum as discussed already decades ago <cit.> and since then. For example, <cit.> proposes the AI in SE Application Levels (AI-SEAL) taxonomy, which differentiates between the point of applying AI to the software engineering process, the software product or at runtime, the levels of automation in between 1 ([h]uman considers alternatives, makes and implements decision) and 10 ([c]omputer makes and implements decision if it feels it should, and informs human only if it feels this is warranted.)[Expressing emotion in a computer is currently scientifically problematic because there is no such thing as a computer with the capacity for emotion. A better wording would have been [c]omputer makes and implements decision if it decides it should, and informs human only if it decides this is warranted.], and the types of AI along the five tribes differentiation by <cit.>. It is noteworthy that the majority of the papers analysed in <cit.> employ AI in the software engineering process. However, despite the numerous facets of the software engineering process (Figure <ref>), this study does not provide further elaboration. Nevertheless, other publications offer more detailed discussions of AI in SE, including <cit.> or <cit.>. Notwithstanding the aforementioned considerations, a taxonomy for the role of AI in software engineering has yet to emerge.
In light of the recent advancements in machine learning (ML) that have made significant breakthroughs possible, this paper focuses mainly on the latest developments in the application of ML to SE. For ML to be effective, it is essential to utilise the appropriate structures inherent to SE artefacts and has been a topic of considerable debate: <cit.> presents an overview of the various ways in which source code can be represented, including representational models of tokens, token contexts, program dependency graphs, API calls, abstract syntax trees, object usage, and others. These representational models are used for AI assistance in SE, including the creation of recommender systems, the inference of coding conventions, the detection of anomalies and defects, the analysis of code, the rewriting and translation of code, the conversion of code to text for the purposes of documentation and information retrieval, and the synthesis and general generation of code from text. <cit.> adds further applications of AI assistance such as code completion, API migration and code repair.
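For illustration (a sketch of mine, assuming Python 3.9+), two of the representations mentioned above — token sequences and abstract syntax trees — can be obtained for a tiny snippet with the standard library alone:

```python
import ast
import io
import tokenize

source = "def add(a, b):\n    return a + b\n"

# Token-level representation of the snippet.
tokens = [tok.string for tok in tokenize.generate_tokens(io.StringIO(source).readline)
          if tok.string.strip()]
print(tokens)   # ['def', 'add', '(', 'a', ',', 'b', ')', ':', 'return', 'a', '+', 'b']

# Abstract-syntax-tree representation of the same snippet.
print(ast.dump(ast.parse(source).body[0], indent=2))
```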
Furthermore, <cit.> presents [t]he naturalness hypothesis[:] Software is a form of human communication; software corpora have similar statistical properties to natural language corpora; and these
properties can be exploited to build better software engineering tools.. Given the intrinsic formats and formal characteristics of coding and modelling languages employed in SE, the second aspect of the naturalness hypothesis can be considered relatively straightforward. Nevertheless, the initial proposition reinforces the necessity for SE artefacts that are readily comprehensible. This assertion was previously made by <cit.> in a different form: [A]ny fool can write code that a computer can understand, good programmers write code that humans can understand.. Consequently, the application of AI in SE is not merely concerned with code generation; it also encompasses the enhancement of code through techniques such as refactorings, with the objective of optimising readability and maintainability.
So, while a vast number of approaches to utilising AI in SE have emerged in the scientific literature, they can be broadly categorised into three principal lines of enquiry: supporting the understanding of SE artefacts, the generation of one SE artefact from another, and the improvement of SE artefacts. These principal approaches may be applied to a variety of SE artefacts, including, but not limited to, requirements, designs, codes, tests, builds and/or miscellaneous artefacts of SE processes.
Therefore, with respect to the AI-SEAL taxonomy <cit.>, we consider the goals of AI application as missing and add them as a separate dimension with the three facets of understanding, generation and improvement. Furthermore, we refine the phases of application according to <cit.> by extending the software production into the tasks of software engineering including requirements, architecture & design, construction, testing, and maintenance. For the overall SE process, we differentiate its management and its economics. We also extend software operations to software configuration and software execution. Finally, in resemblance to the levels of autonomous driving <cit.>, we limit the levels of autonomy to four levels of support, from recommendations to full automation: (1) AI assistance, where the developer is in full control and receives recommendations to choose from, (2) AI-assisted selection, where the AI preselects options, (3) AI-based partial automation, where the AI selects options in simple, standard cases, and (4) AI-based full automation, where the AI operates without the developer. As of today, level 1 is the most often used and level 4 is far from becoming realistic. Whether there will be further differentiation in the preference for automation is also an open question.
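One possible machine-readable rendering of the proposed taxonomy dimensions (a sketch of mine; the enum and class names are illustrative, not part of the taxonomy itself) could look as follows:

```python
from dataclasses import dataclass
from enum import Enum

class Goal(Enum):
    UNDERSTANDING = "understanding"
    GENERATION = "generation"
    IMPROVEMENT = "improvement"

class Task(Enum):
    REQUIREMENTS = "requirements"
    ARCHITECTURE_DESIGN = "architecture & design"
    CONSTRUCTION = "construction"
    TESTING = "testing"
    MAINTENANCE = "maintenance"
    CONFIGURATION = "software configuration"
    EXECUTION = "software execution"
    PROCESS_MANAGEMENT = "SE process management"
    ECONOMICS = "SE economics"

class Autonomy(Enum):
    AI_ASSISTANCE = 1           # developer in full control, chooses among recommendations
    AI_ASSISTED_SELECTION = 2   # AI preselects options
    PARTIAL_AUTOMATION = 3      # AI decides in simple, standard cases
    FULL_AUTOMATION = 4         # AI operates without the developer

@dataclass
class AI4SEClassification:
    approach: str
    goals: set[Goal]
    tasks: set[Task]
    autonomy: Autonomy

print(AI4SEClassification("code completion assistant",
                          {Goal.GENERATION}, {Task.CONSTRUCTION},
                          Autonomy.AI_ASSISTANCE))
```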
This novel AI4SE taxonomy is shown in Figure <ref>. Due to space limitations, we will focus on the state of the art in applying ML to software development only. In addition, the fidelity and applicability of this new taxonomy is examined by presenting mainly recent and highly cited publications.
<cit.> contributes to requirements understanding, generation and improvement by presenting an AI-based automation approach to requirements engineering that begins by converting natural language into an Eclipse Modeling Framework (EMF) model. It then applies linguistic rules to identify errors, such as ambiguities or incorrect quantifiers, and provides suggestions for requirements analysts to make final decisions. This approach supports the entire requirements elicitation and change process.
Another requirements improvement is given in <cit.> presenting the prioritisation of requirements by combining the preferences of project stakeholders with approximations of the order of requirements computed by ML techniques.
For Architecture & design understanding and generation, <cit.> describes the automated curation of design decisions to support architectural decision-making. It helps software architects by organizing and recommending design decisions based on previous cases and contextual information of the current project. By leveraging existing design knowledge, the approach analyses historical data and design choices to improve the quality and consistency of architectural designs.
The list of publications on AI-assisted coding is huge. Major developments are for example described in <cit.> for code understanding. It introduces DeepCodeReviewer that uses deep learning to recommend code reviews for common issues based on historical peer reviews. It assesses the relevance of reviews to specific code snippets, suggests appropriate reviews from a repository of common feedback, and improves code reviews by focusing on defect detection. <cit.> describes GraphCodeBERT, a pre-trained model for programming languages that incorporates data flow semantics rather than just code syntax. GraphCodeBERT demonstrates its performance both in code understanding for code search and clone detection, and in code generation and improvement through code translation and code refinement. <cit.> presents a study focused on predicting software quality using defect density as a key feature representing quality to achieve higher accuracy in software quality prediction compared to previous studies. The research shows that data pre-processing, feature extraction and the application of ML algorithms significantly improve prediction accuracy.
For code generation, <cit.> discusses early experiences of developers using GitHub Copilot, which uses a language model trained on source code. Guided by Copilot, developers can write code faster than a human colleague, potentially accelerating development. Three empirical studies with Copilot highlight the different ways developers use Copilot, the challenges they face, the evolving role of code review, and the potential impact of pair programming with AI on software development. <cit.> discusses IntelliCode Compose, a multilingual code completion tool that predicts entire sequences of code tokens up to full lines of code. The generative transformer model has been trained on 1.2 billion lines of Python, C#, JavaScript and TypeScript code.
<cit.> presents Getafix, a tool for fixing common bugs by learning from previous human-written fixes, for code improvement. It uses hierarchical clustering to group bug fix patterns into a hierarchy from general to specific, and a ranking system based on the context of the code change to suggest the most appropriate fix. Another debugging approach, DeepDebug, is presented in <cit.>, which has been trained by mining GitHub repositories to detect and fix bugs in Java methods.
Since different software testing techniques are complementary, reveal different types of defects and test different aspects of a program, <cit.> presents for test understanding and improvement an ML-based approach to link test results from different techniques, to cluster test data based on functional similarities, and to generate classifiers according to test objectives, which can be used for test case selection and prioritisation.
<cit.> discusses a test generation approach for writing unit test cases by generating assert statements. The approach uses a transformer model that was first pre-trained on an English text corpus, further semi-supervisedly trained on a large source code corpus, and finally fine-tuned for the task of generating assert statements for unit tests. The assert statements are accurate and increase test coverage.
For test improvement, <cit.> discusses test case prioritisation based on source code changes, software quality metrics, test coverage data, and code coverage-based clustering. It reduces the impact of similar test cases covering the same code and improves fault detection performance.
For the understanding, generation and improvement of configurations for software deployments, <cit.> explores the role of ML in DevOps for intelligent release management. It suggests combining continuous monitoring, predictions of the likelihood of deployment failures, root cause analysis, and pipeline optimisation to reduce deployment failures and improve release management efficiency and software quality.
Solutions which address several goals and tasks are associated with the SE process: For the improvement of the SE process, <cit.> introduces Retecs, a method for automatically learning test case selection and prioritisation in continuous integration, aimed at minimising the time between code commits and developer feedback on failed tests. Retecs uses reinforcement learning to select and prioritise test cases based on their execution time, previous execution history and failure rates. It effectively learns to prioritise error-prone test cases by following a reward function and analysing past CI cycles.
For the understanding of the SE process, <cit.> presents the T-BERT framework for generating trace links between source code and natural language artefacts such as requirements or code issues. It demonstrates superior accuracy and efficiency for software traceability, especially in data-limited environments.
The presented selection of recent research demonstrates the feasibility of the added dimensions and facets of the AI4SE taxonomy compared to <cit.>. It is our belief that it is only a matter of time before research results are presented in primary studies for the open aspects. For further reading, <cit.> may be used.
§ AI-ASSISTED BIG MODELS IN SOFTWARE ENGINEERING
In addition to the development of AI applications in SE, the availability of models for SE for MDSE has increased significantly. This is also due to the fact that in addition to the naturalness hypothesis of <cit.>, there is
the modelling hypothesis:
Software is (also) a formalized communication between humans and computers; model corpora, software corpora, and natural language corpora have similar statistical properties; the properties of model corpora can be employed to develop more efficacious software engineering tools.
In their critical review, <cit.> suggest that while MDSE may have potential in the context of large-scale, distributed industrial software development, it is not a guaranteed success. However, the advent of big code and AI has opened up new avenues for integrating AI applications, particularly ML, into existing SE models. As demonstrated by <cit.>, the intrinsic formal graph structures of SE models can be leveraged to facilitate the preparation of models for AI/ML applications. Conversely, there is a necessity for elevated abstraction levels to enhance the efficacy of AI-assisted automated coding <cit.>.
When both factors are considered together, it becomes evident that there are significant opportunities for MDSE.
(1) The provision of large amounts of top-down models on coding platforms.
(2) The generation of bottom-up models from big code.
(3) The formation of Big Models as a basis for more advanced empirical MDSE and further improvements of MDSE.
(4) The application of AI methods on Big Models and for Big Models.
(5) The shift towards the new paradigm of pair modelling in SE, which altogether will turn into the next generation of software engineering.
Regarding opportunity (1), <cit.> presented the first entities of the SEMI Software Engineering Models Index, a catalogue of model repositories, and invited further contributions to SEMI. <cit.> used another approach of mining GitHub for projects including Unified Modelling Language (UML) models, which could well be combined with the SEMI initiative. As a result of numerous efforts towards SE model collections, the Lindholmen dataset was developed <cit.> and reviewed in <cit.>.
In view of opportunity (2), extracting representational models from code is in fact the opposite of using models to generate code more efficiently. Since the design, specification, and maintenance of such models can be complex and time-consuming, various techniques have been developed to extract models from code and/or execution traces. These techniques include the extraction of all types of models including for example the extraction of information models, see e.g. <cit.>, of structural models, see e.g. <cit.>, and of behavioural models, see e.g. <cit.>. The process of extracting models from code and/or execution traces offers the advantage of retrieving models that are up-to-date with the code/traces, but it also carries the potential disadvantage of mismatch with the model representation/abstraction requirements.
With regard to opportunity (3), there are initial studies on Big Models such as <cit.> which explores the increasing role of modelling, especially in safety-critical software development. It surveyed a range of projects utilising the UML and identified collaboration as the primary rationale for employing models. This is because models facilitate communication and planning for joint implementation efforts within teams, including those who are not directly involved in modelling or are new to the team.
For opportunity (4), the application of AI methods to Big Models is elaborated in first studies like:
* <cit.> supporting the classification of UML diagrams and contributing to model understanding.
* <cit.> presenting an approach for comparing and merging model variants by incorporating techniques from information retrieval, natural language processing, and machine learning and contributing to model understanding and improvement.
* <cit.> describing an approach to learn model transformations from examples of source and target model pairs and being a contribution for model generation.
Last but not least for opportunity (5), the pair modelling paradigm naturally evolves from the insight that pair programming (see e.g. <cit.>) with AI tools based on large code models as pair programmers and partners in software development evolved into a very supportive application of AI in software engineering (see e.g. <cit.>).
Based on Big Models and large SE models for ML, i.e. large models for ML trained with SE models, pair modelling will emerge as a new model-driven software development technique in which a software engineer and an AI tool collaborate in software development. The engineer or tool in the role of the driver writes or improves software artefacts, including models, while the other, in the role of the observer, reviews each element of the software artefact as it is typed into an artefact. The one in the role of observer is also in the role of navigator, considering systemic and strategic aspects of software development: The navigator identifies potential improvements and possible upcoming problems that need to be addressed later on if they cannot be avoided. This allows the driver to focus on the development aspects without losing sight of the crosscutting and overarching aspects of software development. As in pair programming, in pair modelling the observer serves as an assurance of the overall resulting software quality and as a guide to high quality software engineering. The two, the driver and observer/navigator, can switch roles. Indeed, the exact interplay of driver and observer/navigator in pair modelling depends on the capabilities of the AI tool and the level of support, according to the AI4SE taxonomy, that it can provide to the software engineer in either role.
Hence, as AI is increasingly applied to Big Models, it is necessary to incorporate the specific aspects of modelling into the AI4SE taxonomy, which currently does not distinguish between modelling and other tasks related to software development. At this early stage of development, it is not yet clear whether modelling should be treated as a separate task or like software quality and software security as a cross-cutting concern affecting numerous facets within the AI4SE taxonomy.
§ SUMMARY
After reviewing the state of the practice of model-driven software engineering, big code on open source software platforms, and the application of AI in software engineering, the taxonomy of AI-assisted software engineering is developed and used to categorise recent publications. In addition, the concept of Big Models is defined and explored with a view to future opportunities for further adoption of model-driven software engineering. Finally, recent research on the application of AI approaches to big models is reviewed and categorised.
Future work will further explore the evolution of Big Models and AI applications in MDSE, potentially leading to an update of the new AI4SE taxonomy.
§.§.§ Acknowledgements
The ideas presented in this paper have been developed through constructive dialogue in the Feldafinger Kreis, the German Testing Board and the Association for Software Quality and Education. The author acknowledges that while authored by her, the writing process was aided by AI tools, specifically ResearchRabbit for determining related work, and ChatGPT and DeepL for fine-tuning the wording.
The author has no competing interests to declare that are relevant to the content of this article.
splncs04nat
| Software engineering (SE) is a field of informatics/computer science that addresses the development and analysis of systematic approaches for the design, development, verification & validation and maintenance of software and of software-based systems[In this paper, the term "software" will be used as a general reference to the collective term for computer programs and related SE artefacts.]. It establishes principles and identifies optimal practices for the production and operation of software, with the aim of ensuring the reliability, security, scalability, maintainability and alignment of software with user, business and societal requirements. The objective of software engineering is the production of high-quality software that is cost-effective, delivered in a timely manner and can be readily adapted as requirements evolve.
The term software engineering was coined in 1967 with the very first software crisis <cit.>, but still software engineering practices and the resulting software quality have not kept pace with the quality levels required in critical domains or application contexts. Recently, another dramatic story was added to the software horror show: The root cause analysis for the millions of outages caused by the CrowdStrike Falcon sensor <cit.> found that the number of fields in an IPC template type was not validated, a runtime array bounds check was missing, the content validator contained a logic error, template type testing was too limited, and template instances were not tested within the content interpreter. This was unprofessional software development that ignored state of the art of SE.
However, software outages are not the only indication that SE still has no full answers to the challenges posed by the increasing complexity of software-based systems and the diversity of requirements placed on them: <cit.> analysed that [t]here exists a statistically significant medium sized difference between open and closed source projects: the former have a DD [(defect density)] that is 4 defects per KLoC lower than the latter. Java projects exhibit a significantly lower DD than C projects, 4.1 defects per KLoC on average: In general the Size appears to be negatively correlated to DD: the larger the project the lower the DD. In particular, large projects are 10 times less defective than medium ones.. According to <cit.>, the defect density (DD) of software projects within the industrial sector is estimated to fall within the range of 1 to 25. This suggests that a software program comprising one million lines of code (MLoC) may contain between 1,000 and 25,000 defects. Given the extensive research conducted on the subject of software size in terms of lines of code, the derivation of estimates regarding the approximate number of defects in a software is a relatively straightforward process. To illustrate, <cit.> compares typical software such as operating systems, browsers, and office suites: The Windows 10 operating system has approximately 80 MLoC, Ubuntu 50 MLoC, MacOS X 84 MLoC, Android 12 MLoC, iOS 12 MLoC, the browser Google Chrome 6.7 MLoC, Mozilla Firefox 21 MLoC, or the office suites Microsoft Office 2013 45 MLoC, Apache OpenOffice 19 MLoC, and LibreOffice 10 MLoC. Therefore, the challenges associated with the development of accurate and suitable software remain significant.
<cit.> argued that the primary challenge in software development lies in the specification, design, and testing of complex systems, rather than in the labour for representation of the system or testing its fidelity. It emphasized the distinction between essential difficulties (inherent complexity) and accidental difficulties (extraneous challenges), stating that past advances, like high-level programming languages, have only reduced the latter. The paper suggested that addressing accidental difficulties would not lead to major improvements in software development, as essential difficulties remain fundamental. <cit.> critiqued the notion of "accidental difficulties", arguing that these so-called accidents are often the result of negligence or poor practices, not mere happenstance. This paper advocated for a disciplined, science-based, and model-driven approach to software development, similar to traditional engineering disciplines. Building on this, <cit.> highlighted the growing societal dependence on autonomous and intelligent software systems, suggesting that new approaches are needed to ensure not only traditional software quality facets (safety, security, etc.), but also to address socio-technical and socio-political implications, particularly in human-machine collaboration.
In model-driven development (MDD <cit.>), also known as model-driven software engineering <cit.>, models are original artefacts that are engineered with the intention of facilitating the top-down construction of complex software. Furthermore, models are utilised during runtime to enable the monitoring, verification & validation of software operations, as well as the optimisation of its performance <cit.>.
As has been the case with the comprehensive incorporation of modelling in software engineering, artificial intelligence (AI) methodologies have been employed in software engineering activities from the outset, see for example <cit.>. In the context of significant advancements in AI-driven software engineering in 2021, Gartner's hype cycle for emerging technologies <cit.> projected that AI-enhanced software engineering would reach the peak of inflated expectations within a five-to-ten-year timeframe, subsequently entering a phase of productivity stabilization. Since 2023, it is estimated that AI-augmented software engineering will reach the productivity plateau already in two to five years, while in previous iterations of the hype cycle, AI-augmented development and AI-assisted design were identified as emerging technologies on the rise. It was interesting to observe the evolving business expectations and the convergence of trends for AI-supported design and development into a unified technological trend for AI-supported engineering. Moreover, it is encouraging to observe that software engineering is receiving particular attention and prioritisation, which is indicative of the growing importance of this field.
This paper addresses the role of models in software engineering in Section <ref>, the evolution of big code on coding platforms in Section <ref>, and the utilisation of AI for coding and software engineering in Section <ref>. It presents a novel taxonomy of AI for software engineering (AI4SE) to facilitate a deeper understanding of this evolving field. Section <ref> examines the practice of contributing software models to coding platforms, which is increasingly being done with software code. In line with the increasing prevalence of large-scale models, it examines recent developments in AI-enhanced modelling and model utilisation within the domain of software engineering. It also presents an updated version of the AI4SE taxonomy, designated as AI4BM, to outline the current status and future prospects of AI applications for Big Models. An outlook concludes the paper. | null | null | null | null | null |
http://arxiv.org/abs/2409.17066v1 | 20240925162545 | VPTQ: Extreme Low-bit Vector Post-Training Quantization for Large Language Models | [
"Yifei Liu",
"Jicheng Wen",
"Yang Wang",
"Shengyu Ye",
"Li Lyna Zhang",
"Ting Cao",
"Cheng Li",
"Mao Yang"
] | cs.AI | [
"cs.AI"
] |
VPTQ: Extreme Low-bit Vector Post-Training Quantization for Large Language Models
Yifei Liu, Jicheng Wen, Yang Wang, Shengyu Ye, Li Lyna Zhang, Ting Cao, Cheng Li, Mao Yang
==========================================================================================
§ ABSTRACT
Scaling model size significantly challenges the deployment and inference of Large Language Models (LLMs).
Due to the redundancy in LLM weights, recent research has focused on pushing weight-only quantization to extremely low-bit (even down to 2 bits).
It reduces memory requirements, optimizes storage costs, and decreases memory bandwidth needs during inference.
However, due to numerical representation limitations, traditional scalar-based weight quantization struggles to achieve such extreme low-bit.
Recent research on Vector Quantization (VQ) for LLMs has demonstrated the potential for extremely low-bit model quantization by compressing vectors into indices using lookup tables.
In this paper, we introduce Vector Post-Training Quantization (VPTQ) for extremely low-bit quantization of LLMs.
We use Second-Order Optimization to formulate the LLM VQ problem and guide our quantization algorithm design by solving the optimization.
We further refine the weights using Channel-Independent Second-Order Optimization for a granular VQ.
In addition, by decomposing the optimization problem, we propose a brief and effective codebook initialization algorithm.
We also extend VPTQ to support residual and outlier quantization, which enhances model accuracy and further compresses the model.
Our experimental results show that VPTQ reduces model quantization perplexity by 0.01-0.34 on LLaMA-2, 0.38-0.68 on Mistral-7B, 4.41-7.34 on LLaMA-3 over SOTA at 2-bit, with an average accuracy improvement of 0.79-1.5% on LLaMA-2, 1% on Mistral-7B, 11-22% on LLaMA-3 on QA tasks on average.
We only utilize 10.4-18.6% of the quantization algorithm execution time, resulting in a 1.6-1.8× increase in inference throughput compared to SOTA.
Source Code:
<https://github.com/microsoft/VPTQ>
*Contribution during internship at Microsoft Research
♢Corresponding author
This paper is the result of an open-source research project, and the majority work of the project is accomplished in April 2024.
§ INTRODUCTION
Large language models (LLMs) have shown excellent performance across various complex tasks as their sizes increase.
However, the enormous weight of LLMs poses significant challenges for efficient inference and practical deployment.
This enormous size strains memory capacity and hard-disk storage and requires substantial memory bandwidth for inference.
Weight-only quantization is a mainstream model compression technique that effectively reduces the model's size by representing floating-point numbers with fewer bits.
In weight-only quantization of LLMs, a prominent method is Post-Training Quantization (PTQ).
PTQ quantizes model weights directly without retraining the model. Typically, PTQ only involves converting model weights into lower bit fixed-point numbers.
Currently, the main approach in PTQ is scalar quantization, which converts each scalar weight in the model into a lower bit value.
Recent work <cit.> have achieved near-original model accuracy with 3-4 bit quantization.
Table <ref> summarizes the characteristics of typical scalar quantization method (GPTQ, AWQ) research in LLM.
However, due to the limitations of numerical representation, traditional scalar-based weight quantization struggles to achieve such extremely low-bit levels.
For instance, with 2-bit quantization, we can only use four numerical values to represent model weights, which severely limits the range of weight representation.
Although BitNet<cit.> has enabled quantization aware training that can quantize weights to below 2 bits during the model's pre-training phase, this approach requires substantial GPU cluster resources to maintain reasonable accuracy.
Recent studies <cit.> have explored an efficient method of weight-only quantization known as Vector Quantization (VQ).
VQ assigns weight vectors to indices by pre-defined codebooks (lookup tables).
VQ compresses data by mapping high-dimensional vectors to a set of predefined lower-dimensional vectors in a lookup table. This method substantially reduces the storage requirements for data, while allowing for the quick reconstruction of original vectors through simple index references.
VQ achieves more effective data compression than scalar quantization by leveraging correlations and redundancies across different data dimensions.
By detecting and leveraging interdependence, VQ can encode complex multidimensional data with fewer bits, thus achieving higher compression ratios and reduced bit width.
While Vector Quantization (VQ) shows promise in extreme low-bit weight compression for Large Language Models (LLMs), it faces several significant challenges. Table 1 compares the strengths and weaknesses of various VQ algorithms in multiple dimensions.
The first challenge is ensuring the accuracy after extreme low-bit VQ quantization.
Unlike scalar quantization, the quantization granularity of VQ algorithms is vector-based.
The quantization may introduce additional accumulation errors due to the simultaneous quantization of multiple numbers.
For example, GPTVQ <cit.> uses the Second-Order Optimization method to implement PTQ.
However, GPTVQ accumulates quantization errors within vector quantization, leading to an inevitable increase in quantization errors as the vector length increases. This prevents the use of longer vectors and, consequently, limits the compression ratio.
The second challenge lies in efficiently executing VQ quantization on LLMs.
VQ can compress vectors in the weight matrix into indices, but these indices are discrete, non-differentiable integers. This introduces difficulties in implementing VQ quantization methods through model training. For instance, AQLM<cit.> employs beam search and backpropagation to quantize and update centroids in lookup tables. VQ necessitates additional gradient estimation, slowing the convergence of model quantization training and requiring intensive training efforts to achieve better accuracy.
The third challenge arises as the dequantization overhead in VQ model inference.
To reduce quantization errors, complex data preprocessing methods may be used to process weights.
QuIP# <cit.> introduces incoherence processing using the randomized Hadamard transform for the weight matrix before VQ.
These preprocessing steps can reduce quantization errors and improve model accuracy.
However, preprocessing must be performed in real time during model inference, which can severely impact throughput in inference.
VPTQ seeks to bypass the limitations of current VQ by offering a lightweight and efficient approach exclusively for extreme low-bit weight quantization.
In this paper, we present Vector Post-Training Quantization (VPTQ), a novel approach for extremely low-bit quantization of LLMs.
* VPTQ achieves SOTA accuracy results on extremely low-bit LLMs. We formulate the quantization problem as an optimization problem and employ Second-Order Optimization to guide our quantization algorithm design. By Channel-Independent Second-Order Optimization, VPTQ reduces model quantization perplexity by 0.01-0.34, 4.41-7.34, and 0.38-0.68 on LLaMA-2/3/Mistral-7B, respectively, over SOTA at 2-bit, with an accuracy improvement of 0.79-1.5%, 11-22%, and 1% on LLaMA-2/3/Mistral-7B in QA tasks on average.
* VPTQ can transform LLMs into extremely low-bit models with a minor quantization algorithm overhead. Under the guidance of the optimization problem, we transform the quantization algorithm into a heuristic algorithm for solving the optimization problem.
We also analyze and propose a brief and effective codebook initialization algorithm to reduce the extra overhead of centroid training and updates. Experiments show that VPTQ only requires 10.4-18.6% of the quantization algorithm execution time compared to existing SOTA results.
* VPTQ has low dequantization overhead. The VPTQ algorithm quantizes all the weights in every Linear Operator in the model into an index and codebooks. During model inference, we only need to dequantize the weight matrix by reading centroids from the codebook according to the index before executing the operator. The models quantized by VPTQ achieve a 1.6-1.8× improvement in inference throughput compared to SOTA.
§ BACKGROUND AND MOTIVATION
§.§ Post Training Quantization in LLM
Post-Training Quantization (PTQ)<cit.> aims to decrease model weight size by simplifying the numerical representation and seeking to maintain the model's accuracy without retraining the model.
We can formulate PTQ as the following optimization problem:
min 𝔼 [ ℒ(𝐗, 𝐖 + Δ𝐖) - ℒ(𝐗, 𝐖)]
≈Δ𝐖^T · g(𝐖) + 1/2Δ𝐖^T · H(𝐖)·Δ𝐖
where 𝐖 denotes the original model weights, Ŵ the quantized weights, and Δ𝐖 = Ŵ - 𝐖 the weight quantization error.
The loss of the model task is ℒ.
The optimization object is to minimize the impact of model quantization on the model task, which means minimizing the expected deviation of the loss function.
PTQ typically employs a concise and accurate method for analyzing the above optimization problem: Second-Order Optimization. Following a Taylor series expansion, this method breaks down the optimization goal into first-order, second-order, and higher-order terms.
g(𝐖) and H(𝐖) represent the gradient and Hessian of task loss ℒ, respectively.
It often assumes that the model has already reached a local optimum before model quantization, which means that the first-order term is nearly zero.
Higher-order terms exert a minor effect on the optimization goal, and we typically disregard interactions among weights between different layers.
Consequently, we can simplify the optimization problem by focusing on optimizing the second-order term, and define the following optimization problem:
min_Δ𝐖 Δ𝐖^T · H(𝐖) ·Δ𝐖,
s.t. Δ𝐖 = 0
The objective of the optimization problem is to minimize the second-order error introduced by model quantization, subject to the constraint that the change in model weights is kept as small as possible, i.e., Δ𝐖 = 0.
§.§ Vector Quantization in Neural Networks
VQ is a key method for efficient lossy data compression<cit.>.
Its objective is to reduce the distortion by mapping high-dimensional original data to a lower-dimensional space represented by a lookup table (Eq. <ref>).
VQ maps original vectors (𝐖') from the vector space to a finite set of vectors, which is commonly referred to as a codebook (lookup table, 𝒞).
Each vector in the original space is approximated by the closest vector (centroid 𝒞_i) in the codebook.
min_i ∈ [k] ‖v - 𝒞_i‖^2, ∀v∈𝐖'
VQ selects the nearest centroid 𝒞_i in the lookup table, i.e., the one that minimizes the Euclidean distance to the input vector v.
The optimization problem aims to find the index i of the centroid with the smallest distance to v.
Thus, each input vector is represented by its most similar centroid, minimizing the total distortion.
Recent research has explored the use of VQ for model weight quantization <cit.>. These studies attempt to compress the embedding layer, the convolution layer, and the classification layer of neural networks using VQ. Figure <ref> illustrates an example of applying VQ to compress model weights on a weight matrix.
For a weight matrix 𝐖 with dimensions M × N, we reshape 𝐖 into vectors of length v as 𝐖' (step 202).
The number of reshaped vectors should be M × N/v.
Next, we employ k-means or other clustering algorithms to build a codebook (step 203).
The constructed codebook contains k centroid vectors, each with v dimensions.
Applying the VQ algorithm directly often does not yield an acceptable accuracy.
Typically, PTQ algorithms adjust the model index and centroid to enhance the accuracy of the quantized model (step 203).
During model inference, each operator in the model first dequantizes the original weight matrix from the lookup table (codebook) by index and centroid.
Unlike scalar quantization, VQ keeps the index and centroid in quantized weight.
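As an illustration of the reshape-cluster-assign pipeline described above, the following sketch builds a codebook with k-means and stores only indices plus centroids; NumPy and scikit-learn are implementation choices of this example, not the paper's implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def vq_quantize(W: np.ndarray, v: int = 8, k: int = 256):
    """Quantize a weight matrix into (indices, codebook) via vector quantization."""
    M, N = W.shape
    vectors = W.reshape(-1, v)                    # reshape into M*N/v vectors of length v
    kmeans = KMeans(n_clusters=k, n_init=4).fit(vectors)
    codebook = kmeans.cluster_centers_            # k centroids, each of dimension v
    indices = kmeans.predict(vectors)             # nearest-centroid index per vector
    return indices.astype(np.uint8), codebook

def vq_dequantize(indices, codebook, shape):
    """Reconstruct the weight matrix by looking up centroids per index."""
    return codebook[indices].reshape(shape)

# Example usage (commented out because clustering a full 4096x4096 matrix is slow):
# W = np.random.randn(4096, 4096).astype(np.float32)
# idx, cb = vq_quantize(W)
# W_hat = vq_dequantize(idx, cb, W.shape)
```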
The equivalent compression ratio of VQ can be formulated as: total original model bits / (codebook bits + index bits).
The equivalent quantization bitwidth is: original bit width / compression ratio.
For example, for a 4096 × 4096 FP16 weight matrix with vectors of length v=8 and 256 centroids, the compression ratio is (16 × 4096 × 4096) / (16 × 8 × 256 + 8 × 4096 × 4096 / 8) ≈ 15.9.
The equivalent bitwidth is approximately 1.002 bits.
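The arithmetic of this example can be checked with a few lines of Python; the factor of 16 in the codebook term reflects FP16 centroid storage, and the helper below is illustrative only.

```python
import math

def vq_compression(M, N, v, k, weight_bits=16):
    """Equivalent compression ratio and bitwidth for plain vector quantization."""
    original_bits = weight_bits * M * N
    codebook_bits = weight_bits * v * k            # k centroids of length v, stored in FP16
    index_bits = (M * N // v) * math.log2(k)       # one log2(k)-bit index per length-v vector
    ratio = original_bits / (codebook_bits + index_bits)
    return ratio, weight_bits / ratio

ratio, bits = vq_compression(4096, 4096, v=8, k=256)
# ratio ~= 15.97, bits ~= 1.002 equivalent bits per weight
```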
§.§ Vector Quantization in LLMs
While VQ has been applied to weight quantization, the following significant challenges persist when quantification of LLM.
We summarize the benefits and weaknesses of recent research <cit.> techniques in Table <ref>.
The number of parameters in LLMs is enormous, which requires quantizing the model using lightweight methods to avoid excessive resource consumption.
AQLM <cit.> utilizes gradient descent to train each layer of the VQ-quantized model and simultaneously trains across multiple layers using calibration data.
It achieves effective compression through additive quantization and joint optimization of the codebook, which can achieve high accuracy. However, due to AQLM's use of backpropagation for model training, significant GPU hours and memory are required to achieve better accuracy, especially when dealing with LLMs with massive parameters.
GPTVQ <cit.> utilizes the Second-Order Optimization method to implement PTQ.
However, GPTVQ accumulates quantization errors within vector quantization, leading to an inevitable increase in quantization errors as the vector length increases.
It prevents the use of longer vectors and consequently limits the compression ratio.
QuIP# <cit.> introduces an incoherence processing using the randomized Hadamard transform for the weight matrix before VQ.
The distribution of the processed weight matrix approximates sub-Gaussian distributed weight matrices, so a tiny codebook can be used to compress the matrix.
However, incoherence processing requires a significant amount of computation, despite QuIP# being able to compress LLM to extremely low-bit with low accuracy drop.
It requires significantly more computation for inference compared to the original LLM, resulting in low inference throughput.
§ VECTOR POST-TRAINING QUANTIZATION
§.§ VPTQ Algorithm
VPTQ leverages Second-Order Optimization and solves the optimization problem Eq.<ref> to achieve extreme low bit quantization.
Assume that a weight matrix is 𝐖∈ℝ^M × N, and a Hessian matrix collected from the current layer is 𝐇∈ℝ^M × M.
We denote the q-th column of the weight matrix as 𝐖̂_:,q.
The quantized column 𝐖̂_:,q can be represented as the transpose of concatenated centroid vectors
𝐖̂_:,q = (𝒞_0,𝒞_1, ... , 𝒞_M/v)^T.
When the weight matrix of the model is large, we can first split the weight matrix into multiple groups. Each group has its own independent codebook. This method allows us to flexibly divide the weight matrix into several submatrices (𝐖̂_:,q:q+(M/group num)) equal to the group number.
For clarity, we describe only one group in the following algorithm description.
Unlike GPTVQ, we quantize each column of the matrix independently, which we refer to as Channel-Independent Second-Order Optimization.
It greatly simplifies the complexity of VQ in Second-Order Optimization.
GPTVQ, on the other hand, quantizes v columns of the matrix (𝐖̂_M,v) at once, leading to larger errors and more complex transformations for problem optimization.
We use Lagrange Method to transform the optimization problem <ref> into an unconstrained optimization problem. The Lagrangian function L(Δ𝐖), and λ is the Lagrangian multiplier:
L(Δ𝐖) = Δ𝐖^T H (𝐖) Δ𝐖 + λΔ𝐖
The dual function g(λ) can be represented as:
g(λ) = -𝐇^-1_qqλλ^T - λ (𝐖̂_:,q - 𝐖_:,q)
Differentiating g(λ) with respect to λ and setting it to 0,
g'(λ) = -𝐇^-1_qqλ - (𝐖̂_:,q - 𝐖_:,q) ^T = 0
we can find that when λ^T = - (𝐖̂_:,q - 𝐖_:,q)/𝐇^-1_qq, the problem reaches an optimal solution.
By substituting λ^T into the optimization problem, we find that to minimize the error introduced by quantization, we need to minimize the impact on the Lagrangian function. Therefore, we can transform the quantization problem into minimizing:
Δ L (Δ𝐖̂) = ∑‖v - 𝒞‖^2 / (2 𝐇^-1_qq)
We find that when quantizing a column vector each time, we only need to consider minimizing ∑‖v - 𝒞‖^2, which is to find the closest centroid in Euclidean distance.
It precisely aligns with the optimization of VQ.
Moreover, since VPTQ quantizes the weight matrix column by column, 𝐇^-1_qq is constant when quantizing each column, so we do not need to consider Hessian when finding the centroid.
After quantizing a column of weight matrix, we needs to update the current quantization error to the unquantized part through:
Δ𝐖 = (Ŵ_:,q - 𝐖_:,q)/𝐇^-1_qq𝐇_q,:
This propagates the current quantization error to the following unquantized columns.
Since GPTVQ quantizes v columns at the same time, the quantization error can only spread to other unquantized columns once all v columns have been quantized.
This causes more error to accumulate during quantization, resulting in a decrease in model accuracy.
Similar conclusions can be drawn from Table <ref>.
Algorithm <ref> provides a detailed description of the steps to solve the optimization problem and quantize the weights according to the above analysis.
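For intuition, a simplified NumPy sketch of this column-by-column procedure is shown below. It assumes the inverse Hessian has already been computed, follows the GPTQ-style error-feedback convention, and omits grouping, outliers, and residual codebooks; it is a sketch of the idea, not the released implementation.

```python
import numpy as np

def vptq_quantize_linear(W, H_inv, codebook, v):
    """Channel-independent second-order VQ sketch: quantize W column by column.

    W        : (M, N) weight matrix (a copy is modified with error feedback)
    H_inv    : (N, N) inverse Hessian of the layer inputs
    codebook : (k, v) centroids; M must be divisible by v
    Returns the (M/v, N) matrix of centroid indices.
    """
    W = np.array(W, dtype=np.float64, copy=True)
    M, N = W.shape
    indices = np.zeros((M // v, N), dtype=np.int64)
    for q in range(N):
        col = W[:, q].reshape(-1, v)
        # nearest centroid per length-v vector: Euclidean distance suffices,
        # since 1 / (2 * H_inv[q, q]) is a constant factor within column q
        d = ((col[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        idx = d.argmin(axis=1)
        q_col = codebook[idx].reshape(-1)
        indices[:, q] = idx
        # GPTQ-style error feedback: spread this column's quantization error
        # over the columns that have not been quantized yet
        err = (W[:, q] - q_col) / H_inv[q, q]
        W[:, q + 1:] -= np.outer(err, H_inv[q, q + 1:])
        W[:, q] = q_col
    return indices
```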
Distinguish VPTQ from GPTQ and GPTVQ:
Compared with GPTQ, VPTQ employs vector representations in the quantization, which choose the vector closest to the original matrix to represent the original data.
As VQ can use a larger codebook to store the quantized data, it covers a wider range of numerical distributions compared to the scalar quantization of GPTQ, thereby achieving better accuracy.
Table <ref> reveals that VPTQ significantly outperforms GPTQ under extremely low bit quantization.
Moreover, GPTVQ quantizes multiple columns simultaneously, which makes the propagation of quantization errors to unquantized columns more challenging.
Furthermore, the quantization errors in GPTVQ accumulate as the vector length increases, hindering GPTVQ from using longer vector lengths for weight compression (limited to only 1-4 bits).
It significantly reduces the compression ratio of VQ.
On the other hand, VPTQ is capable of compressing weights using longer vectors (> 8 bits) and representing data with a larger codebook. Table <ref> shows the better accuracy achieved by VPTQ than GPTVQ.
§.§ Optimization in VPTQ
§.§.§ Hessian-Weighted Centroid Initialization
VTPQ algorithm requires the initialization of centroids in the codebooks prior to quantization.
Properly initializing centroids can reduce quantization errors and improve model accuracy.
A straightforward method is to perform K-means clustering on the weight matrix as centroids (Eq.<ref>).
However, it does not consider the optimization object in Eq.<ref>, leading to a significant accuracy drop <cit.>.
We can transform the optimization object by leveraging the cyclic property of matrix traces and the Hadamard product. We refine the optimization objective as:
Δ𝐖^T Δ𝐖⊙𝐇
= ∑_i=0^n-1 h_i,i ‖Δ𝐖_:,i‖^2
+ ∑_i=0^n-1∑_j=0,j ≠ i ^n-1 h_i,j (Δ𝐖_:,i·Δ𝐖_:,j)
Because the Hessian matrix is predominantly diagonal <cit.>, we split the proxy error into two terms. The first term represents the dominant diagonal elements of the initial error matrix, which significantly impact the quantization error.
The second term is the interaction of a single value in weight quantization with others.
Because the Hessian matrix is predominantly diagonal, we can prioritize optimizing the first term through centroid initialization. We can view the first term as a Weighted K-means Clustering problem<cit.>.
Since this problem is well-studied, we can directly solve it to achieve efficient and accurate centroid initialization.
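Because the first term reduces to a weighted k-means problem, the initialization can be sketched with an off-the-shelf solver, weighting each length-v vector by the Hessian diagonal entry of its column; the use of scikit-learn's sample_weight and the column-wise vector layout are assumptions of this example.

```python
import numpy as np
from sklearn.cluster import KMeans

def hessian_weighted_init(W, H_diag, v=8, k=256):
    """Initialize centroids by weighted k-means on length-v vectors.

    Each vector cut from column i inherits the weight H_diag[i], so vectors
    from columns that matter more for the second-order error dominate the
    clustering objective.
    """
    M, N = W.shape
    vectors = W.T.reshape(-1, v)              # column-wise split into length-v pieces
    weights = np.repeat(H_diag, M // v)       # one weight per vector, repeated per column
    km = KMeans(n_clusters=k, n_init=4)
    km.fit(vectors, sample_weight=weights)
    return km.cluster_centers_
```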
§.§.§ Residual Vector Quantization
We enable Residual Vector Quantization (RVQ)<cit.> in VPTQ. RVQ improves vector quantization (VQ) by breaking down the compression of a weight matrix into two (or more) stages.
Each stage further compresses the residual error from the previous quantization stage:
Q(v_res) = min_i ‖v_res - 𝒞^res_i‖^2, where v_res = v - Q(v)
Unlike GPTVQ, VPTQ enables RVQ, which quantizes VQ quantization error using a separate lookup table for better representation and quantization.
By partitioning the encoding into multiple stages and reducing quantization error, RVQ not only achieves superior compression efficiency but also ensures a balance between quantization error, the size of lookup tables, and the memory requirements for indices.
During the decoding phase, VPTQ simply reads the centroids from these multiple lookup tables and combines them to reconstruct the original weight matrix.
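A minimal two-stage RVQ encode/decode can be sketched as follows; codebook construction is omitted and the function names are illustrative.

```python
import numpy as np

def nearest(vectors, codebook):
    """Index of the closest centroid for each vector."""
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

def rvq_encode(vectors, cb1, cb2):
    idx1 = nearest(vectors, cb1)
    residual = vectors - cb1[idx1]          # what the first stage failed to capture
    idx2 = nearest(residual, cb2)
    return idx1, idx2

def rvq_decode(idx1, idx2, cb1, cb2):
    return cb1[idx1] + cb2[idx2]            # sum of centroids reconstructs the vector
```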
§.§.§ Outlier Elimination
Recent studies on quantization in LLM have consistently observed a significant presence of outliers in activation <cit.>.
Outliers, while a small portion (~1% of the matrix), heavily affect the quantization error and degrade model accuracy.
Outliers typically result in large values in the diagonal elements of the Hessian matrix.
During centroids initialization in Sec.<ref>, VPTQ already considers these Hessian diagonals as weights in K-means, allowing VPTQ to better quantify the error introduced by outliers.
Q(v_outlier) = min_i ‖v_outlier - 𝒞^outlier_i‖^2
Furthermore, VPTQ flexibly partitions the weight matrix and uses a separate outlier lookup table to quantize the matrix tiles most affected by outliers.
It allows us to effectively trade off model accuracy and quantization overhead.
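The selection of outlier columns can be sketched directly from the Hessian diagonal, for example by taking the top-N% columns with the largest h_i,i; the exact scoring rule below is an assumption for illustration.

```python
import numpy as np

def split_outlier_columns(H_diag, outlier_percent=1.0):
    """Return (outlier_cols, normal_cols) ranked by the Hessian diagonal."""
    n = len(H_diag)
    n_outliers = max(1, int(n * outlier_percent / 100))
    order = np.argsort(H_diag)[::-1]          # columns with the largest h_ii first
    return order[:n_outliers], order[n_outliers:]
```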
§ END TO END QUANTIZATION ALGORITHM
In this section, we will detail the end-to-end model quantization algorithm (Algorithm 2).
The algorithm takes the original model, vector length v, centroid number k, and Hessian matrices 𝐇 as inputs.
It starts by iterating over each layer l of the model. As each layer's quantization only relates to the current layer and the Hessian matrix, we can fully parallelize the quantization of each layer on GPUs.
In each layer, we first quantize the weight of each Linear Operator (matrix multiplication of input and weight). If we enable the outlier option, the algorithm first selects outlier columns following Section 3.2 and initializes the outlier centroids 𝒞_outlier.
Then, VPTQ is applied to the outlier weights 𝐖_outlier using the outlier centroids, generating the quantized weights 𝐖'_outlier.
Next, the algorithm initializes the centroids 𝒞 for the remaining columns and applies VPTQ to the weights 𝐖 using these centroids to produce the quantized weights w'.
Lastly, if residual quantization is enabled, the algorithm initializes the residual centroids 𝒞_res.
It applies VPTQ to the residual error between the original weights and the quantized weights (𝐖 - 𝐖'), using the residual centroids.
The quantized weight is updated as 𝐖”.
After processing all the operators, the algorithm will fine-tune the layer l if we enable layer fine-tuning.
The loss function is the Mean Squared Error (MSE) between the original and quantized computations.
In layer-wise fine-tuning, we only update the normalization operator (e.g. RMSNorm) and centroid.
These parameters only comprise a small fraction of the entire layer, and we can complete the fine-tuning quickly with limited memory. After each layer completes quantization and fine-tuning, we can further fine-tune the entire model as other PTQ methods used
<cit.>.
Once the algorithm processes all layers, it outputs the quantized model.
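In PyTorch terms, the layer-wise fine-tuning step can be sketched as follows: only the centroids and the normalization parameters receive gradients, and the loss is the MSE between the original and quantized layer outputs on the calibration inputs. The module and attribute names are placeholders rather than the released implementation.

```python
import torch

def finetune_layer(quant_layer, calib_inputs, ref_outputs,
                   centroids, norm_params, lr=1e-4, iters=100):
    """Layer-wise fine-tuning: train centroids + norm params against the FP16 layer output."""
    params = list(centroids) + list(norm_params)         # everything else stays frozen
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(iters):
        for x, y_ref in zip(calib_inputs, ref_outputs):
            y = quant_layer(x)                           # dequantize-and-forward inside the layer
            loss = torch.nn.functional.mse_loss(y, y_ref)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return quant_layer
```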
The end-to-end VPTQ algorithm quantizes all the weights in every Linear Operator in the model into an index and a codebook (𝒞).
During model inference, we only need to dequantize the weight matrix, by reading centroids from the codebook according to the index before executing the operator.
§ EXPERIMENTS AND EVALUATIONS
§.§ Settings
Algorithm Baseline We focus on weight-only quantization. The detailed quantization parameters (such as vector length, codebook numbers) and fine-tuning parameters of our VPTQ are shown in Appendix <ref> . Following <cit.>, our calibration data consists of 128 random segments of the C4 dataset <cit.>.
Models and Datasets We benchmark accuracy on LLaMA-2 <cit.>, LLaMA-3 families <cit.>, and Mistral.
Following previous work <cit.>, we report perplexity on language modeling tasks (WikiText-2 <cit.>, C4 <cit.>). We also employ lm-eval-harness <cit.> to perform zero-shot evaluations on common sense QA benchmarks (PIQA <cit.>, HellaSwag <cit.>, WinoGrande <cit.>, ARC <cit.>) . Detailed configuration is in Appendix <ref>
Baselines For LLaMA-2 and Mistral models, we compare VPTQ against GPTQ, GPTVQ, DB-LLM, QuIP# and AQLM.
To account for the different overheads resulting from varying codebook constructions, we provide results with comparable bit widths to facilitate a fair comparison.
For LLaMA-3 models, we use the results of <cit.>.
However, due to alignment issues with the C4 dataset, we only show results for WikiText and QA tasks. Because LLaMA-3 models are new and running quantization ourselves is costly, we do not have results for QuIP# and AQLM.
§.§ Accuracy Evaluation
Results on LLaMA-2 model:
We compare VPTQ with QuIP#, AQLM, GPTVQ, DB-LLM and GPTQ on LLaMA-2 model.
First, we discuss the results of 2 bit quantization. As shown in Table <ref>, GPTQ, as a scalar quantization method, performs poorly with unusable accuracy.
While DB-LLM and GPTVQ perform better, they still experience significant performance drops, with WikiText-2 perplexity increasing by 2. The significant accuracy drop in GPTVQ, despite being a vector quantization algorithm, is due to two factors: the use of shorter vector lengths, which introduces higher quantization loss, and the choice to update weights every v columns, which leads to cumulative errors. Therefore, we primarily focus on comparing VPTQ with the state-of-the-art 2 bit quantization methods QuIP# and AQLM which both choose longer vector lengths.
Table <ref> includes the average scores for the five QA tasks mentioned in <ref>.
VPTQ outperforms QuIP# and AQLM on 7B and 13B models.
For the 7B model, VPTQ achieves a further reduction in WikiText-2 perplexity by 0.5 and 0.3 compared to the previous best results
at 2-2.02 bits and 2.26-2.29 bits, respectively. In QA tasks, the VPTQ 2.26-bit model surpasses the AQLM 2.29-bit model with an average accuracy increase of 1%. For the 13B model, the VPTQ 2.02-bit model shows a slight improvement over QuIP#, and the 2.18-bit model outperforms AQLM in QA accuracy by 1.5%.
On LLaMA-2-70B model, we achieve similar perplexity (<0.02) and comparable QA results (<0.4%). The results for 3- and 4-bit quantization shown in Table <ref> are without end-to-end fine-tuning but are also comparable to AQLM and QuIP# which include end-to-end fine-tuning.
Results on LLaMA-3 and Mistral model:
Table <ref> presents VPTQ results on the LLaMA-3 model and Mistral-7b model.
In all 2-, 3-, and 4-bit quantizations of LLaMA-3 models, we significantly outperform GPTQ, DB-LLM, and QuIP, whose accuracy drops to unusable levels. VPTQ ensures an accuracy drop of <8% for the 8B model and <5% for the 70B model.
On the Mistral-7B model, our 2-bit performance surpasses both QuIP# and AQLM by 1% in QA accuracy. In 3-bit quantization, our perplexity is lower.
At 4-bit, results are comparable overall. In Table <ref>, GPTQ and GPTVQ use a context length of 2048. More detailed results are in Table <ref>. As bit width increases, the advantage of vector quantization diminishes, with GPTQ showing a similar WikiText-2 perplexity at 4-bit.
Inference throughput and quantization cost:
In Table <ref>, the `toks/s' column indicates the number of tokens generated per second during the decode phase of inference. VPTQ achieves a 2-9× speedup compared to QuIP# because QuIP# uses Hadamard Transform during decoding, which introduces O(n^2) multiplications and additions, significantly slowing the inference throughput. Compared to AQLM, VPTQ uses a smaller codebook, resulting in a lower decoding overhead. Therefore, our inference throughput for the 7B and 13B models is 1.6-1.8× faster than AQLM. As the model size increases, our codebook size becomes comparable to theirs, leading to similar inference throughputs for the 70B model.
The `cost/h' column represents the hours required for model quantization on 4× 80GB A100 GPUs. We achieved comparable or even better results than AQLM in only 10.4-18.6% of quantization algorithm execution time.
§ CONCLUSION
In this paper, we propose Vector Post-Training Quantization (VPTQ), a novel approach to achieving extremely low-bit quantization of LLMs by Vector Quantization.
Through the application of Second-Order Optimization, we have formulated the LLM Vector Quantization problem and directed the design of our quantization algorithm. By further refining the weights via Channel-Independent Second-Order Optimization, we have enabled a more granular VQ.
VPTQ also includes a brief and effective codebook initialization algorithm, achieved by decomposing the optimization problem. We have extended VPTQ to support residual and outlier quantization, which not only improves model accuracy but also further compresses the model size.
Our experimental results demonstrate the effectiveness and efficiency of VPTQ.
The perplexity of the quantized models is reduced by 0.01-0.34 on LLaMA-2, 0.38-0.68 on Mistral-7B, 4.41-7.34 on LLaMA-3 over SOTA at 2-bit, with an average accuracy improvement of 0.79-1.5% on LLaMA-2, 1% on Mistral-7B, 11-22% on LLaMA-3 on QA tasks.
Furthermore, we achieved these results only using 10.4-18.6% of the execution time of the quantization algorithm, leading to a 1.6-1.8× increase in inference throughput compared to SOTA.
These results underscore the potential of VPTQ as an efficient and powerful solution for the deployment and inference of LLMs, particularly in resource-constrained settings.
§ ACKNOWLEDGEMENT
We thank for James Hensman for his crucial insights into the error analysis related to Vector Quantization (VQ), and his comments on LLMs evaluation are invaluable to this research.
§ LIMITATIONS
Related research on PTQ <cit.> has adopted end-to-end model fine-tuning after the PTQ phase. Compared to other related works, VPTQ can better quantize the model in the PTQ phase, which simplifies and reduces the cost and overhead of model fine-tuning.
Due to GPU resource constraints, we cannot fine-tune larger models (70B) for longer iterations and more tokens.
It limits our experimental results, which can only achieve similar results to baselines in 70B models. It restricts the demonstration of VPTQ's advantages and potential on large models in this paper.
We will strive for more GPU resources to finetune the VPTQ model for longer periods and with more tokens in the future, allowing for a fair comparison.
Additionally, since LLaMA-3 are the latest released models, there is a lack of baselines from related works.
It is difficult for us to fully demonstrate our performance improvements.
We will continue to add more baselines in the future to highlight the advantages of VPTQ.
In this paper, we only use AI tools for grammar checking and code completion.
unsrtnat
§ APPENDIX: ALL EXPERIMENTS RESULTS
§.§ Supplementary Explanation for Main Results Table 2
Table <ref> shows our main results. Here we provide an explanation for the 'N/A' entries relative to other works.
DB-LLM Since they did not open source their code, we use the AvgQA results from their paper. However, this number does not align with our FP16 results.
GPTQ We reproduce the 2-bit results using the official GPTQ repository. As GPTQ quantizes each layer in sequential order, the cost per hour (cost/h) represents the time taken to quantize on a single A100 GPU.
GPTVQ They do not release their 2-bit quantized model. We reproduce the 2-bit results using their released GPTVQ code, which only supports single-GPU quantization. Therefore, the cost per hour (cost/h) reflects the execution time for quantization on a single A100 GPU. Due to the lack of specific logic for loading their quantizers in the released code, we were unable to measure the throughput.
AQLM Their 1.97-bit LLaMA-2 13b model has not been open-sourced, so we are unable to measure its inference throughput.
§.§ All Experiments Results
In this section, we present all our experimental results, including the perplexity of the quantized model on different context lengths in two datasets, Wikitext2 and C4, and the accuracy on five Commonsense QA tasks (abbreviated as AE for Arc_easy, AC for Arc_challenge, HE for Hellaswag, QA for PIQA, and WI for Winogrande). Table <ref> displays all results of Llama2 at 2 bits quantization. Table <ref> presents results of Llama2 at 3 and 4 bits quantization. Table <ref> displays all results of Llama3 at 2, 3, and 4 bits quantization. Table <ref> shows all results of Mistral 7b at 2, 3, and 4 bits quantization.
§ QUANTITATIVE ANALYSIS OF QUANTIZATION PARAMETER SETTINGS
Quantization configuration
The quantization parameters of all VPTQ 2bit models are shown in Table <ref>.
Layer-wise finetuning parameters
Layer-wise finetuning trains centroids and layernorm using the input and output of each layer when entering 128 samples of C4 training sets into the full precision model. We train each layer for 100 iterations. Table <ref> shows the learning rate and batch size used for each model.
§ ABLATION STUDY
Table <ref> shows results from LLaMA2-13b on wikitext2 and c4 (sequence length=4096) under different quantization parameters. The impact of techniques such as vector length, channel-independent optimization, residual vector quantization, outlier elimination, layer-wise finetuning, and end-to-end finetuning on quantization results will be discussed.
§.§ Parameter Description
When performing N% outlier elimination, N% of outliers will be quantized using a codebook with a vector length of v_0 and k_0 centroids. For the remaining (100-N)% parameters, the vector length is v_1. k_1 represents the number of centroids in the first codebook, while k_2 represents the centroids in the second codebook for residual vector quantization. k_2=-1 indicates no residual vector quantization.
§.§ Vector Length and Residual Vector Quantization
Compression Ratio Calculation The compression ratio is calculated with the vector quantization index bit fixed at 2 bits. The average bitwidth per element of the index matrix obtained through vector quantization is:
Average index bitwidth=log_2(k_1)/v_1 + log_2(k_2)/v_1
The compression ratio is calculated as in <ref>:
Compression Ratio = Total original model bits/Codebook bits + Index bits
For an original linear weight matrix with M parameters,
Codebook bits = (v_0 × k_0 + v_1 × (k_1 + k_2)) × 16
Index bits = M × N%×log_2(k_0)/v_0 + M × (100-N)%×[ log_2(k_1)/v_1 + log_2(k_2)/v_1]
The total bitwidth in the table is calculated per transformer block, which for llama2 includes 4 attention linear and 3 FFN linear layers.
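These formulas can be turned into a small helper that reproduces the average bitwidth per transformer block; the treatment of k_2=-1 as "no residual codebook" follows the parameter description above, while the function itself is illustrative.

```python
import math

def avg_bitwidth(linear_sizes, outlier_pct, v0, k0, v1, k1, k2, fp_bits=16):
    """Average bits per weight over a transformer block, codebooks included.

    linear_sizes : list of parameter counts M for each linear layer in the block
    """
    total_params = sum(linear_sizes)
    # one set of codebooks per linear layer (grouping is ignored in this sketch)
    codebook_bits = len(linear_sizes) * fp_bits * (
        v0 * k0 + v1 * (k1 + (k2 if k2 > 0 else 0)))
    index_bits = 0.0
    for m in linear_sizes:
        idx_main = math.log2(k1) / v1 + (math.log2(k2) / v1 if k2 > 0 else 0.0)
        index_bits += m * outlier_pct / 100 * math.log2(k0) / v0
        index_bits += m * (100 - outlier_pct) / 100 * idx_main
    return (codebook_bits + index_bits) / total_params
```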
Impact of Vector Length First, we discuss the impact of vector length on accuracy. Rows #2, #3, #4, and #6 show results for v_1=2, 4, 6, 8, keeping the average index bit at 2 (i.e., log_2(k_1)/v_1 = 2). As v_1 increases, the perplexity on wikitext2 and c4 decreases, but the codebook size also increases exponentially. For v_1=8 and k_1=65536, the codebook overhead introduces an additional 0.19 bits.
Then, we evaluate the model inference throughput in Table 2. Since we employ weight-only quantization, the main additional overhead of quantized model inference comes from the lookup table for model weights. Table 2 shows models with 2 bits on various throughputs. As the vector length increases (from 2 to 6), the granularity of memory access for reading the lookup table in dequantization increases, which allows memory access to match the GPU's cache line (128 bytes @ L1). This reduces memory access transactions and decreases cache misses. As the vector length further increases (from 8 to 12) along with the size and levels of the codebook, the codebook size further increases, which results in the codebook not fitting in the L1 cache, thereby reducing the model's inference speed. Additionally, we find that a reasonable setting (e.g., v=6, k=4096) can achieve throughput similar to the original model for the quantized model, demonstrating the efficiency of the VPTQ design.
Residual Vector Quantization Without any finetuning, rows #4 and #7 show similar perplexities for v_1=6, k_1=4096 and v_1=12, k_1=k_2=4096 , with the latter even higher. However, after layer-wise finetuning, comparing rows #11 and #13, residual quantization reduces the perplexity by 0.3 compared to VQ due to the increased number of finetunable centroids, showing significant improvement.
§.§ Channel-Independent Optimization
Row #4 with channel-independent optimization shows a perplexity decrease of 1 compared to row #5 without it, indicating that channel-independent second-order optimization effectively mitigates quantization error accumulation.
§.§ Outlier Elimination
Rows #4, #8, #9, and #10 represent the results for eliminating 0%, 1%, 2%, and 5% outliers, respectively. We used a codebook with v_0=4 and k_0=4096 to quantize N% of outliers, achieving an effective average index bit of 3 bits, while other parameters were 2 bits. Higher N% means more parameters are quantized with 3 bits, leading to a larger total bitwidth and lower perplexity.
§.§ Finetuning
Rows #4, #11, and #12 show results for without any finetuning, with layer-wise finetuning, and with end-to-end finetuning, respectively. Adding finetuning reduced the perplexity on wikitext2 from 6.29 to 6.07 and further to 5.32.
§.§ Group Number
Rows #14, #15, #16, and #17 show the quantization results when 99% of parameters are divided into 1, 2, 4, and 8 groups, respectively. Each group has its own independent codebook. When divided into 1, 2, and 4 groups, the perplexity on wikitext2 does not change much, likely because the distribution of the remaining parameters (after removing 1% outliers) is relatively uniform. This is likely because the distributions of different groups overlap after grouping, so the benefit of increasing the group number is not significant.
§.§ Higher Bitwidth
Rows #18 and #19 represent the results for 3-bit and 4-bit quantization, respectively. Compared to the FP16 results in row #1, 4-bit vector quantization incurs almost no loss.
§ INFERENCE EVALUATION
§.§ Throughput Measurement Process
We follow the throughput measurement method used in AQLM <cit.>. During the prompt phase, we provide 1 token and then have the model generate 256 tokens, calculating the generation time for each output token to determine the throughput in tokens per second (tok/s).
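A minimal timing harness in the spirit of this protocol could look as follows; this is our sketch assuming a CUDA device and a Hugging Face-style `model.generate`, and it times the whole generation rather than each output token individually, so it is only an approximation of the measurement described above.

```python
import time
import torch

@torch.inference_mode()
def tokens_per_second(model, input_ids, new_tokens=256):
    """Feed a 1-token prompt, generate `new_tokens` tokens, and report tok/s."""
    torch.cuda.synchronize()
    start = time.perf_counter()
    model.generate(input_ids, max_new_tokens=new_tokens, do_sample=False)
    torch.cuda.synchronize()
    return new_tokens / (time.perf_counter() - start)
```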
§.§ Our Dequantization Implementation
Our dequantization implementation is divided into two phases. In the first phase, which handles prompts with relatively long sequences, we restore the quantized weights (index and centroid, etc.) to FP16 and then call `torch.matmul`. In the second phase, during decoding, we fuse the dequantization and GEMV operations into QGemv, eliminating the repetitive reading and writing of FP16 weights.
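The first (prompt) phase can be pictured with the simplified sketch below; the tensor names (`indices`, `centroids`) and shapes are our assumptions, not the actual VPTQ kernels, and in the decoding phase this gather is fused with the GEMV (QGemv) instead of materializing the FP16 matrix.

```python
import torch

def dequant_matmul(x, indices, centroids, out_features, in_features):
    """Phase 1: gather centroid vectors by index, rebuild the FP16 weight, then matmul."""
    # indices: (num_vectors,) integer tensor, centroids: (k, v) float16,
    # with num_vectors * v == out_features * in_features.
    w = centroids[indices.long()]              # (num_vectors, v)
    w = w.reshape(out_features, in_features)   # restore the original weight layout
    return x @ w.t()
```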
| Large language models (LLMs) have shown excellent performance across various complex tasks as their sizes increase.
However, the enormous weight of LLMs poses significant challenges for efficient inference and practical deployment.
This enormous size significantly strains memory capacity and hard-disk storage, and requires substantial bandwidth for inference.
Weight-only quantization is a mainstream model compression technique that effectively reduces the model's size by representing floating-point numbers with fewer bits.
In weight-only quantization of LLMs, a prominent method is Post-Training Quantization (PTQ).
PTQ quantizes model weights directly without retraining the model. Typically, PTQ only involves converting model weights into lower bit fixed-point numbers.
Currently, the main approach in PTQ is scalar quantization, which converts each scalar weight in the model into a lower bit value.
Recent works <cit.> have achieved near-original model accuracy with 3-4 bit quantization.
Table <ref> summarizes the characteristics of typical scalar quantization methods (GPTQ, AWQ) studied for LLMs.
However, due to the limitations of numerical representation, traditional scalar-based weight quantization struggles to achieve such extremely low-bit levels.
For instance, with 2-bit quantization, we can only use four numerical values to represent model weights, which severely limits the range of weight representation.
Although BitNet<cit.> has enabled quantization aware training that can quantize weights to below 2 bits during the model's pre-training phase, this approach requires substantial GPU cluster resources to maintain reasonable accuracy.
Recent studies <cit.> have explored an efficient method of weight-only quantization known as Vector Quantization (VQ).
VQ assigns weight vectors to indices by pre-defined codebooks (lookup tables).
VQ compresses data by mapping high-dimensional vectors to a set of predefined lower-dimensional vectors in a lookup table. This method substantially reduces the storage requirements for data, while allowing for the quick reconstruction of original vectors through simple index references.
VQ achieves more effective data compression than scalar quantization by leveraging correlations and redundancies across different data dimensions.
By detecting and leveraging interdependence, VQ can encode complex multidimensional data with fewer bits, thus achieving higher compression ratios and reduced bit width.
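As a toy illustration of this index-plus-codebook encoding (ours; the paper's algorithm additionally uses second-order information and a more careful centroid construction):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256)).astype(np.float32)    # stand-in weight matrix

v, k = 8, 256                                    # vector length and codebook size
vecs = W.reshape(-1, v)                          # group weights into length-v vectors
codebook = vecs[rng.choice(len(vecs), k, replace=False)]   # naive centroid choice
dists = ((vecs[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
idx = dists.argmin(1).astype(np.uint8)           # one index (log2 k = 8 bits) per vector

bits_per_weight = np.log2(k) / v                 # 1 bit per weight for these settings
W_hat = codebook[idx].reshape(W.shape)           # reconstruction is a cheap table lookup
```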
While Vector Quantization (VQ) shows promise in extreme low-bit weight compression for Large Language Models (LLMs), it faces several significant challenges. Table 1 compares the strengths and weaknesses of various VQ algorithms in multiple dimensions.
The first challenge is ensuring the accuracy after extreme low-bit VQ quantization.
Unlike scalar quantization, the quantization granularity of VQ algorithms is vector-based.
The quantization may introduce additional accumulation errors due to the simultaneous quantization of multiple numbers.
For example, GPTVQ <cit.> uses the Second-Order Optimization method to implement PTQ.
However, GPTVQ accumulates quantization errors within vector quantization, leading to an inevitable increase in quantization errors as the vector length increases. This prevents the use of longer vectors and, consequently, limits the compression ratio.
The second challenge lies in efficiently executing VQ quantization on LLMs.
VQ can compress vectors in the weight matrix into indices, but these indices are discrete, non-differentiable integers. This introduces difficulties in implementing VQ quantization methods through model training. For instance, AQLM<cit.> employs beam search and backpropagation to quantize and update centroids in lookup tables. VQ necessitates additional gradient estimation, slowing the convergence of model quantization training and requiring intensive training efforts to achieve better accuracy.
The third challenge is the dequantization overhead during VQ model inference.
To reduce quantization errors, complex data preprocessing methods may be used to process weights.
QuIP# <cit.> introduces incoherence processing using the randomized Hadamard transform for the weight matrix before VQ.
These preprocessing steps can reduce quantization errors and improve model accuracy.
However, preprocessing must be performed in real time during model inference, which can severely impact throughput in inference.
VPTQ seeks to bypass the limitations of current VQ by offering a lightweight and efficient approach exclusively for extreme low-bit weight quantization.
In this paper, we present Vector Post-Training Quantization (VPTQ), a novel approach for extremely low-bit quantization of LLMs.
* VPTQ achieves SOTA accuracy results on extremely low-bit LLMs. We formulate the quantization problem as an optimization problem and employ Second-Order Optimization to guide our quantization algorithm design. By Channel-Independent Second-Order Optimization, VPTQ reduces model quantization perplexity by 0.01-0.34, 0.38-0.5, and 4.41-7.34 on LLaMA-2/3/Mistral-7B, respectively, over SOTA at 2-bit, with an average accuracy improvement of 0.79-1.5%, 11-22%, and 1% on LLaMA-2/3/Mistral-7B in QA tasks.
* VPTQ can transform LLMs into extremely low-bit models with only a minor quantization algorithm overhead. Under the guidance of the optimization problem, we transform the quantization algorithm into a heuristic algorithm for solving the optimization problem.
We also analyze and propose a brief and effective codebook initialization algorithm to reduce the extra overhead of centroid training and updates. Experiments show that VPTQ only requires 10.4-18.6% of the quantization algorithm execution time compared to existing SOTA results.
* VPTQ has low dequantization overhead. VPTQ algorithm quantizes all the weights in every Linear Operator in the model into an index and codebooks. During model inference, we only need to dequantize the weight matrix by reading centroids from the codebook according to the index before executing the operator. The models quantized by VPTQ result in 1.6-1.8× improve in inference throughput compared to SOTA. | null | null | null | null | In this paper, we propose Vector Post-Training Quantization (VPTQ), a novel approach to achieving extremely low-bit quantization of LLMs by Vector Quantization.
Through the application of Second-Order Optimization, we have formulated the LLM Vector Quantization problem and directed the design of our quantization algorithm. By further refining the weights via Channel-Independent Second-Order Optimization, we have enabled a more granular VQ.
VPTQ also includes a brief and effective codebook initialization algorithm, achieved by decomposing the optimization problem. We have extended VPTQ to support residual and outlier quantization, which not only improves model accuracy but also further compresses the model size.
Our experimental results demonstrate the effectiveness and efficiency of VPTQ.
The perplexity of the quantized models is reduced by 0.01-0.34 on LLaMA-2, 0.38-0.68 on Mistral-7B, and 4.41-7.34 on LLaMA-3 over SOTA at 2-bit, with an average accuracy improvement of 0.79-1.5% on LLaMA-2, 1% on Mistral-7B, and 11-22% on LLaMA-3 on QA tasks.
Furthermore, we achieved these results only using 10.4-18.6% of the execution time of the quantization algorithm, leading to a 1.6-1.8× increase in inference throughput compared to SOTA.
These results underscore the potential of VPTQ as an efficient and powerful solution for the deployment and inference of LLMs, particularly in resource-constrained settings. |
http://arxiv.org/abs/2409.18022v1 | 20240926163057 | On the nonconnectedness of moduli spaces of arrangements, II: construction of nonarithmetic pairs | [
"Benoît Guerville-Ballé"
] | math.AG | [
"math.AG"
] |
On the nonconnectedness of moduli spaces of arrangements, II: construction of nonarithmetic pairs
Benoît Guerville-Ballé
[email protected]
2020 Mathematics Subject Classification: 51M15, 14N10, 14N20, 14H10, 51A45.
§ ABSTRACT
Constructing lattice isomorphic line arrangements that are not lattice isotopic is a complex yet fundamental task. In this paper, we focus on such pairs but which are not Galois conjugated, referred to as nonarithmetic pairs. Splitting polygons have been introduced by the author to facilitate the construction of lattice isomorphic arrangements that are not lattice isotopic. Exploiting this structure, we develop two algorithms which produce nonarithmetic pairs: the first generates pairs over a number field, while the second yields pairs over the rationals. Moreover, explicit applications of these algorithms are presented, including one complex, one real, and one rational nonarithmetic pair.
September 28, 2024
======================
§ INTRODUCTION
The concept of moduli space plays an important role in various branches of mathematics, serving as a fundamental framework for understanding families of geometric objects. Moduli spaces provide a structured way to classify objects up to isomorphism, capturing the intrinsic parameters that define their equivalence classes. Understanding their structure and behavior is crucial, not only from the theoretical perspective but also for applications in algebraic geometry, topology, or mathematical physics. However, the study of moduli spaces is a challenging task due to their complex nature. Indeed, the Mnëv Universality Theorem <cit.> and its numerous extensions by Vakil <cit.> imply that, in many cases, moduli spaces can behave as badly as one can imagine.
The moduli space () of a line arrangement in ^2 can be defined as the set of all arrangements that are lattice isomorphic to , quotiented by the natural action of _3(). MacLane <cit.> showed that such moduli spaces are not necessarily path-connected, while Mnëv <cit.> proved that any singularity defined over appears in at least one moduli space, revealing the complex nature mentioned above. Randell's Isotopy Theorem <cit.> states that two arrangements lying in the same path-connected component of their moduli space are topologically equivalent, illustrating the importance of understanding the topology of such moduli spaces. The moduli spaces of line arrangements with few lines have been classified from different perspectives. The works of Nazir and Yoshinaga <cit.> and of Fei <cit.> classify the topological types of arrangements with up to 9 lines. Recently, Corey and Luber classified the diffeomorphic types of line arrangements with up to 12 lines, see <cit.>. The interplay between connectedness and combinatorics has been studied by the author and Viu-Sos in <cit.>. The present paper is a continuation of these studies.
§.§ Arithmetic, nonarithmetic and rational pairs
Our study focuses on the connected components of the moduli spaces, so we can assume that any arrangement is defined over a number field. Indeed, due to the algebraic nature of the moduli space of a fixed arrangement , any arrangement in () is lattice isotopic to an arrangement defined over a number field. The number field generated by all the coefficients of the lines of is called the definition field of and is denoted by (). Throughout this paper, a pair of arrangements always refers to lattice isomorphic arrangements which are not isotopy equivalent. In other words, they are in different connected components of their shared moduli space.
Let _1 and _2 be a pair of arrangements, and let be a number field that contains both (_1) and (_2).
* The pair is arithmetic if there exists a Galois automorphism σ∈(/) such that σ·_1 = _2. It is nonarithmetic otherwise.
* The pair is rational if (_1)=(_2)=.
Obviously, a rational pair is a nonarithmetic pair.
* The MacLane arrangements <cit.> or the Nazir-Yoshinaga arrangements <cit.> form arithmetic pairs where the Galois automorphism σ is the complex conjugation.
* The Rybnikov arrangements <cit.> form a complex nonarithmetic pair.
* The arrangements ^1,1 and ^1,-1 in <cit.> form a rational pair.
§.§ Arithmetic pairs
Due to the algebraic nature of the moduli space, arithmetic pairs are simple to construct, e.g. <cit.>. Furthermore, they are a particular case of conjugate varieties. It is known that such varieties can have different topologies, see <cit.> for the general case and <cit.> for line arrangements –see also <cit.> for numerous other examples. Nevertheless, both the topology and the geometry of such pairs are difficult to distinguish. Indeed, any algebraic topological invariant will fail to distinguish a topological difference. In particular, the profinite completion of the fundamental group of their complements are isomorphic. Similarly, the geometric line operator Λ_𝔪,𝔫 introduced by Rouleau in <cit.> cannot distinguish a difference of geometry in such pairs.
§.§ Nonarithmetic pairs
There is a wide class of problems in the study of arrangements and, more generally, of plane curves, asking whether a given property –be it geometric, topological, or otherwise– is determined by the local type of the singularities or not. To answer such a question negatively, the most efficient way is to provide an explicit counterexample. When considering two arrangements, the more different they are, the more likely they are to form a counterexample. From this observation and according to the previous paragraph, it is natural, in a first place, to consider arrangements that form a nonarithmetic pair. It is not a coincidence that the first examples of Zariski pairs[A Zariski pair is formed by two lattice isomorphic arrangements that have nonhomeomorphic embedded topology.] for curves <cit.> or for arrangements <cit.> are both nonarithmetic. However, one of the challenges remains in constructing such nonarithmetic pairs.
In <cit.>, the author introduced the combinatorial notion of the splitting polygon pattern. It is a sub-structure of a line combinatorics suggesting that the moduli space () is not connected. This structure is present in all the examples given in Example <ref>. It has been used in <cit.>, to produce several examples of arithmetic Zariski pairs. The purpose of the current paper is to present and illustrate two algorithms for constructing nonarithmetic pairs using the splitting polygon structure. The first algorithm produces nonarithmetic pairs over a number field, while the second generates rational pairs.
§.§ Structure of the paper
In Section <ref>, we recall the definition of the splitting polygon structure. Then, we describe the two algorithms (Algorithm <ref> and <ref>) which produce nonarithmetic pairs over a number field and rational pairs, respectively. Section <ref> is dedicated to applications of these algorithms with the production of explicit examples (Theorems <ref>, <ref> and <ref>). To conclude, in Section <ref>, we discuss the limitations of the algorithms and some possible improvements, question the existence of nonarithmetic pairs with fewer than 13 lines (Problem <ref>), and wonder if there is a topological difference in the examples produced in Section <ref> (Problem <ref>).
§ SPLITTING POLYGON AND ALGORITHMS
In order to state the algorithms, one needs to recall the definition and some properties of the splitting polygon structure introduced in <cit.>.
As originally defined in <cit.>, a line combinatorics is the data of a finite set Ł and a subset of the powerset of Ł such that:
* for all P∈, one has |P|≥ 2,
* for all i,j∈, with i≠ j, there exists a unique P∈ such that {i,j}⊂ P.
Let be a line arrangement of ^2, and let () be the set of singular points of , where each singular point P is given as the maximal subset of formed by the lines of passing through P. Under such a convention, (,()) is a line combinatorics, named the combinatorics of and denoted by () –or simply if no confusion is possible.
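As a small computational aside (ours, not part of the original construction), the combinatorics of an arrangement given by explicit equations can be tabulated directly: every pair of lines is intersected, and the intersection points are grouped so that each singular point carries the maximal set of lines through it. The sketch below does this with sympy for arrangements with rational coefficients; lines are encoded as coefficient triples (a, b, c) of ax + by + cz = 0.

```python
from itertools import combinations
import sympy as sp

def combinatorics(lines):
    """Map each singular point to the set of indices of the lines through it."""
    points = {}
    for i, j in combinations(range(len(lines)), 2):
        p = sp.Matrix(lines[i]).cross(sp.Matrix(lines[j]))   # intersection point
        if all(c == 0 for c in p):
            continue                                          # coincident lines
        pivot = next(c for c in p if c != 0)
        key = tuple(c / pivot for c in p)                     # normalised projective point
        points.setdefault(key, set()).update({i, j})
    return points

# Example: the five lines L_1: z=0, ..., L_5: y-z=0 used later for the MacLane construction
print(combinatorics([(0, 0, 1), (1, 0, 0), (1, 0, -1), (0, 1, 0), (0, 1, -1)]))
```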
§.§ The splitting polygon structure
A plinth Ψ of length r≥3 on a line combinatorics is the data of two tuples (S_1,…,S_r)⊂Ł and (P_1,…,P_r)⊂, named respectively the support and the pivot points, such that for all pivot points P_i, we have S_i∉ P_i and S_i+1∉ P_i, where the indices are considered modulo r.
Let be an arrangement with combinatorics . The connected component of () containing the class of is denoted by (C)^. If Ψ is a plinth on then the geometrical image of Ψ is called the realization of Ψ in , it will be denoted by Ψ_.
A plinth Ψ on a combinatorics forms a rigid projective system of the connected component ()^ if, for any '∈()^, there exists a projective transformation τ which sends Ψ_ on Ψ_'.
The following proposition is a direct consequence of the definition.
If _(()^)=0, then any plinth Ψ on is a rigid projective system of ()^.
Let be a line arrangement and Ψ be a plinth on with support (S_1,…,S_r) and pivot points (P_1,…,P_r). A set of r lines E^λ=(E_1^λ,…,E_r^λ)⊂ (^2)^r is a splitting polygon on the plinth Ψ if, for all i∈{1,…,r}, it verifies the following conditions –where all the indices are considered modulo r.[In the particular case r=3, we use the term splitting triangle.]
* E_i^λ∩ E_i+1^λ∩ S_i ≠∅,
* E_i passes through the pivot point P_i,
* all the other intersections are generic.
If E^λ is a splitting polygon on Ψ, we denote by _Ψ^λ the union of and E^λ, and by _Ψ it combinatorics.
A set of lines E^λ is a nonsplitting polygon if it verifies all the conditions of the splitting polygon except that it fails condition (1) for exactly one index. The main result of <cit.> is the following.
Let be a line arrangement with combinatorics , and let Ψ be a rigid projective system of ()^. If there exist two distinct splitting polygons E^λ_1 and E^λ_2, and a nonsplitting polygon, then _Ψ^λ_1 and _Ψ^λ_2 are in distinct connected components of (_Ψ).
§.§ The polynomial Δ_Ψ
The existence of such splitting polygons and nonsplitting polygon is related with the degree and the roots of a polynomial Δ_Ψ ensuring the realization or not of condition (1) in Definition <ref>. This polynomial can be constructed as follows.
Let Q_1^λ be a generic point of the line S_1 which is different from S_1 ∩ S_2. It can be linearly parametrized by a unique parameter λ∈. By Definition <ref>, S_i∉ P_i, so one can define E_1^λ as the line passing through Q_1^λ and P_1. We denote by Q_2^λ the intersection point of E_1^λ and S_2. This point is well defined due to the condition S_i+1∉ P_i in Definition <ref>. Similarly, we construct iteratively the lines E_i^λ and the points Q_i+1^λ, for i∈{2,…,r}. We denote by Δ_Ψ the determinant of the 3×3 matrix formed by the coefficients of lines S_1, E_1^λ and E_r^λ. If () is a definition field of , then Δ_Ψ is an element of the polynomial ring ()[λ]. By construction, Δ_Ψ is of degree at most 2.
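This construction is easy to carry out symbolically. The sketch below (ours, not the author's code) encodes lines by their coefficient triples, uses the cross product both for the line through two points and for the intersection point of two lines, and parametrises Q_1^λ linearly on S_1; note that the resulting polynomial depends on the chosen parametrisation of Q_1^λ, so only its degree and whether it splits over the definition field are meaningful here.

```python
import sympy as sp

lam = sp.Symbol('lambda')

def delta_psi(support, pivots, q0, q1):
    """support = (S_1, ..., S_r) as line coefficient triples, pivots = (P_1, ..., P_r)
    as projective points; q0, q1 are two distinct points on S_1, and
    Q_1^lambda = q0 + lambda * q1."""
    r = len(support)
    Q = sp.Matrix(q0) + lam * sp.Matrix(q1)
    E = []
    for i in range(r):
        E.append(Q.cross(sp.Matrix(pivots[i])))           # E_i through Q_i and P_i
        Q = E[-1].cross(sp.Matrix(support[(i + 1) % r]))   # Q_{i+1} = E_i ∩ S_{i+1}
    M = sp.Matrix([list(support[0]), list(E[0]), list(E[-1])])
    return sp.factor(M.det())
```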
Assume that Δ_Ψ is of degree 2. If it is also irreducible in ()[λ], then all the conditions of Theorem <ref> are verified, see <cit.>. Such splitting polygons will be called irreducible splitting polygons, and reduced splitting polygons otherwise. In such a situation, the created pair is an arithmetic one. To create a nonarithmetic pair, we will therefore search for reduced splitting polygons. It is worth noticing that the condition "Δ_Ψ is reducible in ()" is necessary but not sufficient to create a nonarithmetic pair. Indeed, in that case, one also needs to verify Condition 3 of Definition <ref>.
If the polynomial Δ_Ψ is reducible then there exists a nonsplitting polygon on Ψ.
Since Δ_Ψ is of degree at most 2, the hypothesis that it is reducible implies that it is of degree exactly 2. The conditions for the tuple E^λ_0 to form a nonsplitting polygon are: Δ_Ψ(λ_0) ≠ 0 and all the other intersections are generic. The first condition is Zariski-open, while the conditions of the second kind are linear Zariski-closed conditions. Indeed, these conditions correspond to a line, linearly parametrized by λ, passing through a point independent of λ. Since the intersection of a Zariski-open subset with finitely many proper Zariski-closed subsets is not empty, such a λ_0 exists.
§.§ The algorithms
The first example that comes in mind when we talk about nonarithmetic pairs is the Rybnikov arrangements. We have shown in <cit.> that they can be constructed by adding successively two splitting triangles such that the first one is irreducible and the second one is reducible. Taking inspiration from this construction, nonarithmetic pairs can be constructed using the following algorithm.
By construction, the definition field (_Ψ_1^1)=(_Ψ_1^2) is a quadratic extension of (). We denote by σ the generator of the associated Galois group. The automorphism σ fixes line-by-line the arrangement , so one has σ·_Ψ_1^1 = _Ψ_1^2.[Let σ be a Galois automorphism of a number field . If a line L:ax+by+cz=0 is defined over , then we denote σ· L the line defined by the equation σ(a)x + σ(b)y + σ(c)z = 0. The arrangement σ· is defined as {σ· L | L∈}.] Since is rigid then Proposition <ref> implies that Ψ_1 is a rigid projective system of ()^, so by Theorem <ref> the arrangements _Ψ_1^1 and _Ψ_1^2 form an arithmetic pair. Furthermore, by construction these arrangements are also rigid.
Assume that i=1 (the case i=2 is similar). As noted above, the arrangement _Ψ_1^1 is rigid so, by Proposition <ref>, Ψ_2 is a rigid projective system of (_Ψ_1^1)^_Ψ_1^1. Additionally, since the polynomial Δ_Ψ_2 is reducible, Lemma <ref> implies that a nonsplitting polygon exists, and that Δ_Ψ_2 is of degree 2. We denote by λ_1 and λ_2 its two roots. Since E^λ_1 and E^λ_2 form splitting polygons on the plinth Ψ_2, then, by Theorem <ref>, the arrangements _Ψ_1,Ψ_2^1,1 and _Ψ_1,Ψ_2^1,2 form a pair.
Since Δ_Ψ_2 is reducible in (_Ψ_1^1)[λ] then (_Ψ_1,Ψ_2^1,i)=(_Ψ_1^1). Additionally, since σ·_Ψ_1^1 = _Ψ_1^2, then σ·_Ψ_1,Ψ_2^1,1≠_Ψ_1,Ψ_2^1,2. So _Ψ_1,Ψ_2^1,1 and _Ψ_1,Ψ_2^1,2 do not form an arithmetic pair.
It is also possible to consider a given arithmetic pair and to apply Algorithm <ref> from Step (5).
At Step (3) (resp. Step (6)), if there is no plinth with an irreducible (resp. reducible) splitting polynomial then one can add a line to (resp. _Ψ_1^i's) passing through at least 2 singular points. Then we restart the algorithm at Step (2) (resp. Step (5)). The additional line will neither modify the definition field nor the rigidity of the considered arrangements.
One can modify the previous algorithm to produce rational pairs. The proof is similar to that of Algorithm <ref>.
§ APPLICATIONS OF THE ALGORITHMS
The first two examples demonstrate applications of Algorithm <ref>, generating nonarithmetic pairs defined over a number field –a complex and a real one. The first example utilizes the MacLane arrangements, and the second uses the Falk-Sturmfels arrangements. The third example is a rational pair produced by Algorithm <ref>.
§.§ A complex nonarithmetic pair
The first example is inspired by Rybnikov's construction but restricts to rigid arrangements. Steps (1)–(4) of Algorithm <ref> correspond to the construction of the MacLane arrangements using the splitting polygon structure. Let us recall the data needed in this construction; for details we refer to <cit.>.
Let be the rigid arrangement with definition field ()= and defined by
[ L_1: z = 0, L_2: x = 0, L_3: x - z = 0,; L_4: y = 0, L_5: y - z = 0. ]
The plinth Ψ_1 is given by the support (L_1, L_2, L_4) and the pivot points (L_3 ∩ L_4, L_3 ∩ L_5, L_2 ∩ L_5). The MacLane arrangements are defined over (ζ_3), with ζ_3 a primitive third root of unity. The splitting triangles are given by:
E_1^λ: x + λ y - z = 0, E_2^λ: (λ - 1)x - λ y + z = 0, E_3^λ: (λ - 1)x - y + z = 0,
for λ∈{ζ_3, ζ_3+1}. The arrangements _Ψ_1^i will be denoted by _i for simplicity.
From now on, we will only deal with _1; a similar work can be done with _2. The lines E_1^ζ_3,E_2^ζ_3,E_3^ζ_3 are denoted L_6,L_7,L_8 respectively. In Step (6), no plinth Ψ_2 produces a polynomial Δ_Ψ_2 that is reducible in (ζ_3)[λ]. So, according to Remark <ref>, we add a line passing through 2 singular points. Unfortunately, even with such an additional line, there is still no plinth that produces a reducible polynomial Δ_Ψ_2. So, we add a second line passing through 2 singular points. Thus, we consider the lines L_9 passing through P_1 = L_2 ∩ L_6 ∩ L_7 and Q_1 = L_3 ∩ L_8, and L_10 passing through P_2 = L_4 ∩ L_9 and Q_2 = L_3 ∩ L_5 ∩ L_7. Their equations are given by:
L_9: 3x + (2ζ_3 - 1)y - (ζ_3 + 1)z = 0, L_10: 3x + (ζ_3 - 2)y - (ζ_3 + 1)z = 0.
Let Ψ_2 be the plinth with support (L_1, L_3, L_4) and pivot points (L_8 ∩ L_10, L_1 ∩ L_10, L_5 ∩ L_9). The polynomial Δ_Ψ_2 is (2ζ_3 + 2) (λ + ζ_3-2/3) (λ + -ζ_3+1/2). The splitting polygons associated are:
L^1_11: (-2ζ_3 + 4)x + (2ζ_3 - 1)y - (ζ_3/2 + 2)z = 0,
L^1_12: (-ζ_3 + 2)x + (ζ_3 - 1)y - 1/2z = 0,
L^1_13: 6x + (3ζ_3 - 3)y - (ζ_3 + 1)z = 0,
and
L^2_11: (-ζ_3 + 2)x + (-ζ_3 + 1)y - 2z = 0,
L^2_12: 3x + (ζ_3 - 2)y + (2ζ_3 - 4)z = 0,
L^2_13: (ζ_3 + 1)x + y - 2z = 0.
By a direct application of Algorithm <ref>, one has the following.
The arrangements _1∪{L_9,L_10}∪{L^1_11,L^1_12,L^1_13} and _1∪{L_9,L_10}∪{L^2_11,L^2_12,L^2_13} form a complex nonarithmetic pair defined over (ζ_3).
We applied Algorithm <ref> with a brute-force approach –i.e. without considering the symmetries of the line combinatorics of the MacLane arrangement. It generates 83,320 plinths of length 3. Note that we removed the plinths having concurrent lines in their supports since they always produce a polynomial Δ_Ψ of degree at most 1. From these plinths, we produce 42 nonarithmetic pairs defined over (ζ_3), up to lattice isomorphism.
§.§ A real nonarithmetic pair
To construct a real nonarithmetic pair, we use the Falk-Sturmfels arrangements _1 and _2 defined over (√(5)) by the following equations, where ϕ=-1±√(5)/2:
[ L_1: z = 0, L_2: x = 0, L_3: x - z = 0,; L_4: y = 0, L_5: y - z = 0, L_6: x - y = 0,; L_7: x + ϕ y - z = 0, L_8: ϕ x - ϕ y + z = 0, L_9: -ϕ x + (ϕ - 1)y = 0. ]
By <cit.>, they can be constructed using the splitting polygon structure, with an irreducible polynomial Δ_Ψ_1. So Steps (1)-(4) in Algorithm <ref> are already done. Similarly to the previous example, there is no reduced plinth on (_1), so according to Remark <ref>, we can add a line L_10 through the singular points L_1 ∩ L_2 ∩ L_3 and L_6 ∩ L_7. It is defined by the equation L_10: x - ϕ z = 0.
We consider the plinth Ψ_2 given by the support (L_1, L_2, L_5) and pivot points (L_5 ∩ L_7, L_4 ∩ L_10, L_4 ∩ L_8). The polynomial Δ_Ψ_2 is (-ϕ - 2) (λ + ϕ) (λ - ϕ + 2), and the associated splitting polygons are:
L^1_11: (ϕ + 2)x - (ϕ + 1)y + ϕ z = 0, L^1_12: (ϕ - 1)x - ϕ y + (2ϕ - 1)z = 0,
L^1_13: (-2ϕ + 1)x + (-3ϕ + 2)y + (ϕ - 1)z = 0,
and
L^2_11: (ϕ + 2)x - (ϕ + 3)y + (ϕ + 2)z = 0, L^2_12: -(ϕ + 1)x + (ϕ - 2)y + z = 0,
L^2_13: -x + (-ϕ + 2)y - (ϕ + 1)z = 0.
Using Algorithm <ref>, one can deduce the next theorem.
The arrangements _1∪{L_10}∪{L^1_11,L^1_12,L^1_13} and _1∪{L_10}∪{L^2_11,L^2_12,L^2_13} form a real nonarithmetic pair defined over (√(5)).
In the application of Algorithm <ref>, the brute-force approach generates 83,166 plinths of length 3 and produces 91 nonarithmetic pairs, up to lattice isomorphism.
§.§ A rational pair
To complete our set of examples, we now apply Algorithm <ref> to construct a rational pair. Consider the rational arrangement defined by the lines
[ L_1: 2y - z = 0, L_2: x - y = 0, L_3: x + y - z = 0,; L_4: x = 0, L_5: 2x - 2y + z = 0, L_6: x - z = 0,; L_7: 2x + 6y - 5z = 0, L_8: y = 0, L_9: y - z = 0,; L_10: z = 0. ]
Let Ψ be the plinth with support (L_1, L_2, L_7) and pivot points (L_3 ∩ L_6 ∩ L_8, L_1 ∩ L_8 ∩ L_9 ∩ L_10, L_5 ∩ L_6). The polynomial Δ_Ψ is (-6)(λ - 5/2) (λ - 2/3). The two associated splitting polygons are:
L^1_11: 3x + 2y - 3z = 0, L^1_12: 5y - 3z = 0, L^1_13: 6x - 2y - 3z = 0.
and
L^2_11: x - 3y - z = 0, L^2_12: 2y + z = 0, L^2_13: 4x + 6y - 13z = 0.
By Algorithm <ref>, one has the following.
The arrangements ∪{L^1_11,L^1_12,L^1_13} and ∪{L^2_11,L^2_12,L^2_13} form a rational pair.
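As a quick sanity check (ours), one can verify condition (2) of the splitting polygon definition directly from the equations above: each added line passes through its pivot point. The pivot coordinates below are computed from the stated intersections of the lines of the arrangement.

```python
# P_1 = L_3 ∩ L_6 ∩ L_8, P_2 = L_1 ∩ L_8 ∩ L_9 ∩ L_10, P_3 = L_5 ∩ L_6
pivots = [(1, 0, 1), (1, 0, 0), (2, 3, 2)]
triangles = [
    [(3, 2, -3), (0, 5, -3), (6, -2, -3)],    # L^1_11, L^1_12, L^1_13
    [(1, -3, -1), (0, 2, 1), (4, 6, -13)],    # L^2_11, L^2_12, L^2_13
]
for triangle in triangles:
    assert all(a * x + b * y + c * z == 0
               for (a, b, c), (x, y, z) in zip(triangle, pivots))
```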
In the application of Algorithm <ref>, the brute-force approach generates 102,517 plinths of length 3, and produces 116 other rational pairs, up to lattice isomorphism.
§ DISCUSSION
§.§ The algorithms
The algorithms presented in this paper have proven to be effective, as demonstrated by the numerous examples of nonarithmetic pairs generated. However, there are limitations that warrant further investigation. One significant area for improvement is the generation of plinths. Currently, the process enumerates all possible plinths using a brute-force approach, which can be computationally intensive for arrangements with a high number of lines. Taking advantage of the symmetries of the arrangement could streamline this process, reducing redundancy and improving efficiency. Additionally, the question arises whether there exist combinatorial conditions on plinths that consistently produce nonarithmetic pairs. Identifying such conditions could not only enhance our understanding of the underlying geometric structures but also lead to more targeted and efficient algorithms for generating nonarithmetic pairs.
§.§ Nonarithmetic pairs with at most 12 lines
The Rybnikov arrangements and the rational Zariski pairs given in <cit.> each consist of at least 13 lines. Notably, all the examples produced in this paper also contain 13 lines. In our process, it appeared to be necessary to add some lines due to the absence of a reduced plinth. Consequently, we were unable to construct nonarithmetic pairs with fewer than 13 lines. This observation naturally leads to the following problem.
Construct a pair with at most 12 lines which is not lattice isotopic to a nonarithmetic pair or prove that such a pair does not exist.
§.§ Topology of the examples
In <cit.>, the author constructed several arithmetic pairs by successively applying two splitting triangles, akin to Algorithm <ref>. Some of these pairs form Zariski pairs with nonisomorphic fundamental groups. In contrast, the topology of all examples constructed in the present paper cannot be distinguished using the truncated Alexander test <cit.>. The author also examined several other topological invariants, such as the loop-linking numbers <cit.>, the torsion of the lower central series factors <cit.>, and the torsion in the first Chen groups. None of these tests could demonstrate that the examples have nonhomeomorphic embedded topology.
Determine if the examples presented in this paper have nonhomeomorphic embedded topology.
plain
plain | null |